
Containerizing Networks for Efficiency and Performance

As new technologies continue to evolve, it is becoming increasingly difficult for network infrastructure and systems to keep pace. While Software Defined Networking (SDN) has been deployed for over a decade, hosting networking functions on x86 servers presents several challenges:

  1. Sharing CPU and memory resources between software-defined networking and storage functions and the actual workloads being delivered.
  2. Scaling metadata as functions are distributed across more and more nodes.
  3. Network path efficiency and reducing the number of hops.


These problems are exacerbated in 5G and edge computing deployments. First, the standards are evolving rapidly to enable high-demand capabilities, and second, these deployments span thousands, if not tens or hundreds of thousands, of physical locations. In this environment, refreshing physical hardware is a non-trivial challenge.

Choosing Containerization

Many startups, past and present, have embraced modern concepts like disaggregation (providing software that can run on multiple hardware platforms) and open-source software like Linux. However, Kaloom has done two unique things:

We built our solution in a modern language, Go, and designed it with a containerized approach, so the exact same code can run on both switches and, via SmartNICs, on servers. Kaloom is the only vendor in the world with this capability. Major switch vendors like Cisco and Juniper cannot run their code on a server and deliver the same performance and efficiency as on a switch, while major SDN vendors like VMware can leverage SmartNICs but cannot run on a switch. By enabling the Kaloom software to run on both switches and SmartNICs, we address all three of the challenges mentioned above:

  1. Server resources are not shared by network functions.
  2. Only functions that make sense from an application topology standpoint are run locally; other functions are run centrally in the fabric.
  3. Large amounts of inter-instance state data are not generated, nor does traffic need to be sent to a specific device to be handled.


In addition, because the solution is containerized, a different set of containers can be instantiated for each network slice. Not only can we support our own containerized network functions, we can also support partner apps, functions, and actual customer workloads, all under a common, unified platform. Compare this to VMs, which carry significant operating system (OS) overhead, or to segmentation within a monolithic software stack, which is complex and can lead to security holes and noisy- or malicious-neighbor issues.

The slicing goes beyond the data plane: our control plane and management plane are both containerized, so each slice can function completely independently. This is what gives us hardware independence, enables us to run on switches and servers, and lets us easily port to newer chips with better energy efficiency, performance, and capabilities.
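The per-slice independence described above can be sketched in Go, the language the solution is written in. The types and names here are purely illustrative assumptions, not Kaloom's actual API; the point is only that each slice gets its own independent set of plane containers with no shared state:

```go
package main

import "fmt"

// Slice models a network slice. Each slice owns an independent set of
// containers for its data, control, and management planes.
// All names here are illustrative, not Kaloom's actual API.
type Slice struct {
	Name       string
	Containers []string
}

// newSlice instantiates the per-slice container set. Because every
// plane is containerized, no state is shared between slices.
func newSlice(name string) Slice {
	planes := []string{"data-plane", "control-plane", "mgmt-plane"}
	cs := make([]string, 0, len(planes))
	for _, p := range planes {
		cs = append(cs, fmt.Sprintf("%s-%s", name, p))
	}
	return Slice{Name: name, Containers: cs}
}

func main() {
	for _, s := range []Slice{newSlice("slice-a"), newSlice("slice-b")} {
		fmt.Printf("%s: %v\n", s.Name, s.Containers)
	}
}
```

Spinning up a second slice simply repeats `newSlice` with a new name; nothing in the first slice's container set is touched.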

Containerization Benefits

Kaloom's Unified Edge solution leverages an off-the-shelf Linux distribution, Red Hat Enterprise Linux (RHEL) CoreOS, without any customization. Further, Kaloom leverages Red Hat's enterprise container management platform, OpenShift. This combination brings several benefits:

  1. Kaloom benefits from a large community of users who are providing security and stability enhancements to the base OS.
  2. Kaloom’s customers can use tools available to all OpenShift customers to manage the Kaloom solution.
  3. With Unified Edge, Kaloom's customers can run virtualized and containerized network functions, along with VM- and container-based applications, right alongside the Kaloom containers. This is critical in simplifying the management of thousands of edge sites, as it avoids the need to manage switches, servers, OS, and virtualization layers separately.

The Edge is being driven by applications like AR/VR, gaming, autonomous vehicles, and industrial automation. These applications drive demanding performance and efficiency requirements, so we needed to bring as many functions as possible to the Edge. And because we use OpenShift, we can also use it to control 5G workloads.

We have been working closely with Red Hat to develop and now deliver a joint Unified Edge solution to control workloads via OpenShift. Unified Edge does 5G termination for the wireless side and workload orchestration; this is much more efficient than putting together different components separately, which won’t scale across dozens or hundreds of sites.

Of course, the move from VMs to containers is going to be a long one for the industry, so we leverage KubeVirt in Unified Edge to allow customers to run VMs and containers side by side. Because the network is containerized, you don't have to worry about data separation within the network software and the resulting complexity – we simply deploy a separate set of containers per network slice.

By eliminating much of the duplication that VMs entail, you only need to run one operating system. In contrast, if you need to spin up a separate VM for each slice, you will quickly exhaust the available processing, network, and storage resources.
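As a back-of-the-envelope illustration of that duplication, the sketch below compares total memory for N slices when each slice runs in its own VM versus as containers sharing one OS. The per-OS and per-function figures are assumptions chosen for illustration, not measurements of any real deployment:

```go
package main

import "fmt"

// footprints compares total memory for n slices when each slice runs in
// its own VM (one guest OS per slice) versus as containers sharing a
// single host OS. osMB and fnMB are illustrative assumptions.
func footprints(n, osMB, fnMB int) (vmMB, ctrMB int) {
	vmMB = n * (osMB + fnMB) // every VM duplicates the guest OS
	ctrMB = osMB + n*fnMB    // containers share one operating system
	return vmMB, ctrMB
}

func main() {
	// Assume a 1 GiB guest OS and a 256 MB network function per slice.
	vm, ctr := footprints(50, 1024, 256)
	fmt.Printf("50 slices as VMs:        %d MB\n", vm)
	fmt.Printf("50 slices as containers: %d MB\n", ctr)
}
```

Under these assumed figures, the VM approach spends most of its memory on duplicated operating systems, and the gap widens linearly as slices are added.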

Future Enhancements

Kaloom’s focus for the remainder of 2022 is to deepen our integration with the OpenShift ecosystem and broaden the number of validated hardware platforms. Thus far, Kaloom has primarily focused on the Intel Tofino1 chipset. We will also expand our capabilities by supporting Intel Tofino2 and Stratix DX chipsets, alongside SmartNICs from NVIDIA and Intel. Watch this space for further developments.