It’s all about applications… on the Edge
5G, Artificial Intelligence (AI), Cloud, and Edge computing are the fundamental building blocks for a suite of emerging applications such as interactive gaming, augmented reality, virtual reality, remote medicine, connected cars, smart cities, smart buildings, and industrial IoT.
Edge computing provides execution resources (compute, storage, and networking) for applications close to the end users, typically within or at the boundary of operator networks. Edge computing can also be placed at enterprise premises, for example inside factories, offices, and vehicles such as trains and planes. The edge infrastructure can be managed or hosted by Service Providers (SPs). The main benefits of edge solutions include low latency, high bandwidth, data processing and data offload, as well as trusted computing and storage.
An Edge data center can vary substantially in size, from 2-5 servers at the far edge to 1-3 racks at the edge. Edge data centers in North America and Europe will often initially consist of a single rack. Power, space, and cooling are among the biggest constraints at the edge, particularly for SPs looking to leverage central offices that are already crowded with previous-generation infrastructure.
A private 5G network allows large enterprise and public sector customers to bring a custom-tailored 5G experience to indoor or outdoor facilities where high-speed, high-capacity, low-latency connectivity is crucial. It also addresses the need for dedicated bandwidth capacity and range, security, high-quality connections, and consistent, always-on service to help reduce downtime. Private 5G Networks can be deployed over both licensed and unlicensed spectrum.
Deloitte predicts that hundreds of thousands of companies will deploy private 5G networks over the next decade, given their increasing reliance on wireless devices, sensors, and artificial intelligence to connect people, machines, and processes.
Both public and private 5G networks provide optimized connections to public cloud providers, together forming an ideal hybrid cloud infrastructure for enterprise customers.
5G-Edge is about Software Defined Everything...
Edge Requirements
Before the end of this decade, billions of devices will be connected to the Internet and served by Cloud Native applications running everywhere from centralized core data centers to decentralized edge deployments. Unfortunately, the existing IT infrastructure cannot scale to sustain these new applications and devices in a business-viable fashion: its cost per connected device and per Gbps of IP traffic is an order of magnitude higher than emerging applications can bear. For example, an HD camera in a public safety application will generate significantly more data per month than a traditional smartphone user, yet yield only $1-$2 of revenue per month versus the roughly $50 typical of mobile broadband subscribers today.
SPs are planning to deploy Edge Datacenters with the following characteristics and constraints:
- Capacity of 1+ Tbps per Edge DC
- Single-rack floor space
- 15 kW with associated cooling
- Unmanned site (i.e., remotely managed)
- Support for slicing (i.e., fully isolated and secure slices)
- Optimized for Cloud Native applications based on Kubernetes
- Support for SRv6 for Service Function Chaining (see the illustrative sketch after this list)
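As a rough illustration of SRv6-based service function chaining (a generic sketch using standard Linux iproute2 seg6 support, not Kaloom's implementation), the snippet below steers traffic destined to a placeholder prefix through an ordered list of placeholder segment IDs, each representing a service function in the chain. In a real deployment the segment lists would be programmed by the control plane rather than by hand.

```python
import subprocess

# All values below are placeholders for illustration only.
PREFIX = "2001:db8:100::/64"            # traffic to steer into the service chain
SEGMENTS = ["2001:db8:a::1",            # SID of the first service function (e.g., firewall)
            "2001:db8:b::1"]            # SID of the second service function (e.g., DPI)
DEVICE = "eth0"                         # egress interface

def add_srv6_steering_route():
    """Install an SRv6 encap route that pushes the segment list onto matching packets."""
    subprocess.run(
        ["ip", "-6", "route", "add", PREFIX,
         "encap", "seg6", "mode", "encap",
         "segs", ",".join(SEGMENTS),
         "dev", DEVICE],
        check=True,
    )

if __name__ == "__main__":
    add_srv6_steering_route()  # requires root and a kernel with seg6 support
```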
Only a limited number of servers can be deployed at the edge. Based on the requirements above, for example, sustaining the User Plane Function (UPF) alone at the 1+ Tbps throughput target would require more servers than the power, cooling, and space constraints allow. Consequently, a new approach is required: SPs must be able to maximize the number of customer revenue-generating applications while fulfilling the expected throughput, latency, and availability characteristics. It has therefore become essential to dedicate compute resources to these customer applications while offloading cloud infrastructure services to the IPUs/DPUs and switches.
Kaloom’s Unified Edge Solution
Kaloom has developed a fully programmable and automated networking solution that reimagines how edge and data center networks are built, managed, and operated. Kaloom’s networking software currently runs on multi-terabit-per-second programmable white-box switches, and the same software will also be available on IPUs/DPUs later in 2023. These white-box switches and IPUs/DPUs are managed at scale in the same way as servers, using Redfish and OpenBMC.
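To illustrate the kind of Redfish-based, server-style management referred to above (a generic sketch against the standard DMTF Redfish API, not Kaloom-specific tooling), the snippet below queries a BMC for its systems collection and prints each system's power state. The BMC address and credentials are placeholders.

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder BMC address and credentials for illustration only.
BMC = "https://10.0.0.10"
AUTH = HTTPBasicAuth("admin", "password")

def list_systems_power_state():
    """Walk the standard Redfish Systems collection and print each member's power state."""
    systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
    for member in systems.get("Members", []):
        system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
        print(system.get("Id"), system.get("PowerState"))

if __name__ == "__main__":
    list_systems_power_state()
```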
Kaloom has collaborated closely with Red Hat since its inception, and envisioned and developed the Unified Edge Solution in strategic collaboration with Red Hat.
Red Hat OpenShift is the leading hybrid cloud application platform powered by Kubernetes, an open-source container orchestration system. It enables developers to build, deploy, and manage applications with greater speed, security, and scalability, and provides familiar tools to create a consistent, agile, and manageable environment from the core to the cloud to the edge. In addition, Red Hat OpenShift helps developers accelerate application development and delivery while providing enterprise-grade, security-focused capabilities and improved scalability. Red Hat OpenShift can run on a variety of hardware, including CPUs, GPUs, switches, and FPGAs, from the far edge to the edge and the data center.
Kaloom helps extend the capabilities of Red Hat OpenShift to deliver a complete network-optimized application container platform that includes compute, storage, and networking support. Kaloom runs Red Hat OpenShift, as well as Red Hat Enterprise Linux CoreOS, directly on the switches and IPUs/DPUs, thus providing a single, consistent operating system and Kubernetes engine environment across servers, IPUs/DPUs, and switches.
Kaloom’s networking software provides a complete separation between its control plane and data plane. The control plane consists of a collection of cloud-native network functions (CNFs) deployed as containers over a Kubernetes clustering framework; examples include the Kaloom Virtual Router (KVR), the Kaloom VXLAN Gateway (KVG), and the Kaloom 5G User Plane Function (UPF). All Kaloom CNFs can sustain terabits per second of throughput at or near sub-microsecond latency.
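To make the idea of CNFs deployed as containers on Kubernetes concrete, here is a minimal sketch using the Kubernetes Python client. It deploys a hypothetical CNF image and pins it, via a hypothetical node label, to IPU/DPU nodes, the kind of placement that a uniform OpenShift/Kubernetes environment across servers, IPUs/DPUs, and switches makes possible. The image name, label, and namespace are placeholders, not Kaloom's actual manifests.

```python
from kubernetes import client, config

def deploy_cnf():
    """Create a Deployment for a hypothetical CNF, scheduled only onto DPU-labelled nodes."""
    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="example-cnf"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "example-cnf"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "example-cnf"}),
                spec=client.V1PodSpec(
                    # Hypothetical label identifying IPU/DPU (or switch) nodes in the cluster.
                    node_selector={"node-role.kubernetes.io/dpu": ""},
                    containers=[
                        client.V1Container(
                            name="cnf",
                            image="registry.example.com/example-cnf:latest",  # placeholder image
                        )
                    ],
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_cnf()
```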
One of the key CNFs provided by Kaloom that leverages these various hardware targets is the 5G UPF. The UPF is a central element of a 5G network: it provides IP connectivity, mobility, QoS, policy control, and charging. Together with the 5G Session Management Function (SMF), it permits the collection of critical information about the subscribers and devices connected to a 4G and/or 5G network (e.g., location, session ID, and classes of applications). Interoperability testing of the Kaloom UPF has been completed with several leading 5G infrastructure vendors. With the ability to sustain millions of sessions while carrying terabits per second of IP traffic, Kaloom offers the highest-performance UPF in the industry.
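For context on what the user plane carries: user traffic reaches the UPF over the N3 interface encapsulated in GTP-U (UDP port 2152). The sketch below, purely illustrative and built with Scapy's GTP contrib module rather than anything Kaloom-specific, constructs such an encapsulated uplink packet; the addresses and TEID are placeholders.

```python
from scapy.all import IP, UDP, ICMP
from scapy.contrib.gtp import GTP_U_Header

# Placeholder addresses and TEID for illustration only.
GNB_N3_ADDR = "192.0.2.10"    # gNodeB side of the N3 tunnel
UPF_N3_ADDR = "192.0.2.20"    # UPF side of the N3 tunnel
UE_ADDR = "10.45.0.2"         # address assigned to the user equipment
TEID = 0x1234                 # tunnel endpoint identifier for this session

def build_n3_uplink_packet():
    """Build an uplink user packet as seen on N3: outer IP/UDP/GTP-U, inner user IP."""
    outer = IP(src=GNB_N3_ADDR, dst=UPF_N3_ADDR) / UDP(dport=2152)   # 2152 = GTP-U port
    inner = IP(src=UE_ADDR, dst="198.51.100.1") / ICMP()             # the UE's actual traffic
    return outer / GTP_U_Header(teid=TEID) / inner

if __name__ == "__main__":
    build_n3_uplink_packet().show()
```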
In summary, Kaloom envisioned and developed its Unified Edge solution in collaboration with Red Hat to help:
- Reduce total cost of ownership (TCO) by up to 85% and increase energy efficiency
- Increase revenue by freeing up servers for revenue-generating applications
- Simplify operations by delivering a single execution platform for compute, storage, networking, applications and lifecycle management
- Increase agility and flexibility, accelerating time to market for new applications and services.