
Deploying Time-Sensitive 5G Networks at the Dawn of AI for Telcos

Telecommunication (telco) providers are undergoing a business transformation. They’re replacing the traditional network infrastructure that lacks agility, flexibility, and efficiency with commercial off-the-shelf (COTS) white box servers to assist in implementing 5G and modernizing data centers. 5G is the foundation for boosting network capacity and bandwidth but will overwhelm current network architectures.

A considerable challenge in moving to 5G lies in traditional radio access networks (RANs). Current RAN architectures cannot handle the additional capacity required by 5G, lack the agility to deliver new services, and cannot meet new scalability requirements. The solution is virtualization and cloudification of the RAN, which allows the use of COTS servers and the layering of software-defined networking (SDN) and network function virtualization (NFV) to enable dynamic reconfiguration of services.

Software-defined antenna systems, or virtual RANs (vRANs), bring cellular network operators the kind of operational efficiencies that cloud service providers deliver to their customers. Carriers can program network functions in high-level software, using AI to add new revenue-generating services and deploying capacity on demand, when and where it is needed. NVIDIA is uniquely positioned to deliver the tools necessary to build a high-performance 5G network, with innovations across the full stack.

5T for 5G for time-sensitive networks

In the transition to 5G, many of the necessary architectural changes must occur at the network edge, specifically in the vRAN or cloud-native RAN (CloudRAN). At these junctures, clock synchronization poses a long-standing problem: the front-haul access network between remote radio units (RRUs) and baseband units (BBUs) requires precise time synchronization for the management of radio resources and radio signal processing.
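
This synchronization is typically achieved with IEEE 1588 Precision Time Protocol (PTP) profiles, in which a follower clock estimates its offset from a grandmaster using hardware-timestamped message exchanges. The sketch below shows only the standard two-way offset and delay calculation; it is a generic illustration with hypothetical timestamp values, not the 5T for 5G implementation.

    // Illustrative sketch of PTP-style offset estimation (IEEE 1588 two-way exchange).
    // Not NVIDIA's 5T for 5G implementation; timestamps are hypothetical values in ns.
    #include <cstdint>
    #include <cstdio>

    struct PtpExchange {
        int64_t t1;  // master sends Sync          (master clock, ns)
        int64_t t2;  // follower receives Sync     (follower clock, ns)
        int64_t t3;  // follower sends Delay_Req   (follower clock, ns)
        int64_t t4;  // master receives Delay_Req  (master clock, ns)
    };

    // Standard IEEE 1588 estimates, assuming a symmetric path delay.
    int64_t mean_path_delay(const PtpExchange& e) {
        return ((e.t2 - e.t1) + (e.t4 - e.t3)) / 2;
    }

    // Offset of the follower clock relative to the master clock.
    int64_t clock_offset(const PtpExchange& e) {
        return ((e.t2 - e.t1) - (e.t4 - e.t3)) / 2;
    }

    int main() {
        // Hypothetical hardware timestamps from one Sync/Delay_Req exchange.
        PtpExchange e{1000, 1740, 2000, 2760};
        std::printf("path delay: %lld ns, offset: %lld ns\n",
                    (long long)mean_path_delay(e), (long long)clock_offset(e));
        // With 1000/1740/2000/2760: delay = (740 + 760)/2 = 750 ns,
        // offset = (740 - 760)/2 = -10 ns (follower is 10 ns behind the master).
        return 0;
    }

The accuracy of this estimate depends entirely on how close to the wire the four timestamps are taken, which is why hardware timestamping in the NIC matters.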

The current CPU- and FPGA-based alternatives for time synchronization have significant drawbacks. Running time-synchronization software on general-purpose CPUs is not precise enough. FPGAs provide flexibility but are inefficient in terms of CapEx and OpEx due to their high power budget and price. As a result, FPGAs are typically well suited for niche point functions, but not for mainstream network functions demanding high efficiency and performance.
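
One way to see why purely software-based timing falls short: even two back-to-back clock reads in user space on a general-purpose CPU show jitter, before any driver, network stack, or interrupt latency is added. The measurement sketch below (plain POSIX C++, assuming Linux; not an NVIDIA tool) simply reports the spread between consecutive clock reads, which occasional preemption pushes far above the nanosecond-scale targets that front-haul timing demands.

    // Measure the spread between consecutive user-space clock reads.
    // Generic measurement sketch (POSIX/Linux assumed), not an NVIDIA tool.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <time.h>     // clock_gettime (POSIX)
    #include <vector>

    static int64_t now_ns() {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return int64_t(ts.tv_sec) * 1000000000LL + ts.tv_nsec;
    }

    int main() {
        constexpr int kSamples = 100000;
        std::vector<int64_t> deltas(kSamples);
        int64_t prev = now_ns();
        for (int i = 0; i < kSamples; ++i) {
            int64_t t = now_ns();
            deltas[i] = t - prev;   // time between consecutive clock reads
            prev = t;
        }
        std::sort(deltas.begin(), deltas.end());
        std::printf("min %lld ns  median %lld ns  max %lld ns\n",
                    (long long)deltas.front(),
                    (long long)deltas[kSamples / 2],
                    (long long)deltas.back());
        // Occasional preemption and cache misses push the maximum far above the
        // ~16 ns accuracy targets discussed for 5G front-haul timing, and that is
        // before the packet has even touched the network stack or the NIC.
        return 0;
    }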

Virtualizing the RAN provides the following benefits:

  • Coordination, centralization, and virtualization of mobile networks
  • Enablement of new services at the network edge
  • Support for resource pooling (more cost-effective processor sharing) and load balancing
  • Scalability (flexible hardware capacity expansion) from high-capacity cells to low-capacity cells
  • Layer interworking (tighter coupling between the application layer and the RAN)

A breakthrough technology offered by NVIDIA is Time-Triggered Transmission Technology for Telco, also known as 5T for 5G. 5T for 5G delivers highly accurate time synchronization across front-haul and mid-haul networks, giving telco providers higher performance, more accurate timing, and reduced costs in their 5G CloudRAN rollouts.

5T for 5G uses the NVIDIA ConnectX-6 Dx SmartNIC and the BlueField-2 Data Processing Unit (DPU). ConnectX-6 Dx is the industry’s first SmartNIC to offer this highly precise time synchronization for the enhanced Common Public Radio Interface (eCPRI). It provides a 5-in-1 solution:

  • High network throughput
  • Low-latency network connectivity
  • Low power demand
  • A broad set of in-hardware acceleration capabilities
  • 5T for 5G time synchronization

As a result, ConnectX-6 Dx and BlueField-2 render expensive, power-hungry FPGA devices unnecessary.

5G CloudRAN architecture combines AI and smart networking

The NVIDIA EGX A100 edge server platform, which includes ConnectX-6 Dx with 5T for 5G technology, provides the ideal reference architecture for software-defined, hardware-accelerated 5G radio access networks that keep all your connections on time. The EGX A100 combines an NVIDIA Ampere architecture GPU with a ConnectX-6 Dx SmartNIC. The Ampere architecture can handle a range of compute-intensive workloads, including AI inference and 5G applications, transforming servers large and small into secure, AI-enabled supercomputers.

AI powers the 5G CloudRAN and applications at the edge, along with many of the deep learning algorithms behind them. AI gathers and analyzes customer and machine data to predict what customers want, helps manage value transactions securely, and responds quickly with personalized offers. This requires performing many tasks in parallel, and thus an architecture that can exploit massive parallelism, something GPUs are better suited for than CPUs.

Similarly, 5G is designed to operate across a broad range of frequencies to support new applications and meet latency-sensitive requirements. For example, it can be necessary to perform scheduling with a time accuracy of just 16 ns. The ConnectX-6 Dx, along with the NVIDIA EGX A100 platform, helps solve this complex scheduling problem within a tight 100-microsecond window. AI can automatically find and resolve issues in real time, optimizing 5G networks. For instance, AI could uncover new ways to multiplex multiple services onto a single frequency band, improving use of the wireless spectrum.

The ConnectX-6 Dx network card enables up to 200 Gbps of data throughput, which can be sent directly to GPU memory for AI and 5G signal processing. Simultaneously, the 5T for 5G technology works with the EGX A100 as a cloud-native, software-defined accelerator that can handle the most latency-sensitive use cases 5G can throw at it. Together, they provide the ultimate AI and 5G platform for making intelligent real-time decisions at the point of action.
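
To put those figures in context, the quick back-of-the-envelope calculation below combines the numbers from the preceding paragraphs: how much data can arrive during one 100-microsecond scheduling window at the 200 Gbps line rate, and how many 16 ns timing ticks that window contains. This is pure arithmetic on the quoted values, not a measurement.

    // Back-of-the-envelope figures for the timing and throughput numbers above.
    // Pure arithmetic on the values quoted in this section; no hardware involved.
    #include <cstdio>

    int main() {
        const double line_rate_bps  = 200e9;    // 200 Gbps ConnectX-6 Dx line rate
        const double sched_window_s = 100e-6;   // 100-microsecond scheduling window
        const double timing_tick_s  = 16e-9;    // 16 ns timing accuracy target

        const double bytes_per_window = line_rate_bps * sched_window_s / 8.0;
        const double ticks_per_window = sched_window_s / timing_tick_s;

        std::printf("data per 100 us window at 200 Gbps: %.1f MB\n",
                    bytes_per_window / 1e6);    // 2.5 MB
        std::printf("16 ns ticks per 100 us window: %.0f\n",
                    ticks_per_window);          // 6250
        return 0;
    }

In other words, every scheduling decision has to account for megabytes of in-flight data while respecting a timing grid thousands of times finer than the window itself.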

NVIDIA Aerial SDK accelerates 5G on NVIDIA GPUs

As bandwidth increases and vRANs are deployed, x86 cores struggle to keep up and start demanding impractical levels of power consumption. Hardware acceleration is needed for the compute-heavy physical layer (PHY) and scheduling workloads. There are alternative paths to hardware acceleration, but they usually involve customization with FPGAs or ASICs, which requires device-specific programming and therefore rules out COTS hardware. GPUs, on the other hand, devote a larger fraction of their chip area to arithmetic than CPUs do. As technologies evolve, GPUs are optimized for high-performance computing and AI workloads, whereas CPUs focus on more diverse workloads, such as databases and office applications.

Telco operators need a new network architecture that offers high performance and the ability to make intelligent real-time decisions at the network edge. Traditional 4G wireless solutions cannot be reconfigured quickly enough. This will become an even more significant challenge with the promise of network slicing in 5G, which allows telcos to dynamically offer unique services to customers on a session-by-session basis. vRANs run in the wireless infrastructure closest to customers, at the edge, and are critical to building a modern 5G infrastructure capable of running a range of applications that are dynamically provisioned on a common platform.

To address these increasing demands, the NVIDIA Aerial SDK offers an application framework for building high-performance, cloud-native 5G applications by optimizing parallel processing on GPUs for baseband signals and data flow. Aerial provides two critical SDKs to simplify the task of building highly scalable and programmable, software-defined 5G RAN networks using off-the-shelf servers with NVIDIA GPUs:

  • CUDA Virtual Network Function (cuVNF)—Provides optimized input/output and packet processing, sending 5G packets directly to GPU memory from ConnectX-6 Dx SmartNICs.
  • CUDA Baseband (cuBB)—Provides a GPU-accelerated 5G signal processing pipeline, including cuPHY for the L1 5G PHY. It delivers unprecedented throughput and efficiency by keeping all physical-layer processing within the GPU’s high-performance memory. (A simplified sketch of this style of GPU-parallel PHY processing follows the list.)
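
The actual cuVNF and cuBB interfaces are documented with the Aerial SDK. As a flavor of the kind of data-parallel L1 work that cuPHY keeps in GPU memory, the CUDA sketch below applies a single-tap zero-forcing equalizer across subcarriers, one thread per subcarrier. All buffer names, sizes, and values are invented for the example; this is not Aerial code.

    // Generic single-tap zero-forcing equalizer: one thread per subcarrier.
    // Illustrative only -- not the cuPHY API; buffer layout and names are invented.
    #include <cuComplex.h>
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void equalize(const cuFloatComplex* rx,   // received symbols
                             const cuFloatComplex* h,    // channel estimates
                             cuFloatComplex* eq,         // equalized output
                             int n_subcarriers) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n_subcarriers) {
            // eq[i] = rx[i] / h[i] (zero forcing), computed as rx * conj(h) / |h|^2
            cuFloatComplex num = cuCmulf(rx[i], cuConjf(h[i]));
            float denom = cuCrealf(h[i]) * cuCrealf(h[i]) +
                          cuCimagf(h[i]) * cuCimagf(h[i]) + 1e-12f;
            eq[i] = make_cuFloatComplex(cuCrealf(num) / denom, cuCimagf(num) / denom);
        }
    }

    int main() {
        const int n = 3276;  // e.g., subcarriers in a 100 MHz, 30 kHz SCS carrier
        std::vector<cuFloatComplex> h_rx(n, make_cuFloatComplex(1.0f, 0.5f));
        std::vector<cuFloatComplex> h_ch(n, make_cuFloatComplex(0.8f, -0.2f));

        cuFloatComplex *d_rx, *d_ch, *d_eq;
        cudaMalloc(&d_rx, n * sizeof(cuFloatComplex));
        cudaMalloc(&d_ch, n * sizeof(cuFloatComplex));
        cudaMalloc(&d_eq, n * sizeof(cuFloatComplex));
        cudaMemcpy(d_rx, h_rx.data(), n * sizeof(cuFloatComplex), cudaMemcpyHostToDevice);
        cudaMemcpy(d_ch, h_ch.data(), n * sizeof(cuFloatComplex), cudaMemcpyHostToDevice);

        equalize<<<(n + 255) / 256, 256>>>(d_rx, d_ch, d_eq, n);
        cudaDeviceSynchronize();

        std::vector<cuFloatComplex> h_eq(n);
        cudaMemcpy(h_eq.data(), d_eq, n * sizeof(cuFloatComplex), cudaMemcpyDeviceToHost);
        std::printf("eq[0] = (%f, %f)\n", cuCrealf(h_eq[0]), cuCimagf(h_eq[0]));

        cudaFree(d_rx); cudaFree(d_ch); cudaFree(d_eq);
        return 0;
    }

Each subcarrier is independent, so the work maps naturally onto thousands of GPU threads, which is the property the Aerial pipeline exploits at much larger scale across symbols, layers, and antenna ports.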

The NVIDIA EGX Edge AI platform with A100 GPUs offloads workloads from the CPU and takes a massively multithreaded approach to data processing. Because effective core counts per chip area tend to increase faster than clock rates, this sets up an ongoing scale-up effect over time. GPUs are already deployed in all major cloud platforms, run countless AI workloads across all industries, and are proven for use in COTS hardware designs.

The NVIDIA Aerial SDK running on the NVIDIA EGX Edge platform can efficiently handle the virtual BBU functions traditionally performed by inefficient FPGA-based NICs. The combination of CPU and GPU acceleration has the potential to handle even the most demanding 5G use cases. With this approach, the GPUs could reside anywhere, and the whole process can be orchestrated using Kubernetes. This enables flexible, time-bound services on a fully software-defined, hardware-accelerated platform with high performance and lower costs for 5G rollouts. After all, installing GPUs in every server makes little sense when the premise of 5G is disaggregation; disaggregating the GPU is the logical approach.

ConnectX-6 Dx SmartNICs accelerate CloudRANs

As telcos transition more network elements into cloud-native functions, they can create an open, agile platform that supports the introduction of software-driven services to accelerate innovation. With this new architecture inclusive of microservices and containers, the data center can be composed from disaggregated computing elements and tailored to fit the shape and size of independent workload requirements. This is made possible by the network fabric.

NVIDIA ConnectX-6 Dx SmartNICs enable disaggregation in cases where east-west traffic becomes intense. With high-speed networking, powerful offloads, and precise time synchronization, a fabric built on ConnectX-6 Dx NICs becomes much easier to compose, and both utilization and throughput increase. ConnectX SmartNICs provide GPUDirect capability for better packet placement and pacing than traditional FPGAs. The Data Plane Development Kit (DPDK) can be used to bypass the OS kernel, fill queues directly, and accelerate communication between the GPU, CPUs, and other GPUs.

The advanced 5T for 5G technology embedded in ConnectX-6 Dx SmartNICs exceeds the stringent industry-standard timing specifications for eCPRI-based RANs by ensuring clock accuracy of 16 ns or better. This enables packet-based, virtualized Ethernet RANs to timestamp packets precisely and deliver highly accurate time references, which in turn allows networks to handle time-sensitive traffic efficiently. Unique features, such as eCPRI windowing, enable eCPRI Ethernet packets to be transmitted from the distributed unit (DU) to the radio unit (RU) accurately and precisely within the 1 µs transmission window specified by the O-RAN specification.
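
As a rough illustration of what eCPRI windowing enforces, the sketch below checks whether a hardware transmit timestamp falls inside the allowed departure window for its symbol. The window bounds and timestamps are hypothetical; the real values come from the O-RAN fronthaul timing profile and the NIC's hardware clock.

    // Minimal sketch of an O-RAN style transmit-window check: a packet intended
    // for a given symbol must leave the DU within a ~1 microsecond window around
    // its scheduled departure time. Bounds and timestamps here are hypothetical.
    #include <cstdint>
    #include <cstdio>

    struct TxWindow {
        int64_t earliest_ns;  // earliest allowed departure (absolute time, ns)
        int64_t latest_ns;    // latest allowed departure   (absolute time, ns)
    };

    bool in_window(int64_t tx_timestamp_ns, const TxWindow& w) {
        return tx_timestamp_ns >= w.earliest_ns && tx_timestamp_ns <= w.latest_ns;
    }

    int main() {
        // Hypothetical 1000 ns (1 us) wide window starting at t = 5,000,000 ns.
        TxWindow w{5000000, 5001000};
        int64_t hw_timestamp = 5000420;  // hypothetical hardware TX timestamp

        std::printf("packet %s the transmission window\n",
                    in_window(hw_timestamp, w) ? "hit" : "missed");
        return 0;
    }

With clock accuracy at the 16 ns level, the NIC can both schedule the departure and verify it against a window like this with margin to spare; with microsecond-level software timing, the check itself becomes meaningless.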

The Accelerated Switching and Packet Processing (ASAP2) time-bound packet flow engine enables software-defined, hardware-accelerated virtual network functions (VNFs) and containerized network functions (CNFs) to steer traffic precisely in the ingress and egress directions, as desired by network services and applications. Thus, timing reference, accuracy, and precision extend to ASAP2 and all the other acceleration engines supported by ConnectX-6 Dx.

NVIDIA enables flexible, time-bound services on a fully software-defined, hardware-accelerated platform with high performance, and eliminates the need for FPGAs for time synchronization. This provides cloud service providers (CSPs) with improved service agility, network extensibility, and integration with cloud applications at lower cost points. Furthermore, the ability to place GPUs anywhere aids acceleration and improves utilization and capacity. All of this is made possible by a composable network.

Conclusion

5G is pushing boundaries for all service provider networks. The change is not just happening at the core data centers. It is also happening at the network edge and in the radio access networks. With billions of devices connecting to 5G networks, CloudRAN is designed to ensure the ability to deploy and adapt quickly in an on-demand fashion. This requires network changes to occur on the fly. Cloudification of the RAN is the most significant transformation since the introduction of mobile devices.

Telcos require fast, time-synchronized, precise, affordable, and secure networking for 5G rollouts. The key is a solution that combines high programmability, scalability, and performance with intelligent accelerators and offloads, low-latency and fast packet processing, and GPU acceleration at the edge. Such a solution leverages the best of the open-source community, reducing latency and maximizing throughput while providing acceleration and offloading the CPU. Ultimately, this enables the vRAN to drive the performance of wireless communication services far beyond what conventional RANs achieve, while significantly reducing operational expenses.
