Turbocharging Multi-Cloud Security and Application Delivery with VirtIO Offloading

F5 Accelerates Security and App Delivery

The rapid increase in traffic within data centers, along with the growing adoption of virtualization, is placing strain on traditional data center architectures.

Customarily, virtual machines rely on software interfaces such as VirtIO to connect with the hypervisor. Although VirtIO is significantly more flexible than SR-IOV, it can consume up to 50% more compute power on the host, reducing overall server efficiency.
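To see which model a guest is actually using, you can inspect the driver bound to each NIC from inside the VM. The following is a minimal sketch, assuming a Linux guest with sysfs mounted; a virtio NIC is bound to the virtio_net driver, while an SR-IOV virtual function is bound to the vendor's VF driver (mlx5_core, for example):

```python
import os

def nic_driver(iface: str) -> str:
    """Return the kernel driver bound to a network interface."""
    link = f"/sys/class/net/{iface}/device/driver"
    return os.path.basename(os.readlink(link))

for iface in sorted(os.listdir("/sys/class/net")):
    try:
        # virtio NICs report 'virtio_net'; an SR-IOV VF reports the
        # vendor's VF driver instead (e.g. 'mlx5_core').
        print(f"{iface}: {nic_driver(iface)}")
    except OSError:
        # Purely virtual interfaces such as 'lo' have no backing device.
        print(f"{iface}: no backing PCI device")
```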

Similarly, adoption of software-defined data centers is on the rise. Both virtualization and software-defined workloads are extremely CPU-intensive, creating inefficiencies that reduce system-wide performance. Furthermore, infrastructure security is potentially compromised because the application and networking domains are not separated.

F5 and NVIDIA recently presented a solution to these challenges at NVIDIA GTC. F5 discussed accelerating its BIG-IP Virtual Edition (VE) virtual appliance portfolio by offloading VirtIO to the NVIDIA BlueField-2 data processing unit (DPU) and the ConnectX-6 Dx SmartNIC. In the session, they discussed how the DPU provides optimal acceleration and offload through its onboard networking ASIC and Arm processor cores, freeing CPU cores to focus on application workloads.

Offloading to the DPU also provides domain isolation to secure resources more tightly. Support for VirtIO also enables dynamic composability, creating a software-defined, hardware-accelerated solution that significantly decreases reliance on the CPU while maintaining the flexibility that VirtIO offers.

Virtual switching acceleration

Figure 1. Offloading VirtIO moves the virtual datapath out of software and into the hardware of the SmartNIC or DPU, where it can be accelerated

Virtual switching was born as a consequence of server virtualization: hypervisors need to switch traffic transparently between VMs, and between VMs and the outside world.

One of the most commonly used virtual switching software solutions is Open vSwitch (OVS). NVIDIA Accelerated Switching and Packet Processing (ASAP2) technology accelerates virtual switching to improve performance in software-defined networking environments.

ASAP2 supports using vDPA (virtio data path acceleration) to offload the OVS data plane into hardware while leaving the control plane unchanged. This permits flow rules to be programmed into the eSwitch within the network adapter or DPU, and allows standard APIs and common libraries such as DPDK to deliver significantly higher OVS performance without the associated CPU load.
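As an illustration of how this offload is switched on in practice, the following sketch wraps the standard OVS commands in Python. It assumes a Linux host with Open vSwitch installed and root privileges; the service name used here (openvswitch-switch) varies by distribution:

```python
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command, raising on failure, and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Tell OVS to program matched flows into the NIC/DPU eSwitch rather
# than forwarding them in software.
run(["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"])

# The switch must be restarted for the setting to take effect;
# the service name varies by distribution.
run(["systemctl", "restart", "openvswitch-switch"])

# Once traffic is flowing, list the flows that were actually offloaded.
print(run(["ovs-appctl", "dpctl/dump-flows", "type=offloaded"]))
```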

ASAP2 also supports SR-IOV for hardware acceleration of the data plane. The combination of the two capabilities provides a software-defined, hardware-accelerated solution that resolves the performance issues associated with software vSwitching in SDN environments.
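For comparison, the SR-IOV side is configured through the kernel's standard sysfs interface. In this sketch, the physical function name ens1f0 is a placeholder and root privileges are assumed; it queries the adapter's VF capacity and instantiates four virtual functions:

```python
from pathlib import Path

PF = "ens1f0"  # placeholder name for the SR-IOV-capable physical function
dev = Path(f"/sys/class/net/{PF}/device")

# The adapter advertises how many virtual functions it can expose.
total = int((dev / "sriov_totalvfs").read_text())
print(f"{PF} supports up to {total} virtual functions")

# Writing to sriov_numvfs instantiates the VFs; each one can then be
# passed through to a VM for direct hardware access.
(dev / "sriov_numvfs").write_text("4")
```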

Accelerated networking

Earlier this year, NVIDIA released NVIDIA DOCA, a framework that simplifies application development for BlueField DPUs. DOCA makes it easier to program and manage the BlueField DPU. Applications developed using DOCA for BlueField will also run without changes on future versions, ensuring forward compatibility.

DOCA consists of industry-standard APIs, libraries, and drivers. One of these drivers is DOCA VirtIO-net, which provides virtio interface acceleration. With BlueField, the virtio interface runs on the DPU hardware, reducing the CPU's involvement and accelerating VirtIO performance while enabling features such as live migration.
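Because the DPU emulates standard virtio devices, the host sees them through normal PCI enumeration with no vendor-specific driver. As a rough illustration, this sketch scans sysfs for devices carrying the virtio PCI vendor ID (0x1af4), assuming a Linux host:

```python
from pathlib import Path

VIRTIO_VENDOR = "0x1af4"  # PCI vendor ID used by virtio devices

# A DPU emulating VirtIO-net in hardware presents devices that
# enumerate exactly like software virtio devices, so a plain sysfs
# scan finds them.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    if (dev / "vendor").read_text().strip() == VIRTIO_VENDOR:
        device_id = (dev / "device").read_text().strip()
        print(f"{dev.name}: virtio device (device ID {device_id})")
```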

Figure 2. Performance advantages available with VirtIO offloading

BIG-IP VE results

During the joint GTC session, F5 demonstrated the advantages of hardware acceleration over running without it. The demonstration showed BIG-IP VE performing SSL termination for NGINX, with the Tsung traffic generator sending 512-KB packets through multiple instances of BIG-IP VE.

With VirtIO running in software on the host, throughput peaked at only 5 Gbps, the run took 187 seconds to complete, and only 80% of the packets were processed.

The same scenario with hardware acceleration delivered 16 Gbps of throughput, completed in only 62 seconds, and processed 100% of the packets.
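The improvement figures quoted in the summary follow directly from these two runs; as a quick arithmetic check:

```python
# Reported results: software VirtIO on the host vs. VirtIO offloaded
# to the DPU, from the joint F5/NVIDIA GTC demonstration.
sw_gbps, hw_gbps = 5, 16       # peak throughput, Gbps
sw_secs, hw_secs = 187, 62     # time to complete the run, seconds
sw_done, hw_done = 0.80, 1.00  # fraction of packets processed

print(f"throughput: {hw_gbps / sw_gbps:.1f}x the software baseline")
print(f"processing time: {(1 - hw_secs / sw_secs):.0%} reduction")
print(f"packets processed: {sw_done:.0%} -> {hw_done:.0%}")
```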

Summary

Increasing network speeds, virtualization, and software-defined networking are adding strain on data center systems and creating a need for efficiency improvements.

VirtIO is a well-established I/O virtualization interface, but it is implemented entirely in software. SR-IOV was developed precisely to support high-performance, efficient offload and acceleration of network functionality, but it requires a specific driver in each VM. By accelerating VirtIO-net in hardware, you can avoid poor network performance while preserving the transparency of the software implementation, including full support for VM live migration.

The demonstration with F5 showed a 3.2x improvement in throughput, a 67% reduction in processing time, and 100% of packets processed. This is evidence that the way forward is hardware vDPA, which combines the out-of-the-box availability of VirtIO drivers with the performance gains of DPU hardware acceleration.

This session was presented simulive at NVIDIA GTC and can be replayed. For more information about the joint F5-NVIDIA solution, which demonstrates how to reduce CPU utilization while achieving high performance with VirtIO, see the GTC session titled Multi-cloud Security and Application Delivery with VirtIO.
