Networking / Communications

NVIDIA GTC: Top Sessions for Optimizing Performance and Securing Network Infrastructure


Mark your calendars for November 8–11, 2021, and get ready to build on the knowledge you gained from our spring GTC conference. With so many insights to gain from breakout sessions, panel talks, and the latest technical content geared toward data center infrastructure topics, we thought we'd point out a few top sessions to ensure you don't miss them.

Figure 1. GTC data center sessions are packed with networking and virtualization content covering accelerated computing, CPU efficiency, and network security.

5 Things CIOs Need to Know About DPUs 

Data Processing Units (DPUs) deliver important capabilities to accelerate and secure modern workloads, but they have left many CIOs scratching their heads over when, how, and why to deploy these powerful new components in their data centers. This session details the critical problems that DPUs solve and reveals how they actually reduce complexity while simultaneously improving the flexibility, security, performance, and scalability of modern distributed workloads. Covering the major functions of the DPU, the material will inform CIOs on how to cope with the risks and complexity caused by remote work, virtualization, containerization, digital transformation, and software-defined data centers. DPUs offload, accelerate, and isolate critical network, storage, and security infrastructure, solving the five most important problems facing CIOs and system architects today as they build state-of-the-art software-defined, hardware-accelerated data centers.

Kevin Deierling, Senior Vice President of Marketing, NVIDIA

Powering Unprecedented Data Center Transformation with DPUs 

Cloud computing and AI are driving fundamental changes in the architecture of modern data centers. Just as GPU-accelerated solutions have transformed data science and AI, DPUs are now transforming the data center stack, so it can run the next wave of applications. This session will introduce the key pillars of this transformation journey and explore some of the prominent, real-world applications for DPUs today.

Yael Shenhav, Vice President, Ethernet NIC and DPU Business, NVIDIA

Universal Architecture for Efficient AI Scaling from Core to Edge

The modern data center needs a new universal architecture that supports AI efficiently for both core and edge deployments. Traditional data centers have many silos, with multiple server, network, and storage configurations to support different application workloads, including database, product design, cloud infrastructure, AI, HPC, and big data. Within the increasingly important AI realm, silos also appear to manage different AI and machine learning workloads. The siloed approach is performance-efficient for the workload each silo was originally designed for, but it sacrifices flexibility and operational efficiency as soon as workloads change. This session will discuss how to design a universal AI data center architecture that is composable, flexible, scalable, and efficient, and that can be reconfigured instantly to handle different types of AI, including training, inference, conversational AI, and recommender systems. Learn how deploying the proper server designs, accelerator chips, AI models, and networking products creates a data center that can handle any AI workload efficiently at any location: core data center, cloud, or edge.

Michael Kagan, Chief Technology Officer, NVIDIA

Chris Lamb, VP, GPU Computing Software Platforms, NVIDIA

Programming the Data Center of the Future Today with NVIDIA DOCA

Data center builders and innovators are challenged to meet increasing demands for scale and performance, all while protecting data confidentiality. To address these challenges, the NVIDIA DOCA framework provides a comprehensive SDK for programming the NVIDIA BlueField DPU and a runtime package of accelerated data center services. We'll cover DOCA features and services for developers and IT operations.

Ariel Kit, Director of Product, DPU, NVIDIA

Pete Lumbis, Director of DPU and DOCA Technical Marketing, NVIDIA

DPU-Based Acceleration for Enterprise Next-Generation Firewalls – Palo Alto Networks

To keep up with the explosion of data, data centers are deploying high-speed networks from 25G to 100G. As network speeds increase, security functions such as next-generation firewalls (NGFWs) need to keep up with higher traffic loads. Software-defined NGFWs offer the flexibility and agility to build modern data centers; however, scaling them for performance, efficiency, and economics has been a challenge. The need for a software-defined, hardware-accelerated NGFW is more pressing than ever. NVIDIA and Palo Alto Networks have developed a DPU-accelerated NGFW solution for enterprise and cloud-scale data centers. In this session, they'll jointly present the PAN VM-Series NGFW solution, discuss availability and customer trials, and show a live demonstration of the PAN NGFW using the BlueField-2 DPU's rich set of network offloads to improve performance by up to 5x while addressing the security needs of data centers and enterprise deployments (a simplified sketch of this offload pattern appears below).

Mukesh Gupta, Vice President of Product Management, Palo Alto Networks

John McDowall, Senior Distinguished Engineer, Palo Alto Networks

Ash Bhalgat, Senior Director of Cloud, Telco and Cybersecurity Market Development, NVIDIA
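
To make the offload idea concrete, here is a minimal, purely conceptual Python sketch of the general pattern a DPU-accelerated firewall follows: the first packets of a flow take the slow path through the NGFW software, and once the flow is classified as benign, its remaining packets are matched by a hardware flow table on the DPU and bypass the host CPU. All names here are hypothetical; this is not the Palo Alto Networks VM-Series or NVIDIA DOCA API.

# Conceptual sketch only: class and function names are hypothetical,
# not the Palo Alto Networks VM-Series or NVIDIA DOCA APIs.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    src: str
    dst: str
    dport: int
    proto: str

class DpuFlowTable:
    """Stand-in for a hardware flow table programmed on the DPU."""
    def __init__(self):
        self._offloaded = set()

    def offload(self, key):
        # A real implementation would program a hardware match/action rule here.
        self._offloaded.add(key)

    def handles(self, key):
        return key in self._offloaded

def process_packet(pkt, flow_table, inspect):
    """First packets of a flow take the slow path through NGFW software;
    once the flow is allowed, later packets hit the DPU fast path."""
    key = FlowKey(pkt["src"], pkt["dst"], pkt["dport"], pkt["proto"])
    if flow_table.handles(key):
        return "forwarded-by-dpu"      # fast path: no host CPU involvement
    verdict = inspect(pkt)             # slow path: full NGFW inspection
    if verdict == "allow":
        flow_table.offload(key)        # accelerate the rest of this flow
    return verdict

In this toy model, only the first packets of each allowed flow pay the software inspection cost, which is the intuition behind the performance gains claimed for offloaded traffic.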

Redefining Cybersecurity at the Distributed Cloud Edge with AI and Real-time Telemetry

Modern applications are built in a highly distributed manner, with each service or microservice having multiple instances for scale-out, so there is no single point from which to observe all data. With the vast number of distributed applications, it's challenging to provide a holistic view and detect threats. Solving this requires a robust and highly scalable telemetry collection and analytics strategy that is agnostic to the source and location of the data. The NVIDIA Morpheus AI framework offers pretrained AI models that provide powerful tools to simplify workflows and help detect and mitigate security threats. Coupled with NVIDIA BlueField-2 DPUs and certified NVIDIA EGX servers, F5 is able to accelerate cybersecurity through real-time telemetry and AI-powered analytics for applications distributed across the cloud and the edge. In this session, F5 presents how their Shape app security product portfolio and distributed cloud will use AI-based preprocessing of telemetry data to optimize the security and delivery of applications (a rough illustration of that preprocessing step follows below).

Renuka Nadkarni, VP and CTO of Security, F5

Ken Arora, System Architect and Developer, F5

Ash Bhalgat, Senior Director of Cloud, Telco and Cybersecurity Market Development, NVIDIA
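
As a rough, self-contained illustration of the telemetry preprocessing idea described above, the sketch below normalizes raw telemetry records into fixed-length feature vectors and flags statistical outliers. It does not use the Morpheus or F5 APIs; every name is hypothetical, and the simple median-based rule stands in for a pretrained anomaly-detection model.

# Illustrative only: stands in for AI-based preprocessing of telemetry,
# not the NVIDIA Morpheus or F5 APIs; the heuristic is a placeholder
# for a pretrained model.
import json
import statistics

def featurize(record):
    """Normalize one raw telemetry record into a fixed-length feature vector."""
    return [
        float(record.get("bytes_sent", 0)),
        float(record.get("requests_per_sec", 0)),
        float(record.get("distinct_ips", 0)),
    ]

def flag_outliers(vectors, multiplier=10.0):
    """Flag records where any feature exceeds `multiplier` times its column
    median; a pretrained anomaly model would replace this simple rule."""
    columns = list(zip(*vectors))
    medians = [statistics.median(col) or 1.0 for col in columns]
    return [i for i, vec in enumerate(vectors)
            if any(v > multiplier * m for v, m in zip(vec, medians))]

if __name__ == "__main__":
    raw = ('[{"bytes_sent": 1200, "requests_per_sec": 4, "distinct_ips": 2}, '
           '{"bytes_sent": 900, "requests_per_sec": 5, "distinct_ips": 1}, '
           '{"bytes_sent": 9000000, "requests_per_sec": 800, "distinct_ips": 300}]')
    vectors = [featurize(r) for r in json.loads(raw)]
    print("suspicious record indices:", flag_outliers(vectors))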

Check Point and NVIDIA Deliver Firewall Security at the Speed of the Network

Enterprises need network security that performs at the speed of business: securely transferring hundreds of terabytes of data in minutes, providing low latency for high-frequency financial transactions, and scaling security on demand to support hypergrowth businesses like online commerce. Learn how Check Point and NVIDIA partnered to deliver firewall security at the speed of the network, creating the industry's fastest cybersecurity solution with up to 3 Tbps of throughput. Secure 400 TB file transfers that used to take hours can now be completed in minutes (a quick back-of-the-envelope calculation below shows why). Financial institutions can now securely process millions of high-frequency trades with microsecond latency. We'll review real customer use cases across industries, including how they benefited from a tenfold increase in firewall security performance.

Gera Dorfman, VP Network Security Products, Check Point Software Technologies
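
The "hours to minutes" claim is consistent with simple arithmetic: at the quoted 3 Tbps of inspected throughput, moving 400 TB takes roughly 18 minutes, assuming the firewall is driven at full line rate and ignoring protocol overhead. The short Python sketch below just spells out that conversion.

# Back-of-the-envelope only: assumes the quoted 3 Tbps is fully utilized
# and ignores protocol and storage overhead.
DATA_TERABYTES = 400
THROUGHPUT_TBPS = 3                          # terabits per second

data_terabits = DATA_TERABYTES * 8           # 400 TB = 3,200 terabits
seconds = data_terabits / THROUGHPUT_TBPS    # ~1,067 seconds
print(f"~{seconds / 60:.0f} minutes")        # about 18 minutes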

InfiniBand HPC Cloud Best Practices: Implementing Performance Isolation on Microsoft Azure

High-performance computing and artificial intelligence have evolved into the primary data processing engines for wide commercial use. HPC clouds host a growing number of users and applications, and therefore need to carefully manage network resources and provide performance isolation between workloads. The Microsoft Azure HPC cloud leverages NVIDIA InfiniBand In-Network Computing acceleration engines and enhanced congestion control mechanisms to deliver bare-metal HPC and AI performance while eliminating network hot spots and maintaining highly efficient network performance. We'll explore best practices for optimizing network activity and supporting a variety of applications and users on the same network.

Gilad Shainer, SVP Networking, NVIDIA

Jithin Jose, Principal Software Engineer, Microsoft

This is only a small sampling of the sessions available, and I encourage you to visit the GTC website to register for free and explore the full agenda in the GTC session catalog. And be sure not to miss the always inspiring and insightful keynote from our founder and CEO, Jensen Huang.

