NVIDIA ConnectX SmartNICs
Designed to Address Modern Data Center Challenges

Performance and Versatility to Improve Data Center Efficiency and Scale
NVIDIA Mellanox ConnectX® SmartNICs utilize stateless offload engines, overlay networks, and native hardware support for RoCE and GPUDirect™ technologies to maximize application performance and data center efficiency. Developers can use ConnectX custom packet processing technologies to accelerate server-based networking functions and to offload datapath processing for compute-intensive workloads, including network virtualization, security, and storage functions.
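As an illustration of the stateless offloads mentioned above, on a Linux host the offloads a ConnectX port exposes can be inspected and toggled with the standard ethtool utility. This is a generic sketch, not ConnectX-specific documentation; the interface name enp1s0f0 is a placeholder for the actual port name on your system.

```shell
# List the current offload settings on the adapter port
# (enp1s0f0 is a placeholder interface name)
ethtool -k enp1s0f0

# Example: enable TCP segmentation offload and generic receive offload,
# moving per-packet segmentation/coalescing work from the CPU to the NIC
ethtool -K enp1s0f0 tso on gro on
```

Which offloads are available depends on the adapter model and driver; `ethtool -k` lists them all along with whether each can be changed.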
Accelerate Packet Processing and Latency-Sensitive Workloads

Advanced accelerators and network offloads can be implemented on the ConnectX SmartNIC, freeing expensive CPU cycles for user applications.
Accelerate the Delivery of New Services and Capabilities
ConnectX-6 200GbE Adapters
Enable New Workloads While Improving Efficiency
ConnectX-6 is the world's first 200Gb/s SmartNIC Ethernet adapter, offering industry-leading performance, smart offloads, and In-Network Computing that deliver the highest return on investment for Cloud, Web 2.0, Big Data, Storage, and Machine Learning applications.
ConnectX-6 EN provides two ports of 200Gb/s Ethernet connectivity and up to 215 million messages per second, enabling the highest-performance and most flexible solution for the most demanding data center applications.
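To put the two figures above in perspective, a quick back-of-the-envelope calculation (an illustration, not a vendor formula) shows how little line-rate budget each message gets when both ports run at the quoted message rate:

```python
# At 215 million messages/second across two 200 Gb/s ports,
# how many bits of aggregate line rate does each message get?
aggregate_bps = 2 * 200e9   # two 200 Gb/s ports, in bits per second
msg_rate = 215e6            # vendor-quoted messages per second

bits_per_msg = aggregate_bps / msg_rate
bytes_per_msg = bits_per_msg / 8
print(f"~{bits_per_msg:.0f} bits (~{bytes_per_msg:.0f} bytes) per message")
# → ~1860 bits (~233 bytes) per message
```

In other words, the quoted message rate corresponds to sustaining line rate with small packets of roughly 230 bytes, which is why high message rate matters for latency-sensitive, small-message workloads.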

ConnectX-5 100GbE Adapters

Scale to Meet the Demands of Your Network
Intelligent ConnectX-5 SmartNIC Ethernet adapters offer new acceleration engines that optimize the performance of Web 2.0, Cloud, Data Analytics, high-performance computing, and storage platforms.
ConnectX-5 supports two ports of 100Gb/s Ethernet connectivity and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing high-performance, cost-effective solutions for a wide range of applications and markets.
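As a sketch of how a host consumes an NVMe over Fabrics target over RDMA, the standard nvme-cli tool can discover and connect to a remote subsystem. The address, port, and subsystem NQN below are placeholders, not values from this document:

```shell
# Discover NVMe-oF subsystems exposed by a target over RDMA
# (192.168.1.10 and port 4420 are placeholder values)
nvme discover -t rdma -a 192.168.1.10 -s 4420

# Connect to a discovered subsystem; the NQN is a placeholder
nvme connect -t rdma -a 192.168.1.10 -s 4420 \
     -n nqn.2024-01.io.example:nvme-target

# The remote namespace then appears as a local block device, e.g. /dev/nvme1n1
nvme list
```

With the adapter's NVMe-oF target offload, the data path on the target side can be handled in the NIC rather than the host CPU; the host-side commands above are unchanged either way.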
Latest Product News

Designing an Efficient Scale-Out Deep Learning Cloud
The basic challenge in Deep Learning is that simply combining a large number of compute nodes into a cluster – and making them work together seamlessly – is not easy. In fact, if it is not done properly, performance can actually degrade as you increase the number of GPUs, and the cost can become unattractive.