Elite Partner

NVIDIA Networking InfiniBand Solutions

NVIDIA InfiniBand Products

InfiniBand Switches

Looking for InfiniBand switches? Here you will find the right IB models for every requirement.

InfiniBand Adapters

We offer the right IB network adapters with speeds of up to 400 Gb/s.

Cables, Modules & Transceivers

For suitable IB and GbE cables, modules or transceivers, please visit the corresponding subpage.

Looking for Gigabit Ethernet solutions? Check out our selection of Nvidia GbE switches and adapters.

InfiniBand Switch Systems

NVIDIA's family of InfiniBand switches delivers the highest performance and port density, with complete fabric management solutions that enable compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The portfolio includes a broad range of edge and director switches supporting 40, 56, 100, 200 and 400Gb/s port speeds and ranging from 8 to 800 ports.

Edge Switches

The Edge family of switch systems provides the highest-performing fabric solutions in a 1U form factor, delivering up to 51Tb/s of non-blocking bandwidth with the lowest port-to-port latency. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium-sized clusters. Offered as externally managed or as managed systems, the edge switches are designed to build highly efficient switch fabrics through advanced InfiniBand switching technologies such as Adaptive Routing, Congestion Control and Quality of Service.

| | Nvidia MSB7880 | Nvidia MSB7890 |
|---|---|---|
| Switching Capacity | 7.2Tb/s | 7.2Tb/s |
| Link Speed | 100Gb/s | 100Gb/s |
| Interface Type | QSFP28 | QSFP28 |
| Management | Yes (internally) | No (only externally) |
| Management Ports | 1x RJ45, 1x RS232, 1x USB | 1x RJ45 |
| Power | 1+1 redundant and hot-swappable, 80 Gold+ and ENERGY STAR certified | 1+1 redundant and hot-swappable, 80 Gold+ and ENERGY STAR certified |
| System Memory | 4GB DDR3 RAM | 4GB DDR3 RAM |
| Storage | 16GB SSD | 16GB SSD |
| Cooling | Front-to-rear or rear-to-front (hot-swappable fan unit) | Front-to-rear or rear-to-front (hot-swappable fan unit) |

| | Nvidia MQM8700 | Nvidia MQM8790 | Nvidia MQM9700 | Nvidia MQM9790 |
|---|---|---|---|---|
| Switching Capacity | 16Tb/s | 16Tb/s | 51.2Tb/s | 51.2Tb/s |
| Link Speed | 200Gb/s | 200Gb/s | 400Gb/s | 400Gb/s |
| Management | Yes (internally) | No (only externally) | Yes (internally) | No (only externally) |
| Management Ports | 1x RJ45, 1x RS232, 1x micro USB | 1x RJ45, 1x RS232, 1x micro USB | 1x USB3.0, 1x USB for I2C, 1x RJ45, 1x RJ45 (UART) | 1x USB3.0, 1x USB for I2C, 1x RJ45, 1x RJ45 (UART) |
| System Memory | Single 8GB | Single 8GB | Single 8GB DDR4 SO-DIMM | Single 8GB DDR4 SO-DIMM |
| Storage | - | - | - | M.2 SSD SATA 16GB 2242 FF |

All models: 1+1 redundant and hot-swappable power supplies, 80 Gold+ and ENERGY STAR certified; front-to-rear or rear-to-front cooling (hot-swappable fan unit).
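The switching capacities listed above follow from a simple rule: a non-blocking switch forwards traffic on every port at line rate in both directions, so aggregate capacity is ports × port speed × 2. A minimal sketch of that arithmetic (the port counts below are assumptions for illustration and are not stated in the tables):

```python
# Aggregate switching capacity of a non-blocking switch:
# every port runs at line rate in both directions (full duplex),
# so capacity = ports * port_speed * 2.

def switching_capacity_tbps(ports: int, port_speed_gbps: int) -> float:
    """Full-duplex aggregate capacity in Tb/s."""
    return ports * port_speed_gbps * 2 / 1000

# Assumed port counts for illustration:
print(switching_capacity_tbps(36, 100))   # 7.2  -> matches the MSB78xx class
print(switching_capacity_tbps(40, 200))   # 16.0 -> matches the MQM87xx class
print(switching_capacity_tbps(64, 400))   # 51.2 -> matches the MQM97xx class
```

The same rule explains the director-class figures further down, e.g. 800 ports at 200Gb/s gives 320Tb/s.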

Modular Switches

Nvidia’s director-class modular switches provide the highest-density switching solution, scaling from 8.64Tb/s up to 320Tb/s of bandwidth in a single enclosure, with low latency and per-port speeds of up to 200Gb/s. Their smart design delivers unprecedented levels of performance and makes it easy to build clusters that scale out to thousands of nodes. The InfiniBand modular switches deliver the director-class availability required for mission-critical application environments. The leaf and spine blades and management modules, as well as the power supplies and fan units, are all hot-swappable to help eliminate downtime.

Managed director switches, 324 to 800 ports:

| | Nvidia CS7520 | Nvidia CS7510 | Nvidia CS7500 | Nvidia CS8500 |
|---|---|---|---|---|
| Switching Capacity | 43.2Tb/s | 64.8Tb/s | 130Tb/s | 320Tb/s |
| Link Speed | 100Gb/s | 100Gb/s | 100Gb/s | 200Gb/s |
| Interface Type | QSFP28 | QSFP28 | QSFP28 | QSFP28 |
| Management | 2048 nodes | 2048 nodes | 2048 nodes | 2048 nodes |
| Management HA | Yes | Yes | Yes | Yes |
| Console Cables | Yes | Yes | Yes | Yes |
| Spine Modules | 6 | 9 | 18 | 20 |
| Leaf Modules (max.) | 6 | 9 | 18 | 20 |
| PSU Redundancy | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) |
| Fan Redundancy | Yes | Yes | Yes | Water cooled |

Benefits of Nvidia Switch Systems

  • Built with Nvidia's 4th and 5th generation InfiniScale® and SwitchX™ switch silicon
  • Industry-leading energy efficiency, density, and cost savings
  • Ultra low latency
  • Real-Time Scalable Network Telemetry
  • Scalability and subnet isolation using InfiniBand routing and InfiniBand to Ethernet gateway capabilities
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

InfiniBand VPI Host-Channel Adapters

Nvidia continues to lead the way in providing InfiniBand Host Channel Adapters (HCA) - the most powerful interconnect solution for enterprise data centers, Web 2.0, cloud computing, high performance computing and embedded environments.

ConnectX-7 VPI Cards

As a key component of the NVIDIA® Quantum-2 InfiniBand platform, the NVIDIA ConnectX®-7 smart host channel adapter (HCA) provides the highest networking performance available to take on the world’s most challenging problems. The ConnectX-7 InfiniBand adapter provides ultra-low latency, 400Gb/s throughput, and innovative NVIDIA In-Network Computing engines to deliver the acceleration, scalability, and feature-rich technology needed for high performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data centers.
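To put 400Gb/s and the quoted message rate into perspective, here is a hedged back-of-the-envelope sketch (pure serialization time at line rate, ignoring protocol overhead and propagation latency):

```python
# Illustrative wire-time arithmetic for a 400Gb/s link (not a benchmark).

LINK_GBPS = 400  # ConnectX-7 line rate

def wire_time_s(payload_bytes: int, link_gbps: int = LINK_GBPS) -> float:
    """Serialization time of a payload at line rate, ignoring overhead."""
    return payload_bytes * 8 / (link_gbps * 1e9)

# Moving 1 GiB takes roughly 21 ms of pure wire time:
print(wire_time_s(2**30))   # ~0.0215 s

# For small messages the message rate dominates: at 330 million
# messages per second, one message is issued every ~3 nanoseconds.
print(1 / 330e6)            # ~3.0e-09 s
```

This is why both throughput and message rate are quoted for HPC adapters: large transfers are bandwidth-bound, while small-message workloads are message-rate-bound.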

| | MCX75310AAS-NEAT | MCX75310AAS-HEAT | MCX755106AS-HEAT | MCX75343AAS-NEAC | MCX753436AS-HEAB |
|---|---|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX®-7 | ConnectX®-7 | ConnectX®-7 | ConnectX®-7 | ConnectX®-7 |
| Form Factor | PCIe standup (HHHL) | PCIe standup (HHHL) | PCIe standup (HHHL) | OCP 3.0 (TSFF) | OCP 3.0 (SFF) |
| Connectors | 1x QSFP | 1x QSFP | 2x QSFP | 1x QSFP | 2x QSFP *5 |
| PCI | PCIe 4.0/5.0 | PCIe 4.0/5.0 | PCIe 4.0/5.0 | PCIe 4.0/5.0 | PCIe 4.0/5.0 |
| Lanes | x16 *1 *2 | x16 *3 *4 | x16, with option for extension | x16 | x16 |

All models: RDMA message rate of 330-370 million messages per second. Dimensions without bracket: 167.65mm x 68.90mm. All adapters are shipped with the tall bracket mounted and a short bracket as an accessory.

*1 PCIe 4.0/5.0 x16 with option for extension is available with model MCX75510AAS-NEAT
*2 PCIe 4.0/5.0 2x8 in a row is available with model MCX75210AAS-NEAT
*3 PCIe 4.0/5.0 x16 with option for extension is available with model MCX75510AAS-HEAT
*4 PCIe 4.0/5.0 2x8 in a row is available with model MCX75210AAS-HEAT
*5 This card supports one port of InfiniBand, and a second port as either InfiniBand or Ethernet

ConnectX-6 VPI Cards

ConnectX-6 with Virtual Protocol Interconnect® (VPI) supports two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600-nanosecond latency, and 200 million messages per second, providing the highest-performance and most flexible solution for the most demanding applications and markets. Delivering some of the highest throughput and message rates in the industry, with 200Gb/s HDR InfiniBand, 100Gb/s HDR100 InfiniBand and 200Gb/s Ethernet speeds, it is well placed to lead HPC data centers toward exascale levels of performance and scalability. Supported speeds are HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand as well as 200, 100, 50, 40, 25 and 10Gb/s Ethernet. All card speeds are backwards compatible.
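The generation names used here map to fixed per-lane signaling rates, and a standard 4x port aggregates four lanes (HDR100 runs HDR lanes in a 2x configuration). A small lookup sketch using the commonly quoted InfiniBand figures (general industry values, not taken from this page; FDR is rounded from ~14.06Gb/s per lane):

```python
# Commonly quoted InfiniBand per-lane data rates in Gb/s (post-encoding).
PER_LANE_GBPS = {
    "SDR": 2.5,
    "DDR": 5,
    "QDR": 10,
    "FDR": 14,   # ~14.06, commonly rounded
    "EDR": 25,
    "HDR": 50,
}

def port_speed_gbps(generation: str, lanes: int = 4) -> float:
    """Aggregate port speed for a generation and lane count (4x is standard)."""
    return PER_LANE_GBPS[generation] * lanes

print(port_speed_gbps("HDR"))      # 200.0  (4x HDR)
print(port_speed_gbps("HDR", 2))   # 100.0  (HDR100: HDR lanes in 2x)
print(port_speed_gbps("EDR"))      # 100.0
```

This also illustrates why HDR100 and EDR both read "100Gb/s" in the tables: they reach the same aggregate rate over different lane configurations.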

| | MCX654105A-HCAT | MCX654106A-HCAT | MCX653105A-HDAT | MCX653106A-HDAT | MCX683105AN-HDAT |
|---|---|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX®-6 | ConnectX®-6 | ConnectX®-6 | ConnectX®-6 | ConnectX®-6 DE |
| Speed | HDR IB (200Gb/s) & 200GbE, HDR100, EDR, FDR, QDR, DDR, SDR | HDR IB (200Gb/s) & 200GbE, HDR100, EDR, FDR, QDR, DDR, SDR | HDR IB (200Gb/s) & 200GbE, HDR100, EDR, FDR, QDR, DDR, SDR | HDR IB (200Gb/s) & 200GbE, HDR100, EDR, FDR, QDR, DDR, SDR | HDR IB (200Gb/s) |
| PCI | 2x PCIe 3.0 (Socket Direct) | 2x PCIe 3.0 (Socket Direct) | PCIe 3.0/4.0 | PCIe 3.0/4.0 | PCIe 3.0/4.0 |
| Bracket | tall bracket* | tall bracket* | tall bracket* | tall bracket* | tall bracket* |

All models: PCIe standup form factor; dimensions without bracket: 167.65mm x 68.90mm.
* Shipped with the tall bracket mounted and a short bracket as an accessory.
| | MCX654106A-ECAT | MCX653105A-ECAT | MCX653106A-ECAT |
|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX®-6 | ConnectX®-6 | ConnectX®-6 |
| Speed | HDR100 (100Gb/s), EDR IB & 100GbE, FDR, QDR, DDR, SDR | HDR100 (100Gb/s), EDR IB & 100GbE, FDR, QDR, DDR, SDR | HDR100 (100Gb/s), EDR IB & 100GbE, FDR, QDR, DDR, SDR |
| PCI | 2x PCIe 3.0 | PCIe 3.0/4.0 | PCIe 3.0/4.0 |
| Lanes | x16 | x16 (x8 *1) (2x8 *2) | x16 (2x8 *3) |

All models: PCIe standup form factor; dimensions without bracket: 167.65mm x 68.90mm.

*1 PCIe 4.0 x8 available with model MCX651105A-EDAT
*2 PCIe 3.0/4.0 x16 Socket Direct 2x8 in a row, available with model MCX653105A-EFAT
*3 PCIe 3.0/4.0 x16 Socket Direct 2x8 in a row, available with model MCX653106A-EFAT

| | MCX653435A-HDAI | MCX653436A-HDAI | MCX653435A-EDAI | MCX653435A-HDAE |
|---|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX®-6 | ConnectX®-6 | ConnectX®-6 | ConnectX®-6 |
| Speed | HDR IB (200Gb/s) & 200GbE | HDR IB (200Gb/s) & 200GbE | HDR100 (100Gb/s) & 100GbE | HDR IB (200Gb/s) & 200GbE |
| PCI | PCIe 3.0/4.0 | PCIe 3.0/4.0 | PCIe 3.0/4.0 | PCIe 3.0/4.0 |
| Form Factor | OCP 3.0 Small Form Factor | OCP 3.0 Small Form Factor | OCP 3.0 Small Form Factor | OCP 3.0 Small Form Factor |

ConnectX-5 VPI Cards

ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600ns latency and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing the highest-performance and most flexible solution for the most demanding applications and markets: machine learning, data analytics, and more.

| | MCX555A-ECAT | MCX556A-ECAT | MCX556A-EDAT | MCX545A-ECAN | MCX545M-ECAN |
|---|---|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX®-5 | ConnectX®-5 | ConnectX®-5 | ConnectX®-5 for OCP | ConnectX®-5 for OCP |
| Speed | EDR IB (100Gb/s) & 100GbE | EDR IB (100Gb/s) & 100GbE | EDR IB (100Gb/s) & 100GbE | EDR IB (100Gb/s) & 100GbE | EDR IB (100Gb/s) & 100GbE |
| PCI | PCIe 3.0 | PCIe 3.0 | PCIe 4.0 | PCIe 3.0 | 2x PCIe 3.0 |
| Bracket | tall bracket* | tall bracket* | tall bracket* | no bracket | no bracket |
| Dimensions | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | OCP 2.0 type 2** | OCP 2.0 type 2** |

* All tall-bracket adapters are shipped with the tall bracket mounted and a short bracket as an accessory.
** For more details, please refer to the Open Compute Project 2.0 specifications.
All listed cards are RoHS compliant.

The Strengths of Nvidia VPI Host-Channel Adapters

The benefits
  • World-class cluster performance
  • High-performance networking and storage access
  • Efficient use of compute resources
  • Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Increased VM per server ratio
  • Guaranteed bandwidth and low-latency services
  • Reliable transport
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Scalability to tens of thousands of nodes
Target Applications
  • High-performance parallelized computing
  • Data center virtualization
  • Public and private clouds
  • Large scale Web 2.0 and data analysis applications
  • Clustered database applications, parallel RDBMS queries, high-throughput data warehousing
  • Latency sensitive applications such as financial analysis and trading
  • Performance storage applications such as backup, restore, mirroring, etc.

NVIDIA DPU (Data Processing Unit)

The NVIDIA® BlueField® Data Processing Unit (DPU) is a system on a chip: a hardware accelerator specialized for complex tasks such as fast data processing and data-centric computing. Its primary purpose is to relieve the CPU of network and communication tasks, freeing CPU resources by offloading application-supporting work such as data transfer, data reduction, data security and analytics. A DPU is particularly well suited to supercomputing-class workloads such as AI, cloud and big data. A dedicated operating system on the chip runs alongside the host operating system and offers functions such as encryption, erasure coding, and compression/decompression.

BlueField®-2 IB DPUs:

| | MBF2M345A | MBF2H516A | MBF2H516C | MBF2M516A | MBF2M516C |
|---|---|---|---|---|---|
| Series / Core Speed | E-Series / 2.0GHz | P-Series / 2.75GHz | P-Series / 2.75GHz | E-Series / 2.0GHz | E-Series / 2.0GHz |
| Form Factor | Half-Height Half-Length (HHHL) | Full-Height Half-Length (FHHL) | Full-Height Half-Length (FHHL) | Full-Height Half-Length (FHHL) | Full-Height Half-Length (FHHL) |
| Ports | 1x QSFP56 | 2x QSFP56 | 2x QSFP56 | 2x QSFP56 | 2x QSFP56 |
| Speed | 200GbE / HDR | 100GbE / EDR / HDR100 | 100GbE / EDR / HDR100 | 100GbE / EDR / HDR100 | 100GbE / EDR / HDR100 |
| PCI | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 |
| On-board DDR | 16GB | 16GB | 16GB | 16GB | 16GB |
| On-board eMMC | 64GB | 64GB | 128GB | 64GB | 128GB |
| Secure Boot | - | (*1) | (*1) | (*1) | (*1) |
| Crypto | (*1) | (*1) | (*1) | (*1) | (*1) |
| Integrated BMC | - | - | Yes | Yes | Yes |
| External Power | - | - | - | Yes | Yes |

*1 Depending on the specific model postfix (see full list)
BlueField® IB DPUs (first generation):

| Model Name | Description | Crypto |
|---|---|---|
| MBF1L516A-ESCAT | BlueField® DPU VPI EDR IB (100Gb/s) and 100GbE, Dual-Port QSFP28, PCIe Gen4.0 x16, BlueField® G-Series 16 cores, 16GB on-board DDR, FHHL, Single Slot, Tall Bracket | Yes |
| MBF1L516A-ESNAT | BlueField® DPU VPI EDR IB (100Gb/s) and 100GbE, Dual-Port QSFP28, PCIe Gen4.0 x16, BlueField® G-Series 16 cores, 16GB on-board DDR, FHHL, Single Slot, Tall Bracket | - |