Elite Partner

NVIDIA Networking InfiniBand Solutions

NVIDIA InfiniBand Products

InfiniBand Switches

Looking for InfiniBand Switches? Here you will find the right IB models for every requirement

InfiniBand Adapters

We offer you the right IB network adapters up to 400 Gb/s

Cables, Modules & Transceivers

For suitable IB and GbE cables, modules, or transceivers, please visit the corresponding subpage.

Looking for Gigabit Ethernet solutions? Then check out our selection of Nvidia GbE switches and adapters.

InfiniBand Switch Systems

NVIDIA's family of InfiniBand switches delivers the highest performance and port density, with complete fabric management solutions that enable compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The Nvidia portfolio includes a broad range of edge and director switches supporting port speeds of up to 800Gb/s and configurations ranging from 8 to 800 ports.

Quantum-X800 Switches

NVIDIA Quantum-X800 InfiniBand switches deliver 800 gigabits per second (Gb/s) of throughput, ultra-low latency, advanced NVIDIA In-Network Computing, and features that elevate overall application performance within high-performance computing (HPC) and AI data centers.

Quantum-X800 | Q3200-RA | Q3400-RA | Q3401-RD | Q3450-LD
Rack mount | 2U | 4U | 4U | 4U
Ports | 36 | 144 | 144 | 144
Speed | 800Gb/s | 800Gb/s | 800Gb/s | 800Gb/s
Performance | Two switches, each 28.8Tb/s throughput | 115.2Tb/s throughput | 115.2Tb/s throughput | 115.2Tb/s throughput
Switch radix | Two switches of 36 non-blocking 800Gb/s ports | 144 non-blocking 800Gb/s ports | 144 non-blocking 800Gb/s ports | 144 non-blocking 800Gb/s ports
Connectors and cabling | Two groups of 18 OSFP connectors | 72 OSFP connectors | 72 OSFP connectors | 144 MPO connectors
Management ports | Separate OSFP 400Gb/s InfiniBand in-band management port (UFM) on all models
Connectivity | Pluggable | Pluggable | Pluggable | MPO12 (optics only)
CPU | Intel CFL 4-core i3-8100H, 3GHz (all models)
Security | CPU/CPLD/switch IC based on IRoT | CPU/CPLD/switch IC based on IRoT | CPLD/switch IC based on IRoT | CPU/CPLD/switch IC based on IRoT
Software | NVOS (all models)
Cooling mechanism | Air cooled | Air cooled | Air cooled | Liquid cooled (85%), air cooled (15%)
EMC (emissions) | CE, FCC, VCCI, ICES, and RCM (all models)
Product safety compliant/certified | RoHS, CB, cTUVus, CE, and CU (all models)
Power feed | 200-240V AC | 200-240V AC | 48-54V DC | 48V DC
Warranty | One year (all models)

Edge Switches

The Edge family of switch systems provides the highest-performing fabric solutions in a 1U form factor, delivering up to 51.2Tb/s of non-blocking bandwidth with the lowest port-to-port latency. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium-sized clusters. Offered as externally managed or managed switches, they are designed to build the most efficient switch fabrics through advanced InfiniBand switching technologies such as Adaptive Routing, Congestion Control, and Quality of Service.


NVIDIA Switch-IB 2 Series | MSB7880 | MSB7890
Type | Router | Switch
SHARP | SHARP v1 | SHARP v1
Ports | 36 | 36
Height | 1U | 1U
Switching capacity | 7.2Tb/s | 7.2Tb/s
Link speed | 100Gb/s | 100Gb/s
Interface type | QSFP28 | QSFP28
Management | Yes (internally) | No (only externally)
Management ports | 1x RJ45, 1x RS232, 1x USB | 1x RJ45
Power | 1+1 redundant and hot-swappable, 80 PLUS Gold and ENERGY STAR certified
System memory | 4GB DDR3 RAM
Storage | 16GB SSD
Cooling | Front-to-rear or rear-to-front airflow (hot-swappable fan unit)

NVIDIA Quantum Series | MQM8700 | MQM8790 | MQM9700 | MQM9790
Series | Quantum | Quantum | Quantum-2 | Quantum-2
SHARP | SHARP v2 | SHARP v2 | SHARP v3 | SHARP v3
Ports | 40 | 40 | 64 | 64
Height | 1U | 1U | 1U | 1U
Switching capacity | 16Tb/s | 16Tb/s | 51.2Tb/s | 51.2Tb/s
Link speed | 200Gb/s | 200Gb/s | 400Gb/s | 400Gb/s
Interface type | QSFP56 | QSFP56 | OSFP | OSFP
Management | Yes (internally) | No (only externally) | Yes (internally) | No (only externally)
Management ports | Quantum: 1x RJ45, 1x RS232, 1x micro USB; Quantum-2: 1x USB 3.0, 1x USB for I2C, 1x RJ45, 1x RJ45 (UART)
Power | 1+1 redundant and hot-swappable, 80 PLUS Gold and ENERGY STAR certified
System memory | Quantum: single 8GB; Quantum-2: single 8GB DDR4 SO-DIMM
Storage | Quantum: -; Quantum-2: M.2 SSD SATA 16GB (2242 form factor)
Cooling | Front-to-rear or rear-to-front airflow (hot-swappable fan unit)

Modular Switches

Nvidia's director-class modular switches provide the highest-density switching solution, scaling from 8.64Tb/s up to 320Tb/s of bandwidth in a single enclosure, with low latency and the highest per-port speeds of up to 200Gb/s. Their smart design provides unprecedented levels of performance and makes it easy to build clusters that can scale out to thousands of nodes. The InfiniBand modular switches deliver the director-class availability required for mission-critical application environments. The leaf and spine blades and the management modules, as well as the power supplies and fan units, are all hot-swappable to help eliminate downtime.

Managed, 324-800 ports | CS7520 | CS7510 | CS7500 | CS8500
Ports | 216 | 324 | 648 | 324
Height | 12U | 16U | 28U | 29U
Switching capacity | 43.2Tb/s | 64.8Tb/s | 130Tb/s | 320Tb/s
Link speed | 100Gb/s | 100Gb/s | 100Gb/s | 200Gb/s
Interface type | QSFP28 | QSFP28 | QSFP28 | QSFP56
Management | 2048 nodes | 2048 nodes | 2048 nodes | 2048 nodes
Management HA | Yes | Yes | Yes | Yes
Console cables | Yes | Yes | Yes | Yes
Spine modules | 6 | 9 | 18 | 20
Leaf modules (max.) | 6 | 9 | 18 | 20
PSU redundancy | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N)
Fan redundancy | Yes | Yes | Yes | Water cooled

Benefits of Nvidia Switch Systems

  • Built with Nvidia's 4th and 5th generation InfiniScale® and SwitchX™ switch silicon
  • Industry-leading energy efficiency, density, and cost savings
  • Ultra-low latency
  • Real-Time Scalable Network Telemetry
  • Scalability and subnet isolation using InfiniBand routing and InfiniBand to Ethernet gateway capabilities
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

InfiniBand VPI Host-Channel Adapters

Nvidia continues to lead the way in providing InfiniBand Host Channel Adapters (HCAs) - the most powerful interconnect solution for enterprise data centers, Web 2.0, cloud computing, high-performance computing, and embedded environments.

ConnectX-7 SmartNIC Adapter Cards

Providing up to four ports of connectivity and 400Gb/s of throughput, the NVIDIA ConnectX-7 SmartNIC provides hardware-accelerated networking, storage, security, and manageability services at data center scale for cloud, telecommunications, AI, and enterprise workloads. ConnectX-7 empowers agile and high-performance networking solutions with features such as Accelerated Switching and Packet Processing (ASAP2), advanced RoCE, GPUDirect Storage, and in-line hardware acceleration for Transport Layer Security (TLS), IP Security (IPsec), and MAC Security (MACsec) encryption and decryption.
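
ConnectX adapters are programmed on the host through the standard RDMA verbs stack (rdma-core/libibverbs). As a rough illustration - a minimal sketch using only the generic verbs API, not an NVIDIA-specific sample, with the file name and output format chosen purely for this example - the following C program enumerates the RDMA-capable adapters visible to a host, such as ConnectX-7 SmartNICs, and prints a few of the attributes they report:

```c
/* list_hcas.c - minimal sketch: enumerate RDMA adapters with libibverbs.
 * Assumes rdma-core is installed; build with: gcc list_hcas.c -o list_hcas -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        struct ibv_port_attr port_attr;
        /* Query the device and its first port; real tools iterate all ports. */
        if (!ibv_query_device(ctx, &dev_attr) &&
            !ibv_query_port(ctx, 1, &port_attr)) {
            printf("%-16s ports=%d port1_state=%s max_qp=%d\n",
                   ibv_get_device_name(devices[i]),
                   (int)dev_attr.phys_port_cnt,
                   ibv_port_state_str(port_attr.state),
                   dev_attr.max_qp);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```

On a deployed system, the ibv_devinfo utility bundled with rdma-core reports the same information without any custom code.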

ConnectX-7 | MCX75310AAS-HEAT | MCX75310AAS-NEAT | MCX753436MC-HEAB / MCX753436MS-HEAB / MCX755106AC-HEAT / MCX755106AS-HEAT | MCX715105AS-WEAT | MCX755106AS-HEAT / MCX755106AC-HEAT | MCX75310AAC-NEAT / MCX75343AMC-NEAC / MCX75343AMS-NEAC
ASIC & PCI Dev ID | ConnectX®-7 (all models)
Speed | 200GbE / NDR200 | 400GbE / NDR | 200GbE / NDR200 | 400GbE / NDR | 200GbE / NDR200 | 400GbE / NDR
Technology | VPI *3 | VPI *2 | VPI | VPI | VPI | VPI
Ports | 1 | 1 | 2 | 1 | 2 | 1
Connectors | QSFP | QSFP | QSFP112 | OSFP112 | OSFP112 | OSFP
PCIe | PCIe 5.0 x16 (all models)
Secure Boot | ✔ (all models)
Crypto | - | - | (✔) *1 | - | (✔) *1 | (✔) *1
Form Factor | HHHL | HHHL | MCX753436: SFF; MCX755106: HHHL | HHHL | HHHL | MCX75310: HHHL; MCX75343: SFF

ConnectX-7 | MCX713104AC-ADAT / MCX713104AS-ADAT | MCX713114TC-GEAT | MCX75510AAS-HEAT | MCX753436MC-HEAB / MCX753436MS-HEAB | MCX75343AMC-NEAC / MCX75343AMS-NEAC
ASIC & PCI Dev ID | ConnectX®-7 (all models)
Speed | 50/25GbE | 50/25GbE | NDR200 | 200GbE / NDR200/HDR | 400GbE / NDR
Technology | Ethernet | Ethernet | IB | VPI | VPI
Ports | 4 | 4 | 1 | 2 | 1
Connectors | SFP56 | SFP56 | OSFP | QSFP112 | OSFP
PCIe | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 5.0 x16 | PCIe 5.0 x16 | PCIe 5.0 x16
Secure Boot | ✔ (all models)
Crypto | (✔) *1 | ✔ | - | (✔) *1 | (✔) *1
Form Factor | HHHL | FHHL | HHHL | SFF | TSFF
*1 Crypto is only enabled on the following OPN models: MCX753436MC-HEAB, MCX755106AC-HEAT, MCX75310AAC-NEAT, MCX75343AMC-NEAC, MCX713104AC-ADAT
HHHL (Tall Bracket) = 6.6" x 2.71" (167.65mm x 68.90mm); FHHL = 4.53" x 6.6" (115.15mm x 167.65mm); TSFF = Tall Small Form Factor; OCP 3.0 SFF (Thumbscrew Bracket) = 4.52" x 2.99" (115.00mm x 76.00mm)
*2The MCX75310AAS-NEAT card supports InfiniBand and Ethernet protocols from hardware version AA and higher.
*3The MCX75310AAS-HEAT card supports InfiniBand and Ethernet protocols from hardware version A7 and higher.

ConnectX-6 VPI Adapter Cards

ConnectX-6 with Virtual Protocol Interconnect® (VPI) supports two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600-nanosecond latency, and 200 million messages per second, providing the highest-performance and most flexible solution for the most demanding applications and markets. Delivering some of the highest throughput and message rates in the industry with 200Gb/s HDR InfiniBand, 100Gb/s HDR100 InfiniBand, and 200Gb/s Ethernet speeds, it is the ideal product to lead HPC data centers toward exascale levels of performance and scalability. Supported speeds are HDR, HDR100, EDR, FDR, QDR, DDR, and SDR InfiniBand as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet. All card speeds are backwards compatible.
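
Because each port of a VPI card can run either the InfiniBand or the Ethernet personality, software sometimes needs to check which link layer a port is currently using. Below is a minimal sketch (again using the generic rdma-core/libibverbs API rather than any NVIDIA-specific tooling; the file name is illustrative) that reports the active link layer and port state for every port of every adapter in the system:

```c
/* vpi_ports.c - minimal sketch: report the link layer (InfiniBand vs. Ethernet)
 * of each adapter port. Assumes rdma-core; build with: gcc vpi_ports.c -o vpi_ports -libverbs */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

static const char *link_layer_name(uint8_t layer)
{
    switch (layer) {
    case IBV_LINK_LAYER_INFINIBAND: return "InfiniBand";
    case IBV_LINK_LAYER_ETHERNET:   return "Ethernet";
    default:                        return "unspecified";
    }
}

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (!ibv_query_device(ctx, &dev_attr)) {
            /* Verbs port numbers start at 1. */
            for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr))
                    continue;
                printf("%s port %d: link layer %s, state %s\n",
                       ibv_get_device_name(devices[i]), port,
                       link_layer_name(port_attr.link_layer),
                       ibv_port_state_str(port_attr.state));
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```

Changing the port personality itself is not done through verbs; it is typically configured with NVIDIA's firmware configuration tooling (e.g., mlxconfig) before the card is used.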

ConnectX-6 | MCX653436A-HDAB *1 | MCX653105A-HDAT *1 | MCX653106A-HDAT *1
ASIC & PCI Dev ID | ConnectX®-6 (all models)
Technology | VPI | VPI | VPI
Speed | HDR / 200GbE | HDR / 200GbE | HDR / 200GbE
Ports | 2 | 1 | 2
Connectors | QSFP56 | QSFP56 | QSFP56
PCI | PCIe 4.0 | PCIe 4.0 | PCIe 4.0
Lanes | x16 | x16 | x16
Crypto | - | - | -
Form Factor | PCIe stand-up, tall bracket; w/o bracket: 167.65mm x 68.90mm
*1 Will be EOL (end of life) on 10/31/2025.
ConnectX-6 | MCX651105A-EDAT | MCX653105A-ECAT | MCX653106A-ECAT
ASIC & PCI Dev ID | ConnectX®-6 (all models)
Technology | VPI | VPI | VPI
Speed | HDR100/EDR IB / 100GbE | HDR100/EDR IB / 100GbE | HDR100/EDR IB / 100GbE
Ports | 1 | 1 | 2
Connectors | QSFP56 | QSFP56 | QSFP56
PCI | PCIe 4.0 | PCIe 3.0/4.0 | PCIe 3.0/4.0
Lanes | x8 | x16 | x16
Crypto | - | - | -
Form Factor | PCIe stand-up, tall bracket; w/o bracket: 167.65mm x 68.90mm

The Strengths of Mellanox VPI Host-Channel Adapters

The benefits
  • World-class cluster performance
  • High-performance networking and storage access
  • Efficient use of compute resources
  • Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Increased VM per server ratio
  • Guaranteed bandwidth and low-latency services
  • Reliable transport
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Scalability to tens-of-thousands of nodes
Target Applications
  • High-performance parallelized computing
  • Data center virtualization
  • Public and private clouds
  • Large scale Web 2.0 and data analysis applications
  • Clustered database applications, parallel RDBMS queries, high-throughput data warehousing
  • Latency sensitive applications such as financial analysis and trading
  • Performance storage applications such as backup, restore, mirroring, etc.

NVIDIA DPU (Data Processing Unit)

The NVIDIA® BlueField® Data Processing Unit (DPU) is a system on a chip: a hardware accelerator specialized for fast data processing and data-centric computing. It is primarily intended to relieve the host CPU of network and communication tasks, freeing CPU resources by offloading application-supporting functions such as data transfer, data reduction, data security, and analytics. A DPU is particularly well suited to supercomputing-class workloads such as AI, cloud, and big data. A dedicated operating system on the chip runs alongside the host operating system and provides functions such as encryption, erasure coding, and compression/decompression.

BlueField®-2 IB DPUs | MBF2M345A | MBF2H516A | MBF2H516C | MBF2M516A | MBF2M516C
Series / Core Speed | E-Series / 2.0GHz | P-Series / 2.75GHz | E-Series / 2.0GHz
Form Factor | Half-Height Half-Length (HHHL) | Full-Height Half-Length (FHHL)
Ports | 1x QSFP56 | 2x QSFP56 | 2x QSFP56 | 2x QSFP56 | 2x QSFP56
Speed | 200GbE / HDR | 100GbE / EDR / HDR100
PCI | PCIe 4.0 x16 (all models)
On-board DDR | 16GB (all models)
On-board eMMC | 64GB | 64GB | 128GB | 64GB | 128GB
Secure Boot | ✓ | - | ✓ | ✓ (*1) | ✓
Crypto | ✓ (*1) | ✓ (*1) | ✓ (*1) | ✓ (*1) | ✓ (*1)
1GbE OOB | ✓ | ✓ | ✓ | ✓ | ✓
Integrated BMC | - | - | ✓ | ✓ | ✓
External Power | - | ✓ | ✓ | - | -
PPS IN/OUT | - | - | - | ✓ | ✓
*1 Depending on the specific model-name suffix (see full list).
BlueField® IB DPUs (first generation)
Model Name | Description | Crypto
MBF1L516A-ESCAT | BlueField® DPU VPI EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe Gen4.0 x16, BlueField® G-Series 16 cores, 16GB on-board DDR, FHHL, single slot, tall bracket | ✓
MBF1L516A-ESNAT | BlueField® DPU VPI EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe Gen4.0 x16, BlueField® G-Series 16 cores, 16GB on-board DDR, FHHL, single slot, tall bracket | -