Elite Partner

NVIDIA Networking InfiniBand Solutions

NVIDIA Mellanox InfiniBand Products

Please select a category for further product selection.

InfiniBand Switches

If you are looking for InfiniBand switches, please visit the NVIDIA InfiniBand switch subpage.

InfiniBand Adapters

If you are looking for InfiniBand adapters, please visit the NVIDIA adapter subpage.

Cables, Modules & Transceivers

For suitable InfiniBand and GbE cables, modules, or transceivers, please visit the corresponding subpage.

Looking for Gigabit Ethernet solutions? Then check out our selection of NVIDIA GbE switches and adapters.

InfiniBand Switch Systems

NVIDIA's Mellanox family of InfiniBand switches delivers the highest performance and port density, with complete fabric management solutions that enable compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The Mellanox portfolio includes a broad range of Edge and Director switches supporting 40, 56, 100, 200, and 400Gb/s port speeds and ranging from 8 to 800 ports.

Edge Switches

The Edge family of switch systems provides the highest-performing fabric solutions in a 1U form factor, delivering up to 51.2Tb/s of non-blocking bandwidth with the lowest port-to-port latency. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium-sized clusters. Offered as externally managed or as managed systems, the edge switches are designed to build highly efficient switch fabrics through advanced InfiniBand switching technologies such as Adaptive Routing, Congestion Control, and Quality of Service.
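For cluster sizing, the maximum non-blocking cluster that a family of fixed-radix edge switches can support follows from fat-tree topology math. A minimal sketch of the standard two-tier model (a simplification that ignores rails, oversubscription, and cabling constraints):

```python
def max_hosts_two_tier(radix):
    """Maximum hosts in a non-blocking two-tier fat tree of fixed-radix switches.

    Each leaf switch dedicates half its ports to hosts and half to spines;
    each spine connects one port to every leaf, so there are `radix` leaves.
    """
    leaves = radix
    hosts_per_leaf = radix // 2
    return leaves * hosts_per_leaf  # = radix**2 / 2

# 36-port Switch-IB 2 edge switches -> 648 hosts
print(max_hosts_two_tier(36))  # 648
# 40-port Quantum -> 800 hosts; 64-port Quantum-2 -> 2048 hosts
print(max_hosts_two_tier(40), max_hosts_two_tier(64))  # 800 2048
```

Note how these figures line up with the director-switch port counts further down the page (648 ports for the CS7500, 800 for the CS8500), which fold the same two-tier topology into a single chassis.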


NVIDIA SWITCH-IB 2 SERIES

Model Name         | MSB7800 | MSB7880 | MSB7890
Type               | Switch | Router | Switch
SHARP              | SHARP v1 (all models)
Ports              | 36 (all models)
Height             | 1U (all models)
Switching Capacity | 7.2Tb/s (all models)
Link Speed         | 100Gb/s (all models)
Interface Type     | QSFP28 (all models)
Management         | Yes (internal) | Yes (internal) | No (external only)
Management Ports   | 1x RJ45, 1x RS232, 1x USB (MSB7800, MSB7880); 1x RJ45 (MSB7890)
Power              | 1+1 redundant and hot-swappable; 80 PLUS Gold and ENERGY STAR certified (all models)
System Memory      | 4GB DDR3 RAM (all models)
Storage            | 16GB SSD (all models)
Cooling            | Front-to-rear or rear-to-front airflow, hot-swappable fan unit (all models)

NVIDIA QUANTUM SERIES

Model Name         | MQM8700 | MQM8790 | MQM9700 | MQM9790
Series             | Quantum | Quantum | Quantum-2 | Quantum-2
SHARP              | SHARP v2 | SHARP v2 | SHARP v3 | SHARP v3
Ports              | 40 | 40 | 64 | 64
Height             | 1U (all models)
Switching Capacity | 16Tb/s | 16Tb/s | 51.2Tb/s | 51.2Tb/s
Link Speed         | 200Gb/s | 200Gb/s | 400Gb/s | 400Gb/s
Interface Type     | QSFP56 | QSFP56 | OSFP | OSFP
Management         | Yes (internal) | No (external only) | Yes (internal) | No (external only)
Management Ports   | 1x RJ45, 1x RS232, 1x micro USB (MQM8700); 1x USB 3.0, 1x USB for I2C, 1x RJ45, 1x RJ45 (UART) (MQM9700)
Power              | 1+1 redundant and hot-swappable; 80 PLUS Gold and ENERGY STAR certified (all models)
System Memory      | Single 8GB (Quantum); single 8GB DDR4 SO-DIMM (Quantum-2)
Storage            | - (Quantum); M.2 SATA SSD, 16GB, 2242 form factor (Quantum-2)
Cooling            | Front-to-rear or rear-to-front airflow, hot-swappable fan unit (all models)
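The switching-capacity figures in these spec tables follow directly from port count and link speed: aggregate capacity counts each port's full-duplex bandwidth in both directions. A quick sanity check of the spec-sheet numbers:

```python
def switching_capacity_tbps(ports, link_speed_gbps):
    """Aggregate switching capacity in Tb/s: ports x link speed x 2 (full duplex)."""
    return ports * link_speed_gbps * 2 / 1000

# Switch-IB 2 (36 x 100Gb/s), Quantum (40 x 200Gb/s), Quantum-2 (64 x 400Gb/s)
for ports, speed in [(36, 100), (40, 200), (64, 400)]:
    print(ports, speed, switching_capacity_tbps(ports, speed))
# -> 7.2, 16.0 and 51.2 Tb/s, matching the tables above
```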

Modular Switches

Mellanox director-class modular switches provide the highest-density switching solution, scaling from 8.64Tb/s up to 320Tb/s of bandwidth in a single enclosure, with low latency and per-port speeds of up to 200Gb/s. Their smart design provides unprecedented levels of performance and makes it easy to build clusters that scale out to thousands of nodes. The InfiniBand modular switches deliver the director-class availability required for mission-critical application environments: the leaf and spine blades, management modules, power supplies, and fan units are all hot-swappable, helping to eliminate downtime.

Managed Director Switches, 216 - 800 ports

Model Name          | CS7520 | CS7510 | CS7500 | CS8500
Ports               | 216 | 324 | 648 | 800
Height              | 12U | 16U | 28U | 29U
Switching Capacity  | 43.2Tb/s | 64.8Tb/s | 130Tb/s | 320Tb/s
Link Speed          | 100Gb/s | 100Gb/s | 100Gb/s | 200Gb/s
Interface Type      | QSFP28 (all models)
Management          | Up to 2048 nodes (all models)
Management HA       | Yes (all models)
Console Cables      | Yes (all models)
Spine Modules       | 6 | 9 | 18 | 20
Leaf Modules (max.) | 6 | 9 | 18 | 20
PSU Redundancy      | Yes (N+N) (all models)
Fan Redundancy      | Yes | Yes | Yes | Water cooled

Benefits of NVIDIA Mellanox Switch Systems

  • Built with Mellanox's latest generations of switch silicon (Switch-IB® 2, Quantum™ and Quantum™-2)
  • Industry-leading energy efficiency, density, and cost savings
  • Ultra low latency
  • Real-Time Scalable Network Telemetry
  • Scalability and subnet isolation using InfiniBand routing and InfiniBand to Ethernet gateway capabilities
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

InfiniBand VPI Host-Channel Adapters

NVIDIA Mellanox continues to lead the way in providing InfiniBand Host Channel Adapters (HCAs) - the most powerful interconnect solution for enterprise data centers, Web 2.0, cloud computing, high-performance computing, and embedded environments.
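On a Linux host, installed HCAs and their negotiated link rates can be inspected through the sysfs interface exposed by the InfiniBand drivers (`/sys/class/infiniband`). A minimal sketch; device names and port numbers vary per system, and the function simply returns an empty list on hosts without InfiniBand hardware:

```python
import os

def list_ib_devices(sysfs_root="/sys/class/infiniband"):
    """Return [(device, port, rate), ...] for InfiniBand HCAs visible in sysfs.

    Returns an empty list on hosts without InfiniBand hardware or drivers.
    """
    results = []
    if not os.path.isdir(sysfs_root):
        return results
    for dev in sorted(os.listdir(sysfs_root)):
        ports_dir = os.path.join(sysfs_root, dev, "ports")
        if not os.path.isdir(ports_dir):
            continue
        for port in sorted(os.listdir(ports_dir)):
            try:
                with open(os.path.join(ports_dir, port, "rate")) as f:
                    rate = f.read().strip()
            except OSError:
                rate = "unknown"
            results.append((dev, port, rate))
    return results
```

The same information is available from the `ibstat` and `ibv_devinfo` command-line tools shipped with the rdma-core package.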

ConnectX-7 VPI Cards

As a key component of the NVIDIA® Quantum-2 InfiniBand platform, the NVIDIA ConnectX®-7 smart host channel adapter (HCA) provides the highest networking performance available to take on the world’s most challenging problems. The ConnectX-7 InfiniBand adapter provides ultra-low latency, 400Gb/s throughput, and innovative NVIDIA In-Network Computing engines to deliver the acceleration, scalability, and feature-rich technology needed for high performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data centers.

ConnectX-7 (400 Gb/s)

Model Name        | MCX75310AAS-NEAT | MCX75310AAS-HEAT | MCX755106AS-HEAT | MCX75343AAS-NEAC | MCX753436AS-HEAB
ASIC & PCI Dev ID | ConnectX®-7 (all models)
Form Factor       | PCIe stand-up (HHHL) (MCX75310AAS-NEAT/-HEAT, MCX755106AS-HEAT); OCP 3.0 (TSFF) (MCX75343AAS-NEAC); OCP 3.0 (SFF) (MCX753436AS-HEAB)
Ports             | 1 | 1 | 2 | 1 | 2
Speed             | 400Gb/s | 200Gb/s | 200Gb/s | 400Gb/s | 200Gb/s
Connectors        | 1x QSFP | 1x QSFP | 2x QSFP | 1x QSFP | 2x QSFP *5
PCI               | PCIe 4.0/5.0 (all models)
Lanes             | x16 *1 *2 | x16 *3 *4 | x16 with option for extension | x16 | x16
RDMA Message Rate | 330-370 million messages per second (all models)
Dimensions        | Without bracket: 167.65mm x 68.90mm. All adapters are shipped with the tall bracket mounted and a short bracket as an accessory.

*1 PCIe 4.0/5.0 x16 with option for extension is available with model MCX75510AAS-NEAT
*2 PCIe 4.0/5.0 2x8 in a row is available with model MCX75210AAS-NEAT
*3 PCIe 4.0/5.0 x16 with option for extension is available with model MCX75510AAS-HEAT
*4 PCIe 4.0/5.0 2x8 in a row is available with model MCX75210AAS-HEAT
*5 This card supports one port of InfiniBand, plus a second port configurable as either InfiniBand or Ethernet

ConnectX-6 VPI Cards

ConnectX-6 with Virtual Protocol Interconnect® (VPI) supports two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600-nanosecond latency, and 200 million messages per second, providing the highest-performance and most flexible solution for the most demanding applications and markets. Delivering among the highest throughputs and message rates in the industry, with 200Gb/s HDR InfiniBand, 100Gb/s HDR100 InfiniBand, and 200Gb/s Ethernet speeds, it is well positioned to lead HPC data centers toward exascale levels of performance and scalability. Supported speeds are HDR, HDR100, EDR, FDR, QDR, DDR, and SDR InfiniBand as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet; all cards are backwards compatible with the slower speeds.
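The InfiniBand speed grades listed here differ in their per-lane signaling rate; a standard 4x port aggregates four lanes, and HDR100 runs an HDR port as two 2x links. A rough reference sketch using the nominal marketed data rates (encoding overhead ignored):

```python
# Nominal per-lane data rates in Gb/s for the standard InfiniBand speed grades.
LANE_RATE_GBPS = {
    "SDR": 2.5, "DDR": 5, "QDR": 10, "FDR": 14.0625, "EDR": 25, "HDR": 50,
}

def port_rate(speed, lanes=4):
    """Aggregate nominal data rate of an InfiniBand port with the given lane count."""
    return LANE_RATE_GBPS[speed] * lanes

print(port_rate("HDR"))      # 200 (Gb/s)
print(port_rate("HDR", 2))   # 100 -> marketed as HDR100
print(port_rate("EDR"))      # 100
```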

ConnectX-6 (200 Gb/s)

Model Name        | MCX654105A-HCAT | MCX654106A-HCAT | MCX653105A-HDAT | MCX653106A-HDAT | MCX683105AN-HDAT
ASIC & PCI Dev ID | ConnectX®-6 (first four models); ConnectX®-6 DE (MCX683105AN-HDAT)
Speed             | HDR IB (200Gb/s) & 200GbE, HDR100, EDR, FDR, QDR, DDR, SDR (first four models); HDR IB (200Gb/s) (MCX683105AN-HDAT)
Ports             | 1 | 2 | 1 | 2 | 1
Connectors        | QSFP56 | QSFP56 | QSFP56 | QSFP56 | QSFP
PCI               | 2x PCIe 3.0 (Socket Direct) | 2x PCIe 3.0 (Socket Direct) | PCIe 3.0/4.0 | PCIe 3.0/4.0 | PCIe 3.0/4.0
Lanes             | x16 (all models)
Bracket           | Tall bracket* (all models)
Dimensions        | PCIe stand-up form factor; without bracket: 167.65mm x 68.90mm
ConnectX-6 (100 Gb/s)

Model Name        | MCX654106A-ECAT | MCX653105A-ECAT | MCX653106A-ECAT
ASIC & PCI Dev ID | ConnectX®-6 (all models)
Speed             | HDR100 (100Gb/s), EDR IB & 100GbE, FDR, QDR, DDR, SDR (all models)
Ports             | 2 | 1 | 2
Connectors        | QSFP56 (all models)
PCI               | 2x PCIe 3.0 | PCIe 3.0/4.0 | PCIe 3.0/4.0
Lanes             | x16 | x16 (x8 *1) (2x8 *2) | x16 (2x8 *3)
Form Factor       | PCIe stand-up; without bracket: 167.65mm x 68.90mm

*1 PCIe 4.0 x8 available with model MCX651105A-EDAT
*2 PCIe 3.0/4.0 x16 Socket Direct 2x8 in a row, available with model MCX653105A-EFAT
*3 PCIe 3.0/4.0 x16 Socket Direct 2x8 in a row, available with model MCX653106A-EFAT

ConnectX-6 (OCP 3.0)

Model Name        | MCX653435A-HDAI | MCX653436A-HDAI | MCX653435A-EDAI | MCX653435A-HDAE
ASIC & PCI Dev ID | ConnectX®-6 (all models)
Speed             | HDR IB (200Gb/s) & 200GbE | HDR IB (200Gb/s) & 200GbE | HDR100 (100Gb/s) & 100GbE | HDR IB (200Gb/s) & 200GbE
Ports             | 1 | 2 | 1 | 1
Connectors        | QSFP56 (all models)
PCI               | PCIe 3.0/4.0 (all models)
Lanes             | x16 (all models)
Form Factor       | OCP 3.0 Small Form Factor (all models)

ConnectX-5 VPI Cards

ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600ns latency, and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing the highest-performance and most flexible solution for the most demanding applications and markets: machine learning, data analytics, and more.

ConnectX-5

Model Name        | MCX555A-ECAT | MCX556A-ECAT | MCX556A-EDAT | MCX545A-ECAN | MCX545M-ECAN
ASIC & PCI Dev ID | ConnectX®-5 (first three models); ConnectX®-5 for OCP (MCX545A-ECAN, MCX545M-ECAN)
Speed             | EDR IB (100Gb/s) & 100GbE (all models)
Ports             | 1 | 2 | 2 | 1 | 1
Connectors        | QSFP28 (all models)
PCI               | PCIe 3.0 | PCIe 3.0 | PCIe 4.0 | PCIe 3.0 | 2x PCIe 3.0
Lanes             | x16 (all models)
Bracket           | Tall bracket* (first three models); no bracket (OCP models)
Dimensions        | 14.2cm x 6.9cm, low profile (first three models); OCP 2.0 type 2** (OCP models)

Disclosures
* All tall-bracket adapters are shipped with the tall bracket mounted and a short bracket as an accessory.
** For more details, please refer to the Open Compute Project 2.0 specifications.
All listed cards are RoHS compliant.

The Strengths of Mellanox VPI Host-Channel Adapters

The benefits
  • World-class cluster performance
  • High-performance networking and storage access
  • Efficient use of compute resources
  • Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Increased VM per server ratio
  • Guaranteed bandwidth and low-latency services
  • Reliable transport
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Scalability to tens of thousands of nodes
Target Applications
  • High-performance parallelized computing
  • Data center virtualization
  • Public and private clouds
  • Large-scale Web 2.0 and data analysis applications
  • Clustered database applications, parallel RDBMS queries, high-throughput data warehousing
  • Latency sensitive applications such as financial analysis and trading
  • Performance storage applications such as backup, restore, mirroring, etc.

NVIDIA DPU (Data Processing Unit)

The NVIDIA® BlueField® Data Processing Unit (DPU) is a system on a chip: a hardware accelerator specialized for complex tasks such as fast data processing and data-centric computing. It is primarily intended to relieve the host CPU of network and communication work, freeing CPU resources by taking over application-supporting tasks such as data transfer, data reduction, data security, and analytics. DPUs are particularly well suited to supercomputing-class workloads such as AI, cloud, and big data. A dedicated operating system running on the chip works alongside the host operating system and provides functions such as encryption, erasure coding, and compression/decompression.

BlueField®-2 IB DPUs

Model Name          | MBF2M345A | MBF2H516A | MBF2H516C | MBF2M516A | MBF2M516C
Series / Core Speed | E-Series / 2.0GHz (MBF2M345A); P-Series / 2.75GHz (MBF2H516A, MBF2H516C); E-Series / 2.0GHz (MBF2M516A, MBF2M516C)
Form Factor         | Half-Height Half-Length (HHHL) (MBF2M345A); Full-Height Half-Length (FHHL) (all other models)
Ports               | 1x QSFP56 | 2x QSFP56 | 2x QSFP56 | 2x QSFP56 | 2x QSFP56
Speed               | 200GbE / HDR (MBF2M345A); 100GbE / EDR / HDR100 (all other models)
PCI                 | PCIe 4.0 x16 (all models)
On-board DDR        | 16GB (all models)
On-board eMMC       | 64GB | 64GB | 128GB | 64GB | 128GB
Secure Boot         | - (*1)
Crypto              | (*1) (all models)
1GbE OOB
Integrated BMC      | - -
External Power      | - - -
PPS IN/OUT          | - - -
*1 Depending on the specific model-name suffix (see full list)
BlueField® IB DPUs (first generation)

Model Name      | Description | Crypto
MBF1L516A-ESCAT | BlueField® DPU VPI EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe Gen 4.0 x16, BlueField® G-Series 16 cores, 16GB on-board DDR, FHHL, single slot, tall bracket | Yes
MBF1L516A-ESNAT | BlueField® DPU VPI EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe Gen 4.0 x16, BlueField® G-Series 16 cores, 16GB on-board DDR, FHHL, single slot, tall bracket | -