Premier Partner

NVIDIA Networking InfiniBand Solutions

Mellanox InfiniBand Products

Please select a category for further product selection.

InfiniBand Switches

If you are looking for InfiniBand switches, please visit the Mellanox switch subpage.

InfiniBand Adapters

If you are looking for an InfiniBand adapter, please visit the Mellanox adapter subpage.

Cables, Modules & Transceivers

For suitable IB and GbE cables, modules or transceivers, please visit the corresponding subpage.

Looking for Gigabit Ethernet solutions? Check out our selection of Mellanox GbE switches and adapters.

InfiniBand Switch Systems

Mellanox's family of InfiniBand switches delivers the highest performance and port density, with complete fabric management solutions that enable compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The portfolio includes a broad range of edge and director switches supporting 40, 56, 100 and 200Gb/s port speeds and ranging from 8 to 800 ports.

Edge Switches

The Mellanox family of edge switch systems provides the highest-performing fabric solutions in a 1U form factor, delivering up to 16Tb/s of non-blocking bandwidth with the lowest port-to-port latency. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium-sized clusters. Offered as externally managed or as managed switches, they are designed to build the most efficient switch fabrics through advanced InfiniBand switching technologies such as Adaptive Routing, Congestion Control and Quality of Service.

Model Name          SB7800      SB7890      QM8700      QM8790
Switching Capacity  7.2Tb/s     7.2Tb/s     16Tb/s      16Tb/s
Link Speed          100Gb/s     100Gb/s     200Gb/s     200Gb/s
Interface Type      QSFP28      QSFP28      QSFP28      QSFP28
Management          2048 nodes  --          2048 nodes  --
PSU Redundancy      Yes         Yes         Yes         Yes
Fan Redundancy      Yes         Yes         Yes         Yes
Integrated Gateway  --          --          --          --
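As a sanity check, the switching-capacity figures above follow from port count × link speed × 2 (traffic in plus traffic out on a full-duplex, non-blocking switch). Note the per-model port counts used below (36 for the SB7800/SB7890, 40 for the QM8700/QM8790) are datasheet values that are not listed in the table itself, so treat this as an illustrative sketch:

```python
# Rough sanity check: non-blocking switching capacity
# = port count x link speed x 2 (full duplex: in + out).
# Port counts (36 and 40) are datasheet assumptions,
# not values taken from the table above.
def switching_capacity_tbps(ports: int, link_speed_gbps: int) -> float:
    return ports * link_speed_gbps * 2 / 1000  # Gb/s -> Tb/s

print(switching_capacity_tbps(36, 100))  # SB7800/SB7890 -> 7.2 Tb/s
print(switching_capacity_tbps(40, 200))  # QM8700/QM8790 -> 16.0 Tb/s
```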

Modular Switches

Mellanox director-class modular switches provide the highest-density switching solution, scaling from 8.64Tb/s up to 320Tb/s of bandwidth in a single enclosure, with low latency and the highest per-port speeds of up to 200Gb/s. Their smart design provides unprecedented levels of performance and makes it easy to build clusters that scale out to thousands of nodes. The InfiniBand modular switches deliver the director-class availability required for mission-critical application environments. The leaf and spine blades and management modules, as well as the power supplies and fan units, are all hot-swappable to help eliminate downtime.

Managed director switches, up to 800 ports

Model Name           CS7520      CS7510      CS7500      CS8500
Switching Capacity   43.2Tb/s    64.8Tb/s    130Tb/s     320Tb/s
Link Speed           100Gb/s     100Gb/s     100Gb/s     200Gb/s
Interface Type       QSFP28      QSFP28      QSFP28      QSFP28
Management           2048 nodes  2048 nodes  2048 nodes  2048 nodes
Management HA        Yes         Yes         Yes         Yes
Console Cables       Yes         Yes         Yes         Yes
Spine Modules        6           9           18          20
Leaf Modules (max.)  6           9           18          20
PSU Redundancy       Yes (N+N)   Yes (N+N)   Yes (N+N)   Yes (N+N)
Fan Redundancy       Yes         Yes         Yes         Water cooled
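The same capacity arithmetic scales to the director switches: total ports = leaf modules × ports per leaf, and capacity = ports × link speed × 2. The per-leaf port counts used below (36 for the CS75x0 series, 40 for the CS8500) are datasheet assumptions not shown above, so this is only a sketch; note the CS7500 figure rounds to the 130Tb/s quoted in the table:

```python
# Capacity check for the director switches.
# ports_per_leaf (36 for CS75x0, 40 for CS8500) is a datasheet
# assumption, not a value listed in the table above.
def director_capacity_tbps(leaf_modules: int, ports_per_leaf: int,
                           link_speed_gbps: int) -> float:
    total_ports = leaf_modules * ports_per_leaf
    return total_ports * link_speed_gbps * 2 / 1000  # Gb/s -> Tb/s

print(director_capacity_tbps(6, 36, 100))   # CS7520 -> 43.2 Tb/s
print(director_capacity_tbps(9, 36, 100))   # CS7510 -> 64.8 Tb/s
print(director_capacity_tbps(18, 36, 100))  # CS7500 -> 129.6 (~130) Tb/s
print(director_capacity_tbps(20, 40, 200))  # CS8500 -> 320.0 Tb/s
```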

Why Mellanox Switch Systems?

The benefits
  • Built with Mellanox's 4th and 5th generation InfiniScale® and SwitchX™ switch silicon
  • Industry-leading energy efficiency, density and cost savings
  • Ultra low latency
  • Real-Time Scalable Network Telemetry
  • Scalability and subnet isolation using InfiniBand routing and InfiniBand to Ethernet gateway capabilities
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

InfiniBand VPI Host-Channel Adapters

Mellanox continues to lead the way in providing InfiniBand Host Channel Adapters (HCAs) - the most powerful interconnect solution for enterprise data centers, Web 2.0, cloud computing, high-performance computing and embedded environments.

ConnectX-6 VPI Cards

ConnectX-6 with Virtual Protocol Interconnect® (VPI) supports two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600-nanosecond latency and 200 million messages per second, providing the highest-performing and most flexible solution for the most demanding applications and markets. Delivering the industry's highest throughput and message rate with 200Gb/s HDR InfiniBand, 100Gb/s HDR100 InfiniBand and 200Gb/s Ethernet, it is the ideal product to lead HPC data centers toward exascale levels of performance and scalability. Supported speeds are HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand, as well as 200, 100, 50, 40, 25 and 10Gb/s Ethernet.

Model Name         MCX654105A-HCAT  MCX654106A-HCAT  MCX653105A-HDAT  MCX653106A-HDAT
ASIC & PCI Dev ID  ConnectX®-6 (all models)
Speed              HDR IB (200Gb/s) & 200GbE; HDR100, EDR, FDR, QDR, DDR, SDR (all models)
PCI                2x PCIe 3.0      2x PCIe 3.0      PCIe 3.0/4.0     PCIe 3.0/4.0
Bracket            tall bracket* (all models)
Dimensions         PCIe stand-up form factor; w/o bracket: 167.65mm x 68.90mm (all models)
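The PCI row is worth a moment of arithmetic: a single PCIe 3.0 x16 slot cannot feed a 200Gb/s HDR port, which is why the -HCAT models use two PCIe 3.0 x16 interfaces (Socket Direct) while the -HDAT models can run from one PCIe 3.0/4.0 x16 slot. A rough per-direction estimate using the 128b/130b line encoding of Gen3/Gen4, and ignoring protocol overhead (real throughput is a few percent lower):

```python
# Approximate usable PCIe bandwidth per direction.
# Accounts only for 128b/130b encoding (PCIe Gen3/Gen4);
# TLP/protocol overhead is ignored, so real figures are lower.
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * lanes * 128 / 130

gen3_x16 = pcie_bandwidth_gbps(8, 16)   # ~126 Gb/s: not enough for HDR 200Gb/s
gen4_x16 = pcie_bandwidth_gbps(16, 16)  # ~252 Gb/s: enough for one HDR port
print(round(gen3_x16), round(gen4_x16))
```

Hence "2x PCIe 3.0" (two x16 interfaces in parallel) and "PCIe 3.0/4.0" describe two different ways of reaching the same 200Gb/s line rate.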

Model Name         MCX654106A-ECAT  MCX653105A-ECAT    MCX653106A-ECAT
ASIC & PCI Dev ID  ConnectX®-6 (all models)
Speed              HDR100 (100Gb/s), EDR IB & 100GbE; FDR, QDR, DDR, SDR (all models)
PCI                2x PCIe 3.0      PCIe 3.0/4.0       PCIe 3.0/4.0
Lanes              x16              x16 (x8*1, 2x8*2)  x16 (2x8*3)
Form Factor        PCIe stand-up; w/o bracket: 167.65mm x 68.90mm (all models)

*1 PCIe 4.0 x8 available with model MCX651105A-EDAT
*2 PCIe 3.0/4.0 x16 Socket Direct (2x8 in a row), available with model MCX653105A-EFAT
*3 PCIe 3.0/4.0 x16 Socket Direct (2x8 in a row), available with model MCX653106A-EFAT

Model Name         MCX653435A-HDAI            MCX653436A-HDAI            MCX653435A-EDAI            MCX653435A-HDAE*1
ASIC & PCI Dev ID  ConnectX®-6 (all models)
Speed              HDR IB (200Gb/s) & 200GbE  HDR IB (200Gb/s) & 200GbE  HDR100 (100Gb/s) & 100GbE  HDR IB (200Gb/s) & 200GbE
PCI                PCIe 3.0/4.0 (all models)
Form Factor        OCP 3.0 Small Form Factor (all models)

*1 No official pricing is published for this card; please contact us for availability.

ConnectX-5 VPI Cards

ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600ns latency and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing the highest-performing and most flexible solution for the most demanding applications and markets: machine learning, data analytics and more.

Model Name         MCX555A-ECAT  MCX556A-ECAT  MCX556A-EDAT  MCX545A-ECAN  MCX545M-ECAN
ASIC & PCI Dev ID  ConnectX®-5 (MCX55x models); ConnectX®-5 for OCP (MCX545 models)
Speed              EDR IB (100Gb/s) & 100GbE (all models)
PCI                PCIe 3.0      PCIe 3.0      PCIe 4.0      PCIe 3.0      2x PCIe 3.0
Bracket            tall bracket* (MCX55x models); no bracket (MCX545 models)
Dimensions         14.2cm x 6.9cm (Low Profile) (MCX55x models); OCP 2.0 type 2** (MCX545 models)

ConnectX-4 VPI Cards

ConnectX®-4 adapter cards, supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, provide exceptionally high performance for the most demanding data centers, public and private clouds, Web 2.0 and Big Data applications, as well as High-Performance Computing (HPC) and storage systems, enabling today's corporations to meet the demands of the data explosion.

Model Name         MCX455A-ECAT  MCX456A-ECAT  MCX455A-FCAT  MCX456A-FCAT  MCX453A-FCAT  MCX454A-FCAT
ASIC & PCI Dev ID  ConnectX®-4 (all models)
Speed              EDR IB (100Gb/s) & 100GbE (ECAT models); FDR IB (56Gb/s) & 40/56GbE (FCAT models)
Bracket            tall bracket* (all models)
Dimensions         14.2cm x 6.9cm (Low Profile) (all models)
* All tall-bracket adapters are shipped with the tall bracket mounted and a short bracket as an accessory.
** For more details, please refer to the Open Compute Project 2.0 Specifications.
All listed card types are RoHS compliant.

The Strengths of Mellanox VPI Host-Channel Adapters

The benefits
  • World-class cluster performance
  • High-performance networking and storage access
  • Efficient use of compute resources
  • Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Increased VM per server ratio
  • Guaranteed bandwidth and low-latency services
  • Reliable transport
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Scalability to tens of thousands of nodes
Target Applications
  • High-performance parallelized computing
  • Data center virtualization
  • Public and private clouds
  • Large scale Web 2.0 and data analysis applications
  • Clustered database applications, parallel RDBMS queries, high-throughput data warehousing
  • Latency sensitive applications such as financial analysis and trading
  • Performance storage applications such as backup, restore, mirroring, etc.