Premier Partner

Mellanox InfiniBand Solutions

Mellanox InfiniBand Products

Please select a category for further product selection.

InfiniBand Switches

If you are looking for InfiniBand switches, please visit the Mellanox switch subpage.

InfiniBand Adapters

If you are looking for an IB adapter, please visit the Mellanox adapter subpage.

Cables, Modules & Transceivers

For suitable IB and GbE cables, modules, or transceivers, please visit the corresponding subpage.

Looking for Gigabit Ethernet solutions? Then check out our selection of Mellanox GbE switches and adapters.

InfiniBand Switch Systems

Mellanox's family of InfiniBand switches delivers the highest performance and port density, with complete fabric-management solutions that enable compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The portfolio includes a broad range of Edge and Director switches supporting 40, 56, 100, and 200Gb/s port speeds and ranging from 8 to 800 ports.

Edge Switches

The Mellanox family of edge switch systems provides the highest-performing fabric solutions in a 1U form factor, delivering up to 16Tb/s of non-blocking bandwidth with the lowest port-to-port latency. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small- to medium-sized clusters. Offered as externally managed or as managed systems, they are designed to build the most efficient switch fabrics through advanced InfiniBand switching technologies such as Adaptive Routing, Congestion Control, and Quality of Service.



Externally managed:

| Model Name | SX6005 | SX6015 | SX6025 | SB7790/SB7890 | QM8790 |
|---|---|---|---|---|---|
| Ports | 12 | 18 | 36 | 36 | 40 |
| Height | 1U | 1U | 1U | 1U | 1U |
| Switching Capacity | 1.3Tb/s | 2.016Tb/s | 4.032Tb/s | 7.2Tb/s | 16Tb/s |
| Link Speed | 56Gb/s | 56Gb/s | 56Gb/s | 100Gb/s | 200Gb/s |
| Interface Type | QSFP+ | QSFP+ | QSFP+ | QSFP28 | QSFP28 |
| PSU Redundancy | No | Yes | Yes | Yes | Yes |
| Fan Redundancy | No | Yes | Yes | Yes | Yes |
| Integrated Gateway | -- | -- | -- | -- | -- |
Managed:

| Model Name | SX6012 | SX6018 | SX6036 | SB7700/SB7800 | QM8700 |
|---|---|---|---|---|---|
| Ports | 12 | 18 | 36 | 36 | 40 |
| Height | 1U | 1U | 1U | 1U | 1U |
| Switching Capacity | 1.3Tb/s | 2.016Tb/s | 4.032Tb/s | 7.2Tb/s | 16Tb/s |
| Link Speed | 56Gb/s | 56Gb/s | 56Gb/s | 100Gb/s | 200Gb/s |
| Interface Type | QSFP+ | QSFP+ | QSFP+ | QSFP28 | QSFP28 |
| Management | 648 nodes | 648 nodes | 648 nodes | 2048 nodes | 2048 nodes |
| Management Ports | 1 | 2 | 2 | 2 | 1 |
| PSU Redundancy | Optional | Yes | Yes | Yes | Yes |
| Fan Redundancy | No | Yes | Yes | Yes | Yes |
| Integrated Gateway | Optional | Optional | Optional | -- | -- |
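The switching-capacity figures in the tables above can be cross-checked with simple arithmetic: a non-blocking switch forwards full-duplex traffic on every port, so capacity = ports × link speed × 2. A quick illustrative sketch (our own cross-check, not vendor code), using figures from the tables:

```python
# Non-blocking switching capacity: ports x link speed x 2 (full duplex).
def switching_capacity_tbps(ports: int, link_gbps: int) -> float:
    return ports * link_gbps * 2 / 1000

# Cross-check three models from the tables above:
print(switching_capacity_tbps(36, 56))   # SX6025 / SX6036 -> 4.032
print(switching_capacity_tbps(36, 100))  # SB7790 / SB7800 -> 7.2
print(switching_capacity_tbps(40, 200))  # QM8790 / QM8700 -> 16.0
```

The same rule reproduces every capacity figure in both tables, which is what "non-blocking" means in practice: no port ever has to wait for fabric bandwidth.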

Modular Switches

Mellanox director-class modular switches provide the highest-density switching solution, scaling from 8.64Tb/s up to 320Tb/s of bandwidth in a single enclosure, with low latency and per-port speeds of up to 200Gb/s. Their smart design provides unprecedented levels of performance and makes it easy to build clusters that scale out to thousands of nodes. The InfiniBand modular switches deliver the director-class availability required for mission-critical application environments: the leaf and spine blades, the management modules, and the power supplies and fan units are all hot-swappable to help eliminate downtime.



Managed, 108 - 324 ports:

| Model Name | SX6506 | SX6512 | CS7520 | SX6518 |
|---|---|---|---|---|
| Ports | 108 | 216 | 216 | 324 |
| Height | 6U | 9U | 12U | 16U |
| Switching Capacity | 12.12Tb/s | 24.24Tb/s | 43.2Tb/s | 36.36Tb/s |
| Link Speed | 56Gb/s | 56Gb/s | 100Gb/s | 56Gb/s |
| Interface Type | QSFP+ | QSFP+ | QSFP28 | QSFP+ |
| Management | 648 nodes | 648 nodes | 2048 nodes | 648 nodes |
| Management HA | Yes | Yes | Yes | Yes |
| Console Cables | Yes | Yes | Yes | Yes |
| Spine Modules | 3 | 6 | 6 | 9 |
| Leaf Modules (max.) | 6 | 12 | 6 | 18 |
| PSU Redundancy | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) |
| Fan Redundancy | No | Yes | Yes | Yes |
Managed, 324 - 800 ports:

| Model Name | CS7510 | SX6536 | CS7500 | CS8500 |
|---|---|---|---|---|
| Ports | 324 | 648 | 648 | 800 |
| Height | 16U | 29U | 28U | 29U |
| Switching Capacity | 64.8Tb/s | 72.52Tb/s | 130Tb/s | 320Tb/s |
| Link Speed | 100Gb/s | 56Gb/s | 100Gb/s | 200Gb/s |
| Interface Type | QSFP28 | QSFP+ | QSFP28 | QSFP28 |
| Management | 2048 nodes | 648 nodes | 2048 nodes | 2048 nodes |
| Management HA | Yes | Yes | Yes | Yes |
| Console Cables | Yes | Yes | Yes | Yes |
| Spine Modules | 9 | 18 | 18 | 20 |
| Leaf Modules (max.) | 9 | 36 | 18 | 20 |
| PSU Redundancy | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) |
| Fan Redundancy | Yes | Yes | Yes | Water cooled |
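Internally, a director switch wires its leaf and spine modules into a non-blocking two-level fat tree, which is why the port counts above cluster around a few characteristic numbers. A rough sketch of the scaling rule of thumb (our own illustration, not from a Mellanox datasheet): with p-port switch silicon, each leaf dedicates p/2 ports to hosts and p/2 to the spine, and p/2 spines can interconnect up to p leaves, so the fabric tops out at p²/2 ports.

```python
# Max front-panel ports of a non-blocking two-level fat tree built
# from p-port switch elements: p leaves x p/2 host ports each.
def max_two_tier_ports(p: int) -> int:
    return p * p // 2

print(max_two_tier_ports(36))  # 36-port FDR/EDR silicon -> 648
print(max_two_tier_ports(40))  # 40-port HDR silicon -> 800
```

These figures line up with the largest FDR/EDR directors (648 ports) and the CS8500 (800 ports) in the table above.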

Why Mellanox Switch Systems?

The benefits
  • Built with Mellanox's 4th and 5th generation InfiniScale® and SwitchX™ switch silicon
  • Industry-leading energy efficiency, density, and cost savings
  • Ultra low latency
  • Real-Time Scalable Network Telemetry
  • Scalability and subnet isolation using InfiniBand routing and InfiniBand to Ethernet gateway capabilities
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

InfiniBand VPI Host-Channel Adapters

Mellanox continues to lead in delivering InfiniBand Host Channel Adapters (HCAs), the highest-performing interconnect solution for enterprise data centers, Web 2.0, cloud computing, high-performance computing, and embedded environments.

ConnectX-6 VPI Cards

ConnectX-6 with Virtual Protocol Interconnect® (VPI) supports two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600-nanosecond latency, and 200 million messages per second, providing the highest-performance and most flexible solution for the most demanding applications and markets. Delivering the industry's highest throughput and message rate with 200Gb/s HDR InfiniBand, 100Gb/s HDR100 InfiniBand, and 200Gb/s Ethernet speeds, it is the ideal product to lead HPC data centers toward Exascale levels of performance and scalability. Supported speeds are HDR, HDR100, EDR, FDR, QDR, DDR, and SDR InfiniBand as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet.
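The InfiniBand speed grades map onto lane counts in a simple way. A brief illustrative sketch (nominal data rates as marketed, signalling overhead ignored): a QSFP cage carries four lanes, and HDR100 runs two HDR lanes per port, which is how one 4-lane HDR cage can be split into two HDR100 ports with a splitter cable.

```python
# Nominal InfiniBand port rates as lanes x per-lane rate (Gb/s).
rates = {
    "QDR":    (4, 10),  # 40Gb/s
    "FDR":    (4, 14),  # 56Gb/s
    "EDR":    (4, 25),  # 100Gb/s
    "HDR":    (4, 50),  # 200Gb/s
    "HDR100": (2, 50),  # 100Gb/s, half of a split HDR port
}
for name, (lanes, gbps_per_lane) in rates.items():
    print(f"{name}: {lanes * gbps_per_lane}Gb/s")
```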



| Model Name | MCX654105A-HCAT | MCX654106A-HCAT | MCX653105A-HDAT | MCX653106A-HDAT |
|---|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX®-6 | ConnectX®-6 | ConnectX®-6 | ConnectX®-6 |
| Speed | HDR IB (200Gb/s) & 200GbE | HDR IB (200Gb/s) & 200GbE | HDR IB (200Gb/s) & 200GbE | HDR IB (200Gb/s) & 200GbE |
| Ports | 1 | 2 | 1 | 2 |
| Connectors | QSFP56 | QSFP56 | QSFP56 | QSFP56 |
| PCI | 2x PCIe3.0 | 2x PCIe3.0 | PCIe4.0 | PCIe4.0 |
| Lanes | x16 | x16 | x16 | x16 |
| Bracket | tall bracket* | tall bracket* | tall bracket* | tall bracket* |
| Dimensions (w/o bracket) | 167.65mm x 68.90mm | 167.65mm x 68.90mm | 167.65mm x 68.90mm | 167.65mm x 68.90mm |


| Model Name | MCX654106A-ECAT | MCX653105A-ECAT | MCX653106A-ECAT |
|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX®-6 | ConnectX®-6 | ConnectX®-6 |
| Speed | HDR100 (100Gb/s), EDR IB & 100GbE | HDR100 (100Gb/s), EDR IB & 100GbE | HDR100 (100Gb/s), EDR IB & 100GbE |
| Ports | 2 | 1 | 2 |
| Connectors | QSFP56 | QSFP56 | QSFP56 |
| PCI | 2x PCIe3.0 | 2x PCIe3.0 | 2x PCIe3.0 |
| Lanes | x16 | x16 | x16 |
| Bracket | tall bracket* | tall bracket* | tall bracket* |
| Dimensions (w/o bracket) | 167.65mm x 68.90mm | 167.65mm x 68.90mm | 167.65mm x 68.90mm |

ConnectX-5 VPI Cards

ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600ns latency, and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing the highest-performance and most flexible solution for the most demanding applications and markets: Machine Learning, Data Analytics, and more.



| Model Name | MCX555A-ECAT | MCX556A-ECAT | MCX556A-EDAT | MCX545A-ECAN | MCX545M-ECAN |
|---|---|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX®-5 | ConnectX®-5 | ConnectX®-5 | ConnectX®-5 for OCP | ConnectX®-5 for OCP |
| Speed | EDR IB (100Gb/s) & 100GbE | EDR IB (100Gb/s) & 100GbE | EDR IB (100Gb/s) & 100GbE | EDR IB (100Gb/s) & 100GbE | EDR IB (100Gb/s) & 100GbE |
| Ports | 1 | 2 | 2 | 1 | 1 |
| Connectors | QSFP28 | QSFP28 | QSFP28 | QSFP28 | QSFP28 |
| PCI | PCIe3.0 | PCIe3.0 | PCIe4.0 | PCIe3.0 | 2x PCIe3.0 |
| Lanes | x16 | x16 | x16 | x16 | x16 |
| Bracket | tall bracket* | tall bracket* | tall bracket* | no bracket | no bracket |
| Dimensions | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | OCP 2.0 type 2** | OCP 2.0 type 2** |

ConnectX-4 VPI Cards

ConnectX®-4 adapter cards supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity provide exceptionally high performance for the most demanding data centers, public and private clouds, Web 2.0 and Big Data applications, as well as High-Performance Computing (HPC) and storage systems, enabling today's corporations to meet the demands of the data explosion.



| Model Name | MCX455A-ECAT | MCX456A-ECAT | MCX455A-FCAT | MCX456A-FCAT | MCX453A-FCAT | MCX454A-FCAT |
|---|---|---|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX®-4 | ConnectX®-4 | ConnectX®-4 | ConnectX®-4 | ConnectX®-4 | ConnectX®-4 |
| Speed | EDR IB (100Gb/s) & 100GbE | EDR IB (100Gb/s) & 100GbE | FDR IB (56Gb/s) & 40/56GbE | FDR IB (56Gb/s) & 40/56GbE | FDR IB (56Gb/s) & 40/56GbE | FDR IB (56Gb/s) & 40/56GbE |
| Ports | 1 | 2 | 1 | 2 | 1 | 2 |
| Connectors | QSFP28 | QSFP28 | QSFP28 | QSFP28 | QSFP28 | QSFP28 |
| PCI | PCIe3.0 | PCIe3.0 | PCIe3.0 | PCIe3.0 | PCIe3.0 | PCIe3.0 |
| Lanes | x16 | x16 | x16 | x16 | x8 | x8 |
| Bracket | tall bracket* | tall bracket* | tall bracket* | tall bracket* | tall bracket* | tall bracket* |
| Dimensions | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) |

Connect-IB Cards

Connect-IB adapter cards provide the highest performing and most scalable interconnect solution for server and storage systems. High-Performance Computing, Web 2.0, Cloud, Big Data, Financial Services, Virtualized Data Centers and Storage applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.



| Model Name | MCB191A-FCAT | MCB192A-FCAT | MCB193A-FCAT | MCB194A-FCAT |
|---|---|---|---|---|
| ASIC & PCI Dev ID | Connect-IB® | Connect-IB® | Connect-IB® | Connect-IB® |
| Speed | FDR IB (56Gb/s) | FDR IB (56Gb/s) | FDR IB (56Gb/s) | FDR IB (56Gb/s) |
| Ports | 1 | 2 | 1 | 2 |
| Connectors | QSFP | QSFP | QSFP | QSFP |
| PCI | PCIe3.0 | PCIe3.0 | PCIe3.0 | PCIe3.0 |
| Lanes | x8 | x8 | x16 | x16 |
| Bracket | tall bracket* | tall bracket* | tall bracket* | tall bracket* |
| Dimensions | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) | 14.2cm x 6.9cm (Low Profile) |

ConnectX-3 Cards

ConnectX-3 adapter cards with Virtual Protocol Interconnect (VPI) supporting InfiniBand and Ethernet connectivity provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in Enterprise Data Centers, High-Performance Computing, and Embedded environments.



| Model Name | MCX353A-QCBT | MCX354A-QCBT | MCX353A-FCBT | MCX354A-FCBT | MCX353A-FCCT | MCX354A-FCCT |
|---|---|---|---|---|---|---|
| ASIC & PCI Dev ID | ConnectX-3® | ConnectX-3® | ConnectX-3® | ConnectX-3® | ConnectX-3® PRO | ConnectX-3® PRO |
| Speed | QDR IB (40Gb/s) & 10GbE | QDR IB (40Gb/s) & 10GbE | FDR IB (56Gb/s) & 40/56GbE | FDR IB (56Gb/s) & 40/56GbE | FDR IB (56Gb/s) & 40/56GbE | FDR IB (56Gb/s) & 40/56GbE |
| Ports | 1 | 2 | 1 | 2 | 1 | 2 |
| Connectors | QSFP+ | QSFP+ | QSFP+ | QSFP+ | QSFP+ | QSFP+ |
| PCI | PCIe3.0 | PCIe3.0 | PCIe3.0 | PCIe3.0 | PCIe3.0 | PCIe3.0 |
| Lanes | x8 | x8 | x8 | x8 | x8 | x8 |
| Bracket | tall bracket* | tall bracket* | tall bracket* | tall bracket* | tall bracket* | tall bracket* |
| Dimensions | 14.2cm x 5.2cm | 14.2cm x 6.9cm | 14.2cm x 5.2cm | 14.2cm x 6.9cm | 14.2cm x 5.3cm | 14.2cm x 6.9cm |
Disclosures
* All tall-bracket adapters are shipped with the tall bracket mounted and a short bracket as an accessory.
** For more details, please refer to the Open Compute Project 2.0 Specifications.
All card types listed are RoHS compliant.

The Strengths of Mellanox VPI Host-Channel Adapters

The benefits
  • World-class cluster performance
  • High-performance networking and storage access
  • Efficient use of compute resources
  • Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Increased VM per server ratio
  • Guaranteed bandwidth and low-latency services
  • Reliable transport
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Scalability to tens-of-thousands of nodes
Target Applications
  • High-performance parallelized computing
  • Data center virtualization
  • Public and private clouds
  • Large scale Web 2.0 and data analysis applications
  • Clustered database applications, parallel RDBMS queries, high-throughput data warehousing
  • Latency sensitive applications such as financial analysis and trading
  • Performance storage applications such as backup, restore, mirroring, etc.