We help our customers look to the future with the latest technology and overcome the bottlenecks of older technologies.

InfiniBand is mainly used in supercomputing today. With its low latency in the nanosecond range and transmission rates of up to 60 Gbit/s (12x DDR), scientific institutions can hardly do without this technology anymore.

Many of the TOP500 supercomputers already use InfiniBand clusters. But the technology attracts wide interest well beyond fields such as climate research: as applications grow ever more demanding, industry is running into hurdles that cannot currently be overcome with conventional cluster and storage technology.

Direct-attached storage (DAS) systems currently manage transfer rates of only 320 MB/s. Fiber Channel has recently reached 4 Gbit/s and is still a long way from a finished 10 Gbit/s Fiber Channel standard. 10 Gbit/s Ethernet exists, but it struggles with Ethernet's old problems: rather meager performance combined with a poor price/performance ratio.

InfiniBand, on the other hand, already offers everything to satisfy even the most demanding company: a good price/performance ratio with maximum performance. The de facto standards today are InfiniBand 4x at 10 Gbit/s and 12x at 30 Gbit/s.

The next product generation is already in the starting blocks:

4x (DDR) with 20 Gbit/s and 12x (DDR) with 60 Gbit/s.

Double Data Rate (DDR) technology doubles the current bandwidth while remaining fully backward compatible with InfiniBand 4x and 12x.
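Where these figures come from can be sketched with a little arithmetic, assuming the base (single data rate) InfiniBand signaling rate of 2.5 Gbit/s per lane; DDR doubles the per-lane rate while the 4x and 12x link widths keep their lane counts:

```python
# Rough bandwidth arithmetic for InfiniBand link widths.
# Assumption: base (SDR) signaling rate of 2.5 Gbit/s per lane;
# DDR doubles the per-lane rate, the lane count stays the same.

SDR_LANE_GBITS = 2.5  # single data rate, per lane


def link_rate(lanes, ddr=False):
    """Raw signaling rate in Gbit/s for a given link width."""
    per_lane = SDR_LANE_GBITS * (2 if ddr else 1)
    return lanes * per_lane


print(link_rate(4))             # 4x  SDR -> 10.0 Gbit/s
print(link_rate(12))            # 12x SDR -> 30.0 Gbit/s
print(link_rate(4, ddr=True))   # 4x  DDR -> 20.0 Gbit/s
print(link_rate(12, ddr=True))  # 12x DDR -> 60.0 Gbit/s
```

The same lane arithmetic reproduces all four rates quoted above, which is exactly what "doubling with full backward compatibility" means at the link level.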

Nor is the fabric limited to the native InfiniBand protocol: with IP over InfiniBand (IPoIB), ordinary IP networks can run across it. The possibilities appear endless, e.g. an iSCSI SAN (Storage Area Network) that outshines any traditional Fiber Channel SAN. InfiniBand drivers are available for major operating systems such as Microsoft Windows, Sun Solaris, HP-UX, and Linux.
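The practical upshot of IPoIB is that unmodified socket applications work over the InfiniBand fabric. As a minimal sketch, the loopback echo below runs anywhere; on an IPoIB fabric the identical code would simply bind and connect to the IP address assigned to the InfiniBand interface (commonly named `ib0` on Linux, an assumption here) instead of 127.0.0.1:

```python
# Standard TCP sockets over IPoIB require no code changes: IPoIB makes
# the InfiniBand port look like any other IP interface. This demo uses
# loopback so it runs on any machine; substitute the ib0 interface's
# address to run it across an InfiniBand fabric.
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
port = server.getsockname()[1]


def echo_once():
    # Accept one connection and echo the first message back.
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))


t = threading.Thread(target=echo_once)
t.start()

with socket.create_connection((HOST, port)) as client:
    client.sendall(b"hello over IP")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)  # b'hello over IP'
```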