Premier Partner


The flexible and revolutionary HPC management solution from Liqid

Do you need dynamically configurable bare-metal servers, sized and provisioned to deliver exactly the physical resources that the applications running on them currently require?

Leverage your existing industry-standard data center components to provide a flexible, scalable architecture built from pools of disaggregated resources that can be reconfigured with just a few simple mouse clicks.

By automating these processes, further efficiencies can be realized to meet the data demands of next-generation applications in DevOps, AI, cloud and edge computing, and IoT deployment, with NVMe-over-Fabrics (NVMe-oF) and GPU-over-Fabric (GPU-oF) support.

  • Eliminate costly overprovisioning -> deploy only what is needed via Liqid's UI, API, or CLI.
  • If more resources are needed -> scale up in seconds.
  • If workloads are decommissioned -> quickly move their resources to new or existing servers.
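The compose / scale-up / decommission workflow above can be sketched as a minimal resource-pool model. The class and method names here are hypothetical illustrations, not Liqid's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Toy model of a disaggregated device pool (not Liqid's actual API)."""
    free: dict                                  # free device counts, e.g. {"gpu": 8}
    machines: dict = field(default_factory=dict)

    def _claim(self, need):
        # Devices may only be claimed if the shared pool still holds them.
        if any(self.free.get(k, 0) < n for k, n in need.items()):
            raise RuntimeError("insufficient free resources in pool")
        for k, n in need.items():
            self.free[k] -= n

    def compose(self, name, **need):
        """Deploy only what is needed: claim devices for one bare-metal server."""
        self._claim(need)
        self.machines[name] = dict(need)

    def scale_up(self, name, **extra):
        """Grow a running machine by claiming additional devices from the pool."""
        self._claim(extra)
        for k, n in extra.items():
            self.machines[name][k] = self.machines[name].get(k, 0) + n

    def decommission(self, name):
        """Return a machine's devices to the pool for immediate reuse."""
        for k, n in self.machines.pop(name).items():
            self.free[k] += n
```

Because unassigned devices simply stay in `free`, nothing is overprovisioned: every GPU or drive is either serving a composed machine or available for the next one.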

The Solution - Liqid Matrix

Take your data center from static to dynamic with Liqid Matrix

Liqid's solutions and services enable infrastructure to adapt and approach full utilization. By connecting compute, networking, storage, GPU, FPGA, and Intel® Optane™ memory devices via intelligent high-speed fabrics, resources can be disaggregated at will and assembled into IT solutions in real time with the Liqid Matrix software.

Orchestration with Liqid Matrix

Management is governed by the Liqid Command Center, which, in conjunction with a PCIe management fabric switch (Liqid Grid), connects core system resources to physical servers over a PCI Express (PCIe) fabric and allows them to be dynamically reconfigured as needed. The physical connection from the fabric to each chassis is made through the switch ports, from which a 4-way breakout cable leads to the respective chassis. A switch with, for example, 24 or 48 ports enables data exchange between the expansion chassis, which can be adjusted as required in number, type, and size and together form a resource pool. The chassis in the fabric (hosts, GPUs, NICs, storage) are vendor-independent, but their compatibility should be checked beforehand; for smooth operation, Liqid recommends using its own hardware. Technical specifications and further information about the hardware can be found in the data sheets.
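Assuming each switch port fans out through a 4-way breakout cable as described, the fabric's reach scales linearly with port count. A back-of-the-envelope sketch (illustrative only; real limits depend on the hardware):

```python
def max_chassis_links(switch_ports: int, breakout_ways: int = 4) -> int:
    """Upper bound on expansion-chassis links for one fabric switch,
    assuming each port fans out through an n-way breakout cable.
    Illustrative estimate, not a Liqid-documented limit."""
    return switch_ports * breakout_ways

# A 24-port switch could serve up to 96 chassis links, a 48-port one up to 192.
```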

Deploy Large-Scale CDI Environments with Liqid Matrix

Benefits over classic data centers

The resource pool can be scaled as required simply by adding chassis. Through the management software's simple user interface, workloads precisely tailored to current requirements can be composed from this pool in seconds. Resource consumption is limited to what each configured workload actually uses. If more computing power or storage is required, additional GPUs or storage units can be allocated with just a few clicks. Unassigned resources in the hardware pool remain inactive, significantly reducing unnecessary power consumption and hardware wear. Manual rewiring and physical reconfiguration are practically eliminated.

Another advantage of the software support is that the data center can be managed remotely at any time. An API also enables quick integration with Ansible, VMware, or Slurm. With just a few clicks, prebuilt images with different operating systems can be deployed to individual workloads.

Performance - ioDirect

Liqid uses ioDirect technology to avoid bottlenecks when exchanging data between GPUs. GPU units can be connected directly to each other instead of routing traffic the classic way through a host CPU, yielding roughly 474% more bandwidth than without ioDirect. A direct connection between GPUs and SSDs (GPUDirect Storage, GDS) is also possible.

  • Classic: GPU-to-host-CPU-to-GPU → Bandwidth: 9 GB/s, Latency: 25 µs
  • With ioDirect: GPU-to-GPU → Bandwidth: 49 GB/s (+474%), Latency: 3 µs (89% lower)

  • Classic: GPU-to-host-CPU-to-SSD → IOPS: 179k, Latency: 712 µs
  • With ioDirect: GPU-to-SSD → IOPS: 2,900k (+1,520%), Latency: 112 µs (86% lower)

Energy and cost savings

Compared to conventional solutions, Liqid Matrix offers overall savings of up to 90% and eliminates costly overprovisioning.

The Liqid Matrix solution for infrastructure waste

Further advantages at a glance

  • Dynamically configurable servers
  • Independent scaling of resources
  • Decoupling of purchasing decisions
  • Extended product life cycle
  • Improved software license efficiency
  • Pay-as-you-grow principle
  • Improved Resource Utilization

Liqid Composable Dynamic Infrastructure (CDI) Demo

The following demo shows how easily IT administrators can dynamically orchestrate pools of resources within the Liqid Command Center.