Bull Sequana X1000 supercomputers

The open exascale-class supercomputers

With the Bull Sequana X1000 range of supercomputers, Atos confirms its strategic commitment to the development of innovative high performance computing systems – the systems needed to meet the major challenges of the 21st century.

Designed by Bull R&D in close cooperation with major customers, the Bull Sequana X1000 supercomputer leverages the latest technological advances to guarantee maximum performance at a minimal operating cost.

Sequana is an innovative solution that matches the technological challenges of exascale computing.

Open to future technologies

Bull Sequana X1000 is designed to integrate the most advanced technologies in terms of processors, interconnect networks and data storage, both current technologies and the future technologies that will make it possible to reach the exaflops level. Sequana supercomputers have an open architecture and are based on industry standards, for both hardware and software. They offer customers a wide choice of technologies and will be compatible with successive generations of future processor technologies (CPUs, accelerators, low-power processors…) and different interconnect technologies (BXI, InfiniBand…), thus offering maximum investment protection.

Limit energy consumption

Controlling energy consumption is the main roadblock on the path to exascale. Sequana is ultra energy-efficient: it targets a PUE (Power Usage Effectiveness) very close to 1, and its energy consumption is 10 times lower than that of the previous generation of supercomputers.
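To make the PUE target concrete: PUE is the ratio of the total energy entering the facility to the energy consumed by the IT equipment itself, so a value close to 1 means almost nothing is lost to cooling and power distribution. A minimal worked example in Python, with illustrative figures that are not measured Sequana values:

    it_energy_kwh = 1000.0         # energy consumed by the IT equipment (illustrative)
    facility_energy_kwh = 1050.0   # IT load plus cooling and distribution (illustrative)

    pue = facility_energy_kwh / it_energy_kwh
    print(f"PUE = {pue:.2f}")      # 1.05, i.e. only 5% overhead, "very close to 1"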

All components of Sequana, both compute nodes and switches, are cooled using an enhanced version of the Bull Direct Liquid Cooling (DLC) solution. DLC is a proven cooling technology that minimizes the overall energy consumption of a system by using warm water at temperatures of up to 40°C.

Sequana is also an energy-aware system: to facilitate optimization, it integrates fine-grained energy sensors and a new generation of the High Definition Energy Efficiency Monitoring first implemented in previous bullx systems.
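The kind of optimization this enables can be pictured as follows. This is a hypothetical Python sketch, not the actual monitoring interface: read_node_power() stands in for whatever telemetry query the system exposes, and the readings are simulated.

    import random

    def read_node_power(node):
        # hypothetical stand-in for a real telemetry query; here the
        # reading is simply simulated, in watts
        return 300.0 + random.uniform(-20.0, 20.0)

    def energy_to_solution(nodes, duration_s, period_s=1.0):
        # integrate sampled power over the run time to estimate joules consumed
        joules = 0.0
        elapsed = 0.0
        while elapsed < duration_s:
            joules += sum(read_node_power(n) for n in nodes) * period_s
            elapsed += period_s
        return joules

    # energy-to-solution of a 60-second run on one 288-node cell, in kWh
    print(f"{energy_to_solution(range(288), duration_s=60.0) / 3.6e6:.2f} kWh")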

Handle the data deluge

Exascale is not just about exaflops; it also involves dealing with exabytes of data. How data are organized, moved, stored and accessed can have a considerable impact on performance, especially as the volume of data increases exponentially.

Sequana features a hardware and software architecture designed to tackle the most complex data-processing workloads, based on Bull research into distributed systems management and data access.

Accelerate application performance

Exascale application performance requires massive parallelism. Bull Sequana features the Bull eXascale Interconnect (BXI), developed specifically for exascale. BXI introduces a revolutionary hardware acceleration technology that frees the processors from all communication tasks.

Moreover, the software environment provided with Bull Sequana allows for fine-grain management of resources at scale, and offers optimal efficiency in a production environment.

Deliver a resilient platform

The frequency of failures increases with the number of components, so that in an exascale-class supercomputer the sheer number of components, tens of thousands of them, is a risk in itself unless the system incorporates high-quality resilience features.

The architecture and packaging of Sequana were designed with resilience in mind:

  • redundancy of critical components and switch-over capabilities that make Sequana a self-healing system;
  • an efficient software suite and management tools, providing hierarchical management and including an embedded, redundant management server;
  • resilient interconnect with adaptive routing and reliability features;
  • automatic configuration, with node recognition.

Focus on Sequana innovation

The Sequana cell

In Sequana the computing resources are grouped into cells. Each cell tightly integrates compute nodes, interconnect switches, redundant power supply units, redundant liquid cooling heat exchangers, distributed management and diskless support.

Large building blocks to facilitate scaling

This packaging in large building blocks facilitates large-scale deployments of up to tens of thousands of nodes by optimizing density, scalability and cost-effectiveness.

Each Sequana cell is organized across three cabinets: two cabinets contain the compute nodes and the central cabinet houses the interconnect switches.

Compute cabinet

Each compute cabinet houses 48 horizontal compute blades, with the associated power modules at the top of the cabinet and the redundant hydraulic modules for cooling at the bottom of the cabinet.

24 blades are mounted on the front side of the cabinet, while the 24 other blades are mounted on the rear side.

Each cell can therefore contain up to 96 compute blades, i.e. 288 compute nodes, equipped either with conventional processors (such as Intel® Xeon® processors) or accelerators (e.g. Intel® Xeon Phi™ or NVIDIA® GPUs).
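The sizing above follows directly from the cabinet layout; a minimal arithmetic sketch in Python, with the node-per-blade count taken from the X1110 blade described below:

    BLADES_PER_CABINET = 48          # 24 front-mounted + 24 rear-mounted
    COMPUTE_CABINETS_PER_CELL = 2
    NODES_PER_BLADE = 3              # e.g. the Bull Sequana X1110 blade

    blades_per_cell = BLADES_PER_CABINET * COMPUTE_CABINETS_PER_CELL   # 96
    nodes_per_cell = blades_per_cell * NODES_PER_BLADE                 # 288

    # scaling to "tens of thousands of nodes" is a matter of adding cells
    for cells in (1, 10, 100):
        print(f"{cells:3d} cell(s) -> {cells * nodes_per_cell:5d} nodes")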

In each 1U blade, a cold plate with active liquid flow cools all hot components by direct contact; Sequana compute blades contain no fans.

The following compute blades are initially available:

Bull Sequana X1110 and X1120 blades

The 1U Bull Sequana X1110 blade integrates 3 compute nodes, each powered by 2 Intel® Xeon® processors (code-named Broadwell).

The Bull Sequana X1120 blade is similar, but is powered by new-generation Intel® Xeon® processors (code-named Skylake-SP).

Bull Sequana X1210 blade

The 1U Bull Sequana X1210 blade is composed of 3 compute nodes each powered by an Intel® Xeon Phi™ x200 processor (code-named Knights Landing).

Bull Sequana X1125 blade

The 1U Bull Sequana X1125 blade includes a single compute node equipped with 4 NVIDIA Pascal GPUs.

Switch cabinet

The interconnect components located in the central cabinet form the first two levels of a fat-tree interconnect network. External nodes (such as I/O nodes and service nodes) plug directly into the system fabric at the cell level.
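The capacity of such a two-level fat tree is set by the switch radix. A minimal sketch, where the radix-48 figure is an assumption for illustration rather than a published Sequana specification, and where ports consumed by external nodes are ignored:

    def max_nodes_two_level_fat_tree(radix: int) -> int:
        # each level-1 switch dedicates half its ports to nodes and half to
        # level-2 uplinks; at most `radix` level-1 switches fit under level 2
        return (radix // 2) * radix

    print(max_nodes_two_level_fat_tree(48))   # 1152 nodes with radix-48 switches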

The switch cabinet contains:

  • the level 1 Direct Liquid Cooled switches (BXI or InfiniBand EDR);
  • the level 2 Direct Liquid Cooled switches (BXI or InfiniBand EDR);
  • the switch power group;
  • two optional Ultra Capacity Modules that compensate for power outages of up to 300 ms (see the sizing sketch after this list);
  • the management modules, including the Ethernet switches for management and the Rack Monitoring and Administration (RAMA) module, a redundant management server with shared storage;
  • the backplane octopus, a highly innovative column that provides the connections between the L1 and L2 switches and the compute nodes.
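The Ultra Capacity Modules above must store enough energy to carry the cabinet through a 300 ms outage. A minimal sizing sketch, where the power draw is an assumed figure for illustration only:

    power_draw_w = 100_000.0   # assumed draw to carry through the outage (illustration)
    ride_through_s = 0.300     # the 300 ms figure from the list above

    energy_j = power_draw_w * ride_through_s
    print(f"{energy_j / 1000:.0f} kJ of stored energy")   # 30 kJ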

Bull eXascale Interconnect (BXI)

The core feature of BXI is a communication management system fully encoded in hardware, which enables the compute processors to be entirely dedicated to computational tasks while communications are managed independently by BXI. This interconnect offers:

  • sustained performance under the most demanding workloads;
  • revolutionary hardware acceleration;
  • massive parallelism by design (up to 64k nodes and up to 16 million threads);
  • support for exascale programming models and languages.
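The benefit of offloading communication to the interconnect can be pictured with a standard MPI overlap pattern. A minimal sketch using mpi4py and NumPy, not a BXI-specific API: non-blocking transfers are posted, useful computation proceeds while an offloading NIC progresses them, and the code only waits at the end.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    peer = (rank + 1) % comm.Get_size()

    send_buf = np.full(1_000_000, rank, dtype=np.float64)
    recv_buf = np.empty_like(send_buf)

    # post non-blocking transfers; with hardware offload these progress
    # without consuming host CPU cycles
    requests = [comm.Isend(send_buf, dest=peer),
                comm.Irecv(recv_buf, source=peer)]

    local_result = float(np.sum(send_buf * 2.0))   # useful work overlaps the transfers

    MPI.Request.Waitall(requests)                  # transfers are complete here
    print(f"rank {rank}: local result {local_result:.0f}")

Run with, for example, mpirun -n 2 python overlap.py.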

 

Sequana is powered by Intel® Xeon® Scalable family processors and Intel® Xeon Phi™ processors.
