bullx B500 blade system
The bullx blade system B500 series was designed by Bull’s HPC R&D team with the following guidelines in mind:
- Optimization and simplification of the compute node for HPC usage
- Integration of several compute nodes and the first-level interconnect
- Flexible structure for communication and I/O networks, for the closest fit with customer requirements
The bullx blade chassis can host up to 9 double compute blades, i.e. 18 compute nodes, in 7U. It contains the first-level interconnect, a management unit and all the components needed to power and cool the blades, the interconnect and the management unit. It is the ideal foundation for building a medium to large HPC cluster, combining bullx compute blades with service nodes of the R423 family.
Leading edge technologies
Each node delivers the power of two latest-generation Intel Xeon processors and leverages the bandwidth of an InfiniBand FDR switch used as part of a totally non-blocking architecture, meaning that all blades can communicate simultaneously at full nominal bandwidth.
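To give a sense of what "non-blocking" implies in practice, the sketch below estimates the aggregate chassis bandwidth. The figures are assumptions based on the FDR InfiniBand standard (4x ports signaling at 56 Gb/s with 64b/66b encoding), not values stated in this document:

```python
# Back-of-the-envelope sketch (assumed FDR figures, not from the source):
# aggregate bandwidth when all 18 nodes of a chassis drive a fully
# non-blocking FDR InfiniBand switch at once.
FDR_SIGNALING_GBPS = 56.0        # 4x port: 4 lanes x 14 Gb/s signaling
ENCODING_EFFICIENCY = 64 / 66    # 64b/66b line encoding used by FDR
NODES_PER_CHASSIS = 18           # 9 double blades, 2 nodes each

effective_per_port = FDR_SIGNALING_GBPS * ENCODING_EFFICIENCY

# Non-blocking means every port can run at full rate simultaneously,
# so the aggregate injection bandwidth is simply ports x port rate.
aggregate = NODES_PER_CHASSIS * effective_per_port

print(f"per-port effective rate: {effective_per_port:.1f} Gb/s")
print(f"aggregate (18 ports):    {aggregate:.1f} Gb/s")
```

Under these assumptions each port delivers roughly 54 Gb/s of usable data, and the chassis sustains close to 1 Tb/s of aggregate traffic with no contention between blades.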