System Architecture and Configuration

The Cray supercomputer is housed in eight cabinets containing 1506 nodes, along with four cabinets of DDN storage with a total capacity of 2.88 PB. The detailed hardware and storage configuration is as follows:

CPU CLUSTER

The CPU cluster is composed of 1376 compute nodes with a total of 33024 cores. The hardware configuration of each node is as follows (see the consistency-check sketch after this list):

Two Intel Xeon E5-2680 v3 (Haswell) 12-core processors operating at a 2.5 GHz clock speed
128 GB DDR4-2133 main memory
Proprietary Cray Aries Interconnect with Dragonfly topology
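As a quick consistency check of the figures above, the following C sketch (an illustration only, not part of the system software) recomputes the total core count from the per-node configuration and queries the logical CPUs visible on the node it runs on:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Published configuration: 1376 nodes, 2 sockets per node,
         * 12 cores per socket. */
        const long nodes = 1376, sockets = 2, cores_per_socket = 12;
        printf("Expected total cores: %ld\n",
               nodes * sockets * cores_per_socket);              /* 33024 */

        /* Logical CPUs on the current node; this counts hardware threads,
         * so it may report 24 or 48 depending on hyper-threading. */
        printf("CPUs visible on this node: %ld\n",
               sysconf(_SC_NPROCESSORS_ONLN));
        return 0;
    }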

ACCELERATOR CLUSTER

The XC40 has an accelerator cluster composed of:

  1. GPU nodes
44 GPU nodes; each GPU node is a combination of one CPU and one GPU accelerator card
Host CPU: Intel Xeon E5-2695 v2 (Ivy Bridge) 12-core processor operating at 2.4 GHz
One NVIDIA Tesla K40 GPU accelerator card per node; each card has 2880 CUDA cores (see the device-query sketch after this section)
64 GB main memory
12 GB device memory
Proprietary Cray Aries Interconnect with Dragonfly topology
  2. Xeon-Phi nodes
24 Intel Xeon Phi nodes; each node is a combination of one CPU and one Xeon Phi co-processor
Host CPU: Intel Xeon E5-2695 v2 (Ivy Bridge) 12-core processor operating at 2.4 GHz
One Intel(R) Xeon Phi(TM) 7210 (Knights Corner based many-core co-processor) per node, with 64 cores
96 GB main memory
16 GB device memory
Proprietary Cray Aries Interconnect with Dragonfly topology
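The GPU figures above can be cross-checked against the CUDA runtime on a GPU node. The sketch below is a hedged illustration (compile with nvcc; the 192-cores-per-SM figure is a property of the Kepler architecture, from which the 2880 CUDA cores follow):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        struct cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            fprintf(stderr, "No CUDA device visible on this node\n");
            return 1;
        }
        printf("Device       : %s\n", prop.name);                 /* e.g. Tesla K40 */
        printf("SM count     : %d\n", prop.multiProcessorCount);  /* 15 on the K40 */
        /* Kepler GPUs have 192 CUDA cores per SM: 15 * 192 = 2880 cores. */
        printf("Device memory: %.1f GB\n",
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }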

HIGH SPEED STORAGE

The HPC system provides 2 PB of usable space on a high-speed DDN storage unit supporting Cray’s Lustre parallel filesystem. The storage is configured with the Lustre filesystem over RAID 6; the specification is as follows (a capacity sketch appears after this list):
Four cabinets, each holding five disk trays; each tray contains 48 nearline SAS Seagate hard drives
Each cabinet contains 240 drives, and the capacity of each hard drive is 3 TB
The parallel storage is connected directly to the Cray XC40 compute racks over an FDR InfiniBand interconnect
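The quoted capacities follow directly from the drive counts. The minimal C sketch below only reproduces the raw-capacity arithmetic; the usable figure depends on the RAID 6 array layout, hot spares, and Lustre overhead, which are not modelled here:

    #include <stdio.h>

    int main(void) {
        const int cabinets = 4, trays_per_cabinet = 5, drives_per_tray = 48;
        const double tb_per_drive = 3.0;

        int drives    = cabinets * trays_per_cabinet * drives_per_tray;  /* 960 */
        double raw_pb = drives * tb_per_drive / 1000.0;                  /* 2.88 PB raw */

        /* After RAID 6 parity, spares, and filesystem overhead the usable
         * space comes to roughly 2 PB, as quoted above. */
        printf("Drives: %d, raw capacity: %.2f PB\n", drives, raw_pb);
        return 0;
    }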

OPERATING SYSTEM

The Cray Linux Environment (CLE) operating system includes Cray’s customized version of the SUSE Linux Enterprise Server (SLES) 11 Service Pack 3 (SP3) operating system, with a Linux 3.0.93 kernel. CLE consists of two components: a full-featured SLES-based Linux and Compute Node Linux (CNL). The service nodes, external login nodes, and post-processing nodes of the Cray XC40 run the full-featured version of Linux.

The compute nodes of the Cray XC40 run Compute Node Linux (CNL). CNL is a stripped-down version of Linux that has been extensively modified to reduce both the memory footprint of the OS and the variation in compute node performance caused by OS overhead. Compute nodes are diskless and have no swap space.
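Because there is no swap, a job whose resident memory exceeds the node's main memory is terminated rather than paged out. As a small illustration (not a required procedure), a program can confirm the absence of swap on a Linux node at run time:

    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void) {
        struct sysinfo si;
        if (sysinfo(&si) != 0) {
            perror("sysinfo");
            return 1;
        }
        /* On a CNL compute node, totalswap is expected to be 0. */
        printf("Total RAM : %.1f GB\n",
               si.totalram  * (double)si.mem_unit / (1024.0 * 1024.0 * 1024.0));
        printf("Total swap: %.1f GB\n",
               si.totalswap * (double)si.mem_unit / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }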

INTERCONNECTION

XC40 compute nodes are connected using the Cray Aries interconnect chip. Four compute nodes are housed on a blade and are connected to an on-board Aries chip. The Aries ASIC provides the network interconnect for the compute nodes on the Cray XC40 system base blades and implements a standard PCI Express Gen3 host interface.

Sixteen blade units forming a chassis are connected through their Aries chips onto a backplane. One cabinet is made of three chassis, and two cabinets form a group. All six chassis in a group are connected in an all-to-all manner using electrical cables with a data transfer rate of 14 Gbps. All four groups of the Cray system are connected using optical cables with a data transfer rate of 12.5 Gbps. The Dragonfly network topology is constructed from a configurable mix of backplane, copper, and optical links, providing scalable global bandwidth and avoiding expensive external switches.
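Multiplying out the hierarchy described above gives the machine's node-slot budget. The short C sketch below is purely illustrative arithmetic (the service/IO-blade remark is an assumption about how the remaining slots are used):

    #include <stdio.h>

    int main(void) {
        const int nodes_per_blade     = 4;
        const int blades_per_chassis  = 16;
        const int chassis_per_cabinet = 3;
        const int cabinets_per_group  = 2;
        const int groups              = 4;

        int nodes_per_chassis = nodes_per_blade * blades_per_chassis;     /* 64  */
        int nodes_per_cabinet = nodes_per_chassis * chassis_per_cabinet;  /* 192 */
        int nodes_per_group   = nodes_per_cabinet * cabinets_per_group;   /* 384 */
        int total_slots       = nodes_per_group * groups;                 /* 1536 */

        /* 1536 node slots across the 8 compute cabinets; the 1506 installed
         * nodes fit within this budget, with the remaining slots presumably
         * taken by service/IO blades. */
        printf("per chassis: %d, per cabinet: %d, per group: %d, total slots: %d\n",
               nodes_per_chassis, nodes_per_cabinet, nodes_per_group, total_slots);
        return 0;
    }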

Documentation:

Cray Aries Interconnection

Report Problems to:

If you encounter any problem in using this system, please report it to the SERC helpdesk at helpdesk.serc@auto.iisc.ac.in or contact the system administrators in #103 (SERC).