FERMI CLUSTER

Introduction :-

GPUs are highly parallel multi-core systems whose architecture allows the execution of many concurrent threads. Parallelizing programs to exploit this multi-core architecture yields better performance and throughput. This approach of using the GPU to solve general-purpose problems is known as GPGPU. With GPGPU, the GPU, which traditionally handles computer graphics, now performs computations traditionally handled by CPUs: its massive floating-point compute capability is applied through its stream processors to non-graphics data, turning the GPU into a general-purpose computing resource. The latest addition to GPU computing in SERC is Nvidia's Fermi architecture based C2070 and M2090 cards. These cards have ECC (Error-Correcting Code) memory, a type of memory that includes special circuitry for testing the accuracy of data as it passes in and out of memory.

The Fermi cluster in SERC is composed of five GPU nodes. Nodes fermi1 to fermi4 each have one Intel(R) Xeon(R) CPU W3550 processor operating at 3.07 GHz with 16 GB RAM, one Nvidia C2070 GPGPU card and 1 TB of local disk space, while node fermi5 has one Intel(R) Xeon(R) CPU E5-2660 processor operating at 2.20 GHz with 64 GB RAM, three Nvidia M2090 GPGPU cards and 300 GB of local disk space.
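As a quick check, the GPGPU cards on a node and their ECC status can be inspected with NVIDIA's nvidia-smi utility. This is only a sketch; it assumes the NVIDIA driver tools are on the default PATH, which is not stated in this documentation.

    # list the GPUs visible on the node
    nvidia-smi -L

    # show ECC mode and error counts for each GPU
    nvidia-smi -q -d ECC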

This cluster is a parallel batch computing system managed by the Torque workload manager, which load-balances jobs. Job submission to the Torque batch scheduler is similar to that of PBS Pro. The cluster is configured to admit only GPU-based jobs, and since it is dedicated to GPU jobs, user job scripts must specify the number of GPUs the job intends to use. Torque treats each GPU card as a single GPU for allocation, and at any given time a job can use the GPGPUs of one card only. The cluster also permits multi-node jobs combining MPI with CUDA. Hence all jobs on this cluster must specify ppn=1 to 4 and gpu=1 (for fermi1 to fermi4) or ppn=1 to 16 and gpu=1 to 3 (for fermi5) in their job scripts; a sample job script is sketched below.
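As an illustration, below is a minimal sketch of a Torque job script for a single-card job on fermi1 to fermi4. It assumes the standard Torque gpus keyword in the nodes specification (the paragraph above writes gpu; the exact keyword accepted is site-configured), and the job name, walltime and executable my_cuda_app are placeholders rather than values taken from this documentation.

    #!/bin/bash
    #PBS -N gpu_job                   # job name (placeholder)
    #PBS -l nodes=1:ppn=4:gpus=1      # 1 node, up to 4 cores, 1 GPU card (fermi1-fermi4 limits)
    #PBS -l walltime=01:00:00         # requested wall-clock time (placeholder)

    # On fermi5, a larger request might instead look like:
    # #PBS -l nodes=1:ppn=16:gpus=3

    cd $PBS_O_WORKDIR                 # run from the directory the job was submitted from
    ./my_cuda_app                     # hypothetical user-built CUDA executable

The script is submitted from the login node with qsub and monitored with qstat, as illustrated in the session sketch under "Accessing the system" below.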

Vendor :-

1. OEM – Fujitsu (fermi1 – fermi4); Authorised Seller – Wipro Ltd, Bangalore, India.
2. OEM – Hewlett-Packard (fermi5)

Hardware Overview :-

Node (fermi1 – fermi4):

  • Intel(R) Xeon(R) CPU W3550 processor operating at 3.07 GHz clock speed.
  • 16 GB DDR3 Main Memory.
  • 500 GB of Disk Space with 500 GB local scratch.
  • Nvidia Tesla C2070 card.

Node (fermi5):

  • Intel(R) Xeon(R) CPU E5-2660 processor operating at 2.20 GHz clock speed.
  • 64 GB Main Memory.
  • 300 GB of Disk Space.
  • Three Nvidia Tesla M2090 cards.
  • Gigabit Ethernet Connectivity.

System Software/Libraries :-

Application Software/Libraries :-

Workload Manager :-

  • Torque

Location of Fermi Cluster :-
  • CPU Room, SERC.

DNS name of the machine :-

  • fermi1.serc.iisc.ernet.in

Accessing the system :-

The Fermi cluster has one login node, fermi1, through which users access the cluster and submit jobs. The machine is accessible for login using ssh from inside the IISc network (ssh computational_userid@fermi1.serc.iisc.ernet.in); a sample session is sketched after the list below. The machine can be accessed after applying for basic HPC access, for which:

  • Fill in the online HPC application form here and submit it at Room 117, SERC.
  • The HPC application form must be duly signed by your Advisor/Research Supervisor.
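Once access is granted, a typical session might look like the sketch below. The user id, the script name job.sh and the job id are placeholders; qsub, qstat and qdel are the standard Torque client commands.

    # log in to the login node from inside the IISc network
    ssh computational_userid@fermi1.serc.iisc.ernet.in

    # submit a job script (for example the sketch shown earlier, saved as job.sh)
    qsub job.sh

    # check the status of your queued and running jobs
    qstat -u computational_userid

    # remove a job that is no longer needed
    qdel <jobid>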

Helpdesk :-

For any queries, raise a ticket on the helpdesk or contact the System Administrator, Room #103, SERC.