GPUs are highly parallel, multi-core systems whose architecture allows the execution of many concurrent threads. Parallelizing a program exploits this multi-core architecture for better performance and throughput. Using the GPU to solve general-purpose problems in this way is known as GPGPU: the GPU, which traditionally handles graphics computation, now performs computations traditionally handled by CPUs. GPGPU harnesses the GPU's massive floating-point compute power, using its stream processors as general-purpose processors for non-graphics data.

The latest addition to GPU computing in SERC is Nvidia's Fermi-architecture-based C2070 and M2090 cards. These cards have ECC (Error-Correcting Code) memory, a type of memory that includes special circuitry for testing the accuracy of data as it passes in and out of memory. The Fermi cluster in SERC is composed of five GPU nodes. Nodes fermi1 to fermi4 each have one Intel(R) Xeon(R) CPU W3550 processor operating at 3.07 GHz with 16 GB RAM, one Nvidia C2070 GPGPU card and 1 TB of local disk space, while node fermi5 has one Intel(R) Xeon(R) CPU E5-2660 processor operating at 2.20 GHz with 64 GB RAM, three Nvidia M2090 GPGPU cards and 300 GB of local disk space.
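The GPGPU model described above can be illustrated with a minimal CUDA sketch (a hypothetical example, not a SERC-supplied code): a kernel is launched across many concurrent threads, each computing one element of a vector sum. It uses explicit host/device transfers, which the Fermi-generation cards here support.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element: many concurrent threads on the GPU.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Compile with nvcc, e.g. `nvcc -o vecadd vecadd.cu` (file name is a placeholder).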
This cluster is a parallel batch-computing system, managed by the Torque workload manager, which load-balances jobs. Job submission to the Torque batch scheduler is similar to that of PBSPro. The cluster is configured to admit only GPU-based jobs; since it is dedicated to GPU jobs, user job scripts must specify the number of GPUs the job intends to use. Torque treats each GPU card as a single GPU for allocation, and at any given time a job can use the GPGPUs of one card only. The cluster permits multi-node jobs combining MPI with CUDA. Hence all jobs submitted to this cluster must specify ppn=1 to 4 and gpus=1 (for fermi1 to fermi4) or ppn=1 to 16 and gpus=1 to 3 (for fermi5) in their job scripts.
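As a sketch, a single-node job script for fermi1 to fermi4 might look like the following. The job name, walltime, and executable are placeholders, and the #PBS directives follow standard Torque syntax; consult the cluster's own pages for the exact queue and resource keywords in use.

```shell
#!/bin/bash
# Hypothetical Torque job script for the Fermi cluster (fermi1-fermi4).
# Resource line: up to ppn=4 CPU cores and the node's single GPU card.
#PBS -N cuda_job
#PBS -l nodes=1:ppn=4:gpus=1
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR      # run from the directory the job was submitted from
./my_cuda_app          # placeholder for an executable built with nvcc
```

Submit the script with `qsub job.sh`. For fermi5 the resource line could request up to ppn=16 and gpus=3.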
Hardware Overview :-
Node (fermi1 – fermi4):
System Software/Libraries :-
Application Software/Libraries :-
Workload Manager :-
Location of Fermi Cluster :-
DNS name of the machine :-
Accessing the system :-
The Fermi cluster has one login node, fermi1, through which users can access the cluster and submit jobs. The machine is accessible for login using ssh from inside the IISc network (ssh email@example.com). The machine can be accessed after applying for basic HPC access, for which:
For any queries, raise a ticket in the helpdesk or contact the System Administrator, #103, SERC.