The latest addition to the SERC HPC facility is the 800-core Tyrone cluster. The cluster comprises 17 rack-mounted nodes located in the CPU room of SERC. Of the 17 nodes, one is a head node and all others are execution nodes. This is a heterogeneous cluster composed of two types of nodes: 9 nodes with 32 cores each and 8 nodes with 64 cores each. Each 32-core node has 2.4 GHz AMD Opteron 6136 processors and 64 GB RAM; each 64-core node has 2.2 GHz AMD Opteron 6274 processors and 128 GB RAM. The cluster nodes are connected using InfiniBand HBAs through a Mellanox QDR interconnect switch. Job scheduling and load balancing are handled by the open-source batch scheduler Torque. The head node allows user logins for application development and testing. The cluster has a local scratch filesystem of 1.8 TB for computational runs, mounted across all execution nodes. Because the local scratch space is provided by different storage devices, it must be accessed through job scripts only.
Hardware Overview :-
The cluster has 17 nodes and the hardware configuration for each node is as follows:-
32-core node :-
64-core node :-
Scratch Spaces for Job runs:
The head node alone has two 1 TB SATA HDDs (1.8 TB of usable space, mounted as local scratch), which are mounted across nodes 1 – 8 for use during job execution. This scratch space is meant to be used with the following execution queues: "idqueue", "qp32", "qp64" and "qp128".
The Tyrone node9 alone has two 1 TB SATA HDDs (1.8 TB of usable space, mounted as local scratch), which are mounted across nodes 9 – 16 for use during job execution. This scratch space is meant to be used with the following execution queue: "qp256".
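Since the local scratch must be accessed through job scripts only, a Torque job script would typically stage data into scratch, run there, and copy results back. The sketch below illustrates this; the scratch mount point `/localscratch`, the application name `my_app`, and the input/output file names are assumptions for illustration, not confirmed paths on this cluster.

```shell
#!/bin/bash
# Example Torque job script (a sketch; /localscratch and my_app are
# assumed names -- confirm the actual scratch path with SERC).
#PBS -N sample_job
#PBS -q qp32              # one of the queues served by this scratch space
#PBS -l nodes=1:ppn=32    # request one full 32-core node
#PBS -l walltime=01:00:00

# Work inside a per-job directory on the local scratch filesystem
SCRATCH=/localscratch/$PBS_JOBID    # assumed mount point
mkdir -p "$SCRATCH"
cd "$SCRATCH"

# Stage input from the submission directory, run, and copy results back
cp "$PBS_O_WORKDIR"/input.dat .
./my_app input.dat > output.dat     # hypothetical application
cp output.dat "$PBS_O_WORKDIR"/

# Clean up scratch after the run
rm -rf "$SCRATCH"
```

`$PBS_JOBID` and `$PBS_O_WORKDIR` are standard environment variables that Torque sets for every job, so each run gets its own scratch directory and results land back where the job was submitted from.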
System Software/Libraries :-
Application Software/Libraries :-
Workload Manager :-
Location of TyroneCluster :-
DNS name of the machine :-
Accessing the system :-
The Tyrone cluster has one login node, tyrone-cluster, through which users can access the cluster and submit jobs. The machine is accessible for login using ssh from inside the IISc network (ssh email@example.com). The machine can be accessed after applying for basic HPC access, for which:
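Once access is granted, a typical session logs in to the head node and submits a job with the standard Torque commands. A minimal sketch, assuming your SERC user name in place of `<username>` and a job script named `myjob.sh`:

```shell
# Log in to the head node from inside the IISc network
ssh <username>@tyrone-cluster

# Submit a job script to one of the execution queues and check its status
qsub -q qp32 myjob.sh
qstat -u <username>
```

`qsub` prints the job identifier on submission, and `qstat` shows the job's state (queued, running, or completed) for the given user.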
For any queries, email helpdesk_serc or contact the System Administrator, #109, SERC.