The Roddam Narasimha cluster (in honor of Prof. Roddam Narasimha) is designed to cater to the small and medium scale HPC needs of the community. The cluster is also set up to satisfy the heterogeneous needs of the community. In addition to the regular nodes with typical HPC configurations, the cluster has a high-memory node for large-scale memory-intensive computing, two NVIDIA V100 nodes with high-speed NVLink for GPU acceleration, and nodes with large-volume SSDs (Solid State Drives) for applications that require fast access to large volumes of data that would otherwise have to be fetched from hard drives.
CPU Cluster: 40 nodes, each with 2× Intel® Xeon® Gold 6248R processors (3.00 GHz, 24C/48T, 35.75 MB cache), for 48 cores per node.
Accelerator-based cluster: The GPU nodes act as an accelerator-based cluster. There are 2 V100 nodes, each consisting of 4 NVIDIA V100 GPUs interconnected by NVLink. The GPU card is the Tesla V100 SXM2 32 GB CoWoS HBM2 with 300 GB/s NVLink.
High-Speed Storage: 8 SSD nodes, each equipped with 2 TB NVMe M.2 (0.3 DWPD) storage. These nodes are well suited for applications that require fast access to large volumes of data.
High-Memory Node: One high-memory node, with 1536 GB of memory, is available within this cluster. It can be used for large-scale memory-intensive computing.
Vendor: Netweb Technologies®
How to Use Roddam Narasimha Cluster(RNC):
Accessing the system:
The RNC has one login node, rnarasimha, through which users can access the cluster and submit jobs.
The machine is accessible for login using ssh from inside the IISc network.
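For example, from a machine inside the IISc network, login would look like the following. The full host address shown is an assumption based on the login node name above; confirm the exact address with SERC.

```shell
# Log in to the RNC login node from inside the IISc network.
# Replace <username> with your computational account user name.
# The domain suffix is an assumption -- verify the actual address with SERC.
ssh <username>@rnarasimha.serc.iisc.ac.in
```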
The machine can be accessed after applying for Roddam Narasimha Cluster access, for which:
- Fill in the online computational account form here and submit it by email to email@example.com.
- The HPC application form must be duly signed by your Advisor/Research Supervisor.
- Once the computational account is created, kindly fill in the Roddam Narasimha Cluster Access form to gain access to the Roddam Narasimha Cluster.
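Once access is granted, jobs are submitted through the login node. This document does not name the batch scheduler, so the sketch below assumes a Slurm-managed cluster, which is a common setup; the partition names, resource limits, and application name are placeholders to be replaced with the cluster's actual values.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch -- scheduler choice and all values
# here are assumptions; check the RNC documentation for actual settings.
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48   # one task per core on a 48-core CPU node
#SBATCH --time=01:00:00

srun ./my_application          # my_application is a placeholder executable
```

Under this assumption, the script would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`.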
For any queries, raise a ticket in the helpdesk or contact the System Administrator, #103, SERC.