High Priority Queues

Introduction to high priority queues: 

SERC is introducing access to computational services through high priority queues on its facility. These queues are created with the highest priority on the systems, and any job submitted to them will be serviced ahead of jobs in the other execution queues. Based on the resources available on the system, the job scheduler will pick eligible jobs from these queues first. Since these queues are charged differently from the normal execution queues, access to them is controlled through queue-specific access control lists. Further details follow.
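To see how these queues appear on a system, commands along the following lines can be used. This is a minimal sketch assuming a SLURM-based scheduler where the queues are exposed as partitions (the Param Pravega queue names documented below are used as examples); on a PBS-based system, qstat -Qf <queue> would be the rough equivalent.

    # List the high priority partitions and their state (SLURM assumed).
    sinfo -p hipsmall,hiplarge,hipgpu

    # Inspect one partition's configuration, including its priority settings.
    scontrol show partition hipsmall

    # Check your own queued and running jobs in a high priority partition.
    squeue -u $USER -p hipsmall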

Charging policy link:
Please be advised that these queues are priced higher than the regular queues. Check the pricing information on the Usage Charges page.

Who can use them and how to access them:
Users who have made advance payment for anticipated usage of high priority queues can request access to these queues.
To obtain access to these high priority queues, users should submit a request on the NIS portal > Resources page.

 

High Priority Queue details of Param Pravega: 
There are three high priority queues configured on Param Pravega:

hipsmall: Meant for production runs with core counts ranging from 24 to 10008 and a maximum job walltime of 24 hrs. This queue is controlled by ACLs and will only allow jobs from authorised users. Each user can have at most 2 jobs in the queue, including 2 jobs in the running state.
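For illustration, a minimal submission script for hipsmall might look like the following. This is a sketch only: it assumes the scheduler is SLURM and that the queue is exposed as a partition of the same name; my_mpi_app is a placeholder executable. A hiplarge script would differ only in the partition name and the core count requested.

    #!/bin/bash
    #SBATCH --job-name=hip_demo          # illustrative job name
    #SBATCH --partition=hipsmall         # high priority queue (partition name assumed)
    #SBATCH --ntasks=48                  # any core count between 24 and 10008
    #SBATCH --time=24:00:00              # must not exceed the 24 hr queue limit

    # Launch the MPI application (placeholder executable).
    srun ./my_mpi_app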

hiplarge: Meant for production runs by users with demonstrably scalable parallel codes; allows core counts from 10009 to 28008 with a maximum job walltime of 24 hrs. This queue is controlled by ACLs and will only allow jobs from authorised users. Each user can have at most 1 job in the queue, including a job in the running state.

hipgpu: Meant for production runs of CUDA codes with core counts ranging from 1 to 12 and one GPU per node, with a maximum job walltime of 24 hrs. This queue is controlled by ACLs and will only allow jobs from authorised users. The queue is dedicated to multi-node jobs and can span from 1 to 22 GPU nodes. Each user can have at most 3 jobs in the queue, including 3 jobs in the running state.
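A corresponding sketch for hipgpu, again assuming SLURM with a partition named after the queue, and with my_cuda_app as a placeholder executable, could be:

    #!/bin/bash
    #SBATCH --job-name=hipgpu_demo       # illustrative job name
    #SBATCH --partition=hipgpu           # high priority GPU queue (partition name assumed)
    #SBATCH --nodes=2                    # between 1 and 22 GPU nodes
    #SBATCH --ntasks-per-node=12         # up to 12 cores per node on this queue
    #SBATCH --gres=gpu:1                 # one GPU per node
    #SBATCH --time=24:00:00              # must not exceed the 24 hr queue limit

    # Launch the CUDA application (placeholder executable).
    srun ./my_cuda_app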

 

High Priority Queue details of DGX: 

There is one high priority queue configured on the NVIDIA DGX-1 system.
Queue Name – hpq_2day_4G: This queue is meant for high priority production runs of CUDA codes with 4 GPUs.
Walltime: 48 hrs
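A hedged sketch of a submission script for this queue, assuming SLURM on the DGX-1 with a partition named after the queue (my_cuda_app is a placeholder executable):

    #!/bin/bash
    #SBATCH --job-name=dgx_demo          # illustrative job name
    #SBATCH --partition=hpq_2day_4G      # DGX high priority queue (partition name assumed)
    #SBATCH --gres=gpu:4                 # the 4 GPUs this queue provides
    #SBATCH --time=48:00:00              # must not exceed the 48 hr queue limit

    # Launch the CUDA application (placeholder executable).
    srun ./my_cuda_app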

 

High Priority Queue details of RNC: 

There are two high priority queues configured on the Roddam Narasimha Cluster (RNC).

Queue Name & Properties (RNC):

1. hpq_1day_large:

Meant for production runs with core counts ranging from 16 to 512, job walltimes ranging from 1 to 24 hours, and node counts from 1 to 29.

2. hpq_gpu_1day:

Meant for production runs with GPU counts ranging from 1 to 8 (max. 4 per node) and core counts ranging from 16 to 96 (max. 48 per node), with job walltimes ranging from 1 to 24 hours and node counts from 1 to 2.
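A sketch of a submission script for hpq_gpu_1day at its maximum resource request, again assuming SLURM with partitions named after the queues (my_gpu_app is a placeholder executable); hpq_1day_large follows the same pattern without the GPU request:

    #!/bin/bash
    #SBATCH --job-name=rnc_gpu_demo      # illustrative job name
    #SBATCH --partition=hpq_gpu_1day     # RNC high priority GPU queue (partition name assumed)
    #SBATCH --nodes=2                    # 1 or 2 nodes
    #SBATCH --ntasks-per-node=48         # max. 48 cores per node (96 total)
    #SBATCH --gres=gpu:4                 # max. 4 GPUs per node (8 total)
    #SBATCH --time=24:00:00              # must not exceed the 24 hr queue limit

    # Launch the GPU application (placeholder executable).
    srun ./my_gpu_app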