High Priority Queues

Introduction to high priority queues: 

SERC is introducing access to computational services through high priority queues on its facility. These queues are configured with the highest priority on the systems, so any job submitted to them is serviced before jobs in the other execution queues. Based on the resources available on the system, the job scheduler picks eligible jobs from these queues first. Since these queues are charged differently from the normal execution queues, access to them is controlled through queue-specific access control lists (ACLs). Further details follow.

Charging policy:
Please be advised that these queues are priced higher than the regular queues. Check the pricing information on the Usage Charges page.

Who can use them and how to access them:
Users who have made an advance payment for anticipated usage of the high priority queues can request access to them.
To obtain access, submit a request on the NIS portal > Resources page.

Information on the system and queues:
There are three high priority queues configured on the Cray XC40:

hipsmall: Meant for production runs with core counts from 24 to 10008 and a maximum job walltime of 24 hrs. This queue is controlled by ACLs and accepts jobs only from authorised users. Each user can have at most 2 jobs in the queue, including 2 jobs in the running state.
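A submission to hipsmall might look like the sketch below. This is an illustrative PBS job script only: the select/ncpus syntax assumes a PBS-style scheduler and 24-core XC40 compute nodes, and the job name and binary (my_mpi_app) are placeholders.

```shell
#!/bin/bash
# Sketch of a hipsmall job script (assumptions: PBS-style scheduler,
# 24-core Cray XC40 nodes; my_mpi_app is a placeholder binary).
#PBS -N hip_test
#PBS -q hipsmall
#PBS -l select=4:ncpus=24       # 4 nodes x 24 cores = 96 cores (within 24-10008)
#PBS -l walltime=24:00:00       # queue maximum is 24 hrs
#PBS -j oe

cd "$PBS_O_WORKDIR"
# On a Cray XC40, MPI executables are typically launched with aprun:
aprun -n 96 ./my_mpi_app
```

The script would then be submitted with `qsub jobscript.sh` and its status checked with `qstat -u $USER`.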

hiplarge: Meant for production runs by users with demonstrably scalable parallel codes; allows core counts from 10009 to 28008 with a maximum job walltime of 24 hrs. This queue is controlled by ACLs and accepts jobs only from authorised users. Each user can have at most 1 job in the queue, including a job in the running state.
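For hiplarge, only the resource request changes; the directives below are again a sketch under the same assumptions (PBS-style scheduler, 24-core nodes, placeholder binary).

```shell
#!/bin/bash
# Sketch of a hiplarge request: 500 nodes x 24 cores = 12000 cores,
# which falls in the 10009-28008 range (node size is an assumption).
#PBS -q hiplarge
#PBS -l select=500:ncpus=24
#PBS -l walltime=24:00:00

cd "$PBS_O_WORKDIR"
aprun -n 12000 ./my_scalable_app   # placeholder for a scalable MPI code
```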

hipgpu: Meant for production runs of CUDA codes, with 1 to 12 cores and one GPU per node and a maximum job walltime of 24 hrs. This queue is controlled by ACLs and accepts jobs only from authorised users. The queue is dedicated to multi-node jobs, which can use between 1 and 22 GPU nodes. Each user can have at most 3 jobs in the queue, including 3 jobs in the running state.
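A hipgpu request could be sketched as follows. The node attribute used to select GPU nodes (accelerator=True here) and the launcher flags are assumptions; consult the system's own queue documentation for the exact chunk specification.

```shell
#!/bin/bash
# Sketch of a hipgpu job: 2 GPU nodes, 12 cores and one GPU each
# (the accelerator=True attribute and binary name are assumptions).
#PBS -q hipgpu
#PBS -l select=2:ncpus=12:accelerator=True
#PBS -l walltime=24:00:00

cd "$PBS_O_WORKDIR"
# One rank per GPU node, so each rank drives one GPU:
aprun -n 2 -N 1 ./my_cuda_app
```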


There is one high priority queue configured on the NVIDIA DGX-1 system.
Queue Name – hpq_2day_4G: This queue is meant for high priority production runs of CUDA codes using 4 GPUs.
Walltime: 48 Hrs
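Assuming the DGX-1 system also accepts PBS-style submissions (an assumption; the DGX-1 may run a different scheduler, so check the site's DGX documentation), a submission to this queue might look like:

```shell
# Sketch of a submission to hpq_2day_4G (flags and script name are
# placeholders; verify the scheduler and syntax for the DGX-1 system):
qsub -q hpq_2day_4G -l walltime=48:00:00 run_cuda_job.sh
```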