High Priority Queues

Introduction to high priority queues: 

SERC is introducing access to computational services through high-priority queues on its facility. These queues are created with the highest priority on the systems, and any job submitted to them will be serviced before jobs in the other execution queues. Based on the resources available on the system, the job scheduler will pick eligible jobs from these queues first. Since these queues are charged differently from the normal execution queues, access to them is controlled through queue-specific access control lists (ACLs). Further details follow.

Charging policy link:
Please be advised that these queues are priced higher than the regular queues. Check the pricing information on the Usage Charges page.

Who can use them and how to access them:
Users who have made an advance payment for anticipated usage of the high-priority queues can request access to them.
To obtain access, users should submit a request on the NIS portal > Resources page.

 

High Priority Queue details of Param Pravega: 

Core and walltime limits for the high-priority queues are the same as for the regular queues on Param Pravega. These queues are controlled by ACLs and will accept jobs only from authorised users. Please discuss the pricing with your respective advisors before using these queues.
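As a rough sketch of how a job would be directed to one of these queues, the batch script below assumes a SLURM-style scheduler and uses a placeholder queue name; substitute the actual high-priority queue name granted to your account, and use the equivalent `#PBS -q` directive instead if the system runs a PBS-family scheduler.

```shell
#!/bin/sh
# Sketch only: assumes a SLURM-style scheduler on Param Pravega.
# <high-priority-queue> is a placeholder, not a real queue name.
#SBATCH --job-name=hpq_job
#SBATCH --partition=<high-priority-queue>  # replace with your authorised queue
#SBATCH --nodes=1
#SBATCH --ntasks=48
#SBATCH --time=24:00:00                    # must respect the regular queue limits

srun ./my_application
```

Submission is then `sbatch job.sh`; jobs from users not on the queue's ACL will be rejected at submission or scheduling time.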

 

High Priority Queue details of DGX: 

There is one high-priority queue configured on the NVIDIA DGX-1 system.
Queue Name – hpq_2day_4G: This queue is meant for high-priority production runs on CUDA cores with 4 GPUs.
Walltime: 48 hours
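A minimal batch-script sketch for this queue is shown below. It assumes a SLURM-style scheduler on the DGX-1 (the source does not name the scheduler); the application name is a placeholder, and the GPU-request syntax may differ on the actual system.

```shell
#!/bin/sh
# Sketch only: assumes a SLURM-style scheduler on the NVIDIA DGX-1.
#SBATCH --job-name=hpq_gpu_job
#SBATCH --partition=hpq_2day_4G
#SBATCH --gres=gpu:4        # the queue is meant for 4-GPU runs
#SBATCH --time=48:00:00     # queue walltime limit: 48 hours

srun ./my_cuda_application  # placeholder executable
```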

 

High Priority Queue details of RNC: 

There are two high-priority queues configured on the Roddam Narasimha Cluster (RNC).

Queue Name & Properties (RNC):

1. hpq_1day_large:

Meant for production runs with core counts ranging from 16 to 512 cores, across 1 to 29 nodes, with job walltimes from 1 to 24 hours.
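A batch-script sketch matching these limits follows; it assumes a SLURM-style scheduler on RNC, and the node/core counts chosen are just one valid combination within the stated ranges.

```shell
#!/bin/sh
# Sketch only: assumes a SLURM-style scheduler on RNC.
# Queue limits: 16-512 cores, 1-29 nodes, walltime 1-24 hours.
#SBATCH --job-name=hpq_cpu_job
#SBATCH --partition=hpq_1day_large
#SBATCH --nodes=4
#SBATCH --ntasks=128        # 128 cores, within the 16-512 range
#SBATCH --time=24:00:00     # maximum allowed walltime for this queue

srun ./my_mpi_application   # placeholder executable
```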

2. hpq_gpu_1day:

Meant for production runs with GPU counts ranging from 1 to 8 (max. 4 per node) and core counts ranging from 16 to 96 (max. 48 per node), across 1 to 2 nodes, with job walltimes from 1 to 24 hours.
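As with the CPU queue, a request within these limits can be sketched as below, again assuming a SLURM-style scheduler; the resource numbers are one example combination, and the GPU-request syntax may differ on the actual system.

```shell
#!/bin/sh
# Sketch only: assumes a SLURM-style scheduler on RNC.
# Queue limits: 1-8 GPUs (max 4/node), 16-96 cores (max 48/node),
# 1-2 nodes, walltime 1-24 hours.
#SBATCH --job-name=hpq_gpu_job
#SBATCH --partition=hpq_gpu_1day
#SBATCH --nodes=1
#SBATCH --ntasks=16         # 16 cores, the queue minimum
#SBATCH --gres=gpu:2        # 2 GPUs, within the per-node limit of 4
#SBATCH --time=12:00:00     # within the 24-hour limit

srun ./my_gpu_application   # placeholder executable
```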