IMPORTANT: The information contained on this page is no longer up-to-date. Please refer to this page on our wiki, which shows the new queue configuration that will be rolled out in April.

These job classes are intended for general-purpose workloads, with an emphasis on high throughput for large numbers of relatively small jobs. As on CX1, all jobs should be submitted with a #PBS resource request of the form:

  • #PBS -lselect=N:ncpus=X:mem=Ygb
  • #PBS -lwalltime=HH:MM:SS
  • Where N is the number of nodes, and X and Y are the number of cores and the amount of memory per node, respectively.
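For example, a complete job script for a two-node job might look like the following sketch. The application name (my_mpi_app) is a placeholder, not a real binary; the arithmetic simply illustrates that N x X gives the total core count.

```shell
#!/bin/bash
# Sketch of a job script: 2 nodes, 24 cores and 100 GB of memory per node,
# for a maximum of 2 hours of walltime.
#PBS -lselect=2:ncpus=24:mem=100gb
#PBS -lwalltime=02:00:00

# N nodes x X cores per node gives the total core count for the job.
NODES=2
NCPUS=24
echo "Total cores requested: $((NODES * NCPUS))"

# mpiexec my_mpi_app   # the launch line would go here on the cluster
```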

Additionally, the optional parameters mpiprocs=Z and ompthreads=W may be added. These specify the number of MPI ranks per node and the number of OpenMP threads per MPI rank, and are only required for hybrid MPI/threaded jobs. The product Z x W must be less than or equal to the ncpus value X.
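The constraint on Z and W can be checked before submission. The sketch below uses example values matching a hypothetical request of the form #PBS -lselect=4:ncpus=24:mpiprocs=4:ompthreads=6 (the specific numbers are illustrative, not a recommendation):

```shell
# Check that a hybrid geometry is valid: mpiprocs * ompthreads <= ncpus.
NCPUS=24       # X: cores requested per node
MPIPROCS=4     # Z: MPI ranks per node
OMPTHREADS=6   # W: OpenMP threads per MPI rank

if [ $((MPIPROCS * OMPTHREADS)) -le "$NCPUS" ]; then
    echo "geometry ok: $MPIPROCS ranks x $OMPTHREADS threads fit in $NCPUS cores"
else
    echo "geometry invalid: Z x W exceeds ncpus"
fi
```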

The following job geometries are supported:

Job class    Number of nodes N   ncpus/node   Max mem/node   Max walltime   Max concurrently running jobs per user
short        1-18                24 or 48     100GB          2hr            2
large        2-72                24 or 48     100GB          48hr           3
capability   72-270              28 or 56     100GB          24hr           1

As on CX1, you should not submit to a specific queue directly. PBS will automatically route the job based on the resource request.

  • The short class is for testing jobs. 
  • The large class is for big parallel jobs. You should only run jobs of this size if you understand the parallel efficiency of your code.
  • The capability class is for the very largest parallel jobs. Only run here if you are sure you need this scale, generally as a precursor to using Archer.

Choosing the number of CPUs/node for large jobs

The large queue is backed by a mix of 24-core and 28-core nodes. We strongly recommend that you always specify ncpus=24. This gives PBS the most flexibility in scheduling the job, as it can be allocated to nodes of either core count. Although the additional 4 cores will be left idle on 28-core nodes, the job turnaround time is likely to be significantly improved through reduced queue time.

If you wish to always run on nodes of the same core count, use the cpumodel resource to indicate the requirement, e.g. ncpus=24:cpumodel=24 or ncpus=28:cpumodel=28. Note that being more prescriptive about placement is likely to increase queue time.
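Put together with the select and walltime directives from earlier, a request pinned to one core count might look like this sketch (the node count and walltime are arbitrary example values):

```shell
# Pin a 4-node job to 24-core nodes only; queue time may increase.
#PBS -lselect=4:ncpus=24:mem=100gb:cpumodel=24
#PBS -lwalltime=12:00:00
```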

Each node has 24 or 28 physical cores, depending on the CPU model. Each physical core presents as two logical cores. Using all logical cores may improve the performance of some applications, particularly hybrid MPI/threaded ones. Test by requesting ncpus=48 or ncpus=56 and tailoring mpiprocs and ompthreads as described above.
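As a sketch, a single-node request that uses all 48 logical cores of a 24-physical-core node could split them as 8 MPI ranks of 6 threads each (the rank/thread split is an example; the optimal geometry depends on your application and should be measured):

```shell
# Use all 48 logical cores on one 24-physical-core node: 8 ranks x 6 threads = 48.
#PBS -lselect=1:ncpus=48:mpiprocs=8:ompthreads=6
```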

Checking the availability of resources

Run the command availability for an instantaneous view of the resources available for immediate execution in each job class. Note that a job sized to fit these resources may still remain queued if the per-user concurrency limit has been reached.

Estimating job start times

The command qstat -w -T shows projected start times for queued jobs. These are estimates, not guarantees; use them for guidance only, as in some cases they may be quite inaccurate.