Getting Started
Please note: this service is intended primarily for supporting coursework and individual projects for taught programmes in the Department of Computing. Researchers and members of other departments may want to consult the Research Computing Services (RCS) for college-provided compute resources.
Update 27/9/2024: Ubuntu 24.04 upgrades are complete. Please create new 24.04-compatible Python virtual environments (links in the following steps) using a lab PC.
Introduction
What is Slurm and the GPU Cluster?
Slurm is an open-source job scheduling system for Linux that manages compute resources - in this case, the department's GPU resources.
Using Slurm commands such as 'sbatch' and 'salloc', your scripts (for example, CUDA-based parallel computing for deep learning, machine learning and large language models (LLMs), using frameworks such as PyTorch, TensorFlow or JAX, among others) are executed on our pool of NVIDIA GPU Linux servers.
Read this guide to learn how to:
- connect to the submission host server and submit a test script
- start an interactive job (connect directly to a GPU exclusively for a time limit)
- compose a shell script that uses shared storage, a Python environment, CUDA and your Python scripts
Before you start
Some familiarity with Department of Computing systems is desirable before using the GPU cluster.
Tip: make sure you have tested your Python scripts on your own device or a DoC lab PC with a GPU before using the GPU cluster. Prior testing will help flag errors with your scripts before using sbatch.
Step by step
- 1a. Quick Start (submit from a DoC Lab PC)
- 1b. Quick Start (externally from a personal device)
- 1c. Quick Start (interactive shell using 'salloc')
- 2. Store your datasets under /vol/bitbucket
- 3. Creation of a Python virtual environment for your project (example)
- 4. Using CUDA (add to a script)
- 5. Example submission script
- 6. Connect to a submission host to send jobs
- 6b. GPU types
- Frequently Asked Questions
Open a Terminal window from a lab PC (Ubuntu/macOS; on Windows 10/11 use PowerShell's built-in ssh or WSL/WSL2), and type the following commands:
ssh gpucluster2.doc.ic.ac.uk
# or ssh gpucluster3.doc.ic.ac.uk
sbatch /vol/bitbucket/shared/slurmseg.sh
In this example, a user first logs into a Slurm submission host server (gpucluster2.doc.ic.ac.uk via ssh) and then submits a pre-existing script using
the sbatch command. The output will be stored, by default, in your ~/ home directory (the directory from which sbatch was invoked), with a filename of the form slurm-XYZ.out, where XYZ is the Slurm job number.
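Once the job is submitted, you can check its status and view the output file when it appears (a minimal sketch; the job number 12345 is a placeholder):
squeue --me              # check the status of your submitted job
ls ~/slurm-*.out         # output files appear here once the job runs
cat ~/slurm-12345.out    # replace 12345 with your own job number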
If you have a bash script ready, replace /vol/bitbucket/shared/slurmseg.sh with the path to your own script.
Follow the next steps to learn how to prepare your own scripts for submission.
If connecting from your own computer or device, make sure you specify your College Username, for example:
ssh YourCollegeUserName@gpucluster2.doc.ic.ac.uk sbatch /vol/bitbucket/shared/slurmseg.sh
gpucluster2.doc.ic.ac.uk and gpucluster3.doc.ic.ac.uk are now accessible from outside the college network, but if for some reason they are not accessible, use shell[1-5].doc.ic.ac.uk as a JumpHost:
ssh -J YourCollegeUserName@shell5.doc.ic.ac.uk YourCollegeUserName@gpucluster2.doc.ic.ac.uk
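Alternatively, you can make the jump persistent with an entry in your ~/.ssh/config on your own device (a sketch; the Host alias is arbitrary and the usernames are placeholders):
Host gpucluster2
    HostName gpucluster2.doc.ic.ac.uk
    User YourCollegeUserName
    ProxyJump YourCollegeUserName@shell5.doc.ic.ac.uk
With this in place, 'ssh gpucluster2' alone will route through the jump host.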
This 'interactive' method allows you to work as if you were using a terminal prompt on a Lab PC with a GPU (for a maximum of four days).
Connect to gpucluster2 or gpucluster3 from a Lab PC, or externally from your own device, then use 'salloc' to request your CPU, RAM and GPU resources:
ssh YourCollegeUserName@gpucluster2.doc.ic.ac.uk
salloc --gres=gpu:1
salloc will drop you straight into your allocated node, as indicated by the shell prompt, e.g. myaccount@cloud-vm-47-197.
Run 'nvidia-smi' to show your allocated GPU. You can now commence writing your scripts and debugging with an Nvidia GPU.
In addition, you can ssh directly to the node hosting your GPU, as long as your job is still running in the queue:
squeue --me   # note your node in the NODELIST column
Make a note of your node from the NODELIST column; the user in this case would type: ssh username@cloud-vm-47-199.doc.ic.ac.uk. You can also connect directly using IDEs such as VSCode - remember to run salloc first and find your node name. If reconnecting, make sure you ssh to gpucluster2/3 first and then ssh to your allocated node (or connect directly from a lab PC, or via VPN).
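For example, to reconnect to the node allocated by a still-running salloc job (the node name below is illustrative; take yours from the output of squeue --me):
ssh YourCollegeUserName@gpucluster2.doc.ic.ac.uk
ssh cloud-vm-47-199.doc.ic.ac.uk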
There is a department-wide network share /vol/bitbucket for data and virtual environment storage. Create your personal folder as follows:
mkdir -p /vol/bitbucket/${USER}
Read the detailed Python Virtual Environments guide for best practice in using /vol/bitbucket and creating virtual environments.
Tip: shared folders such as /vol/bitbucket or your home directory /homes/username are vital to get your scripts running on remote GPU cluster nodes. On your own laptop or computer, you would store files on local storage but for the GPU cluster, make sure you copy all necessary files to shared storage, so your scripts can access files regardless of which server they are running from.
Here are some examples of how you might use /vol/bitbucket in the course of a GPU cluster project.
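For instance, you might copy a project folder from your own device to shared storage so that GPU nodes can see it (a sketch; the paths and 'myproject' name are placeholders):
rsync -av ./myproject/ YourCollegeUserName@shell1.doc.ic.ac.uk:/vol/bitbucket/YourCollegeUserName/myproject/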
Please note: use a lab PC to prepare your Python environment; avoid running 'pip' or 'git' commands when logged in to gpucluster2.doc.ic.ac.uk or gpucluster3.doc.ic.ac.uk, or you may encounter 'out of space' errors.
Installation of Python Virtual Environment:
# connect to a random lab PC - remember to use a lab PC to create envs and to run pip and git
ssh shell1.doc.ic.ac.uk
/vol/linux/bin/sshtolab
cd /vol/bitbucket/${USER}
python3 -m virtualenv /vol/bitbucket/${USER}/myvenv
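Once created, activate the environment and install your project's packages, still on a lab PC ('torch' below is only an example package):
source /vol/bitbucket/${USER}/myvenv/bin/activate
pip install torch   # example only - install whatever your project needs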
Again, consult the Python Virtual Environment guide for more about managing virtual environments in your account.
A read-only 'base' environment with PyTorch and TensorFlow pre-installed via 'pip' is available at /vol/bitbucket/starter; it may suffice when first submitting jobs. Enable it in scripts using 'source /vol/bitbucket/starter/bin/activate'.
Follow the previous steps when you need to create an environment using your specific required pip/conda packages.
Most GPU jobs will make use of the Nvidia CUDA tool-kit. Multiple versions of this tool-kit are available under /vol/cuda (a network share), with numbered sub-directories for the different versions. If you need to use CUDA, please consult the README under any one of those directories.
Suppose that you want to use CUDA tool-kit version 12.0.0; add the following line(s) to your submission script:
If your shell is bash; note the initial dot-space (.␣)
. /vol/cuda/12.0.0/setup.sh
OR if your shell is (t)csh
source /vol/cuda/12.0.0/setup.csh
The script will set up your Unix PATH so you can access commands such as nvcc.
If you are using frameworks such as TensorFlow, PyTorch or Caffe, make sure you have chosen a compatible version of the Nvidia CUDA tool-kit. For example, PyTorch comes in CPU and GPU flavours, and is built against specific CUDA versions - sourcing the matching CUDA distribution from /vol/cuda will help reduce errors in your output.
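As a quick sanity check after sourcing a CUDA setup script on a GPU node or lab PC (this assumes PyTorch is installed in your active environment):
nvcc --version   # confirm the CUDA tool-kit is on your PATH
python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"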
Here is a template you can copy to a shell script to get started. Please adjust any paths that may point to folders you have created.
IMPORTANT: This example assumes you have followed the previous steps and installed a Python environment (using virtualenv; extra lines may be needed when using miniconda, check the example script further below) as directed. Please adjust paths if you have an existing Python environment, or if you already load your environment in ~/.bashrc (note: sbatch does not load ~/.bashrc, so source it as per the example script). Keep the #SBATCH lines exactly as below - they are directives, not comments, so do not remove the leading # - and make sure they come directly after #!/bin/bash.
#!/bin/bash
#SBATCH --gres=gpu:1
#SBATCH --mail-type=ALL # required to send email notifications
#SBATCH --mail-user=<your_username> # required to send email notifications - please replace <your_username> with your college login name or email address
export PATH=/vol/bitbucket/${USER}/myvenv/bin/:$PATH
# the above path could also point to a miniconda install
# if using miniconda, uncomment the below line
# source ~/.bashrc
source activate
source /vol/cuda/12.0.0/setup.sh
/usr/bin/nvidia-smi
uptime
Remember to make your script executable (run this command in a shell, do not include it in your script):
chmod +x <script_name>.sh
Please note: environment variables from ~/.bashrc or ~/.cshrc are not loaded by sbatch-submitted scripts, so you should source them as in the preceding script. Your script can access your own home directory, your /vol/bitbucket folder and shared volumes such as /vol/cuda.
gpucluster2.doc.ic.ac.uk and gpucluster3.doc.ic.ac.uk are submission hosts for the GPU cluster, from where you run the sbatch command to send your scripts to the remote GPU host servers.
Here is an example of the steps involved in submitting your script as a Slurm job:
- Connect to a Slurm submission host (see step 1b for connecting from your own laptop):
ssh gpucluster2.doc.ic.ac.uk
# or ssh gpucluster3.doc.ic.ac.uk
- Change to an appropriate directory on the host:
# this directory may already exist after Step 3
mkdir -p /vol/bitbucket/${USER}
cd /vol/bitbucket/${USER}
Now try running an example job. A simple shell-script has been created for this purpose. You can view the file with less, more or view. You can use the sbatch command to submit that shell-script so it runs as a Slurm job on a GPU host:
sbatch /vol/bitbucket/shared/slurmseg.sh
If you have composed your own script in your bitbucket folder, for example, enter:
cd /vol/bitbucket/${USER}
sbatch /path_to_script/my_script.sh
Substitute your actual script path for '/path_to_script/my_script.sh'.
- You can invoke the squeue command to see information on running jobs:
squeue
The results of sbatch will output to the directory where the command was invoked, e.g. /vol/bitbucket/${USER}. The filenames will be derived from the invoked command or script, for example:
less slurm-XYZ.out
where XYZ is a unique Slurm job number. Visit the FAQ below to find out how to customise the job output name.
The GPU hosts (or nodes) each contain:
Partition name (taught/research) | GPU | CPU |
---|---|---|
gpgpu / resgpu | Tesla A40 48GB | AMD EPYC |
gpgpuB / resgpuB | Tesla A30 24GB | AMD EPYC |
gpgpuC / resgpuC | Tesla T4 16GB | Intel |
gpgpuD / resgpuD | Tesla T4 16GB | Intel |
gpgpuM | Tesla A100 10GB MIG devices | AMD EPYC 7/9 |
For example, to target a T4 GPU (taught students):
sbatch --partition gpgpuC /path/to/script.sh
*Research/PhD users are automatically assigned the 'resgpu' versions of the above, but these form a smaller pool; please make use of the college HPC cluster for more resources.
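To see which partitions exist and how busy their nodes are, you can run Slurm's standard sinfo command on a submission host:
sinfo   # lists partitions, node states and availability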
Please note: the submission hosts are not to be used for computation directly. Please do not attempt to SSH in and then run resource-intensive Python or similar processes on the submission hosts. The servers have only one role: accepting and scheduling job submissions.
Note in particular that the submission hosts do not have Nvidia CUDA-capable cards installed; they are virtual machines. This is deliberate. Do not be surprised if you SSH to the hosts to invoke a GPU script (without sbatch) and receive an error message.
Frequently Asked Questions
- What GPU cards are installed on the GPU hosts?
Answer: Nvidia Tesla A30 (24GB RAM split into 12GB instances), Tesla T4 (16GB RAM), Tesla A40 (48GB RAM) and Tesla A100 (80GB split into 10GB instances)
- What are the general platform characteristics of the GPU hosts?
Answer: 24-core/48-thread Intel Xeon CPUs with 256GB RAM and AMD EPYC 7702P 64-core CPUs
- How do I see what Slurm jobs are running?
Answer: invoke any one of the following commands on gpucluster:
# List all your current Slurm jobs in brief format
squeue
# List all your current Slurm jobs in extended format.
squeue -l
Please run man squeue on gpucluster for additional information.
- How do I delete a Slurm job?
Answer: First, run squeue to get the Slurm job ID from the JOBID column, then run:
scancel <job ID>
You can only delete your own Slurm jobs.
- How many GPU hosts are there?
Answer: As of July 2023, there are nine host GPU servers, with eight running DoC Cloud GPU nodes.
- How do I analyse a specific error in the Slurm output file/e-mail after running a Slurm job?
Answer: If the reason for the error is not apparent from your job's output, make a post on the Edstem CSG board, including all relevant information - for example:
- the context of the Slurm command that you are running. That is, what are you trying to achieve and how have you gone about achieving it? Have you created a Python virtual environment? Are you using a particular server or deep learning framework?
- the Slurm script/command that you have used to submit the job. Please include the full paths to the scripts if they live under /vol/bitbucket
- what you believe should be the expected output.
- the details of any error message displayed. You would be surprised at how many forget to include this.
- I receive no output from a Slurm job. How do I go about debugging that?
Answer: This is an open-ended question. Please first confirm that your Slurm job does indeed generate output when run interactively. You may be able to use one of the 'gpu01-36' interactive lab computers to perform an interactive test. If you still need assistance, please follow the advice in the preceding FAQ entry.
- How do I customise my job submission options?
Answer: Add a Slurm comment directive to your job script – for example:
# To request 1 or more GPUs (default is 1):
#SBATCH --gres=gpu:1
# To request a 48GB Tesla A40 GPU:
#SBATCH --partition gpgpu
# or 80GB A100 GPU
#SBATCH --partition AMD7-A100-T
# Please note, there are only a few 48GB/80GB GPUs available, interactive jobs are not permitted
# For other GPUs, refer to 6b. GPU types, including the research equivalents of the above
# To receive email notifications
#SBATCH --mail-type=ALL
#SBATCH --mail-user=<your_username>
#Customise job output name
#SBATCH --output=<your_job_name>%j.out
- How do I run a job interactively?
Answer: Use srun and specify a GPU and any other resources, e.g. for a bash shell:
srun --pty --gres=gpu:1 bash
Update: use 'salloc' as detailed in Step 1c.
- I need a particular software package to be installed on a GPU host.
Answer: Have you first tried installing the package in a Python virtual environment or in your own home directory with the command:
pip install --user <packagename>
If the above options do not work, then make a post on the Edstem CSG board with details of the package that you would like to be installed on the GPU server(s). Please note: CSG are only able to install standard Ubuntu packages, and only if doing so does not conflict with any existing package or functionality on all the GPU servers.
- My job is stuck in queued status, what does this mean?
Answer: This could be because all GPUs are in use. PD (pending) status also occurs if you are already running two jobs; your job will run (R) when one of your previous tasks completes. The reason (QOSMaxGRESPerUser) means you are already using your maximum of two GPUs at any one time.
- What are the CUDA compute capabilities for each GPU?
Please consult the NVIDIA Compatibility Index for more information.
The cluster GPUs support the following levels:
sm75 (T4), sm80 (A30, A100), sm86 (A40)
These should be considered when, for example, using older versions of PyTorch and receiving 'not supported' errors.
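You can check the compute capability of your allocated GPU from an interactive session (this assumes PyTorch is installed in your environment):
python3 -c "import torch; print(torch.cuda.get_device_capability())"   # e.g. (7, 5) for a T4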
General Comments
The following policies are in effect on the GPU Cluster:
- Users can have only two running jobs (taught students); all other jobs will be queued until one of the two running jobs completes.
- A job that runs for more than four days will be automatically terminated - this is a walltime restriction for taught students - so configure checkpoints with your Python framework to resume training.
- As with all departmental resources, any non-academic use of the GPU cluster is strictly prohibited.
- Any users who violate this policy will be banned from further usage of the cluster and will be reported to the appropriate departmental and college authorities.
ICT, the central college IT services provider, has approximately one hundred CX1 cluster nodes which have GPUs installed. It is possible to select and use these computational resources through PBS Pro job specifications.
Students cannot request access to this resource directly, but project supervisors can apply - on behalf of their students - for access to ICT GPU resources run by the Research Computing Service team.
Other resources
If you do not need a GPU for your computation then please do not use the GPU Cluster. You could end up inconveniencing users who do need a GPU. Please instead consider:
- The departmental DoC Condor service
- The departmental batch servers:
Long Running Processes guide
Long Running Processes PDF link