1-GPU Job Script for q_1day-1G (Wall time = 24 hrs)

#!/bin/sh
#SBATCH --job-name=serial_job_test    # Job name
#SBATCH --ntasks=1                    # Run on a single CPU
#SBATCH --time=24:00:00               # Time limit hrs:min:sec
#SBATCH --output=serial_test_job.out  # Standard output
#SBATCH --error=serial_test_job.err   # Standard error log
#SBATCH --gres=gpu:1                  # Request one GPU
#SBATCH --partition=q_1day-1G         # 1-GPU, 24-hour partition
pwd; hostname; date | tee result
# Launch the container on the GPU that Slurm assigned (exposed via CUDA_VISIBLE_DEVICES)
docker run -t --gpus '"device='$CUDA_VISIBLE_DEVICES'"' --name $SLURM_JOB_ID --ipc=host --shm-size=20G --user $(id -u $USER):$(id -g $USER) --rm -v /localscratch/<uid>:/workspace/localscratch/<uid> <preferred_docker_image name>:<tag> bash -c 'cd /workspace/localscratch/<uid>/<path to desired folder>/ && python <script to be run.py>' | tee -a log_out.txt

## An example of the above command looks like the following (do not include these two commented lines in your script):
##docker run -t --gpus '"device='$CUDA_VISIBLE_DEVICES'"' --name $SLURM_JOB_ID --ipc=host --shm-size=20G --user $(id -u $USER):$(id -g $USER) --rm -v /localscratch/secdsan:/workspace/localscratch/secdsan secdsan_cuda:latest bash -c 'cd /workspace/localscratch/secdsan/gputestfolder/ && python gputest.py' | tee -a log_out.txt
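
As an optional sanity check before running your real workload, the same --gpus device selection can be used to confirm that the container only sees the GPU Slurm assigned. The sketch below is an assumption-laden example, not part of the required script: it assumes nvidia-smi is available inside the container (the NVIDIA container toolkit, which --gpus already relies on, normally provides it), and the image name and tag remain placeholders as above.

    ## Optional smoke test (do not include in your job script):
    docker run -t --gpus '"device='$CUDA_VISIBLE_DEVICES'"' --rm <preferred_docker_image name>:<tag> nvidia-smi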

Job Submission Instructions:

  1. All jobs must be submitted via Slurm.
  2. If jobs are run without Slurm, your professor will be notified of your actions and your account will be blocked.
  3. Save the sbatch script shown above to a file, then submit the job with the command below.


    sbatch <SCRIPT NAME> 

     example: sbatch test_script.sh
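
Once a job is submitted, it can be tracked and, if necessary, cancelled with the standard Slurm client commands; <JOBID> below stands for the job ID that sbatch prints on submission.

    squeue -u $USER             # list your queued and running jobs
    scontrol show job <JOBID>   # detailed status of a single job
    scancel <JOBID>             # cancel a job that is no longer needed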