# Getting Started

Before you begin, make sure you have the following:

- Access to an HPC cluster with Slurm installed.
- Basic knowledge of Linux command-line operations.

## Slurm scheduler

The Simple Linux Utility for Resource Management (Slurm) is commonly used for job scheduling and resource management on high-performance computing (HPC) clusters. Slurm operates on the concepts of jobs, nodes, and partitions. Familiarize yourself with these key terms:

- **Job**: A computational task submitted to the cluster.
- **Node**: A computing resource that performs tasks as part of a job.
- **Partition**: A logical division of the cluster's resources.

## Connecting to the HPC Cluster

Use SSH to connect to the HPC cluster:

```
ssh username@hpc-cluster.example.com
```

Replace `username` with your username and `hpc-cluster.example.com` with the actual address of your HPC cluster.

## Submitting a Job

To submit a job, use the `sbatch` command followed by the script file:

```
sbatch script.sh
```

where `script.sh` is the name of your shell script. The script should include the commands necessary for running your analysis or simulation. The output provides a unique job ID (e.g., 12345678). You can view more information about your job with the `squeue` command; see the examples at the end of this section.

## Example of a Slurm script

```
#!/bin/bash
#SBATCH --job-name=my_mpi_job      # Job name
#SBATCH --partition=LocalQ         # Partition to submit to
#SBATCH --output=output.%j         # Output file name (%j expands to jobID)
#SBATCH --error=error.%j           # Error file name (%j expands to jobID)
#SBATCH --ntasks=4                 # Number of tasks (MPI processes)
#SBATCH --nodes=1                  # Number of nodes
#SBATCH --ntasks-per-node=4        # Number of tasks per node
#SBATCH --time=00:10:00            # Walltime (hh:mm:ss)

# Load available modules if you require any
# module load <module-name>

# Run your program here
python3 test.py
```

This is a basic example; you may need to customize it to your specific needs, such as adjusting resource requirements, module loading, and paths.

You can specify the following parameters in your Slurm script:

- **--job-name=**: Specifies a name for the job. This name is used to identify the job in the queue and in the output and error files.
- **--output=**: Specifies the file where the standard output of the job is written. You can use `%j` in the filename, and Slurm replaces it with the job ID.
- **--error=**: Specifies the file where the standard error of the job is written. Like `--output`, `%j` can be used in the filename.
- **--partition=**: Specifies the partition (queue) to which the job is submitted. Partitions group nodes with similar characteristics.
- **--ntasks=**: Specifies the total number of tasks (processes) to run. This is often used in parallel computing with MPI.
- **--nodes=**: Specifies the number of nodes requested for the job. If not specified, Slurm may allocate the tasks across nodes as needed.
- **--ntasks-per-node=**: Specifies the number of tasks (processes) to run per node.
- **--cpus-per-task=**: Specifies the number of CPU cores to allocate per task. This can be used to control the number of threads per task.
- **--time=**: Specifies the maximum walltime for the job in `hh:mm:ss` format; the job is terminated once this limit is reached.
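To illustrate how `--cpus-per-task` combines with the directives above, here is a minimal sketch of a script for a single multi-threaded task. The partition name, walltime, core count, and program name (`my_threaded_app`) are placeholders; replace them with values appropriate for your cluster and application.

```
#!/bin/bash
#SBATCH --job-name=threaded_job    # Job name
#SBATCH --partition=LocalQ         # Placeholder partition; use one available on your cluster
#SBATCH --output=output.%j         # Output file name (%j expands to jobID)
#SBATCH --error=error.%j           # Error file name (%j expands to jobID)
#SBATCH --nodes=1                  # One node
#SBATCH --ntasks=1                 # A single task (process)
#SBATCH --cpus-per-task=8          # CPU cores allocated to that task's threads
#SBATCH --time=00:30:00            # Walltime (hh:mm:ss)

# Tell an OpenMP-style program how many threads to use,
# matching the cores Slurm allocated to this task.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Placeholder executable; replace with your own program.
./my_threaded_app
```

Submit it the same way as before, with `sbatch threaded_job.sh`.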
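## Checking job status

Once a job has been submitted, you can check it with `squeue`. The job ID below is the illustrative example from earlier; substitute the ID that `sbatch` returned for your job.

```
# Show only your own jobs ($USER holds your username)
squeue -u $USER

# Show a specific job by its ID
squeue --job 12345678
```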