For this explanation, the tutorial pipeCyclic will be run using multiple processors on the Idun cluster.
The tutorial can be copied to your working directory using:
$ cp -r $FOAM_TUTORIALS/incompressible/simpleFoam/pipeCyclic/ .
Logging on Idun
The following command logs you into the Idun cluster. The username is the same as for your NTNU email, i.e. <username>@stud.ntnu.no
$ ssh <username>@idun-login1.hpc.ntnu.no
Use your FEIDE password when prompted. To leave the cluster and return to your local computer, use 'exit'.
$ exit
Transferring files
To upload files to the cluster, run the 'scp' command from your local terminal. Below, pipeCyclic is copied to the user's home directory on Idun. Note that it is copied, not moved. Also note the colon followed by the full stop at the end (':.').
$ scp -r pipeCyclic <username>@idun.hpc.ntnu.no:.
Using the script provided in the "Job script" section below, the case can be run. Once this is done, exit the cluster and from your local terminal run the following commands to copy the results back to your machine.
The pipeCyclic case that was originally copied to Idun is renamed to pipeCyclic_orig to avoid having two folders with the same name. Note the ' .' at the end of the second command.
$ mv pipeCyclic pipeCyclic_orig
$ scp -r <username>@idun.hpc.ntnu.no:./pipeCyclic .
Make sure you are on your local computer (not logged into the cluster) when transferring files using the scp commands as they are written above.
Job script
Here is a script that can be used to execute an OpenFOAM solver using multiple processors on Idun.
To create a new script, the text editor nano can be used. Make sure to create/save it in the case directory you want to run.
$ nano myJob.slurm
It is important to save the .slurm file in the case directory that is to be run. In this example, that means alongside the '0', 'constant' and 'system' folders of pipeCyclic.
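As a quick way to confirm the layout, the sketch below recreates the expected case structure in a scratch folder (the mkdir/touch lines are hypothetical stand-ins for the real files) and lists it; on Idun you would simply run the final ls inside your actual case directory.

```shell
# Hypothetical stand-in for the real case: recreate the expected layout
# so the listing below shows where myJob.slurm should sit.
mkdir -p pipeCyclic/0 pipeCyclic/constant pipeCyclic/system
touch pipeCyclic/myJob.slurm

# The job script belongs next to the standard OpenFOAM case folders:
ls pipeCyclic    # lists: 0  constant  myJob.slurm  system
```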
Copy the following commands into your .slurm script.
Take care to read the comments next to each line and make the appropriate changes wherever there is a *. This sets up the case the same way the author has; it is of course possible to change any of the SBATCH options.
#!/bin/sh
#SBATCH --partition=CPUQ
#SBATCH --account=<username>                 # * <username>@stud.ntnu.no
#SBATCH --time=00-00:10:00                   # * upper time limit for the job (DD-HH:MM:SS). MAX 7 days (=167h) on Idun
#SBATCH --nodes=1                            # 1 compute node
#SBATCH -c 28                                # * number of CPUs as specified in your decomposeParDict
##SBATCH --ntasks-per-node=1                 # 1 MPI process on each node
#SBATCH --mem=12000                          # 12 GB - in megabytes
#SBATCH --job-name="pipeCyclic"              # * name of your case
#SBATCH --output=srun_pipeCyclic.out         # * change name according to case
#SBATCH --mail-user=<username>@stud.ntnu.no  # * student email
#SBATCH --mail-type=ALL                      # email notifications for all job events (begin, end, fail)

module purge
module load OpenFOAM/8-foss-2020a            ## * make sure to load the correct OpenFOAM version (more info below)
source $FOAM_BASH

blockMesh
decomposePar
mpirun --oversubscribe -np 28 simpleFoam -parallel >log_solve  ## * change "28" to the number of CPUs and "simpleFoam" to the desired solver
reconstructPar
To check the exact syntax of the OpenFOAM version you wish to load, use the following command from your home directory:
$ module spider openfoam
This will list the versions of OpenFOAM that exist on Idun. Copy the exact text of the version you want into the 'module load' line of 'myJob.slurm'.
Running the script
To run the case, first make sure the number of processors specified in decomposeParDict matches the number of CPUs (-c) specified in the .slurm file, changing one or the other so they agree. Then submit the script to the Slurm scheduler:
$ sbatch myJob.slurm
Slurm reads the #SBATCH lines itself, so the script does not need to be made executable or run directly; executing it with './' would run everything on the login node instead of on a compute node.
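The processor-count check can be scripted. The sketch below creates two hypothetical stand-in files (your real decomposeParDict lives in system/, and the awk patterns assume the usual one-entry-per-line formatting) and compares the two counts:

```shell
# Sketch: compare the CPU counts in decomposeParDict and the .slurm script.
# These two files are minimal stand-ins for the real case files.
cat > decomposeParDict <<'EOF'
numberOfSubdomains 28;
method scotch;
EOF
cat > myJob.slurm <<'EOF'
#SBATCH -c 28
EOF

# Pull out the two numbers and check they agree.
nsub=$(awk '/numberOfSubdomains/ {gsub(";",""); print $2}' decomposeParDict)
ncpu=$(awk '$1 == "#SBATCH" && $2 == "-c" {print $3}' myJob.slurm)

if [ "$nsub" = "$ncpu" ]; then
    echo "OK: both use $nsub processors"
else
    echo "Mismatch: decomposeParDict=$nsub, slurm=$ncpu"
fi
```

On the cluster itself, squeue -u <username> (a standard Slurm command) then shows whether the submitted job is pending or running.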
Decide whether you need to run blockMesh. If you have run the case before and have only changed non-mesh-related parameters, there is no need to run blockMesh again.
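If you decide to skip meshing, the simplest approach is to comment out the blockMesh line in the run section of myJob.slurm. A sketch of that section, assuming the mesh from the previous run is still present in constant/polyMesh:

```shell
# blockMesh                                  # mesh already exists from a previous run
decomposePar
mpirun --oversubscribe -np 28 simpleFoam -parallel >log_solve
reconstructPar
```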