Beginners' Guide
You can access the cluster using SSH. The default shell is Bash. For a detailed introduction to the shell and other topics, please see the Software Carpentry project.
User ID
Login
To log in, you have to authenticate yourself on the master node with SSH at cluster.wr.informatik.uni-hamburg.de.
On *nix-like operating systems you can simply open a shell and use the following command (where <name> is your user name on the cluster):
ssh <name>@cluster.wr.informatik.uni-hamburg.de
To speed things up you can add the following entry to your SSH configuration in $HOME/.ssh/config and then connect via ssh wr-cluster:
Host wr-cluster
    HostName cluster.wr.informatik.uni-hamburg.de
    User <name>
To use GUI applications, you need X forwarding. Alternatively, you can use X2Go, which offers better performance.
ssh -X <name>@cluster.wr.informatik.uni-hamburg.de
To view PDFs use zathura:
zathura your_file.pdf
If you are running Windows, it is recommended to use a graphical SSH client such as PuTTY. To run GUI applications you also need an X server for Windows, for example Xming. Do not forget to enable X11 forwarding.
To transfer data between the cluster and your local Windows system, use WinSCP.
Warning: If you try to log in too often within two minutes, your login will be blocked for two minutes. If you try to log in again within these two minutes, the block is extended automatically.
Changing the Password
Your password should be changed as soon as possible.
To change your password, run the command passwd on the cluster.
passwd
Public Key Authentication
If you do not want to type your password every time you log in, you can generate an SSH key on your local computer using ssh-keygen.
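For example, to generate an Ed25519 key pair (the key type here is just one common choice; ssh-keygen will ask where to store the key):
ssh-keygen -t ed25519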
The public key now has to be copied to the cluster.
This can be achieved with the following command:
ssh-copy-id <name>@cluster.wr.informatik.uni-hamburg.de
If you chose to store your key at a different location, you have to adjust the path with the -i option.
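For example, if your public key is stored somewhere else (the path below is only a placeholder):
ssh-copy-id -i /path/to/your/key.pub <name>@cluster.wr.informatik.uni-hamburg.de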
Copying Files
For copying files you can use the command line tool scp.
To copy files from your computer to the cluster, execute the following command on your local computer:
scp /path/to/local/file <name>@cluster.wr.informatik.uni-hamburg.de:/path/to/remote/file
If you want to copy files from the cluster to your local computer, just switch the order of the arguments:
scp <name>@cluster.wr.informatik.uni-hamburg.de:/path/to/remote/file /path/to/local/file
If you want to copy a folder, use scp -r (recursive) to copy the folder including its contents.
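For example (both paths are placeholders):
scp -r /path/to/local/folder <name>@cluster.wr.informatik.uni-hamburg.de:/path/to/remote/folder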
Another way is to mount your cluster home folder on your local computer. This is possible with sshfs. The advantage of this solution is that you can edit the files directly on your local computer and they are saved on the cluster.
To mount the remote home folder on your local computer run:
sshfs -o reconnect -o workaround=rename <name>@cluster.wr.informatik.uni-hamburg.de:/home/user /some/mount/point
Normally you will want to mount your cluster home folder in /media or a subdirectory of it.
To unmount the sshfs share simply run:
fusermount -u /some/mount/point
Job Management
A jobscript for MPI applications (mpi.slurm):
#!/bin/bash
# Time limit is one minute. See "man sbatch" for other time formats.
#SBATCH --time=1
# Run a total of ten tasks on two nodes (that is, five tasks per node).
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=5
#SBATCH --ntasks=10
# Use "west" partition.
#SBATCH --partition=west
# Output goes to "job.out", error messages to "job.err".
#SBATCH --output=job.out
#SBATCH --error=job.err

srun hostname
mpiexec ./mpi-application
To run the job execute:
sbatch mpi.slurm
To cancel or delete a job execute:
scancel <jobid>
Display accounting data for all jobs in the accounting log:
sacct
Display information about jobs, partitions etc. in a graphical view:
smap
A graphical user interface to view and modify your jobs:
sview
Information about SLURM nodes and partitions:
sinfo
To allocate a node for interactive usage (this is especially useful for magny1):
$ salloc -p magny -N 1
salloc: Granted job allocation XYZ
$ srun hostname
$ mpiexec ./mpi-application
$ exit
salloc: Relinquishing job allocation XYZ
salloc: Job allocation XYZ has been revoked.
Further information about SLURM, its architecture and its commands is available in the Slurm User Manual; the Official Documentation provides more extensive information. In addition, see the man pages of the individual commands.
Compiling
POSIX Threads
To compile a program with POSIX threads, use the -pthread option.
gcc -pthread pthread-test.c
Notice: Include the POSIX threads header file (pthread.h).
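For reference, a minimal sketch of what pthread-test.c might contain (the program is purely illustrative):

#include <pthread.h>
#include <stdio.h>

/* Thread function: prints a message and returns. */
static void *worker(void *arg)
{
    (void)arg; /* unused */
    printf("Hello from the worker thread\n");
    return NULL;
}

int main(void)
{
    pthread_t thread;

    /* Start one thread and wait for it to finish. */
    pthread_create(&thread, NULL, worker, NULL);
    pthread_join(thread, NULL);
    return 0;
}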
OpenMP
To compile a program that uses OpenMP, just add the -fopenmp option.
gcc -fopenmp openmp-test.c
Notice: Include the OpenMP header file (omp.h).
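A minimal sketch of what openmp-test.c might contain (purely illustrative):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Every thread in the parallel region prints its own ID. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}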
MPI
To compile a program with MPI you need to use the MPI compiler.
mpicc mpi-test.c
Notice: Include the MPI header file (mpi.h).
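A minimal sketch of what mpi-test.c might contain (purely illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}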
If you use MPI and OpenMP, just add the -fopenmp option.
mpicc -fopenmp mpi-omp-test.c
Notice: Include the OpenMP and MPI header files (omp.h and mpi.h).
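A minimal sketch of what mpi-omp-test.c might contain (purely illustrative):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI process opens its own OpenMP parallel region. */
    #pragma omp parallel
    {
        printf("Rank %d, thread %d\n", rank, omp_get_thread_num());
    }

    MPI_Finalize();
    return 0;
}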