====== Beginners' Guide ======

You can access the cluster using SSH. The default shell is Bash. For a detailed introduction to the shell and other topics, please see the [[https://software-carpentry.org/|Software Carpentry]] project.

===== User ID =====

==== Login ====

To log in you have to authenticate yourself on the master node with SSH at ''cluster.wr.informatik.uni-hamburg.de''. On *nix-like operating systems you can simply open a shell and use the following command (where ''<username>'' is your user name on the cluster):

<code bash>
ssh <username>@cluster.wr.informatik.uni-hamburg.de
</code>

To speed things up you can add the following entry to your SSH config in ''$HOME/.ssh/config'' and connect via ''ssh wr-cluster'':

<code>
Host wr-cluster
    HostName cluster.wr.informatik.uni-hamburg.de
    User <username>
</code>

To use GUI applications, you need X forwarding:

<code bash>
ssh -X <username>@cluster.wr.informatik.uni-hamburg.de
</code>

Alternatively, you can use [[http://wiki.x2go.org/|X2Go]], which offers better performance.

To view PDFs use zathura:

<code bash>
zathura your_file.pdf
</code>

If you are running Windows, it is recommended to use a graphical SSH client like [[http://www.chiark.greenend.org.uk/~sgtatham/putty/|PuTTY]]. A widely used X server for Windows is [[http://sourceforge.net/projects/xming/|Xming]]. Do not forget to enable [[http://www.ollis-place.de/tmp2/2009/02/putty.jpg|X11 forwarding]]. To transfer data between the cluster and your local Windows system use [[http://winscp.net/eng/docs/introduction|WinSCP]].

Warning: If you try to log in too often within two minutes, your login gets blocked for two minutes. Should you try to log in again within these two minutes, the block will be extended automatically.

==== Changing the Password ====

Your password should be changed as soon as possible. To change your password, run the command ''passwd'' on the cluster:

<code bash>
passwd
</code>

==== Public Key Authentication ====

If you do not want to type your password every time you log in, you can generate an SSH key on your local computer using ''ssh-keygen''. The public key then has to be copied to the cluster. This can be achieved with the following command:

<code bash>
ssh-copy-id <username>@cluster.wr.informatik.uni-hamburg.de
</code>

If you chose to store your key at a different location, you have to adjust the path with ''-i'', as sketched below.
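The following is a minimal sketch of that workflow with a key stored at a non-default path. The key type and the file name ''~/.ssh/wr-cluster-key'' are only example choices, not requirements of the cluster:

<code bash>
# Generate a new key pair on your local computer; the path is an arbitrary example.
ssh-keygen -t ed25519 -f ~/.ssh/wr-cluster-key

# Copy the matching public key to the cluster (note the explicit -i path).
ssh-copy-id -i ~/.ssh/wr-cluster-key.pub <username>@cluster.wr.informatik.uni-hamburg.de
</code>

If you use the ''Host wr-cluster'' entry from above, you can additionally add the line ''IdentityFile ~/.ssh/wr-cluster-key'' to that entry so that SSH picks the key automatically.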
==== Copying Files ====

For copying files you can use the command line tool [[http://linux.die.net/man/1/scp|scp]]. To copy files from your computer to the cluster, execute the following command on your local computer:

<code bash>
scp /path/to/local/file <username>@cluster.wr.informatik.uni-hamburg.de:/path/to/remote/file
</code>

If you want to copy files from the cluster to your local computer, you just switch the order of the arguments:

<code bash>
scp <username>@cluster.wr.informatik.uni-hamburg.de:/path/to/remote/file /path/to/local/file
</code>

If you want to copy a folder, use ''scp -r'' (recursive) to copy the folder including its contents.

Another way is to mount your cluster home folder on your local computer. This is possible with [[http://fuse.sourceforge.net/sshfs.html|sshfs]]. The advantage of this solution is that you can edit the files directly on your local computer and they are saved on the cluster. To mount the remote home folder on your local computer run:

<code bash>
sshfs -o reconnect -o workaround=rename <username>@cluster.wr.informatik.uni-hamburg.de:/home/<username> /some/mount/point
</code>

Normally you will want to mount your cluster home folder in ''/media'' or a subdirectory of it. To unmount the sshfs share simply run:

<code bash>
fusermount -u /some/mount/point
</code>

===== Job Management =====

A jobscript for MPI applications (''mpi.slurm''):

<code bash>
#!/bin/bash
# Time limit is one minute. See "man sbatch" for other time formats.
#SBATCH --time=1
# Run a total of ten tasks on two nodes (that is, five tasks per node).
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=5
#SBATCH --ntasks=10
# Use "west" partition.
#SBATCH --partition=west
# Output goes to "job.out", error messages to "job.err".
#SBATCH --output=job.out
#SBATCH --error=job.err

srun hostname
mpiexec ./mpi-application
</code>

To submit the job execute:

<code bash>
sbatch mpi.slurm
</code>

To cancel or delete a job execute:

<code bash>
scancel <jobid>
</code>

Other useful SLURM commands:

  * ''sacct'': display accounting data for all jobs in a log
  * ''smap'': display information about jobs, partitions etc. in a graphical view
  * ''sview'': a graphical user interface to view and modify your jobs
  * ''sinfo'': information about SLURM nodes and partitions

To allocate a node for interactive usage (this is especially useful for ''magny1''):

<code bash>
$ salloc -p magny -N 1
salloc: Granted job allocation XYZ
$ srun hostname
$ mpiexec ./mpi-application
$ exit
salloc: Relinquishing job allocation XYZ
salloc: Job allocation XYZ has been revoked.
</code>

Further information about SLURM, its architecture and its commands is available in this [[https://hpc.llnl.gov/banks-jobs/running-jobs/slurm-user-manual|Slurm User Manual]]; the [[https://slurm.schedmd.com/documentation.html|Official Documentation]] provides more extensive information. In addition, see the man pages of the individual commands.

===== Compiling =====

==== POSIX Threads ====

To compile a program with POSIX threads use the ''-pthread'' option.

<code bash>
gcc -pthread pthread-test.c
</code>

Notice: Include the POSIX threads header file (''pthread.h'').

==== OpenMP ====

To compile a program that uses OpenMP just add the ''-fopenmp'' option.

<code bash>
gcc -fopenmp openmp-test.c
</code>

Notice: Include the OpenMP header file (''omp.h'').

==== MPI ====

To compile a program with MPI you need to use the MPI compiler wrapper.

<code bash>
mpicc mpi-test.c
</code>

Notice: Include the MPI header file (''mpi.h'').

If you use both MPI and OpenMP, just add the ''-fopenmp'' option.

<code bash>
mpicc -fopenmp mpi-omp-test.c
</code>

Notice: Include the OpenMP and MPI header files (''omp.h'' and ''mpi.h'').
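To illustrate how the compile step and a jobscript fit together, here is a rough sketch for an OpenMP program, assuming it was compiled with ''gcc -fopenmp -o openmp-test openmp-test.c''. The file names, the core count and the choice of the ''west'' partition are only example values:

<code bash>
#!/bin/bash
# Example jobscript "openmp.slurm": one task on one node,
# with five CPU cores reserved for OpenMP threads.
#SBATCH --time=1
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=5
#SBATCH --partition=west
#SBATCH --output=job.out
#SBATCH --error=job.err

# Use exactly as many OpenMP threads as cores were allocated by SLURM.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./openmp-test
</code>

As with the MPI example, submit the script with ''sbatch openmp.slurm'' and check ''job.out'' and ''job.err'' for the results.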