
HPC

HPC SLURM
  • How to run Snakemake - docs.s3it.uzh.ch/how-to_articles/how_to_run_snakemake/
  • Running Snakemake workflow on Puhti - Docs CSC
  • Can I add more CPUs to my job? - SCG

Snakemake

Snakemake for Bioinformatics: Cleaning up

  • Snakemake re-evaluates the DAG after the successful execution of every job spawned from a checkpoint.
  • Use --touch to mark existing outputs as up to date when we know the results are good and don't want to waste time re-making them.
  • In shadow mode, Snakemake links the input files into a temporary working directory, runs the shell command there, and finally copies the outputs back. Temporary files that are only used within the job are discarded along with the shadow directory.
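As a sketch of the shadow behaviour described above (rule, file, and tool names are hypothetical), a rule that isolates its scratch files might look like:

```snakemake
# hypothetical rule: the tool writes scratch files next to its output
rule align:
    input:
        "reads/{sample}.fastq"
    output:
        "aligned/{sample}.bam"
    shadow:
        "minimal"  # inputs are symlinked into a temp dir; only declared outputs are copied back
    shell:
        "aligner {input} -o {output}"  # stray temp files vanish with the shadow directory
```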

tmux

A beginner's guide to tmux

  • Basic commands
    # list active sessions
    tmux ls
    # reattach to the existing session
    tmux attach -t 0
    # start a detached session running a script, logging its output to a file
    tmux new -d 'script.sh |& tee log'
    
  • Keybinds
    - Ctrl + B D to detach
    - Ctrl + B % to split the window horizontally (left/right panes)
    - Ctrl + B " to split the window vertically (top/bottom panes)
    - Ctrl + B X to close the current pane
  • Mouse mode - Ctrl + B : , then enter set -g mouse on
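To make mouse mode permanent rather than per-session, the same option can go in the tmux config file (a minimal sketch of ~/.tmux.conf):

```
# ~/.tmux.conf - enable mouse support for pane selection, resizing, and scrolling
set -g mouse on
```

Reload it in a running server with tmux source-file ~/.tmux.conf (or Ctrl + B : , then source-file ~/.tmux.conf).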

Disk IO

How to Monitor Disk IO in a Linux System | Baeldung on Linux

  • Profile disk I/O
# report I/O and transfer rate statistics using -b
# report activity for each block device using -d
# print device names in a readable form using -p
sar -p -d -b 1

# obtain statistics about a partition
vmstat -p /dev/sda2 1
  • Identify the process behind the bottleneck
  • iotop requires root privileges
# show only processes or threads actually performing I/O using -o
# display processes instead of threads using -P
sudo iotop -oP
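The counters that sar and iotop report ultimately come from the kernel; as a rough sketch (the function name is my own), the cumulative per-device totals can be read straight from /proc/diskstats:

```python
def parse_diskstats(text):
    """Parse /proc/diskstats content into {device: (sectors_read, sectors_written)}.

    Fields per line (see the kernel's iostats documentation):
    major minor device reads_completed reads_merged sectors_read ms_reading
    writes_completed writes_merged sectors_written ...
    """
    stats = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 10:
            continue  # skip malformed or truncated lines
        device = fields[2]
        stats[device] = (int(fields[5]), int(fields[9]))
    return stats

# Example on one sample line in /proc/diskstats format:
sample = "   8       0 sda 100 5 2048 30 50 2 4096 40 0 60 70"
print(parse_diskstats(sample))  # {'sda': (2048, 4096)}
```

On a live Linux system, feed it open("/proc/diskstats").read(); the sector counts are in 512-byte units, so multiply by 512 for bytes.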

Using PyTorch on Legacy GPUs

  • Install PyTorch with the correct version of CUDA bundled.
    - Previous PyTorch Versions
    - However, a matching CUDA version does not necessarily mean the architecture is supported: Kepler GPUs support CUDA up to 11.4, but their compute capability tops out at 3.5, which is no longer supported after PyTorch 1.12 (presumably).
    - In this case, one has to build PyTorch from source for the required arch.
  • Build PyTorch for sm_35 on your own.
    - Check whether the compute capability of your GPU is included in torch.cuda.get_arch_list()
    - Build for your architecture: GitHub - nelson-liu/pytorch-manylinux-binaries
  • Disable the cuDNN backend in case of CUDNN_STATUS_MAPPING_ERROR - torch.backends.cudnn.enabled = False
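The arch-list check above can be sketched as a small helper (the function name and logic are my own, not a PyTorch API). It takes the list that torch.cuda.get_arch_list() returns and the (major, minor) tuple from torch.cuda.get_device_capability():

```python
def arch_supported(arch_list, capability):
    """Return True if a compute capability (major, minor) has a matching
    sm_XY entry in the build's arch list.

    arch_list mimics torch.cuda.get_arch_list(), e.g. ['sm_50', 'sm_60', ...];
    capability mimics torch.cuda.get_device_capability(), e.g. (3, 5).
    """
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list

# A Kepler card (compute capability 3.5) against a modern wheel's arch list:
print(arch_supported(["sm_50", "sm_60", "sm_70", "sm_80"], (3, 5)))  # False
# ...and against a build that includes sm_35:
print(arch_supported(["sm_35", "sm_50"], (3, 5)))  # True
```

With a real install this would be called as arch_supported(torch.cuda.get_arch_list(), torch.cuda.get_device_capability()); a False result suggests falling back to a from-source build for your arch. Note the real list may also contain PTX entries like compute_XY, which this simple check ignores.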