BlueBEAR HPC Configuration
Configuration for the BlueBEAR high performance computing (HPC) cluster at the University of Birmingham.
Setup
To run a Nextflow pipeline, load the following modules,
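Software on BlueBEAR is provided via environment modules. A minimal sketch of loading Nextflow (the exact module names and versions available are an assumption; check `module avail` on the cluster):

```shell
# List the Nextflow modules available on BlueBEAR, then load one
# (module names/versions are an assumption -- check `module avail` yourself)
module avail Nextflow
module load Nextflow
```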
Apptainer/Singularity
BlueBEAR comes with Apptainer installed for running processes inside containers. Our configuration makes use of this when executing processes.
We advise you to create a directory in which to cache images and point the `NXF_SINGULARITY_CACHEDIR`
environment variable at it. For example,
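A sketch of such a line, assuming your cache directory lives inside your RDS project space (the path layout is illustrative; substitute your own project path):

```shell
# Cache Singularity/Apptainer images in a directory you control
# (path is illustrative -- replace _initial_ and _project_ with your own values)
export NXF_SINGULARITY_CACHEDIR="/rds/projects/_initial_/_project_/singularity-cache"
```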
where _project_ is your project code and _initial_ is its initial letter. Edit and add the above line to the
`.bashrc` file in your home directory. If this variable is not defined, Nextflow will store images in the pipeline
working directory, which is cleaned after each successful run.
You may notice that our nf-core configuration file enables Singularity, not Apptainer, for use with Nextflow. Due to
their similarities (see this announcement), Apptainer creates the alias `singularity` for `apptainer`, allowing both
commands to be used. Nextflow’s Singularity engine makes use of pre-built Singularity images, whereas its Apptainer
engine does not.
Run
Do not run Nextflow pipelines on a login node. Instead, create a submission script or start an interactive job.
To run a Nextflow pipeline, specify the `bluebear` profile with the `-profile` option. For example,
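A minimal run command might look like the following (the pipeline name is a placeholder; add any pipeline-specific parameters you need):

```shell
# Run an nf-core pipeline with the BlueBEAR profile
nextflow run nf-core/_pipeline_ -profile bluebear
```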
where _pipeline_ is the name of the pipeline.
Make sure the job time requested is sufficient for running the full pipeline. If the job ends before the pipeline is
complete, rerun it from a checkpoint using the `-resume` option. For example, run
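A sketch of the resumed run command (again with a placeholder pipeline name):

```shell
# Resume the pipeline from the last successful checkpoint
nextflow run nf-core/_pipeline_ -profile bluebear -resume
```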
in a new job with more time requested.
Nextflow creates a working directory to store files while the pipeline is running. We have configured Nextflow to clean
the contents of this directory after a successful run. The `-resume` option will only work after an unsuccessful run.
If you want to keep the working directory contents, add the `debug` profile to your run command
(e.g. `-profile bluebear,debug`).
Example
Here is an example job submission script. This job runs the `nf-core/rnaseq` pipeline tests with the BlueBEAR config
file. We request 1 hour to run the pipeline on 1 node with 2 tasks.
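A sketch of such a script, matching the resources described above (the module name and QoS directive are assumptions; adapt them to your project's allocation):

```shell
#!/bin/bash
#SBATCH --time=1:0:0      # 1 hour
#SBATCH --nodes=1         # 1 node
#SBATCH --ntasks=2        # 2 tasks
#SBATCH --qos=bbdefault   # assumption: your default QoS may differ
set -e

module purge
module load Nextflow      # assumption: check `module avail Nextflow` for versions

# Run the nf-core/rnaseq pipeline tests with the BlueBEAR profile
nextflow run nf-core/rnaseq -profile test,bluebear --outdir results
```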
Troubleshooting
Failed while making image from oci registry
You may get an error which looks like the following,
This may be caused by an issue with parallel image downloads (see e.g. this issue). You can fix this by downloading all
images required by the pipeline, using `nf-core` tools to download the pipeline either interactively with
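For example (flag names vary between `nf-core` tools versions; run `nf-core download --help` to confirm the ones available to you):

```shell
# Download the pipeline and its Singularity images ahead of time
# (revision and flags are illustrative -- see `nf-core download --help`)
nf-core download nf-core/rnaseq -r 3.14.0 --container-system singularity
```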
or in a Slurm job.
Note: `nf-core` tools is a Python package which needs to be installed first.
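One way to install it, for instance inside a virtual environment:

```shell
# Install nf-core tools from PyPI
pip install nf-core
```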
Here is an example Slurm job script which downloads version 3.14.0 of the `nf-core/rnaseq` pipeline.
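A sketch of such a script (the Slurm directives and download flags are assumptions; adapt them to your setup and `nf-core` tools version):

```shell
#!/bin/bash
#SBATCH --time=2:0:0      # assumption: allow plenty of time for image downloads
#SBATCH --ntasks=1
set -e

# NXF_SINGULARITY_CACHEDIR should already be exported (e.g. in your .bashrc)
# so the downloaded images land in your cache directory.

# Download version 3.14.0 of nf-core/rnaseq with its Singularity images
# (flag names may differ between nf-core tools versions -- see `nf-core download --help`)
nf-core download nf-core/rnaseq -r 3.14.0 --container-system singularity
```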
Then, you can update the final line in the example script with the path to the locally downloaded pipeline.
Note: Make sure that the `NXF_SINGULARITY_CACHEDIR` environment variable is defined (e.g. in your `.bashrc` file) and
takes the same value when downloading and running the pipeline.