FSL on ARC
Latest revision as of 20:27, 13 March 2024

Background

  • Downloads and Registration: https://fsl.fmrib.ox.ac.uk/fsldownloads_registration

FSL is a comprehensive library of analysis tools for FMRI, MRI and diffusion brain imaging data. It runs on macOS (Intel and M1/M2), Linux, and Windows via the Windows Subsystem for Linux, and is very easy to install. Most of the tools can be run both from the command line and as GUIs. For the references to quote for individual FSL tools, see those tools' manual pages, and please also reference one or more of the FSL overview papers.

Licensing

FSL is licensed software and is the property of Oxford University Innovation.

FSL container

The container image can be copied to your home directory

 /global/software/fsl/fsl-6.0.5/fsl605.sif

or used directly from this location.

This is an Apptainer (formerly Singularity) container and can be run with the following line in your Slurm script:

apptainer exec -B /work,/scratch,/bulk --nv /global/software/fsl/fsl-6.0.5/fsl605.sif someCommand

Note that the -B option is required to have the /work, /scratch, and /bulk file systems mounted inside the container.

someCommand will typically be a shell script that includes

export PATH=$FSLDIR/bin:$PATH

or (if the above does not work)

export PATH=/build/fsl/bin:$PATH

and your usual eddy script.

In this container, eddy_cuda is named eddy_cuda11.2, after the CUDA toolkit version it was built for. The --nv option tells Apptainer to make the GPU available inside the container, so the command will only work on a node with a GPU.

Requesting a GPU requires the lines

 
#SBATCH --partition=gpu-v100
#SBATCH --gres=gpu:1

Or similar lines for the A100 nodes.
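For example, the A100 request could look like the following (the partition name gpu-a100 is an assumption here; verify the actual partition names on the cluster with sinfo):

```shell
#SBATCH --partition=gpu-a100
#SBATCH --gres=gpu:1
```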

Using eddy from a container

When a command is executed in a container, a completely new shell session is created for it, and that session exists only for this single command. This means that any required initialization must also happen within the same single command.

In this container, the $PATH variable is not set up for FSL, so setting it is the required initialization. To combine several commands into one, simply put them in a bash script and run it. Running that script will be the single command the container executes.
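The reason the export and the tool have to travel together is that an exported variable lives only inside the shell session that made it. A tiny illustration of the pattern, independent of the container (the variable name MYVAR is just a placeholder):

```shell
# An exported variable is visible only within its own shell session.
# Putting the export and the command in one bash -c string keeps them
# in the same session, so the command sees the exported value:
out=$(bash -c 'export MYVAR=visible && echo "$MYVAR"')
echo "$out"
```

Inside the container the same pattern would look like `apptainer exec -B /work,/scratch,/bulk --nv $CONT bash -c 'export PATH=$FSLDIR/bin:$PATH && eddy_cuda11.2 ...'`, with the real eddy arguments in place of `...`; for anything longer than one command, a script is easier to maintain.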

Assuming you are planning to use eddy from the provided container, your job script could look like this:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=32gb
#SBATCH --time=24:00:00
#SBATCH --gres=gpu:1
#SBATCH --partition=gpu-v100

CONT=/global/software/fsl/fsl-6.0.5/fsl605.sif
apptainer exec -B /work,/scratch,/bulk --nv $CONT run_eddy.sh

This job script relies on an external script run_eddy.sh that actually does all the necessary data processing, including running eddy. Note that the eddy binary in the container is called eddy_cuda11.2. Thus your run_eddy.sh script should include the following lines:

#!/bin/bash
export PATH=$FSLDIR/bin:$PATH
....
....
eddy_cuda11.2 ....
....

If there is a need for some additional pre- or post-processing, it can also be added to this script. This way, there will be only one complex script and a simple job script (recommended approach).

If necessary, an argument, for example input_parameter, can be passed to it from the job script like this:

apptainer exec -B /work,/scratch,/bulk --nv $CONT run_eddy.sh input_parameter

Then the input_parameter can be accessed in the run_eddy.sh script as the $1 variable.
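A minimal sketch of how run_eddy.sh could pick up such an argument. The subject name is illustrative, and the set -- line merely simulates the call so the sketch runs standalone:

```shell
#!/bin/bash
# In a real job, `apptainer exec ... run_eddy.sh subject01` sets $1.
# Here, `set --` simulates that call (subject01 is a made-up value).
set -- subject01

subject="$1"                        # first argument from the job script
echo "processing subject: $subject" # prints "processing subject: subject01"
```

The rest of the script can then use "$subject" to locate that subject's data, which lets one job script be reused for many subjects.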

Links

ARC Software pages