====== AlphaFold ======
  
===== AlphaFold 3 =====
  
AlphaFold 3 is not available as a module yet, and due to the complex installation this may still take a while.

Meanwhile, it should be possible to run AlphaFold 3 in an Apptainer container. You can either build your own container using the instructions at https://github.com/google-deepmind/alphafold3/blob/main/docs/installation.md (which requires you to first build it with Docker and then convert it to Singularity/Apptainer), or you can use a prebuilt container from Docker Hub, e.g. from https://hub.docker.com/r/bockpl/alphafold/tags. We will use the latter in the following examples.

==== Pulling in the container image and AlphaFold 3 code ====

<code>
cd /scratch/$USER
export APPTAINER_CACHEDIR=/scratch/$USER/apptainer_cache
apptainer pull docker://bockpl/alphafold:v3.0.0-22.04-1.0
</code>

This will result in a container image file named ''alphafold_v3.0.0-22.04-1.0.sif''. Now clone the AlphaFold 3 repository in the same directory using:
<code>
git clone https://github.com/google-deepmind/alphafold3.git
</code>
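
Since the prebuilt container targets AlphaFold 3.0.0, it is probably safest to pin the cloned code to the matching release. This assumes upstream tags its releases as ''v3.0.0''; run ''git tag'' first to see the exact names:
<code>
cd /scratch/$USER/alphafold3
# List the available release tags, then check out the one matching the container version
git tag
git checkout v3.0.0
</code>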

==== Running the container ====

You should now be able to run the code from the cloned GitHub repository in the container (which provides all the dependencies) by doing something like:
<code>
apptainer exec ./alphafold_v3.0.0-22.04-1.0.sif python3 alphafold3/run_alphafold.py
</code>

When running on a GPU node, the GPU can be made available in the container by adding the ''%%--%%nv'' flag:
<code>
apptainer exec --nv ./alphafold_v3.0.0-22.04-1.0.sif python3 alphafold3/run_alphafold.py
</code>

More examples can be found at https://github.com/google-deepmind/alphafold3/blob/main/docs/installation.md#build-the-singularity-container-from-the-docker-image, and more information about Apptainer at https://wiki.hpc.rug.nl/habrok/examples/apptainer.

==== Data files ====

The genetic database files that are required for AlphaFold 3 can be found at ''/scratch/public/AlphaFold/3.0''. Due to license restrictions, the model parameters are not available (yet); you can obtain these yourself using the instructions provided at https://github.com/google-deepmind/alphafold3?tab=readme-ov-file#obtaining-model-parameters.
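
Putting the pieces together, a complete GPU run could look something like the sketch below. Note that this is only an outline: ''fold_input.json'' and the output directory are placeholders for your own input and output, ''/scratch/$USER/af3_weights'' stands for wherever you stored the model parameters, and the ''%%--%%json_path'', ''%%--%%db_dir'', ''%%--%%model_dir'' and ''%%--%%output_dir'' options follow the upstream ''run_alphafold.py'' documentation, so double-check them against the version of the code you cloned:
<code>
cd /scratch/$USER
# Make the public databases and your own model parameters visible inside the container
apptainer exec --nv \
    --bind /scratch/public/AlphaFold/3.0:/databases \
    --bind /scratch/$USER/af3_weights:/models \
    ./alphafold_v3.0.0-22.04-1.0.sif \
    python3 alphafold3/run_alphafold.py \
        --json_path=fold_input.json \
        --db_dir=/databases \
        --model_dir=/models \
        --output_dir=/scratch/$USER/af3_output
</code>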

===== AlphaFold 2 =====

GPU versions of AlphaFold 2 are available as modules on Hábrók. You can find the available versions using ''module avail AlphaFold'', and you can load the latest version using ''module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0''.

==== Running AlphaFold ====
The module provides a simple ''alphafold'' symlink that points to the ''run_alphafold.py'' script, which means you can simply run ''alphafold'' with all required options (run ''alphafold %%--%%help'' to get more information).
  
Note that ''run_alphafold.py'' has been tweaked a little, so that it knows where to find required commands like ''hhblits'', ''hhsearch'', ''jackhmmer'' and ''kalign''. This means that you do not have to provide the paths to these executables with options like ''%%--%%hhblits_binary_path''.
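
As an illustration, a minimal run could then look like this; ''query.fasta'' and the output directory are placeholders for your own input and output (see ''alphafold %%--%%help'' for the full list of options):
<code>
alphafold \
    --fasta_paths=query.fasta \
    --max_template_date=2022-12-31 \
    --output_dir=/scratch/$USER/alphafold_output
</code>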
  
=== Running on a CPU node ===
By default, AlphaFold will try to use a GPU, and it will even fail on nodes without one. In order to instruct AlphaFold to run without a GPU, add the following to your job script:
<code>
export OPENMM_RELAX=CPU
</code>
  
==== Controlling the number of CPU cores for HHblits and jackhmmer ====
The module allows you to control the number of cores used by the ''hhblits'' (default: 4 cores) and ''jackhmmer'' (default: 8 cores) tools by setting the environment variables ''$ALPHAFOLD_HHBLITS_N_CPU'' and/or ''$ALPHAFOLD_JACKHMMER_N_CPU''. You can override the default number of cores using, for instance, ''export ALPHAFOLD_HHBLITS_N_CPU=8''. Do note that these tools seem to run slower on more than 4/8 cores, respectively, but this may depend on your workload.
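
For example, to let both tools use all cores allocated to your job (keeping in mind the caveat that more cores is not always faster), you could add something like the following to your job script; ''$SLURM_CPUS_PER_TASK'' is set by Slurm when you request cores with ''%%--%%cpus-per-task'':
<code>
export ALPHAFOLD_HHBLITS_N_CPU=$SLURM_CPUS_PER_TASK
export ALPHAFOLD_JACKHMMER_N_CPU=$SLURM_CPUS_PER_TASK
</code>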
  
==== Database files ====
  
The large database files for the different AlphaFold versions are available in version-specific subdirectories at ''/scratch/public/AlphaFold/''.
  
If you want to use different databases, you can override the default data directory by using ''export ALPHAFOLD_DATA_DIR=/path/to/data''.
  
Because the initialization phase of AlphaFold is very I/O intensive while the database files are being read, reading the files directly from the ''/scratch'' file system is very time-consuming. To alleviate this issue, the database files have also been stored in a smaller Zstandard (zstd) compressed SquashFS file system image. Using this image instead of the plain files on ''/scratch'' is faster. These database images (which are also specific to the version of AlphaFold that you want to use) can be found at:
<code>
/scratch/public/AlphaFold/2.3.1.zstd.sqsh
</code>
The image can be mounted to a given directory using the ''squashfuse'' tool:
<code>
mkdir $TMPDIR/alphafold_data
squashfuse /scratch/public/AlphaFold/2.3.1.zstd.sqsh $TMPDIR/alphafold_data
</code>
  
AlphaFold can then be pointed at the mounted database by setting:
<code>
export ALPHAFOLD_DATA_DIR=$TMPDIR/alphafold_data
</code>
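
Within a batch job you do not need to clean this up yourself, as ''$TMPDIR'' is removed automatically when the job ends; but if you mounted the image in an interactive session, you can unmount it again with the standard FUSE command:
<code>
fusermount -u $TMPDIR/alphafold_data
</code>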
  
=== Using fast local storage ===
  
The I/O performance can be increased even further by copying the squashfs image file to fast local node storage first. All nodes have at least 1 TB of fast solid state storage available.
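
You can check how much local storage is actually available to your job with, for instance:
<code>
df -h $TMPDIR
</code>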
  
The local disk can be reached using the environment variable ''$TMPDIR'' within the job, and copying can be done using the command:
<code>
cp /scratch/public/AlphaFold/2.3.1.zstd.sqsh $TMPDIR
</code>
The ''$TMPDIR'' directory and its contents will be automatically removed when the job has finished. The mount command then looks as follows:
<code>
mkdir $TMPDIR/alphafold_data
squashfuse $TMPDIR/2.3.1.zstd.sqsh $TMPDIR/alphafold_data
</code>
  
==== Example job scripts ====
  
The following minimal examples can be used to submit an AlphaFold job to a regular (CPU) node or a GPU node.

First the CPU version:
<code>
#!/bin/bash
# (add your #SBATCH resource requests here, e.g. --cpus-per-task and --mem)

# Clean the module environment and load the AlphaFold module
module purge
module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0
  
# Uncomment the following line(s) if you want to use different values for the number of cores used by hhblits/jackhmmer
#export ALPHAFOLD_HHBLITS_N_CPU=8
#export ALPHAFOLD_JACKHMMER_N_CPU=8

# Use the CPU instead of a GPU
export OPENMM_RELAX=CPU

# Copy the squashfs image to $TMPDIR
cp /scratch/public/AlphaFold/2.3.1.zstd.sqsh $TMPDIR
  
# Create a mountpoint for the AlphaFold database in squashfs format
mkdir $TMPDIR/alphafold_data
# Mount the AlphaFold database squashfs image
squashfuse $TMPDIR/2.3.1.zstd.sqsh $TMPDIR/alphafold_data
# Set the path to the AlphaFold database
export ALPHAFOLD_DATA_DIR=$TMPDIR/alphafold_data

# Run AlphaFold (hypothetical input file and output location; see 'alphafold --help')
alphafold --fasta_paths=query.fasta --max_template_date=2022-12-31 --output_dir=/scratch/$USER/alphafold_output
</code>

And the GPU version of the job script:
<code>
#!/bin/bash
# (add further #SBATCH options such as --job-name and --time here)
#SBATCH --cpus-per-task=12
#SBATCH --mem=120GB
#SBATCH --gres=gpu:1
  
module purge
module load AlphaFold/2.3.1-foss-2022a-CUDA-11.7.0
  
# Uncomment the following line(s) if you want to use different values for the number of cores used by hhblits/jackhmmer
#export ALPHAFOLD_HHBLITS_N_CPU=8
#export ALPHAFOLD_JACKHMMER_N_CPU=8
  
# Copy the squashfs image with the AlphaFold database to fast local storage
cp /scratch/public/AlphaFold/2.3.1.zstd.sqsh $TMPDIR
# Create a mountpoint for the AlphaFold database in squashfs format
mkdir $TMPDIR/alphafold_data
# Mount the AlphaFold database squashfs image
squashfuse $TMPDIR/2.3.1.zstd.sqsh $TMPDIR/alphafold_data
# Set the path to the AlphaFold database
export ALPHAFOLD_DATA_DIR=$TMPDIR/alphafold_data