Running SDM calculations on XSEDE Comet

Post date: Apr 5, 2020 5:29:38 PM

File transfer from local machine to XSEDE Comet

Files can be transferred between your local machine and Comet using "rsync" or the "Globus Transfer" utility.

Using rsync:

To rsync files and directories, do the following:

rsync -avz <somefolder> <USER>@comet.sdsc.edu:/oasis/projects/nsf/rut147/<USER>/
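
To copy results back from Comet to your local machine, reverse the source and destination; for example (the folder name is illustrative):

rsync -avz <USER>@comet.sdsc.edu:/oasis/projects/nsf/rut147/<USER>/<somefolder> .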

Using Globus:

For more information about how to use Globus to set up a local "Endpoint" for file transfers to Comet, please visit:

https://www.globus.org/data-transfer

Storage Resource on Comet:

The storage/simulation volume is located at

/oasis/projects/nsf/rut147

All user directories are located under rut147/. The storage space of the home directory (/home/<USER>) is significantly smaller and is not suitable for simulation directories; keep software, applications, scripts, etc. in the home directory instead. Keep all simulation files and directories under rut147/ and periodically transfer them to your local machine using rsync or Globus transfer.
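
For example, to move into your own area on the project volume (the same path used in the rsync example above):

cd /oasis/projects/nsf/rut147/<USER>/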

Running minimization/thermalization on Comet

Update: the procedure below, which is based on one GPU on a shared node, might fail depending on which GPU is assigned to the job. An alternative is posted on our Slack channel and will be described here soon.

Minimization and thermalization require only one GPU and cannot be run on the head node (where your files and software are located). Instead, a single GPU needs to be requested from the Comet queuing system as follows:

srun --partition=gpu-shared --gres=gpu:1 --nodes=1 -t 01:00:00 -A TG-MCB150001 --pty /bin/bash

This should eventually give you a prompt on one of the GPU nodes, with one GPU allocated for one hour. Navigate to the simulation directory of the complex you want to minimize/thermalize and run, for example:

./runopenmm 6vww-1-congored_mintherm.py
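
Given the caveat in the update above, it can help to check the node's GPUs and the device assigned to the session before launching; for example (CUDA_VISIBLE_DEVICES is normally set by SLURM when a GPU is requested with --gres):

nvidia-smi
echo $CUDA_VISIBLE_DEVICES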

The runopenmm script that is required to run the minimization/thermalization as well as SDM is given here:

#!/bin/bash
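# Locations of the OpenMM and Python installations used to run the given script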
openmm_dir=/home/u15684/software/openmm-7.3.1
pythondir=/home/u15684/miniconda-latest
export LD_LIBRARY_PATH=${openmm_dir}/lib:${openmm_dir}/lib/plugins:$LD_LIBRARY_PATH
${pythondir}/bin/python "$@"
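
Note that the script has to be executable (chmod +x runopenmm if it is not already) for the ./runopenmm invocation above to work.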

Running SDM Calculations on Comet

Comet uses a queuing system to launch jobs on its GPU nodes. The queuing system is based on "SLURM" and uses .slurm scripts to launch jobs.

To run jobs on Comet, each complex directory needs a .slurm script. An example .slurm script is given here:

#!/bin/bash
#SBATCH -J 6vww-1
#SBATCH -p gpu
#SBATCH --gres=gpu:4
#SBATCH -N 1 # This is nodes, not cores
#SBATCH -n 1 # one process per node so we get one entry per node
#SBATCH -t 18:00:00 # Max time allotted for job
#SBATCH -A TG-MCB150001
echo "Number of nodes: $SLURM_NNODES"
echo "Nodelist: $SLURM_NODELIST"
echo "Number of tasks: $SLURM_NTASKS"
echo "Tasks per node: $SLURM_TASKS_PER_NODE"
jobname=6vww-1-congored
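# Build the nodefile used by ASyncRE: one entry per GPU (4 per node), OpenCL platform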
scontrol show hostname $SLURM_NODELIST > .slurm_nodes
awk '{ for(i=0;i<4;i++)print $1 ","0":"i",1,centos-OpenCL,,/tmp"}' < .slurm_nodes > nodefile
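# Stage the simulation directory on the node-local scratch disk and run from there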
cd ..
rsync -av --exclude='slurm*.out' ${jobname} /scratch/$USER/$SLURM_JOBID/
cd /scratch/$USER/$SLURM_JOBID/${jobname}
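# Run the ASyncRE job in 10 batches, syncing results back to the submit directory after each batch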
for i in `seq 1 10` ; do
  ./runopenmm ~/software/async_re-openmm/bedamtempt_async_re.py ${jobname}_asyncre.cntl
  rsync -av * $SLURM_SUBMIT_DIR/
done

Change the jobname in the slurm file to suit your job. If I have many ligands, I put a placeholder for the -J and jobname entries:

#SBATCH -J <job>
jobname=<job>

and then use sed -i "s/<job>/${complex}/g" ${complex}/${complex}.slurm in a for loop running over the directories in the complex/ directory (a sketch of such a loop is given below).
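
A minimal sketch of that loop, assuming the per-complex folders sit directly under the complex/ directory mentioned above:

cd complex/
for complex in */ ; do
  complex=${complex%/}                                        # strip the trailing slash
  sed -i "s/<job>/${complex}/g" ${complex}/${complex}.slurm
done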

To launch the slurm script, do for example: sbatch <jobname>.slurm
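
sbatch prints the ID of the submitted job; its progress in the queue can then be checked with, for example:

squeue -u $USER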

This is how the .slurm script works:

The slurm script runs the simulation in 10 batches.

1. It first rsyncs the simulation directory to the /scratch/ disk of the node,

2. It then runs the batch,

3. rsyncs the results back to the working directory, and

4. runs the next batch and so on.

So, for example, if you want the asyncre job to run for 960 minutes, the asyncre.cntl file should specify 96 minutes as the WALL_TIME.
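
A hedged example of the corresponding line in the <jobname>_asyncre.cntl file (check the exact keyword format against your existing control file):

WALL_TIME = 96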

Free Energy Analysis on Comet

Prepare your R environment (first time only):

mkdir -p ~/R/x86_64-pc-linux-gnu-library
cp -r ~u15684/R/x86_64-pc-linux-gnu-library/3.6 ~/R/x86_64-pc-linux-gnu-library/

Load the R package:

module load R

Navigate to the scripts/ directory of the SDM project. For example:

cd /oasis/projects/nsf/rut147/<username>/6vww-1/scripts/

Edit the

setup-setting.sh

file so that the first entry points to the top-level SDM project directory (in this example, /oasis/projects/nsf/rut147/<username>/6vww-1) and the last entry gives the number of samples to discard for equilibration, and then run the analyze script:

bash ./analyze.sh
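
For reference, the two entries edited in setup-setting.sh might look something like the sketch below; the variable names are purely hypothetical, so keep whatever names the file actually uses:

# hypothetical names; keep the names already present in setup-setting.sh
SDM_PROJECT_DIR=/oasis/projects/nsf/rut147/<username>/6vww-1    # first entry: top-level project directory
EQUIL_SAMPLES_TO_DISCARD=<number>                               # last entry: samples to discard for equilibration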