A CentOS Docker Image for OpenMM/SDM Development

Post date: Mar 30, 2020 12:11:26 AM

It is difficult to maintain an OpenMM build environment. There are a number of strict requirements on tools, libraries, and compilers that are not easy to satisfy on every operating system. An obvious workaround is to maintain a dedicated development build box. However, a build box consumes space and resources and, as during the ongoing emergency, it might not always be reachable over the network. In fact, for unknown reasons, we lost access to our lab's internal network at Brooklyn College, and no one can go there to fix it.

A better alternative is to implement the software development environment as a Docker image. Docker is a framework to build snapshots of virtual environments, called "images" in Docker's parlance, and to run them as containers. I summarize here the steps we took to create one such image, and how we use it to build OpenMM and related utilities and to prepare molecular systems for the Single Decoupling Method (SDM) for protein-ligand binding free energy estimation.

The latest image is called egallicchio/centos610-anvil-7. To load it do:

$ docker pull egallicchio/centos610-anvil-7

How to Use the Docker Image to Run the SDM workflow

Run the image:

$ docker run -it egallicchio/centos610-anvil-7

Follow the usual procedures to run the SDM workflow. Skip the minimization/thermalization steps in the setup-sdm.sh script, since there is no GPU in the container to run them. Also, for some reason, mae2dms needs the conda libraries on the library path:

> export LD_LIBRARY_PATH=/opt/conda/lib/:$LD_LIBRARY_PATH

> bash ./setup-sdm.sh

Finally, rsync the working directory to a computational server/cluster to do the minimization/thermalizations and to perform the ASyncRE calculations.

Building and Maintaining the Docker Image

We have based our image on the one used by conda-forge, which is based on CentOS 6.10:

$ docker pull condaforge/linux-anvil

$ docker run -it condaforge/linux-anvil

You will be dropped into a bash shell as the conda user. A Python 3.7 conda environment stored in /opt/conda is activated automatically. This user is authorized to install packages via sudo and yum. The Red Hat devtoolset-2 is also installed; its sudo conflicts with the system sudo, so run /usr/bin/sudo explicitly. To gain root, do (while the image is running):

$ docker ps

to get the id of the running container, say 8d0865c61538. Then:

$ docker exec -it 8d0865c61538 bash

To compile OpenMM we need gcc from devtoolset-6, also some reasonable text editors and such:

> /usr/bin/sudo yum install devtoolset-6

> scl enable devtoolset-6 'bash'

> /usr/bin/sudo yum install nano emacs

> /usr/bin/sudo yum install wget rsync

This is how we built msys:

> conda install -c conda-forge boost

> conda install -c conda-forge scons

> mkdir src && cd src

> git clone https://github.com/DEShawResearch/msys.git

I edited the SConscript file to add /opt/conda/include to the include path, and also added the needed libraries in LIBS. The key section looks like this (indentation restored; the initialization of the flg list with the conda include path is reconstructed from the edit described above):

if True:
    flg = ['-I/opt/conda/include']
    for p in env['CPPPATH']:
        if p.startswith('/proj') or p.startswith('/gdn'):
            flg.append('-I%s' % p)
    env.Append(CFLAGS=flg, CXXFLAGS=flg)

> scons -j4

> scons -j4 PYTHONVER=37

> scons -j4 PYTHONVER=37 install PREFIX=$HOME/local

The msys Python tools, such as dms-info, fail with a Python/C++ function-argument mismatch. A post online explains how to fix the problem should we need those tools. We mostly care about mae2dms, which works.
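A quick smoke test of mae2dms might look like this (the binary location follows from the PREFIX=$HOME/local install above; the file names are placeholders):

```shell
# mae2dms needs the conda shared libraries on the library path
export LD_LIBRARY_PATH=/opt/conda/lib/:$LD_LIBRARY_PATH
# convert a Maestro .mae file to the DMS format used by SDM
~/local/bin/mae2dms system.mae system.dms
```
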

The next steps are to gather the tools necessary to compile OpenMM.

After a number of failed attempts due to cmake version incompatibilities, I ended up installing cmake 3.6.3 from source. As root:

# cd ~/src/

# wget https://github.com/Kitware/CMake/archive/v3.6.3.tar.gz

# tar xzvf v3.6.3.tar.gz

# yum install ncurses-devel

# cd CMake-3.6.3

# ./bootstrap && make && make install

I installed CUDA as root as well. I skipped the installation of the NVIDIA driver since the image does not have an NVIDIA GPU card:

# wget http://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.243_418.87.00_rhel6.run

# sh cuda_10.1.243_418.87.00_rhel6.run

# rm cuda_10.1.243_418.87.00_rhel6.run
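The runfile can also be driven non-interactively. Since the container has no GPU and needs no driver, only the toolkit is selected (flags as documented for the CUDA runfile installer):

```shell
# silent, toolkit-only install; skips the NVIDIA driver entirely
sh cuda_10.1.243_418.87.00_rhel6.run --silent --toolkit
```
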

The doxygen app packaged by conda-forge is broken: it does not scan directories containing header files. It took a while to debug this. The CentOS version of doxygen works fine:

> /usr/bin/sudo yum install doxygen

The next steps are for building OpenMM. As the conda user:

> cd ~/src

> wget https://github.com/openmm/openmm/archive/7.3.1.tar.gz

> tar zxvf 7.3.1.tar.gz

> cd openmm-7.3.1

I modified CMakeLists.txt in src/openmm-7.3.1/wrappers/python/ to do the python installation under local/openmm-7.3.1/lib. Here is an excerpt:

#set(PYTHON_SETUP_COMMAND "install --root=\$ENV{DESTDIR}/")

set(PYTHON_SETUP_COMMAND "install --prefix=/home/conda/local/openmm-7.3.1/")

Next, do the actual build. For whatever reason CUDA_CUDA_LIBRARY needs to be specified explicitly:

> mkdir -p ~/local/openmm-7.3.1 && mkdir -p ~/devel/build_openmm_7.3.1

> cd ~/devel/build_openmm_7.3.1

> ccmake ../../src/openmm-7.3.1/ -DCUDA_CUDA_LIBRARY=/usr/local/cuda/lib64/stubs/libcuda.so

In the ccmake interface I turned off the C and Fortran wrappers and pointed the installation directory to /home/conda/local/openmm-7.3.1. Then did the usual:

> make install && make PythonInstall

Now OpenMM is installed under ~/local/openmm-7.3.1. I followed similar steps to install the AGBNP and SDM plugins (see README).

> cd ~/src

> git clone https://github.com/egallicc/openmm_agbnp_plugin.git

> git clone https://github.com/rajatkrpal/openmm_sdm_plugin.git

> git clone https://github.com/egallicc/openmm_sdm_workflow.git

The SDM workflow itself does not need building. For the AGBNP and SDM plugins, the python-wrapper CMakeLists.txt was modified to install the Python libraries under ~/local/openmm-7.3.1/lib, as above.
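Each plugin follows the same out-of-source cmake pattern used for OpenMM itself. A sketch for the AGBNP plugin (the build directory name and the OPENMM_DIR variable are assumptions here; check the plugin's README for the exact variables it expects):

```shell
# out-of-source build pointing the plugin at the OpenMM install tree
mkdir -p ~/devel/build_agbnp && cd ~/devel/build_agbnp
ccmake ~/src/openmm_agbnp_plugin -DOPENMM_DIR=$HOME/local/openmm-7.3.1
make install && make PythonInstall
```
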

The OpenMM installation is now ready to be shipped to computational servers:

> cd ~/local

> tar zcvf openmm-7.3.1.tgz openmm-7.3.1

> scp openmm-7.3.1.tgz me@myfavoriteserver:~/software/

On the server, untar the distribution. Then use a launch script (runopenmm) such as the one below (the openmm_dir and pythondir values are examples; point them at the actual OpenMM and Python 3.7 installation directories):

#!/bin/bash
openmm_dir=$HOME/software/openmm-7.3.1
pythondir=$HOME/miniconda3

export OPENMM_PLUGIN_DIR=${openmm_dir}/lib/plugins

export LD_LIBRARY_PATH=${openmm_dir}/lib:${openmm_dir}/lib/plugins:$LD_LIBRARY_PATH

export PYTHONPATH=${openmm_dir}/lib/python3.7/site-packages:$PYTHONPATH

${pythondir}/bin/python "$@"

For example:

$ ~/software/bin/runopenmm MD.py

To finish up the development image, I installed a version of the academic Desmond-Maestro needed by the SDM workflow:

> mkdir -p ~/schrodinger/installers && cd ~/schrodinger/installers

> export SCHRODINGER=~/schrodinger/Desmond_Maestro_2018.4

> scp me@myfavoriteserver:~/software/Desmond_Maestro_2018.4.tar .

> tar xf Desmond_Maestro_2018.4.tar

and then proceed as usual with the Maestro installation.

Finally, exit from the docker image and commit the changes:

> exit

$ docker commit -m "build box" -a "Emilio Gallicchio" <image id> egallicchio/centos610-anvil-7

where <image id> is the id of the container obtained from docker ps -a.
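docker ps can print just the fields needed to pick out the right container, using the Docker CLI's Go-template --format flag:

```shell
# list all containers (running and stopped) as: id, image, status
docker ps -a --format '{{.ID}}  {{.Image}}  {{.Status}}'
```
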

For subsequent updates commit with a new tag:

$ docker commit -m "update make" -a "Emilio Gallicchio" <image id> egallicchio/centos610-anvil-7:version2