POOL

General Information

  • POOL is a joint HPC cluster between two departments: Chemical and Biomolecular Engineering and Chemistry
  • PIs are Fernando Escobedo (fe13), Don Koch (dlk15), Yong Joo (ylj2), Robert DiStasio (rad332), and Nandini Ananth (na346)
  • Cluster access is restricted to these CAC groups: fe13_0001, dlk15_0001, ylj2_0001, rad332_0001, and na346_0001
  • Head node: pool.cac.cornell.edu (access via ssh)
    • OpenHPC deployment running CentOS 7
    • Scheduler: Slurm 18
  • Home directories are provided for each group from 3 file servers.
    • icsefs01 - serves fe13_0001
    • icsefs02 - serves dlk15_0001 and ylj2_0001
    • chemfs01 - serves rad332_0001 and na346_0001
  • Data is generally NOT backed up (check with your PI for details).
  • Please send any questions or problem reports to: cac-help@cornell.edu

How To Login

  • To get started, login to the head node pool.cac.cornell.edu via ssh.

    ssh cacuser@pool.cac.cornell.edu <-- substitute your CAC ID for cacuser

  • You will be prompted for your CAC account password

  • If you are unfamiliar with Linux and ssh, we suggest reading the Linux Tutorial and looking into how to Connect to Linux before proceeding.
  • NOTE: Users should not run codes on the head node. Users who do so will be notified and have privileges revoked.
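
To check which of the CAC groups listed above your account belongs to (and therefore which partitions you can use), you can run the standard Linux groups command after logging in:

    ## list the groups your CAC account is a member of (e.g. fe13_0001)
    groups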

Hardware and Networking

  • The head node has 1.8TB local /scratch disk.
  • Head node and 3 file servers have 10Gb connections.
  • Most compute nodes currently have 1 Gb connections.
  • Nodes in the astra queue have an InfiniBand interconnect.
  • Nodes in the vega queue have an Omni-Path interconnect.
  • Pool Hardware technical information.

Partitions

"Partition" is the term used by Slurm for a designated group of compute nodes.

  • Hyperthreading is turned on in all nodes EXCEPT those in the astra partition (see the example below for how to check this).
    • Where hyperthreading is on, Slurm considers each core to consist of 2 logical CPUs.
    • For astra nodes, Slurm considers each core to consist of 1 logical CPU.
  • All partitions have a default time limit of 1 hour and are set to OverSubscribe (per-core scheduling rather than per-node).
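
To check how Slurm counts CPUs on each partition's nodes, you can query the sockets:cores:threads layout yourself (a quick sketch using standard sinfo/scontrol options; exact output formatting depends on the Slurm version):

    ## CPUs per node and sockets:cores:threads (S:C:T) layout for every partition
    sinfo -o "%P %D %c %z"

    ## or inspect a single node in detail (look at the ThreadsPerCore field)
    scontrol show node c0009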

Partitions on the pool cluster:

| Queue/Partition  | Number of nodes | Node Names                          | Limits                                   | Group Access            |
|------------------|-----------------|-------------------------------------|------------------------------------------|-------------------------|
| common (default) | 18              | c00[17,19,20,22,29-38,50-53]        | walltime limit: 168 hours (i.e. 7 days)  | All Groups              |
| plato            | 1               | c0009                               | walltime limit: 168 hours (i.e. 7 days)  | limited access per fe13 |
| fe13             | 24              | c00[01-08,18,21,23,24,40-49,54,104] | walltime limit: 168 hours (i.e. 7 days)  | fe13_0001               |
| dlk15            | 12              | c0[010-014,025,039,106-110]         | walltime limit: 168 hours (i.e. 31 days) | dlk15_0001              |
| ylj2             | 5               | c00[15-16,26-28]                    | walltime limit: 168 hours (i.e. 7 days)  | ylj2_0001               |
| vega             | 10              | c00[56-62,101-103]                  | walltime limit: 168 hours (i.e. 7 days)  | rad332_0001             |
| astra            | 38              | c0[055,063-069,071-100]             | walltime limit: 720 hours (i.e. 30 days) | na346_0001              |
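
To see the current state, time limit, and node list of any of these partitions, you can query Slurm directly (a short sketch; substitute the partition you care about):

    ## one-line summary: partition, availability, time limit, node count, node names
    sinfo -p common -o "%P %a %l %D %N"

    ## full configuration, including OverSubscribe and MaxTime
    scontrol show partition common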

Running Jobs / Slurm Scheduler

CAC's Slurm page explains what Slurm is and how to use it to run your jobs. Please take the time to read this page, giving special attention to the parts that pertain to the types of jobs you want to run.

The Slurm Quick Start guide is a great place to start.

  • NOTE: Users should not run codes on the head node. Users who do so will be notified and have privileges revoked.
    A few Slurm commands to get familiar with initially:
    
    sinfo -l
    scontrol show nodes
    scontrol show partition
    
    Submit a job: sbatch testjob.sh
    Interactive Job: srun -p common --pty /bin/bash
    
    scontrol show job [job id]
    scancel [job id]
    
    squeue -u userid
    

Slurm Examples & Tips

NOTE: All lines beginning with "#SBATCH" are directives for the scheduler to read. If you want such a line ignored (i.e. treated as a comment), you must place two "#" characters ("##") at the beginning of the line.
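
For example, the first line below is read by the scheduler, while the second line is ignored because it begins with "##":

#SBATCH --time=00:10:00
##SBATCH --time=01:00:00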

Example batch job to run in the partition: common

Example sbatch script to run a job with one task (default) in the 'common' partition (i.e. queue):

#!/bin/bash
## -J sets the name of job
#SBATCH -J TestJob

## -p sets the partition (queue)
#SBATCH -p common

## 10 min
#SBATCH --time=00:10:00

## limit the number of tasks per physical core (with hyperthreading on, each core has 2 logical CPUs)
## set to 1 if one task by itself is enough to keep a core busy
#SBATCH --ntasks-per-core=1 

## request 4GB per CPU (may limit # of tasks, depending on total memory)
#SBATCH --mem-per-cpu=4GB

## define job stdout file
#SBATCH -o testcommon-%j.out

## define job stderr file
#SBATCH -e testcommon-%j.err

echo "starting at `date` on `hostname`"

# Print the Slurm job ID
echo "SLURM_JOB_ID=$SLURM_JOB_ID"

echo "hello world `hostname`"

echo "ended at `date` on `hostname`"
exit 0

Submit/Run your job:

sbatch example.sh

View your job:

scontrol show job <job_id>

Example MPI batch job to run in the partition: common

Example sbatch script to run a job with 60 tasks in the 'common' partition (i.e. queue):

#!/bin/bash
## -J sets the name of job
#SBATCH -J TestJob

## -p sets the partition (queue)
#SBATCH -p common

## 10 min
#SBATCH --time=00:10:00

## the number of tasks (MPI ranks) to run
#SBATCH -n 60

## the number of nodes to use (min and max can be set separately)
#SBATCH -N 3

## typically an MPI job needs exclusive access to nodes for good load balancing
#SBATCH --exclusive

## don't worry about hyperthreading, Slurm should distribute tasks evenly
##SBATCH --ntasks-per-core=1 

## define job stdout file
#SBATCH -o testcommon-%j.out

## define job stderr file
#SBATCH -e testcommon-%j.err

echo "starting at `date` on `hostname`"

# Print Slurm job properties
echo "SLURM_JOB_ID = $SLURM_JOB_ID"
echo "SLURM_NTASKS = $SLURM_NTASKS"
echo "SLURM_JOB_NUM_NODES = $SLURM_JOB_NUM_NODES"
echo "SLURM_JOB_NODELIST = $SLURM_JOB_NODELIST"
echo "SLURM_JOB_CPUS_PER_NODE = $SLURM_JOB_CPUS_PER_NODE"

mpiexec -n $SLURM_NTASKS ./hello_mpi

echo "ended at `date` on `hostname`"
exit 0
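
The script above assumes an MPI executable named hello_mpi exists in the submission directory. One way to build it from a C source file (a sketch; the file name hello_mpi.c is a placeholder, and it assumes the default GNU 8 / OpenMPI 3 toolchain):

    ## load the compiler and MPI stack, then compile with the MPI wrapper compiler
    module load gnu8 openmpi3
    mpicc -O2 -o hello_mpi hello_mpi.c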

To include or exclude specific nodes in your batch script

To run on a specific node only, add the following line to your batch script:

#SBATCH --nodelist=c0009

To include one or more nodes that you specifically want, add the following line to your batch script:

#SBATCH --nodelist=<node_names_you_want_to_include>

## e.g., to include c0006:
#SBATCH --nodelist=c0006

## to include c0006 and c0007 (also illustrates shorter syntax):
#SBATCH -w c000[6,7]

To exclude one or more nodes, add the following line to your batch script:

#SBATCH --exclude=<node_names_you_want_to_exclude>

## e.g., to avoid c0006 through c0008, and c0013:
#SBATCH --exclude=c00[06-08,13]

## to exclude c0006 (also illustrates shorter syntax):
#SBATCH -x c0006

Environment variables defined for tasks that are started with srun

If you submit a batch job in which you run the following script with "srun -n $SLURM_NTASKS", you will see how the various environment variables are defined.

 #!/bin/bash
 echo "Hello from `hostname`," \
 "$SLURM_CPUS_ON_NODE CPUs are allocated here," \
 "I am rank $SLURM_PROCID on node $SLURM_NODEID," \
 "my task ID on this node is $SLURM_LOCALID"

These variables are not defined in the same useful way in the environments of tasks that are started with mpiexec or mpirun.
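
For example, a minimal batch script that launches the script above with srun might look like this (a sketch; it assumes the script has been saved as hello_env.sh and made executable, and that file name is just a placeholder):

#!/bin/bash
#SBATCH -J EnvTest
#SBATCH -p common
#SBATCH -n 4
#SBATCH --time=00:05:00
#SBATCH -o envtest-%j.out

## srun starts $SLURM_NTASKS copies of the script and defines the SLURM_* variables for each task
srun -n $SLURM_NTASKS ./hello_env.sh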

Use $HOME within your script rather than the full path to your home directory

To access files in your home directory, use $HOME rather than the full path. To test, you could add this line to your batch script:

echo "my home dir is $HOME"

Then view the output file you set in your batch script to get the result.

Copy your data to /tmp to avoid heavy I/O on your NFS-mounted $HOME !!!

  • We cannot stress enough how important this is for avoiding delays on the shared file systems. Example batch script:

#!/bin/bash
## -J sets the name of job
#SBATCH -J TestJob

## -p sets the partition (queue)
#SBATCH -p common
## time is HH:MM:SS
#SBATCH --time=00:01:30
## reserve 15 logical CPUs for this single task (adjust to match your application)
#SBATCH --cpus-per-task=15

## define job stdout file
#SBATCH -o testcommon-%j.out

## define job stderr file
#SBATCH -e testcommon-%j.err

echo "starting $SLURM_JOBID at `date` on `hostname`"
echo "my home dir is $HOME" 

## copying my data to a local tmp space on the compute node to reduce I/O
MYTMP=/tmp/$USER/$SLURM_JOB_ID
/usr/bin/mkdir -p $MYTMP || exit $?
echo "Copying my data over..."
cp -rp $SLURM_SUBMIT_DIR/mydatadir $MYTMP || exit $?

## run your job executables here...

echo "ended at `date` on `hostname`"
echo "copying my data back to $HOME" 
/usr/bin/mkdir -p $SLURM_SUBMIT_DIR/newdatadir || exit $?
cp -rp $MYTMP $SLURM_SUBMIT_DIR/newdatadir || exit $?
## remove your data from the compute node /tmp space
rm -rf $MYTMP 

exit 0

Explanation: /tmp refers to a local directory that is found on each compute node. It is faster to use /tmp because when you read and write to it, the I/O does not have to go across the network, and it does not have to compete with the other users of a shared network drive (such as the one that holds everyone's /home).

To look at files in /tmp while your job is running, you can ssh to the head node, then ssh from there to the compute node your job was assigned. Then you can cd to /tmp on that node and inspect the files there with cat or less.

Note: if your application produces thousands of output files that you need to save, it is far more efficient to pack them all into a single tar or zip file before copying it into $HOME as the final step; a sketch of this is shown below.
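
For example, to bundle everything in the local scratch area into one compressed archive in the submission directory (a sketch that reuses the $MYTMP variable from the script above; the archive name is just an illustration):

    tar -czf $SLURM_SUBMIT_DIR/results-$SLURM_JOB_ID.tar.gz -C $MYTMP .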

Software

Software List

Software Path Notes
GNU Compilers 8.3.0 /opt/ohpc/pub/compiler/gcc/8.3.0 module load gnu8/8.3.0
openmpi 3.1.4 /opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4 or /opt/ohpc/pub/mpi/openmpi3-intel/3.1.4 module load openmpi3/3.1.4
Intel compiler 2019.2.187 /opt/ohpc/pub/compiler/intel/2019 module load intel/2019.2.187
Intel compiler 2020.4.304 /opt/ohpc/pub/compiler/intel/2020 module load intel/2020.4.304
Intel MPI /opt/ohpc/pub/compiler/intel/2020 module load impi/2019.9.304
Julia 1.5.3 /opt/ohpc/pub/compiler/julia/1.5.3 module load julia/1.5.3
fftw 3.3.8
  • /opt/ohpc/pub/libs/gnu8/openmpi3/fftw/3.3.8
  • /opt/ohpc/pub/libs/gnu8/mvapich2/fftw/3.3.8
  • /opt/ohpc/pub/libs/gnu8/mpich/fftw/3.3.8
module load fftw/3.3.8
scalapack 2.0.2
  • /opt/ohpc/pub/libs/gnu8/openmpi3/scalapack/2.0.2
  • /opt/ohpc/pub/libs/gnu8/mvapich2/scalapack/2.0.2
  • /opt/ohpc/pub/libs/gnu8/mpich/scalapack/2.0.2
module load scalapack/2.0.2
lapack 3.9.0 /opt/ohpc/pub/libs/gnu8/lapack/3.9.0/ module load lapack/3.9.0
Intel Python 3.7 Cirq 0.9.1 /opt/ohpc/pub/software/intelpython3/ module load python/intel3.7
cp2k 7.1 (Serial) /opt/ohpc/pub/apps/cp2k/7.1 module load cp2k-s/7.1
cp2k 7.1 (Parallel) /opt/ohpc/pub/apps/cp2k/7.1 cp2k parallel code is built using gcc 8 and OpenMPI 3. To use it:
  • module load gnu8
  • module load openmpi3
  • module load cp2k-p/7.1
charmm c43b2 /opt/ohpc/pub/apps/charmm/c43b2/ charmm is built using GNU 8 compilers and openmpi3. To use it:
  • module load gnu8
  • module load openmpi3
  • module load charmm/c43b2
chemShell-tcl 3.7.1 (parallel version) /opt/ohpc/pub/apps/chemsh-tcl/3.7.1/openmpi3 chemsh-tcl (parallel version) is built using GNU 8 compilers and openmpi3. To use it:
  • module load gnu8
  • module load openmpi3
  • module load chemsh-tcl-openmpi3/3.7.1
chemShell-tcl 3.7.1 (serial version) /opt/ohpc/pub/apps/chemsh-tcl/3.7.1/serial chemsh-tcl (serial version) is built using GNU 8 compilers. To use it:
  • module load gnu8
  • module load chemsh-tcl-serial/3.7.1
molpro 2012.1 /opt/ohpc/pub/apps/molpro/2012.1 molpro is built using gcc 8 and OpenMPI 3. To use it:
  • module load gnu8
  • module load openmpi3
  • module load molpro/2012.1
Gaussian C.01 /opt/ohpc/pub/apps/g09/C.01 module load g09
Gromacs 2020.4 /opt/ohpc/pub/apps/gromacs/2020.4/ Gromacs is built using gcc 8 and OpenMPI 3. To use it:
  • module load gnu8
  • module load openmpi3
  • module load gromacs/2020.4
Amber 12 /opt/ohpc/pub/apps/amber/12/ Amber is built using gcc 8 and OpenMPI 3. To use it:
  • module load gnu8
  • module load openmpi3
  • module load amber/12
Orca 4.2.1 /opt/ohpc/pub/apps/orca/4.2.1/ Orca is built using gcc 8 and OpenMPI 3. To use it:
  • module load gnu8
  • module load openmpi3
  • module load orca/4.2.1
Armadillo 12.8.2 /opt/ohpc/pub/libs/gnu8/armadillo/12.8.2 Armadillo is built using gcc 8. To use it:
  • module load gnu8
  • module load armadillo/12.8.2

LMOD Module System

The Lmod module system is used to list and load software. Loading software via the module command puts you into the environment of the requested software.

  • For more information, type: module help
  • To list the available software type: module avail
  • To get a more complete listing, type: module spider
  • Software listed with "(L)" is already loaded.

EXAMPLE: To be sure you are using the environment set up for Gromacs, you would type:

module load gromacs/2020.4
module list    (you will see gromacs (L), showing it is loaded)

When done with Gromacs, either log out and log back in, or type:

module unload gromacs/2020.4

You can create your own modules and place them in your $HOME. Once created, type: module use $HOME/path_to_personal/modulefiles. This prepends the path to $MODULEPATH (type echo $MODULEPATH to confirm).
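
For example (a sketch; the directory layout and the module name "mytool" are placeholders, and the modulefile itself still has to be written for your software):

    mkdir -p $HOME/modulefiles/mytool
    ## place a modulefile for your software in that directory, then:
    module use $HOME/modulefiles
    module avail mytool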

Reference: User Created Modules

Intel Compilers and Tools

The following Intel compilers and tools are installed on the pool cluster:

  • Intel Compiler 2020 - default (2020.4.304)
  • Intel Compiler 2019 (2019.2.187)
  • Intel MPI (2019.9.304)

By default, the GNU 8 compilers and OpenMPI are selected, but you can use any combination of compiler and MPI library.

Switch from GNU8/OpenMPI environment to the Intel environment

  • Load the intel and impi modules:

    -bash-4.2$ module list
    
    Currently Loaded Modules:
    1) autotools   2) prun/1.3   3) gnu8/8.3.0   4) openmpi3/3.1.4   5) ohpc
    
    -bash-4.2$ module swap gnu8 intel
    
    Due to MODULEPATH changes, the following have been reloaded:
    1) openmpi3/3.1.4
    
    -bash-4.2$ module swap openmpi3 impi
    -bash-4.2$ module list
    
    Currently Loaded Modules:
    1) autotools   3) ohpc               5) impi/2019.9.304
    2) prun/1.3    4) intel/2020.4.304
    
  • Select the interconnect for Intel MPI by setting the FI_PROVIDER environment variable, like this:

    export FI_PROVIDER=tcp
    

An Ethernet interconnect is available on all queues, but some queues have faster interconnects. The fastest interconnect available in each queue is listed below:

| Queue            | Fastest Interconnect | FI_PROVIDER value        |
|------------------|----------------------|--------------------------|
| astra            | QDR InfiniBand       | export FI_PROVIDER=verbs |
| vega             | Omni-Path            | export FI_PROVIDER=psm2  |
| all other queues | 1 Gb/s Ethernet      | export FI_PROVIDER=tcp   |

Optionally, you can add the appropriate export FI_PROVIDER=... line to your ~/.bash_profile to select a default interconnect and skip this step in the future. You can now submit jobs that use Intel MPI; a sketch of such a batch script follows.
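
A minimal Intel MPI batch script for the astra queue might look like this (a sketch; the executable name ./my_intel_mpi_app is a placeholder, and the module swaps mirror the interactive steps shown above):

#!/bin/bash
#SBATCH -J IntelMPITest
#SBATCH -p astra
#SBATCH -n 32
#SBATCH --time=01:00:00
#SBATCH -o intelmpi-%j.out

## switch from the default GNU8/OpenMPI environment to the Intel compiler and Intel MPI
module swap gnu8 intel
module swap openmpi3 impi

## use the InfiniBand interconnect available on the astra nodes
export FI_PROVIDER=verbs

mpiexec -n $SLURM_NTASKS ./my_intel_mpi_app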

Build software from source into your home directory ($HOME)

  • It is usually possible to install software in your home directory ($HOME).
  • List installed system software via rpm: rpm -qa. Use grep to search for specific software: rpm -qa | grep sw_name (e.g. rpm -qa | grep perl).
  • Download and extract your source, then cd into the extracted source directory.
  • Configure, build, and install into a directory under $HOME (refer to your source's documentation for the full list of options 'configure' accepts):

    ./configure --prefix=$HOME/appdir
    make
    make install

    The binaries would then be located in ~/appdir/bin.
  • Add the following to your $HOME/.bashrc:

    export PATH="$HOME/appdir/bin:$PATH"

  • Reload the .bashrc file with source ~/.bashrc (or log out and log back in).

Install Python Packages in a Python virtual environment

Using MPI in Julia

  • Make sure the desired MPI module and a Julia 1.9 or later module are loaded.

    -bash-4.2$ module list
    
    Currently Loaded Modules:
    1) autotools   3) gnu8/8.3.0       5) ohpc
    2) prun/1.3    4) openmpi3/3.1.4   6) julia/1.9.2
    
  • Install the MPI and MPIPreferences packages in your Julia environment.

    julia --project -e 'using Pkg; Pkg.add("MPIPreferences"); Pkg.add("MPI")'
    
  • Configure MPI for your Julia environment.

    julia --project -e 'using MPIPreferences; MPIPreferences.use_system_binary()'
    
  • Run the MPI job using the mpiexec command, like this:

    mpiexec -n <# of cores> julia --project <job script>
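
Putting it together, a batch script for a Julia MPI job might look like this (a sketch; the script name my_mpi_job.jl and the task count are placeholders, and it assumes MPI.jl has already been configured in the project directory as shown above):

#!/bin/bash
#SBATCH -J JuliaMPI
#SBATCH -p common
#SBATCH -n 8
#SBATCH --time=00:30:00
#SBATCH -o juliampi-%j.out

## load the same MPI and Julia modules that were loaded when MPI.jl was configured
module load gnu8 openmpi3 julia/1.9.2

mpiexec -n $SLURM_NTASKS julia --project my_mpi_job.jl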