THECUBE

Hardware

  • Head node: thecube.cac.cornell.edu.
  • Access mode: SSH
  • OpenHPC v1.3.8 with CentOS 7.6
  • 32 compute nodes (c0001-c0032) with dual 8-core Intel E5-2680 CPUs @ 2.7 GHz, 128 GB of RAM
  • Hyperthreading is enabled on all nodes, i.e., each physical core is considered to consist of two logical CPUs.
  • Interconnect is InfiniBand FDR: Mellanox MT27500 Family (ConnectX-3).
  • Submit help requests by sending email to help@cac.cornell.edu; please include THECUBE in the subject line.

File Systems

Home Directories

  • Path: ~

User home directories are located on an NFS export from the head node. Use your home directory (~) to archive the data you wish to keep. Do NOT use this file system for computation, as bandwidth to the compute nodes is very limited and will quickly be overwhelmed by the file I/O from large jobs.

Unless special arrangements are made, data in users' home directories are NOT backed up.

Scratch File System

The Lustre file system runs Intel Lustre 2.7:

  • Path: /scratch/

The scratch file system is a fast parallel file system. Use it as scratch space for your jobs, and copy the results you want to keep back to your home directory for safekeeping.
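
For example, a job might stage its work in a job-specific directory under /scratch and copy its results home when it finishes. A minimal sketch, in which the myjob and results directory names are only illustrative:

# stage work in a job-specific scratch directory (directory names are illustrative)
mkdir -p /scratch/$USER/myjob
cd /scratch/$USER/myjob
# ... run your job here ...
# copy the results you want to keep back to your home directory
rsync -av results/ ~/myjob/results/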

Scheduler/Queues

  • The cluster scheduler is Slurm. All nodes are configured to be in the "normal" partition with no time limits. See the Slurm documentation page for details; the Slurm Quick Start guide is a great place to get started. A minimal example batch script is sketched after the partition table below.
  • Remember, hyperthreading is enabled on the cluster, so Slurm considers each physical core to consist of two logical CPUs.
  • Partitions (queues):
Name   | Description | Time Limit
normal | all nodes   | no limit
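
For reference, here is a minimal sketch of a batch script for this cluster. The job name, task count, and program name are only illustrative; remember that with hyperthreading each physical core appears to Slurm as two logical CPUs.

#!/bin/bash
#SBATCH --job-name=example          # illustrative job name
#SBATCH --partition=normal          # the only partition on this cluster
#SBATCH --nodes=1
#SBATCH --ntasks=16                 # e.g., one task per physical core of a node
#SBATCH --time=24:00:00             # optional; the normal partition has no time limit

module load openmpi3/3.1.4
mpirun ./my_program                 # hypothetical MPI executable

Submit the script with sbatch <script name> and check its status with squeue -u $USER.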

Software

Work with Environment Modules

Set up the working environment for each package using the module command. The module command will activate dependent modules if there are any.

To show currently loaded modules (these are loaded by default by the system configuration):

-bash-4.2$ module list

Currently Loaded Modules:
  1) autotools   2) prun/1.3   3) gnu8/8.3.0   4) openmpi3/3.1.4   5) ohpc

To show all available modules (as of Sept 30, 2019):

-bash-4.2$ module avail

-------------------- /opt/ohpc/pub/moduledeps/gnu8-openmpi3 --------------------
boost/1.70.0    netcdf/4.6.3    pnetcdf/1.11.1
fftw/3.3.8      phdf5/1.10.5    py3-scipy/1.2.1

------------------------ /opt/ohpc/pub/moduledeps/gnu8 -------------------------
R/3.5.3        mpich/3.3.1       openblas/0.3.5        py3-numpy/1.15.3
hdf5/1.10.5    mvapich2/2.3.1    openmpi3/3.1.4 (L)

-------------------------- /opt/ohpc/pub/modulefiles ---------------------------
autotools          (L)    intel/19.0.2.187        prun/1.3        (L)
clustershell/1.8.1        julia/1.2.0             valgrind/3.15.0
cmake/3.14.3              octave/5.1.0            vim/8.1
gnu8/8.3.0         (L)    ohpc             (L)    visit/3.0.1
gurobi/8.1.1              pmix/2.2.2

Where:
L:  Module is loaded

To load a module and verify:

-bash-4.2$ module load R/3.5.3 
-bash-4.2$ module list

Currently Loaded Modules:
1) autotools   3) gnu8/8.3.0       5) ohpc             7) R/3.5.3
2) prun/1.3    4) openmpi3/3.1.4   6) openblas/0.3.5

To unload a module and verify:

-bash-4.2$ module unload R/3.5.3
-bash-4.2$ module list

Currently Loaded Modules:
1) autotools   2) prun/1.3   3) gnu8/8.3.0   4) openmpi3/3.1.4   5) ohpc

Install R Packages in Home Directory

If you need an R package that is not installed on the system, you can install it in a personal library in your home directory.
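
A minimal sketch of one common approach, assuming a hypothetical personal library directory ~/R/library and using ggplot2 only as an example package:

# create a personal library directory and tell R to use it
mkdir -p ~/R/library
export R_LIBS_USER=~/R/library

# load R and install the package into the personal library
module load R/3.5.3
Rscript -e 'install.packages("ggplot2", lib=Sys.getenv("R_LIBS_USER"), repos="https://cloud.r-project.org")'

Packages installed this way are found automatically in later R sessions as long as R_LIBS_USER points at the same directory.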

Manage Modules in Your Python Virtual Environment

Both python2 (2.7) and python3 (3.9) are installed. Users can manage their own python environment (including installing needed modules) using virtual environments. Please see the documentation on virtual environments on python.org for details.

Load Python Module

You will first want to load the desired or required python module for your work. Use the command module spider python or module avail to see what python modules are available, then load the specific required module, e.g.:

module load python/3.9.4

After you execute this, python3 should point to the python installed as part of the 3.9.4 module.

Create Virtual Environment

You can create as many virtual environments, each in its own directory, as needed:

  • python2: python -m virtualenv <your virtual environment directory>
  • python3: python3 -m venv <your virtual environment directory>

Activate Virtual Environment

You need to activate a virtual environment before using it:

source <your virtual environment directory>/bin/activate
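
When you are finished working in the environment, leave it with:

deactivate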

Install Python Modules Using pip

After activating your virtual environment, you can install python modules for the activated environment:

  • It's always a good idea to update pip first:

    pip install --upgrade pip
  • Install the module:

    pip install <module name>
    
  • List installed python modules in the environment:

    pip list
    
  • Example: install tensorflow and keras like this:

    -bash-4.2$ python3 -m venv tensorflow
    -bash-4.2$ source tensorflow/bin/activate
    (tensorflow) -bash-4.2$ pip install --upgrade pip
    Collecting pip
        Using cached https://files.pythonhosted.org/packages/30/db/9e38760b32e3e7f40cce46dd5fb107b8c73840df38f0046d8e6514e675a1/pip-19.2.3-py2.py3-none-any.whl
    Installing collected packages: pip
        Found existing installation: pip 18.1
            Uninstalling pip-18.1:
                Successfully uninstalled pip-18.1
    Successfully installed pip-19.2.3
    (tensorflow) -bash-4.2$ pip install tensorflow keras
    Collecting tensorflow
    Using cached https://files.pythonhosted.org/packages/de/f0/96fb2e0412ae9692dbf400e5b04432885f677ad6241c088ccc5fe7724d69/tensorflow-1.14.0-cp36-cp36m-manylinux1_x86_64.whl
    :
    :
    :
    Successfully installed absl-py-0.8.0 astor-0.8.0 gast-0.2.2 google-pasta-0.1.7 grpcio-1.23.0 h5py-2.9.0 keras-2.2.5 keras-applications-1.0.8 keras-preprocessing-1.1.0 markdown-3.1.1 numpy-1.17.1 protobuf-3.9.1 pyyaml-5.1.2 scipy-1.3.1 six-1.12.0 tensorboard-1.14.0 tensorflow-1.14.0 tensorflow-estimator-1.14.0 termcolor-1.1.0 werkzeug-0.15.5 wheel-0.33.6 wrapt-1.11.2
    (tensorflow) -bash-4.2$ pip list
    Package              Version
    -------------------- -------
    absl-py              0.8.0  
    astor                0.8.0  
    gast                 0.2.2  
    google-pasta         0.1.7  
    grpcio               1.23.0 
    h5py                 2.9.0  
    Keras                2.2.5  
    Keras-Applications   1.0.8  
    Keras-Preprocessing  1.1.0  
    Markdown             3.1.1  
    numpy                1.17.1 
    pip                  19.2.3 
    protobuf             3.9.1  
    PyYAML               5.1.2  
    scipy                1.3.1  
    setuptools           40.6.2 
    six                  1.12.0 
    tensorboard          1.14.0 
    tensorflow           1.14.0 
    tensorflow-estimator 1.14.0 
    termcolor            1.1.0  
    Werkzeug             0.15.5 
    wheel                0.33.6 
    wrapt                1.11.2
    

Run MPI-Enabled Python in a Singularity Container

The following Dockerfile should create an Ubuntu image that is able to run Python applications parallelized with mpi4py. Note that simply running apt-get install -y libopenmpi-dev in Ubuntu 18.04 will not generally install an Open MPI version that is recent enough to be compatible with the host's Open MPI version.

### start with ubuntu base image
FROM ubuntu:18.04

### install basics, python3, and modules needed for application
RUN apt-get update && apt-get upgrade -y && apt-get install -y build-essential zlib1g-dev libjpeg-dev python3-pip openssh-server wget
RUN pip3 install Pillow numpy pandas matplotlib cython

### install Open MPI version 4.0.5, consistent with Hopper & TheCube
RUN wget 'https://www.open-mpi.org/software/ompi/v4.0/downloads/openmpi-4.0.5.tar.gz' -O openmpi-4.0.5.tar.gz
RUN tar -xzf openmpi-4.0.5.tar.gz openmpi-4.0.5; cd openmpi-4.0.5; ./configure --prefix=/usr/local; make all install
RUN ldconfig

### install mpi4py now that openmpi is installed
RUN pip3 install mpi4py

### add all code from current directory into "code" directory within container, and set as working directory
ADD .  /code
WORKDIR /code
ENV PATH "/code:$PATH"

### compile cython for this particular application
RUN python3 setup.py build_ext --inplace

### set python file as executable so it can be run by docker/singularity
RUN chmod +rx /code/run_reservoir_sim.py

### change username from root
RUN useradd -u 8877 <my_username>
USER <my_username>
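
To turn this Dockerfile into the .sif image used below, one common approach is to build the Docker image on a machine where Docker is available, convert it with Singularity, and copy the resulting file to the cluster. A minimal sketch, assuming a hypothetical local image tag of cython_reservoir:0.1:

# build the docker image from the Dockerfile in the current directory (tag is illustrative)
docker build -t cython_reservoir:0.1 .

# convert the local docker image into a Singularity image file (.sif)
singularity build cython_reservoir_0.1.sif docker-daemon://cython_reservoir:0.1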

The resulting image can then be run in a Singularity container by putting commands like these into your Slurm batch file:

module load singularity
mpirun singularity run cython_reservoir_0.1.sif run_reservoir_sim.py
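
For example, a minimal batch file wrapping these commands might look like the sketch below; the job name and node/task counts are only illustrative.

#!/bin/bash
#SBATCH --job-name=reservoir_sim     # illustrative job name
#SBATCH --partition=normal
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16         # e.g., one task per physical core

module load singularity
mpirun singularity run cython_reservoir_0.1.sif run_reservoir_sim.py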

Software List

Software | Path | Notes
Intel Compilers, MPI, and MKL | /opt/ohpc/pub/compiler/intel/2019/ | module unload gnu8; module load intel/19.0.2.187; module load impi/2019.2.187
gcc 8.3 | /opt/ohpc/pub/compiler/gcc/8.3.0/ | module load gnu8/8.3.0 (loaded by default)
Openmpi 3.1.4 | /opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4 | module load openmpi3/3.1.4 (loaded by default)
python 3.9.4 | /opt/ohpc/pub/utils/python/3.9.4 | module load python/3.9.4
python 2.7.16 | /opt/ohpc/pub/utils/python/2.7.16 | module load python/2.7.16
perl 5.30.1 | /opt/ohpc/pub/utils/perl/5.30.1 | module load perl/5.30.1
Boost 1.70.0 | /opt/ohpc/pub/libs/gnu8/openmpi3/boost/1.70.0 | module load boost/1.70.0
cmake 3.14.3 | /opt/ohpc/pub/utils/cmake/3.14.3 | module load cmake/3.14.3
hdf5 1.10.5 | /opt/ohpc/pub/libs/gnu8/hdf5/1.10.5 | module load hdf5/1.10.5
octave 5.1.0 | /opt/ohpc/pub/apps/octave/5.1.0 | module load octave/5.1.0
netcdf 4.6.3 | /opt/ohpc/pub/libs/gnu8/openmpi3/netcdf/4.6.3 | module load netcdf/4.6.3
fftw 3.3.8 | /opt/ohpc/pub/libs/gnu8/openmpi3/fftw/3.3.8 | module load fftw/3.3.8
valgrind 3.15.0 | /opt/ohpc/pub/utils/valgrind/3.15.0 | module load valgrind/3.15.0
visit 3.0.1 | /opt/ohpc/pub/apps/visit/3.0.1 | module load visit/3.0.1
R 3.5.3 | /opt/ohpc/pub/libs/gnu8/R/3.5.3 | module load R/3.5.3
openblas 0.3.5 | /opt/ohpc/pub/libs/gnu8/openblas/0.3.5 | module load openblas/0.3.5
vim 8.1 | /opt/ohpc/pub/apps/vim/8.1 | module load vim/8.1
julia 1.2.0 | /opt/ohpc/pub/compiler/julia/1.2.0 | module load julia/1.2.0
gurobi 8.1.1 | /opt/ohpc/pub/apps/gurobi/8.1.1 | module load gurobi/8.1.1. Create a ~/gurobi.lic file with the following line: TOKENSERVER=infrastructure2.tc.cornell.edu. gurobipy is installed in python-3.6.9; you can use it by loading that module.
remora 1.8.3 | /opt/ohpc/pub/apps/remora/1.8.3 | module load remora/1.8.3
GMAT R2019aBeta1 | /opt/ohpc/pub/apps/GMAT/R2019aBeta1 | module load GMAT/R2019aBeta1

Help