// Discover CSS Access through Slurm

CSS read-only access on Discover is provided on a subset of Discover's Slurm-managed compute nodes. These are limited to Scalable Unit 16, which includes two node types: 676 CPU-only nodes with the Intel "Cascade Lake" architecture and twelve nodes with AMD "Rome" CPUs paired with NVIDIA A100 GPUs, and to Scalable Units 17 and 18, which together contain 1,408 nodes with the AMD "Milan" CPU architecture. (As always, CSS will remain available on all login and gateway nodes.)

To run Slurm jobs on these nodes, request the constraint "cssro" combined with a processor-type constraint. For the CPU-only Cascade Lake nodes, specify constraint "cas"; the resulting inline directive is (double quotes are mandatory here):

#SBATCH --constraint="cas&cssro"

For Milan nodes, specify constraint "mil"; the resulting inline directive is (double quotes are mandatory here):

#SBATCH --constraint="mil&cssro"

For the AMD Rome+GPU nodes, specify:

#SBATCH --constraint="rome&cssro"

(Note: for the GPU partition only, you'll need to specify --partition=gpu_a100 along with other GPU-specific Slurm options. You will need to request access to this small, special-use partition. See access and usage details on the Discover GPU page.)
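Putting the constraint and partition pieces together, a minimal GPU batch header might look like the sketch below. The job name, walltime, GPU count, account, and application are placeholders, and `--gres=gpu:1` is the generic Slurm syntax for requesting one GPU; check the Discover GPU page for the site-specific options.

```shell
#!/bin/bash
# Sketch of a GPU job header for the Rome+A100 nodes.
# Job name, walltime, account, and application are placeholders.
#SBATCH -J my_gpu_job
#SBATCH -t 01:00:00
#SBATCH --constraint="rome&cssro"
#SBATCH --partition=gpu_a100
#SBATCH --gres=gpu:1
#SBATCH --account=xxxx

./my_gpu_application
```

You can confirm which nodes advertise a given feature with `sinfo -o "%N %f"`.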

// Transferring Final Data Products to CSS via Slurm

If you have write access to a shared fileset on CSS, you can copy final data from Discover to CSS using a datamove node, either in a Slurm batch job or interactively. Add the following directives to your Slurm script to direct the job to a datamove node with write access:

#SBATCH --constraint=cssrw
#SBATCH --partition=datamove

Note: The "cssrw" constraint will *not* work from compute nodes. Here is an example Slurm batch script that copies a large file from your $NOBACKUP to a project directory on CSS:

#!/bin/bash
#SBATCH -J mycopyjob
#SBATCH -t 00:01:00
#SBATCH --constraint=cssrw
#SBATCH -p datamove
#SBATCH --account=xxxx

# Initialize the module system and start from a clean environment
source /usr/share/modules/init/bash
module purge

# Copy the file from $NOBACKUP to the CSS destination
cp /discover/nobackup/bigfile /css/destination/
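Assuming the script above is saved as `copyjob.sh` (the filename is a placeholder), submission and monitoring follow the usual Slurm workflow:

```shell
# Submit the batch copy job (filename is a placeholder)
sbatch copyjob.sh

# Check its state while it is queued or running
squeue -u $USER -p datamove

# After completion, optionally verify the copy arrived intact
md5sum /discover/nobackup/bigfile /css/destination/bigfile
```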

As a rule of thumb, set the wall time limit to two seconds per gigabyte, then double that for good measure. The walltime limit on the datamove partition is two hours.
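The rule of thumb above (two seconds per gigabyte, doubled for safety) can be turned into a quick calculation. This helper function is only a sketch; its name is hypothetical.

```shell
# Estimate a safe walltime for a copy: 2 seconds per GB, doubled.
# Usage: estimate_walltime <size_in_GB>   (hypothetical helper)
estimate_walltime() {
    local gb=$1
    local secs=$(( gb * 2 * 2 ))
    printf '%02d:%02d:%02d\n' $(( secs / 3600 )) $(( secs % 3600 / 60 )) $(( secs % 60 ))
}

estimate_walltime 300   # a 300 GB transfer -> 00:20:00
```

Remember that the result must stay under the two-hour datamove limit; split very large transfers into multiple jobs if needed.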

If you have any issues accessing these CSS resources, please submit a ticket to support with the subject "CSS access on Discover", so we can assist your efforts.