// Using Cron on Explore

Cron is available via the alias "cron201". Rather than enabling cron on every node, cron201 is a single node that handles all of the cron work on Explore. It has access to all of the same filesystems as the other Explore nodes, as well as the same NFS filesystems as the login nodes.

To get to cron201, once logged into the Explore cluster, run:
$ ssh cron201

To access and edit the crontab, run:
$ crontab -e

Cron runs jobs in a bash environment only; it will not use your default shell environment. Users are encouraged to refer to the man pages for more information on crontab and on writing/running bash scripts. To access these man pages, run:
$ man bash
$ man cron
$ man crontab

Crontab Structure and Examples

 .---------------- minute (0 - 59)
 |  .------------- hour (0 - 23)
 |  |  .---------- day of month (1 - 31)
 |  |  |  .------- month (1 - 12) or jan,feb,mar...
 |  |  |  |  .---- day of week (0 - 6) (Sunday=0)
 |  |  |  |  |
 *  *  *  *  *  <command>
The asterisk (*) is a wildcard meaning every possible value for that field (i.e. every minute of every hour throughout the year, unless otherwise specified).
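As a sketch of how the five fields combine, here are a few common schedule patterns (the `*/` step syntax is supported by standard Vixie cron; <command> is a placeholder as in the diagram above):

```shell
30 2 * * *    <command>    # every day at 02:30
*/15 * * * *  <command>    # every 15 minutes
0 18 * * 1-5  <command>    # weekdays (Mon-Fri) at 18:00
```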

15 * * * * ( ssh foyer201 <command> <command_args> )

At 15 minutes past every hour, this example will ssh to foyer201 and run <command>.

21 13 * * * mycron.csh 1> FULLPATH/test.out 2>/dev/null
52  * * * * showquota 1>> FULLPATH/test.out 2>&1
21 13 * * * mycron.csh | mailx -s "Subject" User@wherever.co
Note: Please refer to the bash man page to learn more about the redirection operators (>, >>, &). You may also search for "bash redirections" online.
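The following sketch demonstrates, outside of cron, what the redirection operators in the examples above actually do (the function name "run" and the temp file are illustrative, not part of any cluster setup):

```shell
#!/bin/bash
# A command that writes to both standard output and standard error.
run() { echo "stdout line"; echo "stderr line" >&2; }

log=$(mktemp)
run 1> "$log" 2>/dev/null    # stdout overwrites the file, stderr is discarded
run 1>> "$log" 2>&1          # stdout appends, and stderr is merged into stdout
cat "$log"                   # the file now holds three lines
```

After both runs, the file contains "stdout line" once from the first run, then "stdout line" and "stderr line" from the second.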

This last example shows how to set up a cron job to email standard output and standard error to the user. Be careful: if the standard output or standard error is large (which may happen if the job does not run as expected), the mail daemon may have problems delivering the email; this can fill up /var on the node and then cause problems for the node itself.
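One way to guard against oversized mail is to cap the output before it reaches mailx. This variant of the example above is a sketch: it mails only the last 100 lines of combined output using tail (standard coreutils):

```shell
21 13 * * * mycron.csh 2>&1 | tail -n 100 | mailx -s "Subject" User@wherever.co
```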

Important: Compute- or memory-intensive work should not be run on cron201. Please run intensive jobs on a compute node, or submit them via Slurm if the compute node is part of a Slurm cluster.

To run a process on a compute node you will need to use remote ssh. An example script:
#!/bin/bash
ssh mycompute '/home/muser/myscript.sh' >> FULLPATH/myscript.out
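Note that cron cannot prompt for a password, so the ssh call in a script like the one above requires key-based (passwordless) authentication to the compute node to be set up beforehand. A crontab entry to run such a wrapper script nightly might look like the following (the script path is hypothetical, and the file must be executable):

```shell
# Run the remote-ssh wrapper script every night at 03:00.
# /home/muser/run_remote.sh is a hypothetical path to a script
# like the example above (make it executable with chmod +x).
0 3 * * * /home/muser/run_remote.sh
```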

To submit a Slurm job, create the appropriate sbatch script on a filesystem that is available on a Slurm submission node. Then create a cron script that remotely submits the batch script to Slurm, for example:
ssh <submission_node> '. /etc/profile; sbatch myjob.sh' 1>> FULLPATH/submit.out 2>&1
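For completeness, a minimal sbatch script might look like the sketch below. The #SBATCH directives shown are standard Slurm options, but the job name, walltime, and workload path are illustrative assumptions, not site-specific values:

```shell
#!/bin/bash
#SBATCH --job-name=mycronjob             # illustrative job name
#SBATCH --time=00:10:00                  # illustrative walltime limit
#SBATCH --output=FULLPATH/slurm-%j.out   # %j expands to the Slurm job ID

/home/muser/myscript.sh                  # hypothetical workload
```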