Portable Batch System

The Center for Computational Science's OneSIS computing cluster uses Cluster Resources' Torque Portable Batch System (PBS) software along with the Maui scheduler. All jobs must be run through Torque/PBS (`qsub`).

All faculty and students at Tulane University are permitted access to the computational resources at the Center for Computational Science, using the following queue structure (listed in increasing order of permission and responsibility):

Queue Name  Wall Time  Memory              Max # CPUs / User  Max # Running Jobs  Priority  Jobs / User
ccs_short   < 6 hrs    800 MB / processor  None               50                  High      10
ccs_long    < 72 hrs   800 MB / processor  None               50                  Low       10


Queue Name: the name of the queue.

Wall Time: maximum wall-clock time allowed for a job.

Memory: maximum memory available to a job, per processor.

Max # CPUs / User: maximum number of CPUs available to a single user's job.

Max # Running Jobs: maximum number of jobs allowed to run concurrently in the queue.

Priority: relative scheduling priority; jobs in higher-priority queues are dispatched first.

Jobs / User: maximum number of jobs one user may have running in the queue at the same time.

SUBMITTING JOBS

To submit jobs: ssh to ares and run `qsub job.sh` from your scratch directory (e.g. /scratch00/wcurry).
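A typical submission session looks like the following (using the example user, path, and script name from this page):

`ssh ares`
`cd /scratch00/wcurry`
`qsub job.sh`

qsub prints the id of the newly created job; note it down, since the status and deletion commands below take that id.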

JOB STATUS

Run `qstat` to see your job’s status.
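Two other standard Torque forms of the command are often useful: `qstat -u wcurry` lists only the jobs belonging to user wcurry (the example user on this page), and `qstat -f <job_id>` prints full details for a single job.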

DELETING JOBS

`qdel <job_id>` will delete a job from the cluster.
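The job id is the number that qsub printed when the job was submitted; it also appears in the first column of `qstat` output. For example, to remove job 12345 (a made-up id):

`qdel 12345`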

AVAILABLE QUEUES

ccs_short: Short jobs (6 hours or less). Jobs in this queue have the highest priority.
ccs_long:  Long jobs (greater than 6 hours, up to the 72-hour wall-time limit).

SERIAL JOB SCRIPT

----BEGIN serial_job.sh----
## queue name
#PBS -q ccs_short

## job name
#PBS -N my_serial_job        

## estimated wall time (hh:mm:ss)
#PBS -l walltime=01:00:00

## leave unchanged for serial jobs
#PBS -l nodes=1:ppn=1

## path to your working directory
#PBS -d /scratch02/wcurry

/scratch02/wcurry/path/to/serial/binary
----END serial_job.sh----
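Submit the script with `qsub serial_job.sh`. When the job finishes, Torque normally returns its standard output and standard error in files named <job name>.o<job id> and <job name>.e<job id> (e.g. my_serial_job.o12345 for a job with id 12345).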

More PBS options are available.  See the qsub man page for information.
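For example, the following standard qsub options, added to the job script above, request e-mail notification (the address below is only a placeholder):

## send mail when the job begins (b), ends (e), or aborts (a)
#PBS -m abe

## address to deliver the mail to (placeholder)
#PBS -M wcurry@tulane.edu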

PARALLEL JOB SCRIPT

----BEGIN parallel_job.sh----
## queue name
#PBS -q ccs_short

## job name
#PBS -N my_parallel_job            

## estimated wall time (hh:mm:ss)
#PBS -l walltime=01:00:00

## number of nodes and processors per node (ppn)
#PBS -l nodes=2:infiniband:ppn=4

## path to your working directory
#PBS -d /scratch02/wcurry

## -n <total processors> is 8
## 2 nodes, 4 processors per node
mpiexec -n 8 /scratch02/wcurry/path/to/parallel/binary
----END parallel_job.sh----
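The processor count passed to mpiexec must equal nodes x ppn (2 x 4 = 8 above). One way to avoid keeping the two in sync by hand is to count the entries in the node file Torque creates for every job; a sketch, replacing the mpiexec line above ($PBS_NODEFILE is set by Torque and contains one line per allocated processor):

## derive the processor count from the node file Torque generates for this job
NP=$(wc -l < $PBS_NODEFILE)
mpiexec -n $NP /scratch02/wcurry/path/to/parallel/binary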

COMPILING FOR MPI OVER INFINIBAND

Those using MPI should recompile their code using `mpicc` or `mpif77` after loading the appropriate compiler and mvapich2 module via:

# GNU #
`module load gcc/4.3.1`
`module load mvapich2-gnu-ib`

# INTEL #
`module load intel/9.1`
`module load mvapich2-intel-ib`

# PGI #
`module load pgi/7.2`
`module load mvapich2-pgi-ib`
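For example, to build an MPI program with the GNU toolchain (the source and binary names below are placeholders):

`module load gcc/4.3.1`
`module load mvapich2-gnu-ib`
`mpicc -O2 -o my_mpi_program my_mpi_program.c`

Fortran 77 codes are compiled the same way with `mpif77`.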

Center for Computational Science, Stanley Thomas Hall 402, New Orleans, LA 70118 ccs@tulane.edu