CyEnce: Storage

Home directories
  Currently each user has 1 GB in home directory space (/home/<username>).
  This is for configuration and login files.  It will be quite a bit slower
  than other disk resources, so it should not be used for high-volume access.

Long-term Group storage
  Each group has a directory shared by all members of the group (and only 
  that group).  These directories are links under the directories /work*. 
  To find your group, issue the "groups" command; the first group listed
  is your primary group.  Issue "ls -d /work*/*" to see all group
  directories.

  E.g., for user jjc, "groups" shows "ccresearch users".  This means that
  /work/ccresearch is available on all nodes and jjc can create files
  there using the ccresearch group's quota.  The group quota for this
  space is based on the shares for the group and can be seen using
  "quota -gs".
  Always use the /work*/<group> links to access your group's storage,
  since the location of the group directories may change.
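
  The steps above can be combined into a quick check (a sketch; `id -gn`
  prints the same primary group that "groups" lists first):

```shell
#!/bin/sh
# Find the primary group and the corresponding /work directory.
GROUP=$(id -gn)          # same group that "groups" lists first
echo "primary group: $GROUP"
echo "group storage:  /work/$GROUP"
ls -d /work*/* 2>/dev/null || true   # list all group directories (cluster only)
quota -gs 2>/dev/null || true        # show group quotas (cluster only)
```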

Large short-term storage
  Currently 288 TB of short-term storage is available in the Lustre file
  system at /lustre.  Please use this space only for large files.  Using
  it for small files (less than 8 MB) is likely to be slower than the NFS
  storage, due to lack of parallelism and increased overhead, and it
  decreases the lifespan of the metadata disks.

  To use /lustre create your directory:
     mkdir /lustre/<group>/$USER (where <group> is your primary group name)
  and use that directory.

  Since this is short-term scratch storage, files in /lustre are subject
  to deletion to make space for other users.
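
  A minimal sketch of setting up the scratch directory; it only creates
  the directory when /lustre/<group> actually exists, so the same script
  is safe to run on a machine without the Lustre mount:

```shell
#!/bin/sh
# Create a per-user scratch directory under /lustre/<group>.
GROUP=$(id -gn)                              # primary group name
SCRATCH="/lustre/$GROUP/${USER:-$(id -un)}"  # per-user scratch path
if [ -d "/lustre/$GROUP" ]; then             # only where /lustre is mounted
    mkdir -p "$SCRATCH"
fi
echo "scratch directory: $SCRATCH"
```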

Temporary local storage
  Each compute node has a local disk drive (about 2.5 TB) that can be
  used by reading and writing to $TMPDIR.  This is temporary storage that
  can be used only during the execution of your program, and only the
  processes executing on a node have access to that node's disk drive.
  You must ensure that multiple processes executing on the same compute
  node don't accidentally access files in $TMPDIR meant for other
  processes; one way to accomplish this is to include the MPI process
  rank in temporary file names.

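  One way to build such per-rank file names from a shell wrapper (a
  sketch; the rank environment variable depends on the MPI
  implementation -- OMPI_COMM_WORLD_RANK and PMI_RANK are common but are
  assumptions here, not guaranteed names):

```shell
#!/bin/sh
# Build a temporary file name that is unique per MPI rank.
RANK=${OMPI_COMM_WORLD_RANK:-${PMI_RANK:-0}}   # fall back to 0 outside MPI
SCRATCH=${TMPDIR:-/tmp}                        # node-local temporary space
MYTMP="$SCRATCH/myrun_rank${RANK}.dat"         # hypothetical file name
echo "intermediate data" > "$MYTMP"            # no other rank touches this file
# ... the program reads and writes $MYTMP during the run ...
rm -f "$MYTMP"                                 # clean up before the job ends
```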
Submitting MPI jobs
  When submitting an MPI job make sure that the executable resides in one 
  of the shared locations:
       /home/<user>     (where <user> is your user name)
       /work/<group>    (where <group> is your group name)
        /lustre/<group>  (where <group> is your group name)
  All these locations are mounted on each of the compute nodes.
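
  A quick sanity check before submitting (a sketch; the path below is
  purely illustrative):

```shell
#!/bin/sh
# Check that an executable lives on a filesystem visible to the compute nodes.
EXE="/home/jjc/a.out"                # illustrative path; substitute your own
case "$EXE" in
    /home/*|/work/*|/lustre/*) SHARED=yes ;;   # shared mount: OK to submit
    *)                         SHARED=no  ;;   # e.g. $TMPDIR: not visible
esac
echo "$EXE on shared storage: $SHARED"
```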