Home directories

Each user currently has 1 GB of home directory space (/home/<username>). This space is intended for configuration and login files. It is quite a bit slower than the other disk resources, so it should not be used for high-volume access.

Long-term group storage

Each group has a directory shared by all members of that group (and only that group). These directories are links under /work. To find your group, issue the "groups" command; the first group listed is your primary group. Issue "ls -d /work/*" to see all group directories. For example, for user jjc, "groups" shows "ccresearch users". This means that /work/ccresearch is available on all nodes and jjc can create files there using the ccresearch group's quota. The group quota for this space is based on the group's shares and can be seen with "quota -gs". Always use the /work/<group> links to access your group's storage, since the location of the group directories may change.

Large short-term storage

For large short-term storage, 288 TB is currently available in the Lustre space /ptmp. Please use this only for large files. Using it for small files (less than 8 MB) is likely to be slower than the NFS storage, due to the lack of parallelism and increased complexity, and it decreases the lifespan of the metadata disks. To use /ptmp, create your directory with "mkdir /ptmp/<group>/$USER" (where <group> is your primary group name) and work within that directory. Since /ptmp is short-term scratch storage, files there are subject to deletion to make space for other users; we expect to delete files that are more than 90 days past their creation date. Note that when extracting tar files, one should use the -m flag so that all extracted files carry the date they were extracted, not the date at which the author of the tar file created them.

Temporary local storage

You can use the disk drive on each compute node by reading and writing to $TMPDIR (about 2.5 TB).
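The effect of tar's -m flag described above can be checked with a small experiment; the /tmp/tardemo path below is only an illustration, not a cluster location:

```shell
# Create a file with an old modification time and pack it into a tarball.
mkdir -p /tmp/tardemo/src && cd /tmp/tardemo
echo "data" > src/old.txt
touch -t 202001010000 src/old.txt      # pretend the file dates from 2020
tar -cf archive.tar -C src old.txt

# Extract with -m: the file's mtime becomes the extraction time, so the
# 90-day /ptmp cleanup counts its age from now, not from 2020.
mkdir -p out && tar -xmf archive.tar -C out
ls -l out/old.txt
```

Without -m, the extracted file would keep its 2020 timestamp and could be deleted from /ptmp almost immediately.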
This is temporary storage that can be used only during the execution of your program, and only processes executing on a node have access to that node's disk drive. You must ensure that multiple processes executing on the same compute node do not accidentally access files in $TMPDIR meant for other processes; one way to accomplish this is to include the MPI process rank in temporary filenames.

MyFiles

MyFiles is mounted on the Condo cluster. To access your directory, ssh to condodtn (you will not be prompted for a password). Once on condodtn, issue "kinit" and enter your ISU password when prompted. You will then be able to cd to /myfiles and find your directory.

Submitting an MPI job

When submitting an MPI job, make sure that the executable resides in one of the shared locations:

/home/<user> (where <user> is your user name)
/work/<group> (where <group> is your group name)
/ptmp/<group> (where <group> is your group name)

All of these locations are mounted on each compute node.
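One way to keep per-process files in $TMPDIR separate, as advised in the temporary local storage section above, is to build the filename from a rank identifier. A minimal sketch, assuming the launcher exports the rank in an environment variable such as SLURM_PROCID (your launcher may use PMI_RANK or OMPI_COMM_WORLD_RANK instead; "myapp" is a placeholder name):

```shell
# Fall back to the shell PID if no rank variable is set, so the filename
# is still unique among processes on this node.
RANK="${SLURM_PROCID:-$$}"
SCRATCH="${TMPDIR:-/tmp}/myapp_rank${RANK}.dat"

echo "intermediate results for rank ${RANK}" > "$SCRATCH"
cat "$SCRATCH"

# Clean up before the job ends; node-local storage is not shared and
# is reclaimed after the job anyway.
rm -f "$SCRATCH"
```

Because every process on the node computes a different $SCRATCH path, no two processes read or overwrite each other's temporary files.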