How To: Guides for users and Maryland T3 admins.

Help: Links and emails for further info.

Configuration: Technical layout of the cluster, primarily for admins.

Log: Has been moved to a Google page, accessible only to admins.

User How-To Guide

A number of guides are linked from the Help for Users page. The information below is specific to our cluster.

Please let Marguerite Tonjes know if you would like a section added to the user guide or if you find any inaccuracies.

Last edited August 20, 2011


 

Request an account

Simply email Marguerite Tonjes with your request.

The first time you log in, please ssh to the head node (follow these instructions to get PuTTY if you have Windows):

ssh hepcms-0.umd.edu

When you first log in, you will be prompted to create a new password; please do so. You will be automatically logged off and must log in again with the new password. The second time you log in to the head node, you will be prompted to generate a public/private RSA key pair. If you do not wish to use this functionality (we encourage you not to), simply hit return three times (for the file, the passphrase, and the passphrase confirmation). If you accidentally set up this functionality and wish to remove it, issue the commands:

rm -rf ~/.ssh
rm -rf ~/.Xauthority

Now that you have set your password and handled your RSA key, please direct all subsequent logins to the farm:

ssh hepcms.umd.edu

Note that your new password can take up to 30 minutes to propagate to the farm. Please avoid logging in directly to the head node if possible, and do not, under any circumstances, run resource-intensive jobs (such as cmsRun) on the head node. If you wish to change your password at a later time, however, you will need to do so on the head node.
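Assuming the standard Linux passwd utility is what handles password changes on this cluster (check with the admins if the commands below do not work), a later change would look like:

ssh hepcms-0.umd.edu     # password changes must be done on the head node
passwd                   # prompts for your current password, then the new one twice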

In addition to your home area, /home/username, you have space in /data/users/username. At present, neither area has user quotas, so please be courteous to other users of these disks; check with Marguerite Tonjes and/or Nick Hadley if you're not sure whether your usage is reasonable. Please place any large datasets in /data/users/username. Neither area is properly backed up (although both have safeguards), so any critical files should be backed up off-site.
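To keep an eye on how much space you are using (username below is a placeholder for your own account):

du -sh /home/username /data/users/username    # total size of your areas
df -h /home /data                             # free space left on the shared disks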

 

Connect to the cluster

From a Linux machine:

ssh -X username@hepcms.umd.edu

From a Windows machine:

PuTTY provides an ssh client for Windows machines. Configuring PuTTY after installation varies a bit from version to version, but the most important setting is the host name: hepcms.umd.edu. You will also want to turn X11 forwarding on, typically under Connection->SSH->X11 (Connection->SSH->Tunnels in some older versions), and you may want to set your username, typically under Connection->Data. Your settings can be saved for future sessions.

A Kerberos-enabled PuTTY is also available, which can be used for connecting to FNAL. Instructions to install and configure it can be found at the FNAL website. If you use this PuTTY client, be sure to turn off Kerberos authentication for your connection to hepcms, typically under Connection->SSH->Auth.

Xming is a lightweight X server for Windows and works with PuTTY (some Xming installers bundle a copy of PuTTY). It is needed for any software running on hepcms that opens windows on your machine, such as ROOT.

 

Transfer files to the cluster

These instructions are for transferring small quantities of personal files to or from your working area on the cluster. If you wish to transfer files from FNAL's or CERN's storage element, please use srmcp or PhEDEx. If you wish to transfer data from your store in /data, please break transfers up into small chunks, not exceeding 100MB in size. DBS-registered data hosted at our site can be copied using exactly the filename path given in DBS (always starting with /store).

From a Linux machine:

To the cluster:

scp filename.ext username@hepcms-0.umd.edu:~/filename.ext
OR
scp filename.ext username@hepcms-0.umd.edu:/data/users/username/filename.ext

From the cluster:

scp username@hepcms-0.umd.edu:~/filename.ext filename.ext
OR
scp username@hepcms-0.umd.edu:/data/users/username/filename.ext filename.ext

scp accepts wildcards (*) and the -r flag for recursive copies.
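For example, to copy an entire directory (the directory name is illustrative) to your data area:

scp -r myAnalysisDir username@hepcms-0.umd.edu:/data/users/username/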

From a Windows machine:

FileZilla is a useful Windows application for transferring files using various protocols. Once installed, select File->Site Manager. Click the "New Site" button and enter hepcms-0.umd.edu as the host. Select SFTP using SSH2 as the Servertype. Select Logontype Normal and enter your user name and password if desired. Click the "Connect" button. Transfer files by dragging them from the left directory to the right or vice versa.

 

Run CMSSW

A CMSSW working environment is created and jobs are executed on this cluster just as at any other site:

scramv1 list CMSSW
cmsrel CMSSW_X_Y_Z
cd CMSSW_X_Y_Z/subdir
cmsenv
cmsRun yourConfig.py

You can use this example CMSSW config file for testing; it has been verified to work in CMSSW_2_1_0 but should work in most modern releases. Further details on CMSSW can be found in the Workbook and in the tutorials.
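As a concrete instance of the sequence above (the release and config-file names are illustrative; pick a release that scramv1 actually lists), keeping the job output in a log file:

cmsrel CMSSW_2_1_0
cd CMSSW_2_1_0/src
cmsenv
cmsRun yourConfig.py >& yourConfig.log    # stdout and stderr go to the log file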

 

Get Kerberos tickets for FNAL & CERN

Two aliased commands have been set up in each new user's .cshrc/.bashrc files, automatically sourced on login. Execute either command below, with the appropriate username for FNAL or for CERN (not necessarily the same as your HEPCMS username).

  • kinit_fnal username@FNAL.GOV
  • kinit_cern username@CERN.CH
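You can verify that a ticket was obtained, or discard it when you are finished, with the standard Kerberos tools:

klist       # list your current tickets and their expiration times
kdestroy    # discard your tickets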

 

Check out code in the CVS CMSSW repository

  1. Tell CVS you want CMSSW code:
    cmscvsroot CMSSW
  2. Get your CERN kerberos ticket. If you have an account at CERN:
    kinit_cern username@CERN.CH
    If you don't have an account at CERN, get the anonymous password from someone else in CMS and:
    cvs login
    You will not be able to commit code to CVS using anonymous login.
  3. Examples of checking out code (a tag-based example follows this list):
    cvs co UserCode/Kirn/CondorAndCMSSW
    or
    cvs co Configuration/StandardSequences/python
  4. Further help for cvs commands is available:
    cvs --help
  5. You can browse the CMSSW CVS repository here.
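To check out a package at a specific tag rather than the head of the repository, add the -r option; the tag and package below are purely illustrative:

cvs co -r V08-06-25 PhysicsTools/PatAlgos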

 

Submit jobs to the cluster using Condor

Jobs are submitted via Condor using the command:

condor_submit jdl_file

Note that you may receive the warning:
WARNING: Log file ... is on NFS.
This could cause log file corruption and is _not_ recommended.

You can safely ignore this warning.

Details on other condor commands and possible values in the jdl file can be found in this Condor guide.
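For orientation, a bare-bones JDL for a job of this kind typically contains lines like the sketch below; the file names, email address, and argument are illustrative and do not reproduce the contents of the example files linked below.

universe     = vanilla
Executable   = condor-executable.sh
Arguments    = ttbar.py
Output       = ttbar_$(Cluster)_$(Process).stdout
Error        = ttbar_$(Cluster)_$(Process).stderr
Log          = ttbar_$(Cluster)_$(Process).condor
notify_user  = your_email@example.edu
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
Queue 1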

The following three files together are an example of how to submit CMSSW jobs to the cluster:

To run these files, install a CMSSW release and place ttbar.py inside the test subdirectory. ttbar.py has been verified to work in CMSSW_2_1_0, though it should work in most modern releases. condor-executable.sh and ttbar-condor_submit.jdl can be placed together in any directory. Edit ttbar-condor_submit.jdl with your email address, your CMSSW release, your username and the date and time (date '+%y%m%d_%H%M' prints a machine-readable format if desired). Make condor-executable.sh executable:

chmod +x condor-executable.sh

Then submit the jobs:

condor_submit ttbar-condor_submit.jdl

You can watch the status of your jobs; they should take about 15 minutes to complete once they start running:

condor_q -submitter your_username

You will receive email notification when a job completes. You can also watch condor_q:

watch -n 60 "condor_q -submitter your_username"

Once jobs are complete, the *.condor files (in CMSSW_X_Y_Z/test) will indicate whether Condor itself ran successfully, as well as provide the exit code of the CMSSW job. The *.stdout and *.stderr files will show output from condor-executable.sh, and the *.log files will show output from CMSSW itself.

 

Get your grid certificate & proxy

Get your grid certificate

Follow these instructions to get your personal certificate and to register with the CMS VO. To complete your registration with the CMS VO, you will also need a CERN lxplus account. This process can take up to a week. The same browser must be used for all steps connected with getting your grid certificate.

Certificates expire after a year; you will be emailed instructions on how to get a new one. Follow the instructions below every time you receive a new certificate.

Export your grid certificate from your browser; the interface varies from browser to browser. The exported file will probably have the extension .p12 or .pfx. Copy this file to the cluster following these instructions.

Now extract your certificate and encrypted private key:

If this is your first time: mkdir $HOME/.globus
To replace an existing certificate with a new one: rm $HOME/.globus/userkey.pem
openssl pkcs12 -in YourCert.p12 -clcerts -nokeys -out $HOME/.globus/usercert.pem
openssl pkcs12 -in YourCert.p12 -nocerts -out $HOME/.globus/userkey.pem
chmod 400 $HOME/.globus/userkey.pem
chmod 600 $HOME/.globus/usercert.pem
chmod go-rx $HOME/.globus
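If you want to double-check the extraction, the certificate's subject and validity dates can be inspected with standard openssl options:

openssl x509 -in $HOME/.globus/usercert.pem -noout -subject -dates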

To run grid jobs using CRAB, you will also need to register your grid identity in SiteDB following the instructions in this Twiki.

Get your proxy:

voms-proxy-init -voms cms

You can safely ignore the error "Cannot find file or dir: /home/condor/execute/dir_14135/userdir/glite/etc/vomses"

Proxies expire automatically after 24 hours; simply issue this command again to renew yours. You can get information on your proxy by issuing the command:

voms-proxy-info -all

 

Submit jobs to the grid using CRAB

These instructions assume you have already gotten your grid certificate & proxy. Additionally, you need to have registered your grid certificate in SiteDB. You should already have a CMSSW config file that you've run successfully with interactive commands (cmsRun); an example CMSSW config file is provided in the example below if you prefer. This CRAB tutorial can help you get started if you are unfamiliar with CRAB.

Note: CRAB is installed on the worker nodes only. It cannot be used from the head node (nor should it be, as the head node should not be used for resource-intensive tasks).

Temporary note regarding CRAB versions: The latest CRABserver is incompatible with older CRAB clients. The newest CRABserver, installed at FNAL and possibly other sites, requires CRAB_2_6_0. CRAB_2_6_0 is installed and linked as /scratch/crab/current. Older versions of CRAB are available in /scratch/crab and may be necessary to talk to older CRABservers, such as the one at Legnaro.

Setup your environment:

If you plan to use the gLite scheduler (currently the CRAB default), you will need to get the gLite-UI (user interface) environment:

source /scratch/gLite/gLite-UI/etc/profile.d/grid_env.csh

Get the CRAB environment:

source /scratch/crab/current/crab.csh

Navigate to the CMSSW_X_Y_Z subdirectory that contains the CMSSW config file you wish to submit. Get the CMSSW environment variables and copy the default crab.cfg file to this directory:

cmsenv
cp $CRABPATH/crab.cfg .

Edit crab.cfg:

Edit the following values in crab.cfg (a sketch of how they fit together follows this list):

  • datasetpath: set to a DBS dataset name, or leave as none if this is a production job
  • pset: set to the name of the CMSSW config file you wish to submit
  • total_number_of_events: the minimum number of events you wish to run over or produce (CRAB will run over at least this many events, but may run over more); set to -1 to run over all the events in the DBS dataset
  • number_of_jobs: the approximate number of jobs to create (a good rule of thumb is that jobs should run for ~1 hour); CRAB will attempt to create this many jobs, but may create more or fewer
  • output_file: a comma-separated list of the names of output files that you wish returned to you, typically root files
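For orientation, these parameters live in the [CMSSW] section of crab.cfg. A rough sketch with illustrative values follows; the default crab.cfg you copied above is the authoritative template:

[CMSSW]
datasetpath            = /tt0j_mT_70-alpgen/CMSSW_1_5_2-CSA07-2231/GEN-SIM-DIGI-RECO
pset                   = ttbar.py
total_number_of_events = -1
number_of_jobs         = 10
output_file            = ttbar_output.root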

Edit crab.cfg for SE output:

If your output files might exceed 50MB per job, then you must stage your output back to a storage element (SE). You can send it back to the hepcms cluster or to your user area in another SE (typically FNAL for people affiliated with UMD). The information below assumes you are using CRAB_2_4_2 or newer; the hepcms cluster has newer versions of CRAB installed. Although all files in the output_file list will be staged back to an SE, the log files, *.stderr and *.stdout, will still be retrieved via the normal means (the -getoutput command).

To stage your data back to the hepcms SE:

Create the directory where you want to store the data and give it correct permissions:

mkdir /data/users/username/subdir
chmod 775 /data/users/username/subdir

Alternatively, you can use the srm dropbox, already configured with the correct permissions:

/data/users/srm-drop

Be sure to move your files out of this directory within a week, as files older than a week in this directory are automatically deleted. You may want to make a subdirectory there, to guarantee your files won't be overwritten if someone else runs a job that produces files with the same names.
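For example, once your jobs are done, something like the following (the subdirectory names are illustrative) moves the output into your own area before the automatic cleanup removes it:

mkdir -p /data/users/username/myanalysis
mv /data/users/srm-drop/your_subdir/*.root /data/users/username/myanalysis/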

Set the following values in your crab.cfg file:

return_data = 0
copy_data = 1
storage_element = hepcms-0.umd.edu
storage_path = /srm/v2/server?SFN=/data/users
user_remote_dir = username/subdir
or
user_remote_dir = srm-drop

To stage your data back to your user area in the FNAL SE:

These directions assume that you have already been given user space on the FNAL SE. To request SE user space at FNAL, email Rob Harris & the FNAL T1 admins. You should also have an account at the FNAL LPC. Note that data which is staged out to FNAL cannot be easily copied to the UMD hepcms cluster, even if you used CRAB DBS registration. A way to do this easily is under development by the CMS computing group.

Log in to cmslpc.fnal.gov, create the directory where you want to store the data, and give it correct permissions:

mkdir /pnfs/cms/WAX/11/store/user/username/subdir
chmod 775 /pnfs/cms/WAX/11/store/user/username/subdir

Set the following values in your crab.cfg file:

return_data = 0
copy_data = 1
storage_element = cmssrm.fnal.gov
storage_path = /srm/managerv2?SFN=/11/store/user
user_remote_dir = username/subdir

Create, submit, watch, and retrieve your jobs:

crab -create
crab -submit
crab -status
crab -getoutput

You can also monitor the status of your jobs with the CMS dashboard. Select your name from the drop-down menu and you will be presented with tables and plots detailing the status of your jobs.

Output files will be stored inside the crab_x_y_z/res directory and in the SE, if you specified this option in your crab.cfg.

Example:

These files together can be used to create and submit CRAB production jobs whose output stages back directly (not using an SE):

To run these files, install a CMSSW release and place ttbar-local.py and crab.cfg inside any CMSSW_X_Y_Z subdirectory, such as src. ttbar-local.py has been verified to work in CMSSW_2_1_0, but should work in most modern releases of CMSSW. crab.cfg has been verified to work in CRAB_2_3_1, but it should work in most modern releases of CRAB.

Setup your environment:

voms-proxy-init -voms cms
source /scratch/gLite/gLite-UI/etc/profile.d/grid_env.csh
source /scratch/crab/current/crab.csh
cmsenv

Create the CRAB jobs:

crab -create

Submit them:

crab -submit

Watch for when they complete (this can take anywhere from 15 minutes to several hours):

crab -status

Once at least one job has completed:

crab -getoutput

Output will be inside the crab_x_y_z/res directory and can be viewed with root.
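For a quick look at one of the returned files (the file name is illustrative), get the appropriate version of root via cmsenv and open it:

cmsenv
root -l crab_x_y_z/res/ttbar_output.root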

Submit a CRAB job to the UMD hepcms cluster

If you're located remotely, you may want to submit jobs to the hepcms cluster via CRAB rather than Condor. CRAB jobs can be submitted from any computer that has the various grid client tools installed, including CRAB. Consult your site admin if you do not know the appropriate commands to set up the grid and CRAB environment. You need to set two parameters in your crab.cfg file:

se_white_list = T3_US_UMD
ce_white_list = UMD.EDU

We also recommend the condor_g scheduler, as it has a much faster response time. However, the computer from which you are submitting your job must have Condor-G installed (try which condor_gridmanager if you aren't sure). To use the condor_g scheduler, set the parameter in your crab.cfg file:

scheduler = condor_g

Of course, the hepcms cluster must have the version of CMSSW that you are using installed and must be hosting the data you are attempting to run over (production jobs require no input data). Check here for your dataset name. You may request to have a CMSSW version installed by contacting Marguerite Tonjes or Nick Hadley and may bring DBS-registered data to the cluster by submitting a PhEDEx request.

 

Transfer data via PhEDEx

You can view the data currently hosted at our site here.

To place a request to transfer data via PhEDEx, you will need your grid certificate to be in your browser. Follow these instructions to get your certificate if you don't have it already.

  1. Navigate to DBS and set the search parameters for your dataset.
  2. In the list of results, choose the dataset desired.
  3. Check the size of the dataset, both in terms of number of events and GB. You have a few options, depending on the size:
    1. Any transfers of a few hundred GB or less will probably be approved; typically all AOD/AODSIM datasets are small enough to get immediate approval.
    2. If the dataset is large and you don't need all the events, select the link labeled "Block info" and copy the Block id (or ids) containing the smallest number of events you require.
    3. If the dataset is large and you must have all the events, contact Marguerite Tonjes and/or Nick Hadley to get approval before submitting your PhEDEx request.
  4. Check the sites hosting the dataset (click the "Show" link). We must have a commissioned PhEDEx link from one of the sites listed as hosting the dataset for the transfer to complete successfully. Our currently commissioned links are listed here. If the desired link is not present, send an email to Marguerite Tonjes with the dataset name and your request to have a PhEDEx link commissioned to one of the hosting sites.
  5. Select the link labeled "PhEDEx" and provide your grid certificate when prompted.
  6. Under Data Items, verify that your desired dataset is listed. If you want to transfer a block, append the dataset name with a # and the block id. Multiple datasets (and multiple blocks) can be separated by a space. For example, to request two blocks:
    /tt0j_mT_70-alpgen/CMSSW_1_5_2-CSA07-2231/GEN-SIM-DIGI-RECO#7efe4257-c594-472f-a638-ac9100321b2f /tt0j_mT_70-alpgen/CMSSW_1_5_2-CSA07-2231/GEN-SIM-DIGI-RECO#25831448-37a3-4f05-b470-429fad0d090e
  7. Under Destinations, select T3_US_UMD.
  8. Select these options from the drop down menus:
    Transfer Type: Replica (default)
    Subscription Type: Growing (default)
    Priority: Normal (or leave as Low if you prefer; it doesn't matter much for our site)
    Custodial: Non-custodial (default)
    Group: undefined (default)
  9. Enter a Comment regarding the reason you need the dataset transferred to the cluster.
  10. Click the "Submit Request" button.
  11. You will receive emails notifying you that you've made the request and whether or not your request was approved.
  12. You can monitor the status of the transfer here by providing the dataset name. Transfers typically take about a day, but can take several weeks in some special cases.
  13. When you no longer need the dataset, please inform Marguerite Tonjes and/or Nick Hadley so that the space can be used for other datasets.

You can run over the data hosted at the site with CMSSW interactively, via Condor jobs, or via CRAB jobs.

Interactive and Condor jobs need the PoolSource fileNames to be specified in the CMSSW config file. The PoolSource fileNames can be found in DBS at the dataset's link "LFNs: cff" or "LFNs: py," as you prefer. Unlike at some other sites, no modifications need to be made to the paths provided by DBS; the paths work exactly as written. If you transferred a block (or blocks) of the dataset, rather than the entire dataset, you must get the PoolSource fileNames for just that block: under the link "Block info," in the table row containing the block id that was transferred, select the link titled "cff" under the "LFNs" column.

CRAB jobs need the datasetpath set in the CRAB config file. Simply set datasetpath to the name of the dataset in DBS and CRAB will automatically determine which blocks are hosted at the site and the paths.

 

Transfer data via FileMover

FileMover is a new utility that provides a user-friendly interface to download individual files registered in DBS. It is intended primarily for local tests and examination prior to submitting a CRAB job to run over the entire dataset. It is not intended for downloading an entire dataset; FileMover restricts users to 10 transfer requests in a 24-hour period. FileMover can be accessed here and can only be viewed in Firefox. You will need a Hypernews account to log in.

To download a file for testing:

  1. Navigate to the DBS browser and find a dataset with your desired characteristics using the menu interface. Alternatively, the Summer08 production page also has a list of samples produced and their names in DBS, which can be pasted directly into the text box - press the "Search" button to display the needed information.
  2. Click on the "LFNs: ... plain" link to get a list of the files in the dataset. Copy one of the file names.
  3. Paste the file name into FileMover in the "Request file via LFN" box, then click the Request button.
  4. If there are no problems with the file, the transfer will begin and usually completes in five to ten minutes. Once complete, right click the Download link and select "Copy Link Location."
  5. Log into the cluster (ssh hepcms.umd.edu) and execute:
    wget --no-check-certificate "https://cmsweb.cern.ch/filemover/download/..."
    where the quotes contain the copied link.
  6. The file can now be examined with ROOT or run over with CMSSW for local tests. For ROOT, call cmsenv in the appropriate CMSSW release area to get the appropriate version of root in your path. For CMSSW, specify the file location by prepending its full path in your PoolSource with the keyword "file:", which directs CMSSW to look for the file in the normal filesystem rather than through the UMD site catalog.

 

Transfer data via the srm protocol

The srm protocol is particularly useful for getting single files located at a grid storage element such as FNAL's or CERN's dCache. Examples are provided here for getting files from these two types of storage elements, both for personal files located at these storage elements and for official, DBS-registered files. You will need your grid certificate and proxy to execute srmcp commands. Additionally, you must use a shell in which you have not sourced the gLite-UI environment (source /scratch/gLite/gLite-UI/etc/profile.d/grid-env.csh).

Set directory permissions:

Files can be transferred to any directory which is writeable by the users group. To make a directory group writeable:

chmod 775 dirName

Note this directory must be placed inside a directory which is at least group readable. User areas in /home are not group readable by default. If your directory for the srm transfer is inside /home, you must make your /home directory group readable:

chmod 755 /home/username

Your directory in /data/users is already group readable. We encourage users to place large files in /data/users and use /home for smaller files.

Alternatively, the directory /data/users/srm-drop is specifically designated as a drop box for srm transfers and has all the correct permissions. Feel free to create a subdirectory in srm-drop if you need one. Week-old files are automatically removed from this folder, so please be sure to transfer your files out of this directory once srmcp has completed.

General srmcp syntax:

srmcp functions as a typical copy command, where you specify the source, then the destination. srmcp transfers only one file at a time and does not accept wildcards or recursive functionality.

If you are located at the UMD cluster:

srmcp -2 "srm://se-where-data-is-located:8443/se-path?SFN=/file-system-path/filename.ext" file:////full-path/filename.ext"

If you are located at a remote site, you may need to set up your environment to get the srmcp binary in your PATH (which srmcp to verify). Once you've done so:

srmcp -2 file:////full-path/filename.ext "srm://hepcms-0.umd.edu:8443/srm/v2/server?SFN=/path/filename.ext"

If you wish to get data from the cluster and you are located at a remote site, you will also need the access_latency parameter (not required if you are located at the UMD cluster):

srmcp -2 -access_latency=ONLINE "srm://hepcms-0.umd.edu:8443/srm/v2/server?SFN=/path/filename.ext" file:////full-path/filename.ext

Some older srmcp clients do not support the access_latency parameter, including the clients presently installed at FNAL and CERN. Note that scp and sftp are much easier ways to get data from the cluster, as our storage element is a normal disk system and does not require srmcp commands to retrieve data. Avoid using any of these commands (srmcp, scp, or sftp) for large transfers (100GB+) and contact Marguerite Tonjes and/or Nick Hadley if you must do so.

Finally, if you wish to perform third party transfers involving the UMD cluster and are located at a remote site, you will need the pushmode parameter (not required if you are located at the UMD cluster):

srmcp -2 -access_latency=ONLINE -pushmode=TRUE "srm://hepcms-0.umd.edu:8443/srm/v2/server?SFN=/path/filename.ext" "srm://se-host-name:8443/se-path?SFN=/file-system-path/filename.ext"

You can safely ignore the error:

"GridftpClient: Was not able to send checksum value:org.globus.ftp.exception.ServerException: Server refused performing the request. Custom message: (error code 1) [Nested exception message: Custom message: Unexpected reply: 500 Invalid command.] [Nested exception is org.globus.ftp.exception.UnexpectedReplyCodeException: Custom message: Unexpected reply: 500 Invalid command.]"

A good way to validate that your transfer completed successfully is to set the flag -debug=TRUE and to check that the file has non-zero size after the transfer completes (ls -lh).
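For example, using the same placeholders as in the syntax above:

srmcp -2 -debug=TRUE "srm://se-where-data-is-located:8443/se-path?SFN=/file-system-path/filename.ext" file:////full-path/filename.ext
ls -lh /full-path/filename.ext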

Note: Transfers using the srm protocol to or from afs areas can be problematic. It is recommended that you transfer your file from an afs area to a non-afs area before attempting to transfer it via srm. The UMD cluster does not have any afs areas.

Examples:

These examples assume you have already imported your grid certificate to the site at which you are executing the srmcp calls. Instructions to do so at the UMD cluster are here.

Some examples use DBS paths, which always start with /store and are given for all the files in the dataset at the DBS browser website. After finding your desired dataset, click on either link labeled "LFNs: cff, plain" to get the list. For example, for the dataset "/tt0j_mT_70-alpgen/CMSSW_1_5_2-CSA07-2231/GEN-SIM-DIGI-RECO", an example DBS file path is:

/store/mc/2007/8/31/CSA07-tt0j_mT_70-alpgen-2231/0001/0016DF8B-1A6D-DC11-A418-000423D987FC.root

Generally, you should use PhEDEx to transfer DBS-registered datasets, but srmcp is a good option if you wish to just examine one or two files manually. You will need to know the details for the storage element hosting the dataset; CERN & FNAL are provided below.

Examples follow:

At the UMD cluster, transferring a file from FNAL's dCache user-area (/pnfs/cms/WAX/11/store/user/username/file.ext) to your UMD home-area (/home/username/file.ext):

voms-proxy-init -voms cms
cd ~
srmcp -2 "srm://cmssrm.fnal.gov:8443/srm/managerv2?SFN=/11/store/user/username/file.ext" file:///`pwd`/file.ext

 

At the UMD cluster, transferring a file from CERN's dCache PhEDEx/DBS store-area to the UMD srm dropbox (/data/users/srm-drop/file.root):

voms-proxy-init -voms cms
srmcp -2 "srm://srm.cern.ch:8443/srm/managerv2?SFN=/castor/cern.ch/cms/DBS-path/file.root" file:///data/users/srm-drop/file.root

 

At the UMD cluster, transferring a file from FNAL's dCache PhEDEx/DBS store-area (/pnfs/cms/WAX/11/DBS-path/file.root) to the UMD srm-drop area (/data/users/srm-drop/file.root):

voms-proxy-init -voms cms
srmcp -2 "srm://cmssrm.fnal.gov:8443/srm/managerv2?SFN=/11/DBS-path/file.root" file:////data/users/srm-drop/file.root

 

At FNAL's cmslpc, transferring a file from UMD's BeStMan PhEDEx/DBS store-area (/data/se/DBS-path/file.root) to your FNAL home-area (/uscms/home/username/file.root):

cd ~
voms-proxy-init -voms cms
unsetenv PYTHONPATH
srmcp -2 -access_latency=ONLINE "srm://hepcms-0.umd.edu:8443/srm/v2/server?SFN=/data/se/DBS-path/file.root" file:///`pwd`/file.root

Note: at the present time, the cmslpc srm client does not support the access_latency parameter. This call will fail until the srm client is upgraded at FNAL. It is recommended that you use scp or sftp for this particular transfer.

 

At CERN's lxplus, transferring a file from your UMD large storage area (/data/users/username/file.ext) to CERN's dCache user-area (/castor/cern.ch/user/u/username/file.ext):

source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.csh
voms-proxy-init -voms cms
srmcp -2 -access_latency=ONLINE "srm://hepcms-0.umd.edu:8443/srm/v2/server?SFN=/data/users/username/file.ext" "srm://srm.cern.ch:8443/srm/managerv2?SFN=/castor/cern.ch/user/u/username/file.ext"

Note: at the present time, the lxplus srm client does not support the access_latency parameter. This call will fail until the srm client is upgraded at CERN. It is recommended that you use scp or sftp to get the file to your home (non-afs) area, followed by an srmcp call to get the file from your home area to Castor.

 

Notes on other utilities

Below is an incomplete list of generally useful software installed on the cluster, together with any details specific to the hepcms cluster. Consult the man pages or online manuals for details on use. Please email Marguerite Tonjes if you'd like a package installed, have found an installed package that may be useful to others, or have encountered odd issues with an installed package.

  • Editors
    • vi
    • emacs
  • LaTeX/.eps/.ps/.pdf viewers & compilers
    • latex
    • ggv - very slow for some users
    • ps2pdf
    • gpdf
    • display - part of the ImageMagick package; frequently rotates the image and doesn't always pop up the GUI for manipulating the image
    • dvips - don't use; it tries to print to a default printer, which does not exist
    • dvipdf