How To: Guides for users and Maryland T3 admins.

Help: Links and emails for further info.

Configuration: Technical layout of the cluster, primarily for admins.

Log: Has been moved to a Google page, accessible only to admins.

CMSSW

Description Install CMS software by hand or via OSG.
Notes

We have now switched to CVMFS-mounted installations of CMSSW, so these instructions are useful only if you need to install the CMS software by hand. However, Squid is still needed as a standalone installation.
Production releases of CMSSW can be installed automatically via OSG tools. Automatic installs require that you prepare your environment, install Squid, and configure CMSSW. Note that we used automatic installs, so some details below may not be up to date.

This guide is 99% out of date as of 2015 and will be removed and replaced shortly. ADMINS of hepcms: please consult our private Google pages for documentation.

Last modified September 10, 2015


Prepare the environment:

Description Create the CMSSW installation user (no longer used), network mount the installation directory (now used for some useful shared scripts), install libraries, set environment variables, and call the CMSSW bootstrap script. Not all the steps are required with CVMFS (for CVMFS, do only steps 1-6).
Dependencies - To install automatically via OSG tools, an OSG CE must be running
- SL5 compatibility libraries installed on all nodes that will run CMSSW
Notes The last time we installed CMSSW manually, we were running SL4. We now install CMSSW automatically via OSG. Some SCRAM_ARCH settings in this guide may be incorrect as a result. We install CMSSW on our GN in /scratch/cmssw, network mounted on all nodes as /sharesoft/cmssw. Note that we switched to CVMFS (link1, link2).
Guides - SL5 compatibility libraries needed by CMSSW
- CMSSW one time initialization
- CMSSW user environment configuration
  1. For CVMFS, make the CVMFS user on the head node as root (su -) (taken from these instructions):
    useradd --comment "CVMFS" --groups fuse --shell /bin/true cvmfs
    ssh-agent $SHELL
    ssh-add
    rocks sync config
    rocks sync users
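
    After the sync completes, you can confirm the new user propagated to the compute nodes (an optional check using the same rocks tooling):
    rocks run host compute 'id cvmfs'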
  2. For local CMSSW installs, create a user specifically for CMSSW installs, whom we will call cmssoft, following the instructions for adding new users.
  3. For local CMSSW installs, as root (su -) on the grid node, create /scratch/cmssw and cede control to cmssoft:
    mkdir /scratch/cmssw
    chown -R cmssoft:users /scratch/cmssw

    Prepare it to be network mounted by editing /etc/exports and adding the line:
    /scratch 10.0.0.0/255.0.0.0(rw,async)
  4. For local CMSSW installs, as root (su -) on the head node, network mount /scratch on the grid node as /sharesoft on all nodes:
    1. Create /etc/auto.sharesoft file with the content:
      cmssw grid-0-0.local:/scratch/cmssw
      And change the permissions:
      chmod 444 /etc/auto.sharesoft
    2. Edit /etc/auto.master and add the line:
      /sharesoft /etc/auto.sharesoft --timeout=1200
    3. Inform 411, the Rocks information service, of the change:
      cd /var/411
      make clean
      make
  5. For local CMSSW installs, once /etc/auto.sharesoft has propagated to all the nodes from 411, restart the NFS services on the grid node. As root (su -) on the grid node:
    /etc/rc.d/init.d/nfs restart
    service autofs reload

    If the NFS service on the GN doesn't already start on reboot, configure that now:
    /sbin/chkconfig --add nfs
    chkconfig nfs on
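
    To verify the export is active (an optional check, run on the GN):
    exportfs -v | grep /scratch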

  6. For local CMSSW installs, tell the WNs to restart their own autofs service. As root (su -) on the head node:
    ssh-agent $SHELL
    ssh-add
    rocks run host compute interactive '/etc/rc.d/init.d/autofs restart'
    Note: Some directory restarts may fail because they are in use. However, /sharesoft should get mounted regardless.
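    To confirm the mount on all compute nodes (an optional check; listing /sharesoft/cmssw triggers the automount):
    rocks run host compute 'ls /sharesoft/cmssw > /dev/null && echo mounted'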
  7. Not sure if needed for CVMFS, but needed for local: CMSSW needs compatibility libraries for SL5. Install them following these instructions.
  8. Note: the remaining steps in this section are probably not needed for CVMFS; if you are using CVMFS, skip ahead to the Squid section.
  9. For local CMSSW installs, as cmssoft on the grid node (su - cmssoft), prepare for CMSSW installation following these instructions. Some notes:
    1. Set the correct permissions first:
      chmod 755 /scratch/cmssw
    2. We use the VO_CMS_SW_DIR environment variable because we later set up a link that points the appropriate directories in the OSG app directory to this directory:
      setenv VO_CMS_SW_DIR /sharesoft/cmssw
    3. The 'one time initialization' portion of the CMS Twiki is out of date for SL5. Since we performed our one-time initialization under SL4, we have not verified ourselves what SCRAM_ARCH should be; however, we understand the following setting for SL5 should be OK:
      setenv SCRAM_ARCH slc5_amd64_gcc472
    4. You can tail -f the log file to watch the install, check whether the bootstrap succeeded, and see any errors.
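      For example (the exact log file name is an assumption based on typical bootstrap output; substitute the path your bootstrap run reports):
      tail -f $VO_CMS_SW_DIR/bootstrap-$SCRAM_ARCH.log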
  10. For local CMSSW installs, once the bootstrap completes on SL5, some additional packages need to be installed (beyond the SL5 compatibility libs). List them with:
    apt-cache pkgnames | grep fake
    Then install each one with:
    apt-get install <fake-package>
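
    A minimal csh sketch to install all of them in one pass (assumes each listed package installs cleanly):
    foreach pkg (`apt-cache pkgnames | grep fake`)
      apt-get -y install $pkg
    end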
  11. For local CMSSW installs, SL4-based CMSSW releases (older than CMSSW_3_3_X) can be installed on an SL5 machine. We haven't done this ourselves, as our releases are now automatically installed via OSG utilities. We understand that the following should work, but do this at your own risk!
    setenv SCRAM_ARCH slc4_ia32_gcc345
    Then call the bootstrap script again with an additional flag:
    -unsupport_distribution_hack
  12. For local CMSSW installs, we want all users to source the CMSSW environment on login according to these instructions. By placing the source commands in the .cshrc & .bashrc skeleton files, all new users will have them in their .cshrc & .bashrc files. Existing users (especially cmssoft) will have to add them manually (a sketch follows the skeleton contents below). As root (su -) on the HN, edit /etc/skel/.cshrc to include the lines:
    # CMSSW
    setenv VO_CMS_SW_DIR /sharesoft/cmssw
    source $VO_CMS_SW_DIR/cmsset_default.csh
    Similarly, edit /etc/skel/.bashrc:
    # CMSSW
    export VO_CMS_SW_DIR=/sharesoft/cmssw
    . $VO_CMS_SW_DIR/cmsset_default.sh
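
    Existing users can append the same lines themselves; a minimal sketch for a bash user (adjust accordingly for csh users and their .cshrc):
    cat >> ~/.bashrc <<'EOF'
    # CMSSW
    export VO_CMS_SW_DIR=/sharesoft/cmssw
    . $VO_CMS_SW_DIR/cmsset_default.sh
    EOF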

    Users who want to run releases older than CMSSW_3_3_X will have to set SCRAM_ARCH manually to the SL4 value (may no longer be available - 2011):
    setenv SCRAM_ARCH slc4_ia32_gcc345
  13. Users who want to run releases CMSSW_3_3_X to CMSSW_4_1_X that need the 32-bit environment will have to set SCRAM_ARCH manually:
    setenv SCRAM_ARCH slc5_ia32_gcc434
    Later releases can be accessed with:
    setenv SCRAM_ARCH slc5_amd64_gcc472
  14. For local CMSSW installs, if OSG has been installed (instructions below are repeated under OSG installation):
    1. Inform BDII that we have the slc5_amd64_gcc434 environment. Edit /sharesoft/osg/app/etc/grid3-locations.txt to include the lines:
      VO-cms-slc5_amd64_gcc434 slc5_amd64_gcc434 /sharesoft/cmssw
      VO-cms-CMSSW_X_Y_Z CMSSW_X_Y_Z /sharesoft/cmssw
      (modify X_Y_Z and add a new line for each release of CMSSW installed)
    2. Create a link to CMSSW in the OSG app directory (set during OSG CE configuration inside config.ini):
      cd /sharesoft/osg/app
      mkdir cmssoft
      ln -s /sharesoft/cmssw cmssoft/cms
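      You can verify the link resolves (optional check):
      ls -l cmssoft/cms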
    3. Be sure to map Bockjoo's DN to the cmssoft account following these instructions.

Install Squid

Description Install Squid on the HN to cache Frontier database conditions queries; we use the OSG repository version. Definitely required with CVMFS.
Dependencies - OSG EPEL repositories, yum_priorities, and osg-release as detailed in Squid installation instructions
Notes Since our configuration allows IPs outside our cluster to contact Squid, we restrict Squid to accepting queries for the Frontier DB and CVMFS only.
Guides - OSG Frontier-Squid installation instructions
- Current CMS Squid installations
- Squid for CMS - note the CVMFS settings

As root (su -) on the HN:

  1. Set up the EPEL repositories, yum priorities, and install osg-release (3.1 series) for SL5 per the OSG instructions. Note that the kickstart may already have done these, in which case this step may be skipped in a manual installation.
  2. Install the package:
    yum install frontier-squid
  3. Set it up to start at boot:
    chkconfig frontier-squid on
  4. Configure in /etc/squid/customize.sh (Instructions). Appropriate settings should match:
    awk --file `dirname $0`/customhelps.awk --source '{
    setoptionparameter("acl RESTRICT_DEST", 3, "^(cmsfrontier.*\\.cern\\.ch|cvmfs-stratum-one\\.cern\\.ch|cernvmfs\\.gridpp\\.rl\\.ac\\.uk|cvmfs\\.racf\\.bnl\\.gov|cvmfs\\.fnal\\.gov|cvmfs02\\.grid\\.sinica\\.edu\\.tw)$")
    setoption("cache_mem", "256 MB")
    setoptionparameter("cache_dir", 3, "10240")
    setoption("cache_log", "/scratch/squid/logs/cache.log")
    setoption("coredump_dir", "/scratch/squid")
    setoptionparameter("cache_dir", 2, "/scratch/squid/frontier-cache")
    setoptionparameter("access_log", 1, "/scratch/squid/logs/access.log")
    #setoption("cache_effective_user", "dbfrontier")
    #setoption("cache_effective_group", "dbfrontier")
    uncomment("acl RESTRICT_DEST")
    uncomment("http_access deny !RESTRICT_DEST")
    print
    }'
    Note that two lines are commented out because we could not get them to work properly; Squid instead runs as the default user/group.
    Also note the cache_dir and access_log locations in /scratch/squid; these are important, since otherwise Squid will fill up /var/log and the cluster will fail to work.
  5. Start squid:
    service frontier-squid start
  6. Test the installation.
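    For example, fetch a file through the proxy from one of the allowed destinations (a sketch, not the official test; assumes the default Squid port 3128 and the stratum-one URL layout shown in the CVMFS section below):
    env http_proxy=http://localhost:3128 wget -qO /dev/null http://cvmfs.fnal.gov:8000/opt/cms/.cvmfspublished && echo OK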
  7. Register your server.

To change configuration options, edit /etc/squid/customize.sh; the changes take effect with service frontier-squid reload, as detailed on the Squid configuration twiki link.

Install CVMFS:

Description Install CVMFS to mount a networked filesystem for access to CMSSW and other useful software
Dependencies

- To install with OSG tools, set up the OSG installation method first
- SQUID installed and running
- CVMFS user exists on HN and sync'd with other nodes
- Maybe not needed? SL5 compatibility libraries installed on all nodes that will run CMSSW

Notes This guide is just notes on our CVMFS installation, NOT the complete installation instructions; for those, follow the online guides (link1, link2).
Guides

- Maybe not needed? SL5 compatibility libraries needed by CMSSW
- CMSSW one time initialization (above)
- OSG CVMFS installation instructions (we follow these)
- CMS CVMFS installation
- CVMFS debugging guides: OSG, CERN, UFL

  1. Note that we installed CVMFS on a system in which CMSSW was already installed manually and then switched to that filesystem, so there may be some dependencies we have missed. For more, see the CMSSW one time initialization and CMS CVMFS installation guides. The CVMFS installation will create the CVMFS user if it does not exist. Since we centrally manage our users, it must be created on the HN as discussed in the previous CMSSW one time initialization section.
  2. We install CVMFS on the interactive nodes (cache to /scratch), grid node (cache to /scratch), compute & R510 nodes (cache to /tmp). The HN and SE do not mount /cvmfs.
  3. For each node that mounts CVMFS, follow the OSG CVMFS installation instructions to set up the OSG repositories and install yum priorities (this may already have been done by another OSG installation), then do the following (as root with su -).
  4. Install CVMFS:
    yum install osg-oasis
  5. Set up FUSE (on our cluster this also required a chmod):
    chmod +x /usr/bin/fusermount
    Edit /etc/fuse.conf to contain user_allow_other
  6. As root on the head node (su -), edit /etc/auto.master; it should contain the following line:
    /cvmfs /etc/auto.cvmfs
  7. Sync the head node's auto.master to the other nodes:
    ssh-agent $SHELL
    ssh-add
    rocks sync config
    rocks sync users
  8. On each individual node, restart autofs:
    service autofs restart
  9. Configure /etc/cvmfs/default.local to change the cache location (/tmp on WNs, /scratch on INs and the GN) and the HTTP proxy:
    CVMFS_REPOSITORIES="`echo $(ls /cvmfs)|tr ' ' ,`"
    CVMFS_CACHE_BASE=/scratch/cvmfs
    CVMFS_QUOTA_LIMIT=20000
    CVMFS_HTTP_PROXY="http://hepcms-hn.umd.edu:3128"
  10. Create /etc/cvmfs/domain.d/cern.ch.local to read from the closest stratum one servers first (all one line):
    CVMFS_SERVER_URL="http://cvmfs.fnal.gov:8000/opt/@org@;http://cvmfs.racf.bnl.gov:8000/opt/@org@;http://cvmfs-stratum-one.cern.ch:8000/opt/@org@;http://cernvmfs.gridpp.rl.ac.uk:8000/opt/@org@"
  11. Following the CMS CVMFS guide, create /etc/cvmfs/config.d/cms.cern.ch.local to contain (note: we have this mounted locally; it would be better to have it mounted via /cvmfs and checked into GitHub, which still needs to be done):
    export CMS_LOCAL_SITE=/sharesoft/cmssw/SITECONF/local/
  12. Verify CVMFS is installed. When first mounted it will appear empty; directories are not mounted until you examine them, as in the second example:
    ls /cvmfs
    ls -alh /cvmfs/cms.cern.ch/
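
    The cvmfs_config utility also provides built-in checks:
    cvmfs_config chksetup
    cvmfs_config probe cms.cern.ch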
  13. For historical reasons, our users access setup scripts from /sharesoft/cmssw/cmsset_default.(c)sh and /scratch/crab/current. Additionally, jobs running on the site will access $OSG_APP/cmssoft/cms. Change these to be soft links pointing to the /cvmfs areas (after removing the old ones).
    1. As root on the GN:
      ln -s /cvmfs/cms.cern.ch/cmsset_default.csh /sharesoft/cmssw/cmsset_default.csh
      ln -s /cvmfs/cms.cern.ch/cmsset_default.sh /sharesoft/cmssw/cmsset_default.sh
      ln -s /cvmfs/cms.cern.ch /sharesoft/osg/app/cmssoft/cms
    2. As root on each of the two interactive nodes:
      ln -s /cvmfs/cms.cern.ch/crab /scratch/crab/current

Configure CMSSW locally

Description Configure CMSSW for Squid and the file catalog (due to our soft link above, this is still required with CVMFS)
Dependencies - CMSSW environment prepped
- Squid for CMSSW installed
- PhEDEx does not have to be installed or configured, but we use the configuration files that we create and test during the PhEDEx install. However, the configuration files can be created right away by following the guides and examples below.
Notes These need to be ported to GitHub
Guides - CMS Twiki for storage.xml & site-local-config.xml
- SITECONF directories for all CMS sites
- T3_US_UMD site-local-config.xml
- T3_US_UMD storage.xml

We copy the contents of our PhEDEx SITECONF directory to our CMSSW directory:
cp -r /localsoft/phedex/current/SITECONF /sharesoft/cmssw/.
cp -r /sharesoft/cmssw/SITECONF/T3_US_UMD /sharesoft/cmssw/SITECONF/local

chown -R cmssoft:users /sharesoft/cmssw/SITECONF
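
A quick sanity check that the expected files are in place (assumes the standard SITECONF layout, with JobConfig/site-local-config.xml and PhEDEx/storage.xml):
ls /sharesoft/cmssw/SITECONF/local/JobConfig/site-local-config.xml
ls /sharesoft/cmssw/SITECONF/local/PhEDEx/storage.xml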

If you plan to service grid jobs or to register in SiteDB, the site-local-config.xml and storage.xml configuration files must be uploaded to GitHub (formerly CVS). Follow the instructions in the PhEDEx section to do so.

Install a CMSSW release locally:

Description Install a particular CMSSW release manually. Not required with CVMFS.
Dependencies - CMSSW environment prepped
- CMSSW configured
Notes This step does not have to be performed for production releases when CMSSW is automatically installed via OSG utilities. As when preparing the CMSSW environment, it's not entirely clear what SCRAM_ARCH should be; it depends on which version of CMSSW you are installing. However, we understand that slc5_amd64_gcc434 should be OK.
Guides - Installing CMSSW with apt-get
  1. Log in as cmssoft on the GN.
  2. Check whether CVMFS is installed and running. If for some reason the CMSSW release you need is not available, check your SCRAM_ARCH, and then get Sysadmin Help. You should NO LONGER need to install CMSSW by hand.
  3. The available CMSSW releases can be listed by (<apt-version> is currently 29):
    setenv VO_CMS_SW_DIR /sharesoft/cmssw
    setenv SCRAM_ARCH slc5_amd64_gcc472
    source $VO_CMS_SW_DIR/$SCRAM_ARCH/external/apt/<apt-version>/etc/profile.d/init.csh

    apt-cache search cmssw | grep CMSSW
  4. Follow these instructions; some notes:
    1. Be sure to set VO_CMS_SW_DIR & SCRAM_ARCH, get the environment, and update:
      setenv VO_CMS_SW_DIR /sharesoft/cmssw
      For CMSSW_6_0_X and newer:
      setenv SCRAM_ARCH slc5_amd64_gcc472
      For CMSSW_5_1_X and newer:
      setenv SCRAM_ARCH slc5_amd64_gcc462
    2. For CMSSW_4_1_X and newer:
      setenv SCRAM_ARCH slc5_amd64_gcc434
      For CMSSW_3_3_X to CMSSW_4_1_X:
      setenv SCRAM_ARCH slc5_ia32_gcc434
      For older (SL4-based) releases:
      setenv SCRAM_ARCH slc4_ia32_gcc345
      Then get the environment and update:
      source $VO_CMS_SW_DIR/$SCRAM_ARCH/external/apt/<apt-version>/etc/profile.d/init.csh
      apt-get update
    3. RPM-style options can be specified with syntax such as:
      apt-get -o RPM::Install-Options::="--ignoresize" install cms+cmssw+CMSSW_X_Y_Z
      or, for a patch release:
      apt-get -o RPM::Install-Options::="--ignoresize" install cms+cmssw-patch+CMSSW_X_Y_Z_patchQ
    4. This process takes about an hour, depending on the quantity of data you'll need to download.
    5. You can safely ignore the message "find: python: No such file or directory"
  5. If OSG has been installed, inform BDII that this release of CMSSW is available. As root (su -), edit /sharesoft/osg/app/etc/grid3-locations.txt to include the line:
    VO-cms-CMSSW_X_Y_Z CMSSW_X_Y_Z /sharesoft/cmssw

Uninstall a CMSSW release locally

Description Uninstall a particular CMSSW release manually. Not required with CVMFS.
Dependencies - CMSSW environment prepped
Notes This step does not have to be performed for production releases when CMSSW is automatically removed via OSG utilities.
Guides - CMSSW FAQ: removing releases
  1. Log in as cmssoft on the HN.
  2. List the currently installed CMSSW versions:
    setenv VO_CMS_SW_DIR /sharesoft/cmssw
    source $VO_CMS_SW_DIR/cmsset_default.csh

    scramv1 list | grep CMSSW
  3. If OSG has been installed, inform BDII that this release of CMSSW is no longer available. As root (su -), edit /sharesoft/osg/app/etc/grid3-locations.txt and remove the line:
    VO-cms-CMSSW_X_Y_Z CMSSW_X_Y_Z /sharesoft/cmssw
  4. Remove a CMSSW release:
    apt-get remove cms+cmssw+CMSSW_X_Y_Z
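    Note that apt-get needs the apt environment sourced first, as in the install section; a sketch (adjust SCRAM_ARCH and <apt-version> to match the release being removed):
    setenv SCRAM_ARCH slc5_amd64_gcc472
    source $VO_CMS_SW_DIR/$SCRAM_ARCH/external/apt/<apt-version>/etc/profile.d/init.csh
    apt-get remove cms+cmssw+CMSSW_X_Y_Z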