LoginTutorial

Authentication and First Login

This page presumes that you have computer accounts and have completed the Day 1 Checklist.

  1. If you are sitting at a Fermilab linux computer, you can log in at the login screen with your kerberos principal and password.
  2. If you are sitting at your own computer or a university computer that runs unix or is a Mac, follow the instructions at ComputingLogin#Logging_in_from_Linux_or_Mac.27s.
  3. If you are on a Windows machine, follow the instructions at ComputingLogin#Logging_in_From_PC.27s. PuTTY lets you log in to the central machines in a terminal window, and Xming lets you display X windows back on your laptop.

If you have problems, the places to look are ComputingLogin, Authentication, ErrorRecovery, or ComputingHelp. You can also ask a colleague or post a question on the Mu2e Slack channel is_it_me_or_a_bug.

On your local machine or on the central machine, you can see the status of your kerberos ticket with the klist command:

 > klist
Ticket cache: FILE:/tmp/krb5cc_1311_xShHU10541
Default principal: rlc@FNAL.GOV

Valid starting     Expires            Service principal
06/18/19 12:58:54  06/19/19 14:58:50  krbtgt/FNAL.GOV@FNAL.GOV
        renew until 06/25/19 12:58:50

If you stay logged on overnight, your ticket will expire and you will need to renew it with kinit. If you kinit on the central machine, it will extend only that session. If you kinit on your local machine, you can start new sessions, but it won't extend any existing sessions.
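
For example, to get a fresh ticket, give the kinit command; the principal shown here is just an illustration, use your own:

 kinit rlc@FNAL.GOV   # prompts for your kerberos password and issues a new ticket
 klist                # confirm the new expiration time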

Check Your Default Shell

All shell scripts in Mu2e and all of the Mu2e examples are designed to work using the bash shell. Fermilab computer accounts are created with the bash shell as the default.

Some Fermilab accounts created years ago may have a different default shell, possibly tcsh. You can check which shell is active with:

echo $SHELL
/bin/bash

If your shell is not bash, open a ServiceDesk ticket and request that it be changed. The chsh command does not work on our machines because the passwd file on each machine is externally managed. You will also want to port any customizations from your .login and .cshrc files to .my_bash_profile and .my_bashrc. See Shells to learn about these two .my_ files.
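
As a minimal sketch of such a port, a tcsh customization like alias ll 'ls -l' in your .cshrc would become this line in .my_bashrc:

 alias ll='ls -l'   # bash equivalent of the tcsh alias above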

Create Your Login Scripts

After you have checked that bash is your default shell, create your login scripts .bash_profile and .bashrc. To do this, follow the instructions in Shells. Be sure to follow the instructions to test that your login scripts correctly set up UPS.

Check setup mu2e

After the login scripts execute, you have a minimal UPS environment; the only UPS product that has been set up is UPS itself.

> ups active
Active ups products:
ups               v6_0_7          -f Linux64bit+3.10-2.17                    -z /cvmfs/fermilab.opensciencegrid.org/products/common/db

And your environment only knows about one UPS repository.

> printenv PRODUCTS | tr ":" "\n"
/cvmfs/fermilab.opensciencegrid.org/products/common/db

The next step is to give the command:

setup mu2e

Now both UPS repositories are available in the environment. You can see this by:

> printenv PRODUCTS | tr ":" "\n"
/cvmfs/mu2e.opensciencegrid.org/artexternals
/cvmfs/fermilab.opensciencegrid.org/products/common/db

The first UPS repository contains a subset of the software provided by Fermilab Computing. The second UPS repository contains all of the Mu2e software plus additional software provided by Fermilab Computing.

The setup mu2e command has also set a few environment variables for you. For example,

> printenv MU2E
/cvmfs/mu2e.opensciencegrid.org
> printenv MU2E_DATA_PATH
/cvmfs/mu2e.opensciencegrid.org/DataFiles

These are used by the Mu2e software.
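
You can also use these variables directly on the command line. BFieldMaps, used here, is the field map directory that appears again later on this page:

 ls $MU2E_DATA_PATH/BFieldMaps   # list the magnetic field map files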

The command has also set up two more UPS products:

  1. a modern version of git
  2. the Mu2e code build scripts, called Muse

You can see this with:

> ups active
Active ups products:
git               v2_31_1         -f Linux64bit+3.10-2.17                    -z /cvmfs/mu2e.opensciencegrid.org/artexternals
muse              v2_07_00        -f NULL                                    -z /cvmfs/mu2e.opensciencegrid.org/artexternals
ups               v6_0_8          -f Linux64bit+3.10-2.17                    -z /cvmfs/mu2e.opensciencegrid.org/artexternals

The exact versions you see will probably be different, since new versions come out fairly regularly.

Your environment is now configured to start work on Mu2e.
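
If you want this configuration in every session, one option, assuming you follow the Shells conventions described above, is to add the command to your login script:

 # possible addition to ~/.my_bash_profile (an assumption, not a requirement)
 setup mu2e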

Mu2e Machines

There are several dedicated Mu2e machines (actually virtual machines).

  • mu2egpvm01-06: general purpose machines running the SL7 operating system
  • mu2egpvm07: a general purpose machine running the AL9 operating system; as of October 2023, Mu2e software does not yet run on AL9.
  • mu2ebuild01: a machine for building large repositories.

By "general purpose" we mean running executables for a short time, or making and viewing histograms, or writing notes. These machines have 4 cores. The build machine, mu2ebuild01 has 16 cores so it can compile the entire Mu2e code in a reasonable time, about 11 min.

Mu2e Disks

Below is a quick introduction to the disk space available for Mu2e users. When you have time, you can learn more at Disks.

home disk

When you log in, your default directory will be your home area, which is the same on all the Mu2e central machines. It may or may not be the same as on a Fermilab linux desktop you are sitting at, depending on how that desktop was set up.

 > pwd
/nashome/r/rlc

We recommend you use the bash shell, which you should get by default:

 > echo $SHELL
/bin/bash

Use your login scripts to do only minor configuration of your preferences, such as an alias for convenience. We have some recommendations in Shells.

This disk is backed up daily.

app disk

/srv/mu2e/app is where you will put code that you check out, compile, and develop. You can create your own area:

mkdir -p /srv/mu2e/app/users/$USER

and you should be able to write there:

touch /srv/mu2e/app/users/$USER/test

New Mu2e people will have a quota of 25 GB on this disk.
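
One simple way to check how much of that quota you are using is du; note that it walks your whole directory tree, so it can be slow for a large area:

 du -sh /srv/mu2e/app/users/$USER   # total size of your app area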

This disk is not backed up to tape, but it does maintain snapshots; see Disks#Snapshots.

Until the summer of 2023, this area was called /mu2e/app. If you still have files in the old area, please delete files that are no longer needed and move the remaining files to your space on /srv/mu2e/app. If you had a larger quota on /mu2e/app, you will also have it on /srv/mu2e/app.
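
A minimal sketch of such a move, assuming your old files live under /mu2e/app/users/$USER and mydir is a hypothetical directory name:

 mv /mu2e/app/users/$USER/mydir /srv/mu2e/app/users/$USER/   # relocate remaining files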

data disk

/srv/mu2e/data is where you can put data files and ntuples that you are using now. You can create your own area:

mkdir -p /srv/mu2e/data/users/$USER

and you should be able to write there:

touch /srv/mu2e/data/users/$USER/test

New Mu2e people will have a quota of 150 GB on this disk.

This disk is not backed up.

Until the summer of 2023, this area was called /mu2e/data. If you still have files in the old area, please delete files that are no longer needed and move the remaining files to your space on /srv/mu2e/data. If you had a larger quota on /mu2e/data, you will also have it on /srv/mu2e/data.

dCache

dCache is a system of software, databases, and disk servers that is designed to present a huge amount of disk, spread over many machines, as a single simple disk system. We will cover these areas in more depth in future tutorials, but as an overview, dCache has three main parts:

  • tape-backed If files are written here, they are copied to tape. These files might disappear from disk if they are not used, and can be restored to disk from tape as needed. You will only write here using specific tools.
> ls -l /pnfs/mu2e/tape/phy-sim/sim/mu2e/DS-beam/MDC2018a/art/ff/f7/sim.mu2e.DS-beam.MDC2018a.001002_00373598.art 
-rw-r--r-- 1 mu2epro mu2e 34186158 May 24  2018 /pnfs/mu2e/tape/phy-sim/sim/mu2e/DS-beam/MDC2018a/art/ff/f7/sim.mu2e.DS-beam.MDC2018a.001002_00373598.art
  • persistent Files written here are not automatically purged and are not copied to tape, so the space must be managed. Currently we only allow writing here in specific cases.
> ls -l /pnfs/mu2e/persistent/datasets/phy-etc/cnf/mu2e/CeEndpoint/MDC2018b/fcl/00/04/cnf.mu2e.CeEndpoint.MDC2018b.001002_00000025.fcl 
-rw-r--r-- 1 mu2epro mu2e 690 Aug 27  2018 /pnfs/mu2e/persistent/datasets/phy-etc/cnf/mu2e/CeEndpoint/MDC2018b/fcl/00/04/cnf.mu2e.CeEndpoint.MDC2018b.001002_00000025.fcl
  • scratch You can write as much data here as you would like. As space is needed, files that have not been recently accessed will be deleted to make room. You can expect untouched files to last a month, but we have seen shorter times under special circumstances.

Create your scratch directory:

mkdir -p /pnfs/mu2e/scratch/users/$USER

and you should be able to write there:

touch /pnfs/mu2e/scratch/users/$USER/test

and copy a file there:

cp /pnfs/mu2e/persistent/datasets/phy-etc/cnf/mu2e/CeEndpoint/MDC2018b/fcl/00/04/cnf.mu2e.CeEndpoint.MDC2018b.001002_00000025.fcl \
 /pnfs/mu2e/scratch/users/$USER

There are a few important notes about dCache you should know right from the beginning.

  • dCache is not a file system - it is an nfs server backed by a database. This means simple disk commands such as "find" can cause millions of database queries and tie up resources for hours. Only explore one directory at a time. Even using "ls" instead of "ls -l" can be much faster.
  • You cannot modify files on dCache, only write and read them. If you open a dCache file with an editor, it will come up read-only. The system is designed to be a large data cache, not an interactive disk. All your code should be in your home area or /srv/mu2e/app. To change a dCache file, copy it out, edit it, and write it back under a new name, as sketched below.
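
A minimal sketch of that pattern, using the test file created above; the name newtest is hypothetical:

 cp /pnfs/mu2e/scratch/users/$USER/test .           # copy the file out of dCache
 # ... edit the local copy ...
 cp test /pnfs/mu2e/scratch/users/$USER/newtest     # write it back as a new file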

cvmfs

We store code on the /cvmfs disk. Software packages from outside Mu2e are stored at:

ls -l /cvmfs/mu2e.opensciencegrid.org/artexternals

Pre-built Mu2e code versions are at:

ls -l /cvmfs/mu2e.opensciencegrid.org/Musings

Our magnetic field maps are at:

ls -l /cvmfs/mu2e.opensciencegrid.org/DataFiles/BFieldMaps

cvmfs acts like a read-only disk. It is actually a set of clients, servers, and databases. Anywhere the client is run, the identical /cvmfs disk appears on the node and can be read by the users. So this is perfect for distributing code to interactive machines, grid worker nodes, and universities. You can even mount it on your laptop. If your local cache is large enough, you can pre-fill the cache with everything you need to work without a network.
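
If you do mount cvmfs on your own machine, the cvmfs client provides a probe command to check that the Mu2e repository is reachable:

 cvmfs_config probe mu2e.opensciencegrid.org   # should report OK when the repository is mounted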

You can read more at the Cvmfs wiki page.

Proxies

You will need proxies to access some utilities provided by Fermilab Computing. The instructions in this section explain how to get a proxy and how to verify that it is present in your environment.

You can use your kerberos identity to obtain your certificate identity with kx509.

 > kx509
Authorizing ...... authorized
Fetching certificate ..... fetched
Storing certificate in /tmp/x509up_u1311
Your certificate is valid until: Tue Jun 25 13:18:18 2019

You can see your cert with voms-proxy-info --all:

> voms-proxy-info --all
subject   : /DC=org/DC=cilogon/C=US/O=Fermi National Accelerator Laboratory/OU=People/CN=Raymond Culbertson/CN=UID:rlc
issuer    : /DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon Basic CA 1
identity  : /DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon Basic CA 1
type      : unknown
strength  : 2048 bits
path      : /tmp/x509up_u1311
timeleft  : 167:51:31
key usage : Digital Signature, Key Encipherment, Data Encipherment

Your certificate can be extended with information about your experiment and the roles you have access to. This can be done with a script:

/cvmfs/mu2e.opensciencegrid.org/bin/vomsCert 

If you then run voms-proxy-info --all, you will see that your cert has gotten longer, since it now carries the additional experiment attributes.
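
For a quick check of just the remaining lifetime, voms-proxy-info also accepts a --timeleft option:

 voms-proxy-info --timeleft   # prints the seconds of validity remaining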