LoginTutorial

Authentication and First Login

This page presumes that you have computer accounts and have completed the Day 1 Checklist.

  1. If you are sitting at a Fermilab linux computer, you can use your kerberos principal and password to log in at the login screen.
  2. If you are sitting at your own computer or a university computer that runs unix or is a Mac, follow the instructions at ComputingLogin#Logging_in_from_Linux_or_Mac.27s.
  3. If you are otherwise on a Windows machine, follow the instructions at ComputingLogin#Logging_in_From_PC.27s. PuTTY lets you log in to a terminal window on the central machines, and Xming lets you display X windows back on your laptop.

If you have problems, the places to look are ComputingLogin, Authentication, ErrorRecovery, or ComputingHelp. You can also ask a colleague or post a question on the Mu2e Slack channel is_it_me_or_a_bug.

On your local machine or on the central machine, you can see the status of your kerberos ticket with the klist command:

 > klist
Ticket cache: FILE:/tmp/krb5cc_1311_xShHU10541
Default principal: rlc@FNAL.GOV

Valid starting     Expires            Service principal
06/18/19 12:58:54  06/19/19 14:58:50  krbtgt/FNAL.GOV@FNAL.GOV
        renew until 06/25/19 12:58:50

If you stay logged in overnight, the ticket will expire and you will need to renew it with kinit. If you kinit on the central machine, it extends only that session. If you kinit on your local machine, you can start new sessions, but it won't extend any existing sessions.
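For example, to renew an expired ticket in your current session (the principal shown, rlc@FNAL.GOV, is just the example user from the klist output above; use your own):

 > kinit rlc@FNAL.GOV
Password for rlc@FNAL.GOV:
 > klist

The klist output should then show a new expiration time.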

Preparing the Mu2e Environment

All of the Mu2e shell scripts and all of the examples on the wiki are designed to work using the bash shell. Fermilab computer accounts are created with the bash shell as the default.

Once you have logged in, create your .bash_profile and .bashrc. Read the advice about these login scripts in Shells.
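A common minimal arrangement, sketched here under the assumption that you keep your personal settings in .bashrc, is to have .bash_profile source .bashrc so that login and non-login shells behave the same; the Shells page has the Mu2e-specific recommendations:

# ~/.bash_profile
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi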

All Mu2e instructions presume that you have followed the advice that your .bash_profile should have the fragment:

upsfile="/cvmfs/fermilab.opensciencegrid.org/products/common/etc/setups.sh"
if [  -r "${upsfile}" ]; then
    . "${upsfile}"
fi

If you choose not to do this, then everywhere these instructions say

setup mu2e

you should instead do

 source /cvmfs/mu2e.opensciencegrid.org/setupmu2e-art.sh

The recommended fragment does a minimal setup of the UPS system, a Fermilab-supported system for distributing versioned software to experiments. It is where Mu2e finds its compilers, geant4, root, and many other packages.

To verify that the login scripts are doing their job, logout and log in again. Then do:

> printenv PRODUCTS
/cvmfs/fermilab.opensciencegrid.org/products/common/db
> type setup
setup is a function
setup () 
{ 
    . `/cvmfs/mu2e.opensciencegrid.org/artexternals/ups/v6_1_1/Linux64bit+3.10-2.17/bin/ups setup "$@"`
}


You now have the UPS system ready to use. UPS is itself a Fermilab package that lets you access the other products in the UPS system. UPS commands do things like put executables in your path and set environment variables. The available products include code tools, analysis products, and access to grid computing.

The next step is to execute a UPS command to set the Mu2e environment.

setup mu2e

The main effect is to define the PRODUCTS environment variable:

> printenv PRODUCTS | tr ":" "\n"
/cvmfs/mu2e.opensciencegrid.org/artexternals
/cvmfs/fermilab.opensciencegrid.org/products/common/db

All the UPS products are under these directories, and further setup commands can add them to your environment.
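For example, you can list the versions of a product that are available in these areas before setting one up; git is used here only as an illustration:

 > ups list -aK+ git

Each line of output names a version and the platform (flavor) it was built for.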

In addition, the setup mu2e command has set a few environment variables for you:

> printenv MU2E
/cvmfs/mu2e.opensciencegrid.org
> printenv MU2E_DATA_PATH
/cvmfs/mu2e.opensciencegrid.org/DataFiles

and set up two more UPS products:

  1. a modern version of git
  2. the Mu2e code build scripts, called Muse

You can see this with:

> ups active
Active ups products:
git               v2_31_1         -f Linux64bit+3.10-2.17                    -z /cvmfs/mu2e.opensciencegrid.org/artexternals
muse              v2_07_00        -f NULL                                    -z /cvmfs/mu2e.opensciencegrid.org/artexternals
ups               v6_0_8          -f Linux64bit+3.10-2.17                    -z /cvmfs/mu2e.opensciencegrid.org/artexternals

The exact versions you see will probably be different, since new versions come out fairly regularly.

Ancient Fermilab Accounts with csh or tcsh

Some Fermilab accounts created years ago may have a different default shell, possibly tcsh. Check which shell your account uses; the Mu2e scripts and examples assume bash.
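A quick way to check is to look at your current shell and your account's default shell (the /bin/bash responses shown here are what you want to see):

 > echo $SHELL
/bin/bash
 > getent passwd $USER | cut -d: -f7
/bin/bash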

Proxies

You can use your kerberos identity to obtain your certificate identity with kx509.

 > kx509
Authorizing ...... authorized
Fetching certificate ..... fetched
Storing certificate in /tmp/x509up_u1311
Your certificate is valid until: Tue Jun 25 13:18:18 2019

You can see your cert with voms-proxy-info --all:

> voms-proxy-info --all
subject   : /DC=org/DC=cilogon/C=US/O=Fermi National Accelerator Laboratory/OU=People/CN=Raymond Culbertson/CN=UID:rlc
issuer    : /DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon Basic CA 1
identity  : /DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon Basic CA 1
type      : unknown
strength  : 2048 bits
path      : /tmp/x509up_u1311
timeleft  : 167:51:31
key usage : Digital Signature, Key Encipherment, Data Encipherment

Your certificate can be extended with information about your experiment and the roles you have access to. This can be done with a script:

/cvmfs/mu2e.opensciencegrid.org/bin/vomsCert 

If you then run voms-proxy-info --all again, you will see that your cert has gotten longer.

You will want to load your certificate into your browsers, since a few web pages, notably the Mu2e doc-db, are authenticated with the certificate.

You will want to create an account on hypernews and subscribe to forums that are relevant for you. If you are working on Mu2e software you should subscribe to the following forums: "Software and Simulation" and "Help! Is it me or a bug?". If you use STNTUPLE you will also want to sign up for that forum. Speak with your colleagues to learn what other forums are relevant to your work.

Mu2e Machines

There are several dedicated Mu2e machines (actually virtual machines).

  • mu2egpvm01,2,3,4,5,6: general purpose machines with the SL7 operating system
  • mu2egpvm07: a general purpose machine with the AL9 operating system; as of October 2023 Mu2e software does not yet run on AL9.
  • mu2ebuild01: a machine for building large repositories.

By "general purpose" we mean running executables for a short time, or making and viewing histograms, or writing notes. These machines have 4 cores. The build machine, mu2ebuild01 has 16 cores so it can compile the entire Mu2e code in a reasonable time, about 11 min.

Mu2e Disks

You can read more about the Disks, but in this tutorial we give a quick tour.

home disk

When you log in, your default directory will be your home area, which is the same for all the Mu2e central machines. It may or may not be the same as on a linux desktop you are sitting at, depending on how it was set up.

 > pwd
/nashome/r/rlc

We recommend you use the bash shell, which you should get by default:

 > echo $SHELL
/bin/bash

We also recommend that you do only minor configuration of your preferences in your login scripts, such as aliases for convenience. We have some recommendations in Shells.

This disk is backed up daily.

app disk

/mu2e/app is where you will put code that you check out, compile, and develop. You can create your own area:

mkdir -p /mu2e/app/users/$USER

and you should be able to write there

touch /mu2e/app/users/$USER/test

Note that you will have a finite quota on this disk, so do not put data files here (see next).

This disk is not backed up.

data disk

/mu2e/data is where you can put data files and ntuples that you are using now. You can create your own area:

mkdir -p /mu2e/data/users/$USER

and you should be able to write there

touch /mu2e/data/users/$USER/test

This space is limited, so if you are using more than 500 GB or the files are sitting there for more than a few months, you should consider putting the files on tape or dCache (see next).
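A quick way to check how much you are using in your area (du may take a while if you have many files):

 > du -sh /mu2e/data/users/$USER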

This disk is not backed up.

dCache

dCache is a system of software, databases, and disk servers that is designed to present a huge amount of disk, spread over many machines, as a single simple disk system. We will cover these areas in more depth in future tutorials, but as an overview, dCache has three main parts:

  • tape-backed: If files are written here, they are copied to tape. These files might disappear off disk if they are not used, and can be restored to disk from tape as needed. You will only write here using specific tools.
> ls -l /pnfs/mu2e/tape/phy-sim/sim/mu2e/DS-beam/MDC2018a/art/ff/f7/sim.mu2e.DS-beam.MDC2018a.001002_00373598.art 
-rw-r--r-- 1 mu2epro mu2e 34186158 May 24  2018 /pnfs/mu2e/tape/phy-sim/sim/mu2e/DS-beam/MDC2018a/art/ff/f7/sim.mu2e.DS-beam.MDC2018a.001002_00373598.art
  • persistent: Files written here are not automatically purged and are not copied to tape, so the space must be managed. Currently we only allow writing here in specific cases.
> ls -l /pnfs/mu2e/persistent/datasets/phy-etc/cnf/mu2e/CeEndpoint/MDC2018b/fcl/00/04/cnf.mu2e.CeEndpoint.MDC2018b.001002_00000025.fcl 
-rw-r--r-- 1 mu2epro mu2e 690 Aug 27  2018 /pnfs/mu2e/persistent/datasets/phy-etc/cnf/mu2e/CeEndpoint/MDC2018b/fcl/00/04/cnf.mu2e.CeEndpoint.MDC2018b.001002_00000025.fcl
  • scratch: You can write as much data here as you would like. As space is needed, files that have not been recently accessed will be deleted to make room. You can expect untouched files to last a month, but we have seen shorter times under special circumstances.

Create your scratch directory:

mkdir -p /pnfs/mu2e/scratch/users/$USER

and you should be able to write there

touch /pnfs/mu2e/scratch/users/$USER/test

and copy a file there:

cp /pnfs/mu2e/persistent/datasets/phy-etc/cnf/mu2e/CeEndpoint/MDC2018b/fcl/00/04/cnf.mu2e.CeEndpoint.MDC2018b.001002_00000025.fcl \
 /pnfs/mu2e/scratch/users/$USER

There are a few important notes about dCache you should know right from the beginning.

  • dCache is not a file system - it is an nfs server backed by a database. This means simple disk commands such as "find" can cause millions of database queries and tie up resources for hours. Only explore one directory at a time. Even using "ls" instead of "ls -l" can be much faster.
  • You cannot modify files on dCache, only write and read them. If you open a dCache file with an editor, it will come up read-only. The system is designed to be a large data cache, not an interactive disk. All your code should be in your home area or /mu2e/app.
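As a sketch of what the second point means in practice: to change a file that lives in dCache, copy it out, edit the copy, and write it back. The paths here are just the user areas created above:

cp /pnfs/mu2e/scratch/users/$USER/test /mu2e/app/users/$USER/test
# edit the copy under /mu2e/app, then replace the dCache copy
rm /pnfs/mu2e/scratch/users/$USER/test
cp /mu2e/app/users/$USER/test /pnfs/mu2e/scratch/users/$USER/test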

cvmfs

We store code on the cvmfs disk: software packages from outside Mu2e:

ls -l /cvmfs/mu2e.opensciencegrid.org/artexternals

and pre-built Mu2e code versions:

ls -l /cvmfs/mu2e.opensciencegrid.org/Offline

cvmfs acts like a read-only disk. It is actually a set of clients, servers, and databases. Anywhere the client is run, the identical /cvmfs disk appears on the node and can be read by the users. This makes it perfect for distributing code to interactive machines, grid worker nodes, and universities.
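If you are on your own machine and want to check that the cvmfs client is installed and can reach the Mu2e repository, the client's probe command can be used (this is a standard cvmfs client command, not something Mu2e-specific):

 > cvmfs_config probe mu2e.opensciencegrid.org
Probing /cvmfs/mu2e.opensciencegrid.org... OK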