ConditionsData

From Mu2eWiki
Revision as of 20:06, 15 October 2018

Introduction

In Mu2e it will be necessary to maintain a database of calibration constants, also known as conditions data. This will include information like the alignment of the detector, the gas gain in straws, the time-space relationship for the straws, the gain of SiPMs, the calibration curve for ADCs and so on.

The current system is a set of two Postgres databases: mu2e_conditions_dev, for development by experts, and mu2e_conditions_prd, for users. These are maintained by the database group of the Computing Division. A user can access data by configuring a service in their art job, which reads the database using http protocol. The system is intended to support access from high-volume grid jobs. In addition, there is a Mu2e command-line tool called dbTool which can be used to dump or maintain database contents.

Using the conditions database

selecting a calibration set

When accessing the conditions database, you select a purpose, such as "PRODUCTION", and a version, such as "v1_1". These are entered in the DbService services stanza.

services : {
   DbService : {
      purpose :  PRODUCTION
      version :  v1_1
      dbName : "mu2e_conditions_prd"
      textFile : ["table.txt"]
      verbose : 1
   }
}

The version numbers have three fields like v1_2_3:

  • major version number, this changes when there is a major change in the content of the calibration set, that a user should probably be aware of, such as going from unaligned data to aligned data. The modules and services that will run and their configuration are likely to change. Your physics results will probably change.
  • minor version number. This changes when the list of table types changes, or if there was a repair to the calibration set. The modules and services that will run and their configuration might change. Your physics results might change.
  • extension number. This changes when new runs are added to the calibration set - physics results do not change.

You need to provide a PURPOSE. If you have no interest in the version number, then you can leave it blank and the code will take the highest version available. If you specify only the major version number, then the highest available minor version number will be used; your results might change between runs of your executable. If you provide the major and minor version numbers, then any run numbers that successfully ran before will always run the same way, but runs that previously failed because no calibrations were available might succeed at a later time due to an extension of the calibration set. This (specifying major and minor) is probably the right approach for most user work. Finally, if you specify the major, minor and extension number, then you will get the exact same result every time.
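For example, a minimal DbService stanza pinning the major and minor numbers (the mode recommended above for most user work) might look like the sketch below; the purpose and version values are illustrative placeholders:

```
services : {
   DbService : {
      purpose : PRODUCTION
      version : v1_1    # major+minor pinned; new extensions may still be picked up
   }
}
```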

The database parameter should usually be mu2e_conditions_prd, and this is the default.

The table parameter allows you to add a table to the calibration set for testing, see below.

The verbose parameter set to 0 gives no output; set to 1 it gives brief, useful reports; it can be set as high as 10 for increasing detail.

You can see the available purposes with

dbTool print-purposes

and you can see what versions are available with

dbTool print-versions

overriding a table with a local file

The conditions code allows a user to provide a file (or several files) in the DbService fcl configuration, with the effect that the file content overrides or extends whatever is in the database. This is intended to make it easy to test new or updated tables. The table must be defined in the code in order to be included in the text file, but no database entries are needed before it can be used. The text file must be in a specific format.

access by command line tool

All table data can be dumped by the command line tool dbTool. For example, dumping a data table in canonical format:

 > dbTool print-table --name TstCalib1 --pretty
TABLE TstCalib1
#  cid 1
# channel   flag   dtoe   
        0,    12,  1.11
        1,    13,  2.11
        2,    11,  3.11
...

It can also tell you about what sets of calibrations are available

 > dbTool print-versions
VID       purpose     LID   major  minor          user              time                     comment
  1           TEST    1      1      0              rlc  2018-10-12 08:58:26.794692-05:00  initial version
  2          EMPTY    2      1      0              rlc  2018-10-12 08:58:26.798556-05:00  initial version
  3       TRK_TEST    3      1      0              rlc  2018-10-12 08:59:41.575732-05:00  first test of trk tables

This tool has built-in help.

dbTool is also the primary method to upload new data to the database.

Access in Modules

Once the DbService is configured with a calibration set, all the data contained in that set is available to any module at any time. The intent is that only a few experts will access the database tables directly. Most users will access data in user-friendly containers provided by high-level services, such as accessing the aligned geometry through the Geometry service, which will access the database for the user.

The code pattern is to create a handle to the table (TstCalib1 in this example) as a data member in the module

#include "DbService/inc/DbHandle.hh"
#include "DbTables/inc/TstCalib1.hh"
namespace mu2e {
  class DbServiceTest : public art::EDAnalyzer {
  private:
    mu2e::DbHandle<mu2e::TstCalib1> _testCalib1;
  };
};

In the event method, the user must update the handle with the run number:

  void DbServiceTest::analyze(const art::Event& event) {
    auto const& myTable = _testCalib1.get(event.id());
  };

Once the user has the table filled with the correct content for this event, the data can be accessed by several methods. At this point it is important to check with experts on how the table is intended to be used. There may be a channel or index column which has a certain meaning by convention. There may or may not be random access by maps. The number of rows may be guaranteed fixed or variable. Dead channels might have flag values, etc. Some examples of access are:

  int n = 0;
  for(auto const& r : myTable.rows()) {
     std::cout << "row " << n << " is channel "<<r.channel()<<" and has DtoE "<<r.dtoe()<<std::endl;
     n++;
  }

  int channel = 1;
  std::cout << "DtoE for channel "<<channel<<" is "<<myTable.row(channel).dtoe()<<std::endl;

  int index = 1;
  std::cout << "DtoE for row "<<index<<" is "<<myTable.rowAt(index).dtoe()<<std::endl;

Conditions data maintenance

During data-taking the detector experts responsible for calibrations will need to continually update the database with calibrations for the new data. This section explains their procedures.

The calibrator should be familiar with the rules of a table from when that table was created. The most important rules are those concerning the column, or columns, which make the rows unique. This is typically a channel number. The table code and database constraints may have assumptions built in; for example, it may be required that the channel number is sequential. This channel number may be dense or sparse, or have a custom meaning. The calibrator should also know what precision is required in the floating point numbers, since recording more significant digits than necessary wastes space and time. And, of course, the calibrator must know their detector, such as when and how to produce a new calibration.

All commit functions have a dry-run option, in which the commit is done and then rolled back, to fully check what would happen. Also, if a mistake is committed, it can always be ignored by simply performing a second commit correctly and carrying the new, correct result forward to the next step. Typically, nothing is ever deleted.

The calibrator will need the permission (or role in database parlance) appropriate to their detector, such as trk_role or cal_role. They will also need val_role.


Committing a calibration

The calibrator would typically produce the calibration by writing a text file in which each row represents a row in the database table. The contents should form one logical "calibration", that is, a set of data that will be retrieved all at once for use in an event. The file has a required format so that the table name, columns, and rows can all be correctly parsed.

> dbTool commit-calibration --file FILENAME

If the file contained data for table TstCalib1, the response might be

created calibration for TstCalib1 with 3 rows, new cid is 3

The number labeled cid uniquely identifies this data forever. You can use this to refer to the data.

 > dbTool print-table --cid 3
TABLE TstCalib1
#  cid 3
# channel,flag,dtoe
0,32,1.3177
1,33,2.3166
2,31,3.3134

The file format allows for an interval of validity (IOV) to be included with the calibration data. This will be ignored by commit-calibration.

Committing an interval of validity

After the calibrator has committed the data to the database, they can declare what runs the data can be used with; this is called an interval of validity, or IOV. The IOV is represented in a well-defined format. The IOV can be defined at the run level or at the subrun level, but not lower. A typical IOV, starting at run 1001, subrun 10 and ending on run 1002, can be represented as 1001:10-1002. The run ranges are inclusive, so run 1001, subrun 10 is included, as well as all subruns of run 1002.

An IOV is attached to calibration data, which is indicated by its CID number.

> dbTool commit-iov --cid 3 --iov 1001:10-1002

with reply

new IID is 10

The number labeled IID uniquely and permanently refers to this commit.

If calibration data is valid for run X and you have committed an IOV declaring that, and it is later determined that the data is also good for run Y, then the proper step is to commit a second IOV declaring the same CID valid for run Y. In this way, many IOVs may point to the same calibration data, the same CID.

Logically, it will be important to make sure that, putting together all your relevant IOVs, all good runs have calibrations, and there is no case of overlaps, where one IOV says data A is good for run X but a second IOV says data B is good for run X. Once we have a good-run system in place, code will be provided to make these checks.
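The overlap condition described above can be sketched in a few lines of C++. This is a hypothetical illustration, not the real Mu2e code: the Iov struct and helper names are invented here, and run:subrun pairs are packed into a single integer for easy comparison.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: an inclusive subrun-level IOV, with run:subrun
// encoded as run*1000000 + subrun so intervals compare as plain integers.
struct Iov {
  std::uint64_t begin;  // first run:subrun in the interval (inclusive)
  std::uint64_t end;    // last run:subrun in the interval (inclusive)
};

inline std::uint64_t key(unsigned run, unsigned subrun) {
  return std::uint64_t(run) * 1000000u + subrun;
}

// Two inclusive intervals overlap if each starts no later than the other ends.
bool overlaps(const Iov& a, const Iov& b) {
  return a.begin <= b.end && b.begin <= a.end;
}

// True if any pair in the set overlaps - the situation the text warns about.
bool anyOverlap(const std::vector<Iov>& iovs) {
  for (std::size_t i = 0; i < iovs.size(); ++i)
    for (std::size_t j = i + 1; j < iovs.size(); ++j)
      if (overlaps(iovs[i], iovs[j])) return true;
  return false;
}
```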

Committing a group

The third and final step to committing calibration data is to combine all your IOVs into one group. A group is just a collection of IOVs. The purpose of a group is to make it easier to refer to large sets of IOVs. For example, if the same calibration data is good for both PRODUCTION and ANALYSIS calibration sets, it is easier to put the groups into both sets rather than recreate a new, and potentially much larger, set of IOVs. This layer also makes repairs easier.

You provide a set of IID numbers, saved from the IOV commits. The IIDs may refer to the same table or different tables. It can even be tables in different detectors. The IOVs don't need to be adjacent in time, nor is there any other restriction. We expect that, typically, a detector expert will gather everything they need for a run, or a small set of runs, commit all those data, make IOVs, and then collect them into one group.

 > dbTool commit-group --iid 10,12,13

with reply

 new GID is 17

Note that the list of IIDs may be uploaded as a pointer to a file full of numbers (--iid mylist_of_iids.txt). The number labeled GID uniquely and permanently refers to this group of IOVs.

It is this GID which you pass to the database or production manager, who will include it in the calibration set, along with the GID from other detector groups.

Calibration set maintenance

A calibration set is

  • a purpose (entry in ValPurposes, with a PID)
  • a version (entry in ValVersions, with a VID)
  • a list (list of table types in ValLists, with a LID)
  • a set of extensions (entries in ValExtensions, with an EID)
  • the groups of calibration data associated with the extensions (entries in Val)

When an executable starts with the DbService set to a particular purpose and version, it gets the data in that calibration set.

During data-taking, the database or production managers will need to continually update the calibration set contents as calibrators enter new data. This section explains these procedures. This section assumes the reader is familiar with the ConditionsDbSchema.

Extend a calibration set

This is the most common procedure. At this point there exists a calibration set - a PURPOSE and VERSION. The VERSION here refers to the major and minor numbers, like v1_1. This procedure takes a new set of tables, represented by a GROUP number (or numbers) given to you by calibrators, and extends the calibration set. For example, an extension might take calibration set PRODUCTION v1_1_10 and add some groups of tables to create PRODUCTION v1_1_11.

> dbTool commit-extension --purpose PRODUCTION --version v1_1 --gid 4

The result shows the new full version number with the new extension. --gid can take a list of numbers, or a file containing numbers (see the dbTool help). You can verify the calibration set with

> dbTool print-set --purpose PRODUCTION --version v1_1 --details

Once an extension is created it is permanent and can't be deleted or undone. If there is a mistake, the procedure is to create a new VERSION, copy over the correct part, then create a new, correct extension.

Create a new calibration set

A calibration set refers to a purpose and a version number (major and minor) and its extensions. A new calibration set will be needed if

  • the list of table types needs to change
  • a repair needs to be made to an existing extension

The first decision to be made is whether a new purpose is needed. This should be a very rare need: mostly there will be a few purposes (PRODUCTION, CALIBRATION, ...) and, as needs evolve, they will gain new version numbers. A new purpose might be needed, for example, if the CRV group wants to run a calibration job on new data and needs the nominal calibration as input to the job. They don't care about any other detector calibrations, so no existing calibration sets are appropriate.

> dbTool commit-purpose --name CRV_CALIBRATION --comment "input to the CRV calibration job"

If a new purpose is not needed, one of the existing purposes can be chosen

> dbTool print-purposes

You should end up with a purpose identifier number, or PID.

The second decision to be made is whether a new list of table types is needed. You can see the existing lists

> dbTool print-lists

You will need a list ID (LID). You can also see what lists are associated with current versions.

> dbTool print-versions

If a new list of tables is needed, first find the tables, including their unique numeric identifiers, their TIDs.

> dbTool print-tables

To make a list:

> dbTool commit-list --name CRV_CALIB --comment "for CRV calibration job" --tids 1,2,3

which will result in a new list id, or LID.

Finally, once you have the purpose and list, you will need to decide the version numbers.

> dbTool commit-version --purpose PRODUCTION --list 3 --major 1 --minor 3 --comment "fix run 1234 of version 1_2"


A new version of a PURPOSE is needed when either the types of tables included in this calibration set need to change, or there was a mistake in the contents of the set. You will need the list of tables (the LID) to include in the calibration set, as in the commit-version command above.

Creating a whole new purpose should be a rare procedure and would indicate a whole new use case is being introduced; for example, the CRV group wants to run a calibration job on new data and needs the nominal calibration as input to the job. They don't care about any other detector calibrations.

> dbTool commit-purpose --name CRV_CALIBRATION --comment "input to the CRV calibration job"

The result can be checked with

> dbTool print-purposes

This print shows the PID, which uniquely identifies the PURPOSE. Every PURPOSE must have a unique name.

Create a new calibration table

decision

Conventions

permissions and roles

The current (10/2018) permissions are shown below. Access to write to the database is allowed by a kerberos ticket. When a username is recorded in the database, it is the account name, which is the same as the kerberos name. Anonymous users can browse the database using SQL and the mu2e_reader read-only account (ask for the password).

Permissions users
ADMIN_ROLE gandr,kutschke,rlc
VAL_ROLE, MANAGER_ROLE gandr,kutschke,rlc,brownd
TRK_ROLE, VAL_ROLE brownd,rbonvent,edmonds
CAL_ROLE, VAL_ROLE echenard,fcp
CRV_ROLE, VAL_ROLE ehrlich,oksuzian
  • ADMIN_ROLE owns all tables and has complete control. It can add or drop tables, and everything in between. Only a few experts will have this role. The only regular job will be to create new calibration tables.
  • VAL_ROLE can make intervals of validity and make groups of IOVs. All detector calibrators will have this role.
  • MANAGER_ROLE can commit to the higher-level interval of validity tables. This includes declaring a new calibration table, creating a new list of calibration tables, creating purposes or versions of a purpose, and extending a calibration set. It is expected that only a few active offline production managers will have this role at any one time.
  • TRK_ROLE can commit calibration data to tables with names Trk*, and similarly for the other detector roles. Only a few experts in each detector, with the responsibility to maintain calibrations, will have this role.

Intervals of validity

Intervals are inclusive, the end points stated are in the interval. You can't create an interval where the end is before the beginning, so all intervals contain at least one subrun.

String Interpreted
EMPTY 0:0-0:0
MAX 0:0-999999:999999
ALL 0:0-999999:999999
1000 1000:0-1000:999999
1000-1000 1000:0-1000:999999
1000-MAX 1000:0-999999:999999
MIN-1000 0:0-1000:999999
MIN-MAX 0:0-999999:999999
1000-2000 1000:0-2000:999999
1000:10-2000 1000:10-2000:999999
1000:11-1001:23 1000:11-1001:23
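The expansion rules in the table above can be sketched as a small parser. This is a hypothetical illustration only, not the actual dbTool implementation; expandIov and side are invented names. Missing subruns default to 0 on the begin side and 999999 on the end side, a bare run means all subruns of that run, and MIN/MAX stand for 0 and 999999.

```cpp
#include <string>

// Hypothetical sketch: expand one side of an IOV string, adding the
// default subrun (0 for a begin, 999999 for an end) if none is given.
static std::string side(const std::string& s, bool isEnd) {
  if (s == "MIN") return "0:0";
  if (s == "MAX") return "999999:999999";
  if (s.find(':') == std::string::npos)
    return s + ":" + (isEnd ? "999999" : "0");
  return s;
}

// Expand an IOV string into the fully qualified run:subrun-run:subrun
// form, following the interpretation table above.
std::string expandIov(const std::string& iov) {
  if (iov == "EMPTY") return "0:0-0:0";
  if (iov == "ALL" || iov == "MAX") return "0:0-999999:999999";
  auto dash = iov.find('-');
  if (dash == std::string::npos)  // bare run: all subruns of that run
    return side(iov, false) + "-" + side(iov, true);
  return side(iov.substr(0, dash), false) + "-" +
         side(iov.substr(dash + 1), true);
}
```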

text file format

The text file must have the following format:

TABLE <tableName> <IOV>
row1-col1, row1-col2, row1-col3
row2-col1, row2-col2, row2-col3
...

For example:

# my comment
TABLE TstCalib1 1001:2-1002
1,20,20.21
2,21,20.22
3,22,20.23

The table name is the same as the C++ class name. The IOV may be missing, in which case the table applies to all data; the format of the IOV text is described in the intervals of validity section above. The data is one line for each row in the table, typically a channel. The rows may need to be in a specific order, depending on how the table is coded and accessed. The columns are separated by commas, and are in the order defined by the C++ representation of the table. See the strings section below for details on string columns. There may be several tables in one file.

strings

Arbitrary string input occurs in two places

  1. when adding comments to a dbTool create action, such as creating a new purpose or version
  2. when uploading a calibration table that has a string column

In general, there are three special characters to watch out for: the double quote, used for quoting ("), the comma, used for column separation (,), and the hash, used for comments (#).

When inputting a string to dbTool to include a comment, there are only two rules:

  1. if more than one word, use double quotes: --comment "two words"
  2. when using a double quote in a string, escape it: --comment "two \"words\""

When writing a file that contains calibration data, the following rules apply:

  1. Comments may be included by writing the hash (#) as the first non-whitespace character of a line. Comments are not allowed embedded in lines with data. The hash may be used in a string column.

 # this is a legal comment
 TABLE tableName1
 1, 1.2
 # legal comment
    # legal comment - first non-whitespace char is the hash
 2, 1.1 # illegal comment - will crash on parse (part of number column)
 TABLE tableName2
 1, 1.2, GOOD
 2, 1.1, BAD # malformed comment - will appear in string column
 3, 1.1, failed check #3            OK, legal to use hash in string

  2. Commas must be quoted.

 TABLE tableName2
 1, 1.2, GOOD
 2, 1.1, BAD, or not                will crash on parse
 3, 1.1, "BAD, or not"              OK

  3. Embedded double quotes must be escaped or doubled.

 TABLE tableName2
 1, 1.2, GOOD                       OK
 1, 1.2, "GOOD"                     OK
 3, 1.1, really BAD                 OK, multiple words OK (as long as no commas or quotes)
 3, 1.1, ain't really BAD           OK, single quotes OK
 2, 1.1, Joe says "BAD"             OK
 2, 1.1, Joe says "BAD, or not"     OK, comma requires quotes
 3, 1.1, "Joe says \"BAD\""         OK
 4, 1.1, "Joe says ""BAD"""         OK
 4, 1.1, "Joe says "BAD""           will crash on parse
 5, 1.1, "Joe says, ""BAD"""        OK, comma requires quotes, embedded quotes must be escaped or doubled
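The quoting rule for string cells can be sketched as a small helper. This is a hypothetical illustration, not part of dbTool; quoteCell is an invented name, and it uses the quote-doubling form (escaping with \" is the other accepted form).

```cpp
#include <string>

// Hypothetical helper: prepare a string cell for a calibration text file.
// Cells containing commas or double quotes are wrapped in double quotes,
// with embedded quotes doubled, per the rules above.
std::string quoteCell(const std::string& cell) {
  if (cell.find(',') == std::string::npos &&
      cell.find('"') == std::string::npos)
    return cell;  // plain words need no quoting
  std::string out = "\"";
  for (char c : cell) {
    if (c == '"') out += "\"\"";  // double each embedded quote
    else out += c;
  }
  out += "\"";
  return out;
}
```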

URL format

The url access is through the Scientific Computing Division database API group's QueryEngine system. This web server has a cache, which will be used for any url that has the same text as a previous url that loaded the cache. A cache entry expires in 26 h. This cache is our main guard against overloading the database with grid job database access.

The url comes in two forms; the "no cache" version forces the database to be read directly, skipping the web cache, and does not update the cache. The base strings are:

 _url =        "http://dbdata0vm.fnal.gov:9091/QE/mu2e/prod/app/SQ/query?";
 _urlNoCache = "http://dbdata0vm.fnal.gov:9090/QE/mu2e/prod/app/SQ/query?";

You can add parameters to the query

  • t=table name
  • c=selection clause
  • w=where clause
  • o=order clause

For example

http://dbdata0vm.fnal.gov:9091/QE/mu2e/prod/app/SQ/query?t=tst.calib1&c=flag,dtoe&w=cid:eq:2&w=channel:le:1&o=flag

The code returns one line for each row. It presents columns as comma-separated values. If a column contains text with a comma or quotes, the code will return the cell in double quotes.
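Composing such a URL from the parameters above can be sketched as follows; buildQuery is a hypothetical helper, not Mu2e code, and it uses the cached base string quoted earlier:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: build a QueryEngine URL from a table name,
// where clauses (e.g. "cid:eq:2"), optional columns, and optional order.
std::string buildQuery(const std::string& table,
                       const std::vector<std::string>& where,
                       const std::string& columns = "",
                       const std::string& order = "") {
  std::string url =
      "http://dbdata0vm.fnal.gov:9091/QE/mu2e/prod/app/SQ/query?t=" + table;
  if (!columns.empty()) url += "&c=" + columns;       // c=selection clause
  for (auto const& w : where) url += "&w=" + w;       // repeated w= clauses
  if (!order.empty()) url += "&o=" + order;           // o=order clause
  return url;
}
```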

See also DbService/src/DbReader.cc


Notes

  • Art interface:
    • art talk on a db interface.
    • csv_qe_api query_engine wrapper and generic table (`upd list -aK+ csv_qe_api`)
  • ConDB - urls access to REST server
  • query_engine wiki - a generic query db url/REST interface
  • Web Data Access (wda) manual (ups product) c function wrapper, takes urls, runs curl, returns pointer to memory, has utilities for converting csv to values.
    • nu products
  • nutools - contains nova generic table
  • nucond -
  • MINOS DB manual
  • UconDB unstructured DB


  • 5/9/18 created dev conditions
psql -h ifdb04  -p 5444  mu2e_conditions_dev
  • query engine
non-cached: http://dbdata0vm.fnal.gov:9090/QE/mu2e/dev/app/SQ/query?t=test
cached: http://dbdata0vm.fnal.gov:9091/QE/mu2e/dev/app/SQ/query?t=test
  • setup postgresql v9_3_9
  • hypernews mu2e-hn-ConditionsDB@listserv.fnal.gov


CREATE ROLE admin_role;
GRANT ALL to admin_role;
GRANT admin_role to gandr,kutschke,rlc;

CREATE ROLE trk_role;
GRANT trk_role TO brownd,rbonvent,edmonds;

CREATE ROLE cal_role;
GRANT cal_role TO echenard,fcp;

CREATE ROLE crv_role;
GRANT crv_role TO ehrlich,oksuzian;