ConditionsDbCode

==Introduction==

This page describes how to create a new conditions database table.

Any user can create a new table in their private code and put data into it using the text file mechanism.  Tables that will be created in the database itself should be presented to and designed in consultation with the software group.  Only people with "admin" privilege can create the table in the actual database.

==Creating a new table==

===design considerations===

There are many considerations that go into designing a calibration table.  First, what data can go into a table?  A database table is a set of rows and columns.  The rows are usually connected to physical or conceptual units, such as detector components or channels.  The columns are usually the data you want associated with each detector component.  For example, there might be a table where each row represents a calorimeter crystal and the columns are gains and pedestals.  An alignment table might have a row for each tracker panel and position offsets for columns.  Here are some considerations, starting with the most important.

# Do the columns have the same row indexing?  This is a requirement.
# Are the values going to be used at the same time?  You don't want to have to fetch columns A and B if you only want A.
# Are the values going to be updated on a similar schedule?  If you have a table with a column that is updated much less often than the other data, then you will end up repeating those values, which is inefficient.
# The largest reading-time overhead is probably per-table latency, so you want to have as much data in one table as you can, subject to the above constraints.
# Have you thought about what else might be needed in the near future?  It will add non-trivial work to the database maintenance if you need changes to your table design.

The next major decision is how to index the rows.  There must be a column which is unique to the row.  Typically this is a channel number, but it can be any integer; for example, it might be the sparse index StrawId.  There are three major methods, sketched in code after this list:

# vector access.  Access all rows of your table as a vector, loop over the vector, see what the index is for each row, and apply the data to your object based on the index.  For example, if there is a row of alignment data for each tracker panel, you could loop over the rows, read the index X, and apply the values to the panel X object.  In this case you don't need to care about the order of your rows.
# sequential access.  You commit to always keeping your rows in a fixed order 0 to N-1.  The index has these values and you always upload them in this order.  When you access the table and you need the row for index X, you simply take position X of the vector, knowing that this corresponds to index X.  Once you have the row, you can double-check that the index agrees with the position.  Note that if you have a sparse index, such as StrawId, you can keep in your code, apart from the database table code, a conversion between the sparse index and the dense index used to label table rows.
# map access.  You can design your table to keep a map of the rows where the key to the map is the index of your row and the value is the vector position; then you can do random access for index X.  This map could be fixed or created on the fly as the table is filled.  A map like this may have some overhead cost if it is large.
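For concreteness, here is a minimal sketch of the three access styles for a hypothetical row type holding a channel index and one value.  The names here (Row, gain, the helper functions) are illustrative assumptions, not the actual DbTables interface:
<pre>
#include <cstddef>
#include <map>
#include <vector>

// Hypothetical row: a unique channel index plus one calibration value.
struct Row {
  int channel;
  float gain;
};

// 1. vector access: loop over all rows, dispatch on the stored index.
float gainByLoop(const std::vector<Row>& rows, int channel) {
  for (const auto& r : rows) {
    if (r.channel == channel) return r.gain;  // apply data based on the index
  }
  return -1.0f;  // flag value for a missing channel
}

// 2. sequential access: rows are guaranteed uploaded in order 0..N-1,
//    so position X holds index X; the invariant can be cross-checked.
float gainByPosition(const std::vector<Row>& rows, int channel) {
  const Row& r = rows.at(static_cast<std::size_t>(channel));
  return (r.channel == channel) ? r.gain : -1.0f;
}

// 3. map access: a channel-to-position map allows random access even for
//    sparse indices, at some memory cost if the map is large.
float gainByMap(const std::vector<Row>& rows,
                const std::map<int, std::size_t>& index, int channel) {
  auto it = index.find(channel);
  return (it == index.end()) ? -1.0f : rows[it->second].gain;
}
</pre>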

Please consider how many significant digits you will need to represent your data and use the minimum. For reproducibility, floats are essentially stored as text, so more digits require more space and time.

Name your table.  The first three characters must correspond to the schema in the database.  The current schemas are "Trk", "Cal", and "Crv", but more can be added.  This is important since permissions key off this text.  Then there is a string which must be unique within the schema.  So if your table contains CRV SiPM gains, the table class might be called CrvSiPMGain.  Or if it contained lots of columns concerning the SiPM, maybe call it CrvSiPM or CrvSiPMPars.

Decide if the number of rows in one commit of calibration data is fixed.  This would be recommended for any table indexed to a detector with a fixed number of channels, for example.  The code can enforce this length as a check.  Channels without good data are included with nominal or flag values.  An example where the length could not be fixed is a variable-length list of bad channels or bad runs.

===table class code===

# Copy the example code from TstCalib1.hh to the new class name and rename the class inside (a schematic example follows this list).
# Change the nested Row class to correspond to the needed row.  The unique row index should be the first column.
# Column ordering should be the same everywhere.
# The constructor has three arguments:
## the code class name (TstCalib1)
## the database table name as schema.shortname (tst.calib1).  This must be identical to the table name in the SQL creation process.  Postgres essentially requires that table names are all lower case.
## the column names, separated by commas ("channel,flag,dtoe"); this will serve as the basis of an SQL query.  The names here must be identical to the column names in the SQL table creation.  Postgres essentially requires that column names are all lower case.
# Update the size() method so it is an estimate of the memory space used by this class.
# If the table is to have a fixed length, implement it in nrowFix(); otherwise delete this method.
# Change addRow() so when it receives a vector of strings, it converts the strings to ints and floats in the order you established in the Row class.
# Change rowToCsv() so that it can convert binary rows to text.  Please put in the correct precision: writing more digits than are meaningful increases the database content size needlessly and just records noise.
# If you choose not to have a map, remove the map from the member data and addRow(), and remove the row(channel) accessor.
# Add the class to <code>DbTables/src/DbTableFactory.cc</code>.
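For orientation, here is a schematic of what such a table class looks like, following the TstCalib1 pattern described in this list.  This is a simplified, self-contained sketch, not the actual TstCalib1.hh; in particular, the real class derives from DbTable, and the size() bookkeeping here is an assumption:
<pre>
#include <cstddef>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>

// Schematic only: in the real code this class derives from DbTable, and the
// name/dbname/column strings are passed to the DbTable base constructor.
class TstCalib1 {
public:
  class Row {
  public:
    Row(int channel, int flag, float dtoe) :
        _channel(channel), _flag(flag), _dtoe(dtoe) {}
    int   channel() const { return _channel; }  // unique row index, first column
    int   flag()    const { return _flag; }
    float dtoe()    const { return _dtoe; }
  private:
    int   _channel;
    int   _flag;
    float _dtoe;
  };

  // the three constructor arguments described above: code class name,
  // database table name, comma-separated column names for the SQL query
  TstCalib1() :
      _name("TstCalib1"), _dbname("tst.calib1"), _query("channel,flag,dtoe") {}

  const Row& rowAt(std::size_t i) const { return _rows.at(i); }
  std::size_t nrow() const { return _rows.size(); }
  std::size_t nrowFix() const { return 3; }  // only if the table length is fixed
  std::size_t size() const { return sizeof(*this) + nrow()*sizeof(Row); }

  // convert the strings to int and float in the order established in Row
  void addRow(const std::vector<std::string>& columns) {
    _rows.emplace_back(std::stoi(columns[0]),
                       std::stoi(columns[1]),
                       std::stof(columns[2]));
  }

  // convert a binary row back to text, writing only meaningful digits
  void rowToCsv(std::ostringstream& sstream, std::size_t irow) const {
    const Row& r = _rows.at(irow);
    sstream << r.channel() << "," << r.flag() << ","
            << std::setprecision(7) << r.dtoe();
  }

private:
  std::string _name, _dbname, _query;
  std::vector<Row> _rows;
};
</pre>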

Once the table class has been created, you can use the table, providing data by the text table interface. You do not need any actual database entries.

===create in the database===

To create the table in the database, you will need an expert to run an SQL snippet.

<pre>
CREATE TABLE tst.calib1
  (cid INTEGER, channel INTEGER, flag INTEGER, dtoe NUMERIC,
   CONSTRAINT tst_calib1_pk PRIMARY KEY (cid,channel) );
GRANT SELECT ON tst.calib1 TO PUBLIC;
GRANT INSERT ON tst.calib1 TO val_role;
</pre>
# The table name must be the same as the database table name in the class definition.
# You must add the CID column in the first position.
# Floats should be represented as NUMERIC, for perfect reproducibility.
# There should be a constraint that the CID and channel number are unique.  If two columns form the unique row label, then the constraint would contain both those columns, and the CID.
# GRANT SELECT to PUBLIC.
# GRANT INSERT (and INSERT only) to the appropriate role.  Tables named trk.* are assigned to role trk_role.  The tables in the tst (test) schema are assigned val_role to avoid creating another role just for tst.

Finally you will need to add the table to the list of tables in the database:

<pre>
dbTool commit-table --name TstCalib1 --dbname tst.calib1
</pre>

This is how the system knows that this table can have calibration data attached to it.


==References==

===URL format===

The URL access is through the Scientific Computing Division database API group's [https://cdcvs.fnal.gov/redmine/projects/qengine QueryEngine] system.  Access to the http queries is allowed from onsite, from VPN, and from approved grid sites.  This web server has a cache, which will be used for any URL that has the same text as a previous URL that loaded the cache.  A cache entry expires in 26h.  This cache is our main guard against overloading the database with grid-job database access.

The URL comes in two forms; the "no cache" version forces the database to be read directly, skipping the web cache.  It does not update the cache.  The base strings are:

<pre>
  _url =        "https://dbdata0vm.fnal.gov:8444/QE/mu2e/prod/app/SQ/query?";
  _urlNoCache = "https://dbdata0vm.fnal.gov:9443/QE/mu2e/prod/app/SQ/query?";
</pre>

You can add parameters to the query:
* dbname=database name
* t=table name
* c=selection clause
* w=where clause
* o=order clause

For example:
<pre>
curl "https://dbdata0vm.fnal.gov:9443/QE/mu2e/prod/app/SQ/query?dbname=mu2e_conditions_prd&t=tst.calib1&c=flag,dtoe&w=cid:eq:2&w=channel:le:1&o=flag"
</pre>

The code returns one line for each row.  It presents columns as comma-separated values.  If a column contains text with a comma or quotes, the code will return the cell in double quotes.

See also <code>DbService/src/DbReader.cc</code>.

As of 6/2020, we are using https protocols.  In the code in DbReader, we do not require CA verification, by setting the option "CURLOPT_SSL_VERIFYPEER, FALSE".  On the command line this is "curl -k".  To activate CA verification, set

<pre>
export SSL_CERT_PATH=/etc/grid-security/certificates
</pre>
or
<pre>
curl --capath /etc/grid-security/certificates
</pre>
or in code
<pre>
curl_easy_setopt(curl, CURLOPT_CAPATH, capath)
</pre>

Not all of these are verified to work.  See also the [https://curl.haxx.se/docs/sslcerts.html curl docs].
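As an illustration of the two choices in code, here is a minimal standalone libcurl sketch; this is not the actual DbReader code, and the URL is just the cached base string from above with a table parameter added:
<pre>
#include <curl/curl.h>

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  if (!curl) return 1;

  curl_easy_setopt(curl, CURLOPT_URL,
      "https://dbdata0vm.fnal.gov:8444/QE/mu2e/prod/app/SQ/query?t=tst.calib1");

  // What DbReader does, per the text above: skip CA verification
  // (equivalent to "curl -k" on the command line).
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);

  // Alternative: verify against the grid CA directory instead.
  // curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
  // curl_easy_setopt(curl, CURLOPT_CAPATH, "/etc/grid-security/certificates");

  CURLcode rc = curl_easy_perform(curl);  // response goes to stdout by default
  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return (rc == CURLE_OK) ? 0 : 1;
}
</pre>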

===Services Order===

<pre>
-rw-r--r-- 1 rlc mu2e 2.9K Apr 27  2018 Alignment/src/AlignmentService_service.cc
-rw-r--r-- 1 rlc mu2e 2.4K Apr 27  2018 BTrkHelper/src/BTrkHelper_service.cc
-rw-r--r-- 1 rlc mu2e 4.7K Nov 30 11:56 ConditionsService/src/ConditionsService_service.cc
-rw-r--r-- 1 rlc mu2e 1.2K Dec 14 11:51 DbExample/src/ConditionsService2_service.cc
-rw-r--r-- 1 rlc mu2e 2.2K Dec 14 11:51 DbService/src/DbService_service.cc
-rw-r--r-- 1 rlc mu2e 2.0K Apr 27  2018 G4Helper/src/G4Helper_service.cc
-rw-r--r-- 1 rlc mu2e  14K Aug 24 12:35 GeometryService/src/GeometryService_service.cc
-rw-r--r-- 1 rlc mu2e 7.4K Apr 27  2018 GeometryService/src/RecoGeometryService_service.cc
-rw-r--r-- 1 rlc mu2e  259 Apr 27  2018 GlobalConstantsService/src/GlobalConstantsService_service.cc
-rw-r--r-- 1 rlc mu2e 1.2K Dec 27 17:31 ProditionsService/src/ProditionsService_service.cc
-rw-r--r-- 1 rlc mu2e 3.5K Apr 27  2018 Sandbox/src/BarService_service.cc
-rw-r--r-- 1 rlc mu2e  947 Apr 27  2018 Sandbox/src/Bug01Service_service.cc
-rw-r--r-- 1 rlc mu2e 1.1K Apr 27  2018 Sandbox/src/FooService_service.cc
-rw-r--r-- 1 rlc mu2e 1.8K Apr 27  2018 Sandbox/src/XBarService_service.cc
-rw-r--r-- 1 rlc mu2e 9.6K Apr 27  2018 SeedService/src/SeedService_service.cc
</pre>

===Notes===

Relating to contents:

Relating to infrastructure:

* 5/9/18 created dev conditions: <code>psql -h ifdb04 -p 5444 mu2e_conditions_dev</code>
* query engine
** non-cached: http://dbdata0vm.fnal.gov:9090/QE/mu2e/dev/app/SQ/query?t=test
** cached: http://dbdata0vm.fnal.gov:9091/QE/mu2e/dev/app/SQ/query?t=test
* setup postgresql v9_3_9
* hypernews mu2e-hn-ConditionsDB@listserv.fnal.gov


<pre>
CREATE ROLE admin_role;
GRANT ALL TO admin_role;
GRANT admin_role TO gandr,kutschke,rlc;

CREATE ROLE trk_role;
GRANT trk_role TO brownd,rbonvent,edmonds;

CREATE ROLE cal_role;
GRANT cal_role TO echenard,fcp;

CREATE ROLE crv_role;
GRANT crv_role TO ehrlich,oksuzian;
</pre>

===Notes on conDB===

ConDB is a conditions database system provided to the neutrino experiments by the computing division.  It is:
* a set of python scripts for creating data tables, in a fixed IOV schema, and uploading data
* a web server for Minerva and one for NOvA, which are customized for each service (there may be others)
* an http REST interface for delivering data at scale

These are combined with libwda (to hide the web URLs and text data behind a C binary interface), and experiment code on top of that.

In the Minerva style, the data for all the channels in a table are always uploaded at once.  In the NOvA style, the data for all channels are uploaded once at the start, then subsets of channels are updated occasionally.  All data is retrieved based on two dates: the valid date, when this calibration starts to be valid, and the record date, when the data was inserted.

Generally, data is retrieved by sending the event time and table name to the web server.  The code in the web server searches for the last valid date before the event time, and selects the associated data.  If there are two calibrations with the same valid date, the one with the newer record date is returned.  A user can request data for a time interval, causing all data relevant to the interval to be returned as lists of data for channels, with metadata of the valid times, which have to be interpreted in the user code.  As far as I can tell, Minerva retrieves data on each event based on event time.  The web server caches recent queries, so it can often return results without going to the database.  (Note this cache is an internal custom cache, on top of the generic commercial web server cache used by Query Engine and nginx.)  NOvA uses run numbers for time stamps, and assumes all data is constant over the run (if I understood).  Intervals of validity are open-ended: for the newest data, the latest valid data is returned until data with a newer valid date is uploaded.  This means the results can change unless there are human controls.

In the NOvA style, patches can be created which have a valid start date and a valid end date.  A patch overrides the standard result.  Also in the NOvA style, snapshots are created every few weeks.  This simply marks all the table entries active at that time, as a way to limit how far back future searches for data have to go; it is an efficiency optimization.

A user can put a tag on a table.  This is a text string that marks all entries in this table up to this date.  It is typically used at the end of a production pass to save what was done.  A tag cannot be extended.  In the future, tables can be retrieved based on this tag, and only data with this tag will be used in the time search algorithm to find data.  Data may also be retrieved based on the record date.  This gives another way to broadly reproduce an old result: only data entered before this date will be used in the time search algorithm.

Tolerances are used by NOvA to trim data as it is uploaded, removing channels where the new data is similar to the previous data, within tolerances.

In the NOvA style, each upload of data may have a data type string attached.  This is used to write MC and beam data into the same table at the same time.  Since this is just a string, it could be used many ways.

===Thoughts for Mu2e===

1) The open-ended intervals, valid for all future data until they are overridden, can cause a user's physics result to change unexpectedly.

2) There is no way to restrict the list of tables that a user sees. Imagine production pass 1 uses table A, but pass 2 uses table B. A user could select the right set of calibrations for pass 2, but unknowingly run code that accesses obsolete table A.

3) There is no way to extend a calibration set that is also repaired or replaced.  Imagine production pass 1 runs for a while, then pass 2 is created and run on the same data.  New data comes in and people are analyzing both pass 1 data and pass 2 data, so we want to add the new data to both passes.  Another example is maintaining as current both the calibration pass and the production pass, or maintaining two different alignments for testing.  The conDB concepts only allow one right answer at a time.

4) There is no text interface, which is useful for prototyping and for avoiding database access altogether, for example in very high-load situations.

Some of these points might be addressed by precise human controls, by re-uploading data multiple times labelled differently, by creating identical tables with different names, by adding accounting columns to tables, or by flattening and re-organizing our IOV structures into types of metadata calibration tables.  All of these are disfavored as counter-intuitive or inefficient.

[[Category:Computing]]
[[Category:Code]]
[[Category:Conditions]]