straxen package

Subpackages

Submodules

straxen.bokeh_utils module

straxen.bokeh_utils.bokeh_to_wiki(fig, outputfile=None)[source]

Function which converts bokeh HTML code to wiki-readable code.

Parameters:
  • fig – Figure to be converted

  • outputfile – String of absolute file path. If specified, output is written to the file. Else output is printed to the notebook and can simply be copied into the wiki.

straxen.common module

straxen.common.check_loading_allowed(data, run_id, target, max_in_disallowed=1, disallowed=('event_positions', 'corrected_areas', 'energy_estimates'))[source]

Check that the loading of the specified targets is not disallowed.

Parameters:
  • data – chunk of data

  • run_id – run_id of the run

  • target – list of targets requested by the user

  • max_in_disallowed – the max number of targets that are in the disallowed list

  • disallowed – list of targets that are not allowed to be loaded simultaneously by the user

Returns:

data

Raises:

RuntimeError if more than max_in_disallowed targets are requested.
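The guard described above can be sketched in a few lines (an illustration of the documented behavior, not the straxen implementation; check_loading_allowed_sketch is a hypothetical name):

```python
def check_loading_allowed_sketch(data, targets, max_in_disallowed=1,
                                 disallowed=('event_positions',
                                             'corrected_areas',
                                             'energy_estimates')):
    """Raise if more than max_in_disallowed disallowed targets are requested."""
    n_disallowed = sum(t in disallowed for t in targets)
    if n_disallowed > max_in_disallowed:
        raise RuntimeError(
            f'Requested {n_disallowed} disallowed targets, '
            f'at most {max_in_disallowed} allowed')
    return data
```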

straxen.common.get_dtypes(_data)[source]

Return keys/dtype names of pd.DataFrame or numpy array.

Parameters:

_data – data to get the keys/dtype names

Returns:

keys/dtype names

straxen.common.get_livetime_sec(context, run_id, things=None)[source]

Get the livetime of a run in seconds.

If it is not in the run metadata, estimate it from the data-level metadata of the data things.

straxen.common.get_resource(x: str, fmt='text')[source]

Get the resource from an online source to be opened here. We will sequentially try the following:
  1. Load it from memory if we asked for it before;

  2. Load it from a file if the path exists;

  3. (preferred option) Load it from our database;

  4. Load the file from some URL (e.g. raw github content).

Parameters:
  • x – str, either it is : A.) a path to the file; B.) the identifier of the file as it’s stored under in the database; C.) A URL to the file (e.g. raw github content).

  • fmt – str, format of the resource x

Returns:

the opened resource file x opened according to the specified format
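The sequential fallback with an in-memory cache can be sketched as follows (a minimal illustration under the assumption of interchangeable loader callables, not the straxen implementation):

```python
_cache = {}

def get_resource_sketch(x, loaders):
    """Try loaders in order; cache the first success under key x."""
    if x in _cache:                     # 1. previously requested
        return _cache[x]
    for load in loaders:                # 2.-4. file, database, URL
        try:
            result = load(x)
        except Exception:
            continue                    # this source failed; try the next
        _cache[x] = result
        return result
    raise FileNotFoundError(f'Could not load {x!r} from any source')
```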

straxen.common.open_resource(file_name: str, fmt='text')[source]

Open file.

Parameters:
  • file_name – str, file to open

  • fmt – format of the file

Returns:

opened file

straxen.common.pax_file(x)[source]

Return URL to file hosted in the pax repository master branch.

straxen.common.pmt_positions(xenon1t=False)[source]

Return pandas DataFrame with PMT positions. Columns: array (top/bottom), i (PMT number), x, y.

straxen.common.pre_apply_function(data, run_id, target, function_name='pre_apply_function')[source]

Prior to returning the data (from one chunk) see if any function(s) need to be applied.

Parameters:
  • data – one chunk of data for the requested target(s)

  • run_id – Single run-id of the chunk of data

  • target – one or more targets

  • function_name – the name of the function to be applied. The function_name.py should be stored in the database.

Returns:

Data where the function is applied.

straxen.common.remap_channels(data, verbose=True, safe_copy=False, _tqdm=False)[source]

There were some errors in the channel mapping of old data, as described in https://xe1t-wiki.lngs.infn.it/doku.php?id=xenon:xenonnt:dsg:daq:sector_swap. Using this function, we can convert old data to reflect the right channel map while loading the data. We convert both the field 'channel' and anything that is an array with the same length as the number of channels.

Parameters:
  • data – numpy array or pandas dataframe

  • verbose – print messages while converting data

  • safe_copy – if True make a copy of the data prior to performing manipulations. Will prevent overwrites of the internal references but does require more memory.

  • _tqdm – bool (try to) add a tqdm wrapper to show the progress

Returns:

Correctly mapped data

straxen.common.remap_old(data, targets, run_id, works_on_target='')[source]

If the data is from before the time the sectors were re-cabled, apply a software remap; otherwise just return the data as it is.

Parameters:
  • data – numpy array of data with at least the field time. It is assumed the data is sorted by time

  • targets – targets in the st.get_array to get

  • run_id – required positional argument of apply_function_to_data in strax

  • works_on_target – regex match string to match any of the targets. By default set to '' such that any target would be remapped (which is what we want, as channels are present in most data types). If one only wants records (no raw-records) and peaks*, use e.g. works_on_target = 'records|peaks'.
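The anchored-regex matching implied by works_on_target can be illustrated with a short sketch (should_remap is a hypothetical helper, not the straxen function):

```python
import re

def should_remap(target, works_on_target=''):
    """'' matches every target; a pattern like 'records|peaks'
    restricts remapping to targets starting with records or peaks
    (re.match anchors at the start, so raw_records is excluded)."""
    return re.match(works_on_target, target) is not None
```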

straxen.common.rotate_perp_wires(x_obs: ndarray, y_obs: ndarray, angle_extra: float | int = 0)[source]

Returns x and y in the rotated plane where the perpendicular wires are vertically aligned (parallel to the y-axis). Accepts an addition to the rotation angle via angle_extra [deg].

Parameters:
  • x_obs – array of x coordinates

  • y_obs – array of y coordinates

  • angle_extra – extra rotation in [deg]

Returns:

x_rotated, y_rotated
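The underlying operation is a standard 2D rotation; a minimal sketch (assuming counter-clockwise rotation, and ignoring the detector-specific wire angle that straxen adds internally):

```python
import math

def rotate_xy(x_obs, y_obs, angle_deg):
    """Rotate the (x, y) coordinates counter-clockwise by angle_deg degrees."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    x_rot = [cos_a * x - sin_a * y for x, y in zip(x_obs, y_obs)]
    y_rot = [sin_a * x + cos_a * y for x, y in zip(x_obs, y_obs)]
    return x_rot, y_rot
```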

straxen.contexts module

straxen.contexts.demo()[source]

Return strax context used in the straxen demo notebook.

straxen.contexts.fake_daq()[source]

Context for processing fake DAQ data in the current directory.

straxen.contexts.find_rucio_local_path(include_rucio_local, _rucio_local_path)[source]

Check the hostname to determine which rucio local path to use. Note that access to /dali/lgrandi/rucio/ is possible only if you are on dali compute node or login node.

Parameters:
  • include_rucio_local – add the rucio local storage frontend. This is only needed if one wants to do a fuzzy search in the data while the runs database is out of sync with rucio

  • _rucio_local_path – str, path of local RSE of rucio. Only use for testing!

straxen.contexts.xenon1t_dali(output_folder='./strax_data', build_lowlevel=False, **kwargs)[source]
straxen.contexts.xenon1t_led(**kwargs)[source]
straxen.contexts.xenonnt(cmt_version='global_ONLINE', xedocs_version=None, _from_cutax=False, **kwargs)[source]

XENONnT context.

straxen.contexts.xenonnt_led(**kwargs)[source]
straxen.contexts.xenonnt_online(output_folder: str = './strax_data', we_are_the_daq: bool = False, minimum_run_number: int = 7157, maximum_run_number: int | None = None, include_rucio_remote: bool = False, include_online_monitor: bool = False, include_rucio_local: bool = False, download_heavy: bool = False, _auto_append_rucio_local: bool = True, _rucio_path: str = '/dali/lgrandi/rucio/', _rucio_local_path: str | None = None, _raw_paths: List[str] = ['/dali/lgrandi/xenonnt/raw'], _processed_paths: List[str] = ['/dali/lgrandi/xenonnt/processed', '/project2/lgrandi/xenonnt/processed', '/project/lgrandi/xenonnt/processed'], _context_config_overwrite: dict | None = None, _database_init: bool = True, _forbid_creation_of: dict | None = None, **kwargs)[source]

XENONnT online processing and analysis.

Parameters:
  • output_folder – str, Path of the strax.DataDirectory where new data can be stored

  • we_are_the_daq – bool, if we have admin access to upload data

  • minimum_run_number – int, lowest number to consider

  • maximum_run_number – Highest number to consider. When None (the default) consider all runs that are higher than the minimum_run_number.

  • include_rucio_remote – add the rucio remote frontend to the context

  • include_online_monitor – add the online monitor storage frontend

  • include_rucio_local – add the rucio local storage frontend. This is only needed if one wants to do a fuzzy search in the data while the runs database is out of sync with rucio

  • download_heavy – bool, whether or not to allow downloads of heavy data (raw_records*, less the aqmon)

  • _auto_append_rucio_local – bool, whether or not to automatically append the rucio local path

  • _rucio_path – str, path of rucio

  • _rucio_local_path – str, path of local RSE of rucio. Only use for testing!

  • _raw_paths – list[str], common path of the raw-data

  • _processed_paths – list[str]. common paths of output data

  • _context_config_overwrite – dict, overwrite config

  • _database_init – bool, start the database (for testing)

  • _forbid_creation_of – str/tuple of datatypes to prevent from being written (raw_records* is always forbidden).

  • kwargs – dict, context options

Returns:

strax.Context

straxen.contexts.xenonnt_som(cmt_version='global_ONLINE', xedocs_version=None, _from_cutax=False, **kwargs)[source]

XENONnT context for the SOM.

straxen.corrections_services module

Return corrections from corrections DB.

exception straxen.corrections_services.CMTVersionError[source]

Bases: Exception

class straxen.corrections_services.CorrectionsManagementServices(username=None, password=None, mongo_url=None, is_nt=True)[source]

Bases: object

A class that returns corrections. Corrections are sets of parameters to be applied in the analysis stage to remove detector effects.

Information on the strax implementation can be found at https://github.com/AxFoundation/strax/blob/master/strax/corrections.py

get_config_from_cmt(run_id, model_type, version='ONLINE')[source]

Smart logic to return the NN weights file name to be downloaded by straxen.MongoDownloader()

Parameters:
  • run_id – run id from runDB

  • model_type – model type and neural network type; model_mlp, model_gcn or model_cnn

  • version – version

Returns:

NN weights file name

get_corrections_config(run_id, config_model=None)[source]

Get context configuration for a given correction.

Parameters:
  • run_id – run id from runDB

  • config_model – configuration model (tuple type)

Returns:

correction value(s)

get_local_versions(global_version) dict[source]

Returns a dict of local versions for a given global version.

Use ‘latest’ to get newest version

get_pmt_gains(run_id, model_type, version, cacheable_versions=('ONLINE', ), gain_dtype=<class 'numpy.float32'>)[source]

Smart logic to return pmt gains to PE values.

Parameters:
  • run_id – run id from runDB

  • model_type – to_pe_model (gain model)

  • version – version

  • cacheable_versions – versions that are allowed to be cached in ./resource_cache

  • gain_dtype – dtype of the gains to be returned as array

Returns:

array of pmt gains to PE values

get_start_time(run_id)[source]

Smart logic to return start time from runsDB.

Parameters:

run_id – run id from runDB

Returns:

run start time

property global_versions

straxen.daq_core module

Core functions of the DAQ, mostly used in straxen/bin.

class straxen.daq_core.DataBases(production=False)[source]

Bases: object

static get_admin_client()[source]
log_warning(message, priority='warning', run_id=None, production=True, user='daq_process')[source]

Report a warning to the terminal (using the logging module) and the DAQ log DB.

Parameters:
  • message – insert string into log_coll

  • priority – severity of warning. Can be: info: 1, warning: 2, <any other valid python logging level, e.g. error or fatal>: 3

  • run_id – optional run id.

straxen.daq_core.now(plus=0)[source]

Now in utc time.

straxen.entry_points module

straxen.entry_points.get_entry_points()[source]
straxen.entry_points.load_entry_points()[source]

straxen.get_corrections module

straxen.get_corrections.get_cmt_resource(run_id, conf, fmt='')[source]

Get resource with CMT correction file name.

straxen.get_corrections.get_correction_from_cmt(run_id, conf)[source]

Get correction from CMT. The general format is conf = ('correction_name', 'version', True), where True means looking at nT runs, e.g. get_correction_from_cmt(run_id, conf[:2]). Special cases: version can be replaced by a constant int, float or array when the user specifies value(s).

Parameters:
  • run_id – run id from runDB

  • conf – configuration

Returns:

correction value(s)

straxen.get_corrections.is_cmt_option(config)[source]

Check if the input configuration is cmt style.

straxen.holoviews_utils module

straxen.itp_map module

class straxen.itp_map.InterpolateAndExtrapolate(points, values, neighbours_to_use=None, array_valued=False)[source]

Bases: object

Linearly interpolate and extrapolate using inverse-distance weighted averaging between nearby points.

class straxen.itp_map.InterpolatingMap(data, method='WeightedNearestNeighbors', **kwargs)[source]

Bases: object

Correction map that computes values using inverse-weighted distance interpolation.

The map must be specified as a json translating to a dictionary like this:

'coordinate_system' : [[x1, y1], [x2, y2], [x3, y3], [x4, y4], ...],
'map' : [value1, value2, value3, value4, ...],
'another_map' : idem,
'name' : 'Nice file with maps',
'description' : 'Say what the maps are, who you are, etc',
'timestamp' : unix epoch seconds timestamp

with the straightforward generalization to 1d and 3d.

Alternatively, a grid coordinate system can be specified as follows:

'coordinate_system' : [['x', [x_min, x_max, n_x]], ['y', [y_min, y_max, n_y]]]

Alternatively, an N-vector-valued map can be specified by an array with last dimension N in ‘map’.

The default map name is 'map'; I'd recommend you use that.

For a 0d placeholder map, use

‘points’: [], ‘map’: 42, etc

The default method returns the inverse-distance weighted average of the nearby 2 * dim points. RectBivariateSpline and RegularGridInterpolator from scipy are also supported; select them by passing a keyword argument like

method='RectBivariateSpline'

The interpolators are called with

'positions' : [[x1, y1], [x2, y2], [x3, y3], [x4, y4], ...]
'map_name' : key to switch to a map interpolator other than the default 'map'
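The inverse-distance weighting idea can be illustrated with a minimal pure-Python sketch (averaging over all known points; straxen's WeightedNearestNeighbors restricts to the nearest 2 * dim points and differs in detail):

```python
import math

def idw_interpolate(points, values, position, power=1):
    """Inverse-distance weighted average of values at known points."""
    weight_sum, weighted_value = 0.0, 0.0
    for point, value in zip(points, values):
        d = math.dist(point, position)
        if d == 0:                      # exactly on a known point
            return value
        w = 1.0 / d ** power
        weight_sum += w
        weighted_value += w * value
    return weighted_value / weight_sum
```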

metadata_field_names = ['timestamp', 'description', 'coordinate_system', 'name', 'irregular', 'compressed', 'quantized']
scale_coordinates(scaling_factor, map_name='map')[source]

Scales the coordinate system by the specified factor.

Params scaling_factor:

array (n_dim) of scaling factors if different or single scalar.

straxen.itp_map.save_interpolation_formatted_map(itp_map, coordinate_system: ~typing.List, filename: str, map_name: str | None = None, quantum: float | None = None, quantum_dtype=<class 'numpy.int16'>, map_description: str = '', compressor: ~typing.Literal['bz2', 'zstd', 'blosc', 'lz4'] = 'zstd')[source]

Make a straxen-style InterpolatingMap.

To fit the large XENONnT per-PMT maps into strax_auxiliary files, we quantize them to values of 1e-5 and store the maps as 16-bit integer multiples of 1e-5, instead of 64-bit floats.

Parameters:
  • itp_map – numpy itp_map or list of floats, should follow the shape indicated by coordinate_system

  • coordinate_system – coordinate system of the itp_map: a list [['x', [x_min, x_max, n_x]], ['y', [y_min, y_max, n_y]], ...] for each dimension, or [[x1, y1], [x2, y2], [x3, y3], [x4, y4], ...]

  • filename – filename with '.pkl' extension

  • map_name – name of map

  • quantum – quantum of the map if quantized

  • map_description – map's description

  • compressor – key of compressor in strax.io.COMPRESSORS
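The quantization trick (storing floats as integer multiples of a quantum) can be sketched as follows; quantize_map is a hypothetical helper, not the straxen function:

```python
def quantize_map(values, quantum=1e-5):
    """Round each value to the nearest integer multiple of quantum.
    The integers fit in int16 as long as |value / quantum| < 2**15."""
    quantized = [round(v / quantum) for v in values]
    restored = [q * quantum for q in quantized]  # what a reader would see
    return quantized, restored
```

The precision loss is bounded by quantum / 2 per value, which is why a map-appropriate quantum must be chosen.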

straxen.matplotlib_utils module

straxen.matplotlib_utils.draw_box(x, y, **kwargs)[source]

Draw rectangle, given x-y boundary tuples.

straxen.matplotlib_utils.log_x(a=None, b=None, scalar_ticks=True, tick_at=None)[source]

Make the x axis use a log scale from a to b.

straxen.matplotlib_utils.log_y(a=None, b=None, scalar_ticks=True, tick_at=None)[source]

Make the y axis use a log scale from a to b.

straxen.matplotlib_utils.plot_on_single_pmt_array(c, array_name='top', xenon1t=False, r=68.39200000000001, pmt_label_size=8, pmt_label_color='white', show_tpc=True, log_scale=False, vmin=None, vmax=None, dead_pmts=None, dead_pmt_color='gray', **kwargs)[source]

Plot one of the PMT arrays and color it by c.

Parameters:
  • c – Array of colors to use. Must be len() of the number of TPC PMTs

  • label – Label for the color bar

  • pmt_label_size – Fontsize for the PMT number labels. Set to 0 to disable.

  • pmt_label_color – Text color of the PMT number labels.

  • log_scale – If True, use a logarithmic color scale

  • extend – same as plt.colorbar(extend=…)

  • vmin – Minimum of color scale

  • vmax – Maximum of color scale

Other arguments are passed to plt.scatter.

straxen.matplotlib_utils.plot_pmts(c, label='', figsize=None, xenon1t=False, show_tpc=True, extend='neither', vmin=None, vmax=None, **kwargs)[source]

Plot the PMT arrays side-by-side, coloring the PMTS with c.

Parameters:
  • c – Array of colors to use. Must have len() n_tpc_pmts

  • label – Label for the color bar

  • figsize – Figure size to use.

  • extend – same as plt.colorbar(extend=…)

  • vmin – Minimum of color scale

  • vmax – maximum of color scale

  • show_axis_labels – if True it will show x and y labels

Other arguments are passed to plot_on_single_pmt_array.

straxen.matplotlib_utils.plot_single_pulse(records, run_id, pulse_i='')[source]

Function which plots a single pulse.

Parameters:
  • records – Records which belong to the pulse.

  • run_id – Id of the run.

  • pulse_i – Index of the pulse to be plotted.

Returns:

fig, axes objects.

straxen.mini_analysis module

straxen.mini_analysis.mini_analysis(requires=(), hv_bokeh=False, warn_beyond_sec=None, default_time_selection='touching')[source]

straxen.misc module

class straxen.misc.CacheDict(*args, cache_len: int = 10, **kwargs)[source]

Bases: OrderedDict

Dict with a limited length, ejecting LRUs as needed.

copied from https://gist.github.com/davesteele/44793cd0348f59f8fadd49d7799bd306

class straxen.misc.TimeWidgets[source]

Bases: object

create_widgets()[source]

Creates time and time zone widget for simpler time querying.

Note:

Please be aware that the correct format for the time field is HH:MM.

get_start_end()[source]

Returns start and end time of the specified time interval in nanoseconds utc unix time.

straxen.misc.convert_array_to_df(array: ndarray) DataFrame[source]

Converts the specified array into a DataFrame, dropping all higher-dimensional fields in the process.

Parameters:

array – numpy.array to be converted.

Returns:

DataFrame with higher dimensions dropped.

straxen.misc.dataframe_to_wiki(df, float_digits=5, title='Awesome table', force_int: Tuple = ())[source]

Convert a pandas dataframe to a dokuwiki table (which you can copy-paste onto the XENON wiki)

Parameters:
  • df – dataframe to convert

  • float_digits – format float to this number of digits.

  • title – title of the table.

  • force_int – tuple of column names to force to be integers
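The dokuwiki table syntax (^ for header cells, | for body cells) can be sketched without pandas; to_dokuwiki is a hypothetical illustration, not the straxen function:

```python
def to_dokuwiki(columns, rows):
    """Render header and rows in dokuwiki table markup."""
    lines = ['^ ' + ' ^ '.join(columns) + ' ^']
    for row in rows:
        lines.append('| ' + ' | '.join(str(v) for v in row) + ' |')
    return '\n'.join(lines)
```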

straxen.misc.filter_kwargs(func, kwargs)[source]

Filter out keyword arguments that are not in the call signature of func and return filtered kwargs dictionary.
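Such filtering can be sketched with inspect.signature (a minimal illustration that ignores **kwargs-accepting functions; not the straxen implementation):

```python
import inspect

def filter_kwargs_sketch(func, kwargs):
    """Keep only the kwargs named in func's call signature."""
    params = inspect.signature(func).parameters
    return {k: v for k, v in kwargs.items() if k in params}
```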

straxen.misc.print_versions(modules=('strax', 'straxen', 'cutax'), print_output=True, include_python=True, return_string=False, include_git=True)[source]

Print versions of modules installed.

Parameters:
  • modules – Modules to print, should be str, tuple or list. E.g. print_versions(modules=(‘numpy’, ‘dddm’,))

  • return_string – optional. Instead of printing the message, return a string

  • include_git – Include the current branch and latest commit hash

Returns:

optional, the message that would have been printed

straxen.misc.total_size(o, handlers=None, verbose=False)[source]

Returns the approximate memory footprint an object and all of its contents.

Automatically finds the contents of the following builtin containers and their subclasses: tuple, list, deque, dict, set and frozenset. To search other containers, add handlers to iterate over their contents:

handlers = {SomeContainerClass: iter,

OtherContainerClass: OtherContainerClass.get_elements}

from: https://code.activestate.com/recipes/577504/
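The recursive-sizing idea from that recipe can be sketched as follows (a simplified illustration, not the straxen or activestate code verbatim):

```python
import sys
from collections import deque

def total_size_sketch(obj, handlers=None, _seen=None):
    """Recursively sum sys.getsizeof over builtin containers,
    counting each distinct object only once."""
    handlers = handlers or {}
    _seen = _seen if _seen is not None else set()
    if id(obj) in _seen:                # shared objects counted once
        return 0
    _seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        contents = list(obj.keys()) + list(obj.values())
    elif isinstance(obj, (tuple, list, deque, set, frozenset)):
        contents = list(obj)
    else:                               # custom containers via handlers
        contents = list(handlers.get(type(obj), lambda o: [])(obj))
    for item in contents:
        size += total_size_sketch(item, handlers, _seen)
    return size
```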

straxen.misc.utilix_is_configured(header: str = 'RunDB', section: str = 'xent_database', warning_message: None | bool | str = None) bool[source]

Check if we have the right connection to the database.

Parameters:
  • header – Which header to check in the utilix config file

  • section – Which entry in the header to check to exist

  • warning_message – If utilix is not configured, warn the user. if None -> generic warning if str -> use the string to warn if False -> don’t warn

Returns:

bool, can we connect to the Mongo database?

straxen.numbafied_scipy module

straxen.numbafied_scipy.numba_betainc(x1, x2, x3)[source]
straxen.numbafied_scipy.numba_gammaln(x)[source]
straxen.numbafied_scipy.numba_loggamma(x)[source]

straxen.scada module

class straxen.scada.SCADAInterface(context=None, use_progress_bar=True)[source]

Bases: object

find_pmt_names(pmts=None, hv=True, current=False)[source]

Function which returns a list of PMT parameter names to be called in SCADAInterface.get_scada_values. The names refer to the high voltage of the PMTs, not their current.

Thanks to Hagar and Giovanni who provided the file.

Parameters:
  • pmts – Optional parameter to specify which PMT parameters should be returned. Can be either a list or array of channels or just a single one.

  • hv – Bool if true names of high voltage channels are returned.

  • current – Bool if true names for the current channels are returned.

Returns:

dictionary containing short names as keys and scada parameter names as values.

find_scada_parameter()[source]
get_new_token()[source]

Function to renew the token of the current session.

get_scada_values(parameters, start=None, end=None, run_id=None, query_type_lab=True, time_selection_kwargs=None, fill_gaps=None, filling_kwargs=None, down_sampling=False, every_nth_value=1)[source]

Function which returns XENONnT slow control values for a given set of parameters and time range.

The time range can be either defined by a start and end time or via the run_id, target and context.

Parameters:
  • parameters – dictionary containing the names of the requested scada-parameters. The keys are used as identifier of the parameters in the returned pandas.DataFrame.

  • start – int representing the start time of the interval in ns unix time.

  • end – same as start but as end.

  • run_id – Id of the run. Can also be specified as a list or tuple of run ids. In this case we will return the time range lasting between the start of the first and endtime of the second run.

  • query_type_lab – Mode on how to query data from the historians. Can be either False to get raw data or True (default) to get data which was interpolated by historian. Useful if large time ranges have to be queried.

  • time_selection_kwargs – Keyword arguments taken by st.to_absolute_time_range(). Default: {“full_range”: True}

  • fill_gaps – Decides how to fill gaps in which no data was recorded. Only needed for query_type_lab=False. Can be either None, "interpolation" or "forwardfill". None keeps the gaps (default), "interpolation" uses pandas.interpolate and "forwardfill" uses pandas.ffill. See https://pandas.pydata.org/docs/ for more information. You can change the filling options of the methods with the filling_kwargs.

  • filling_kwargs – Kwargs applied to pandas .ffill() or .interpolate(). Only needed for query_type_lab=False.

  • down_sampling – Boolean which indicates whether to down-sample the result or to apply an average. The averaging is deactivated in case of interpolated data. Only needed for query_type_lab=False.

  • every_nth_value – Defines over how many values we compute the average or the nth sample in case we down sample the data. In case query_type_lab=True every nth second is returned.

Returns:

pandas.DataFrame containing the data of the specified parameters.

token_expires_in()[source]

Function which displays how long until the current token expires.

straxen.scada.convert_time_zone(df, tz)[source]

Function which converts the current time zone of a given pd.DataFrame into another timezone.

Parameters:
  • df – pandas.DataFrame containing the Data. Index must be a datetime object with time zone information.

  • tz – str representing the timezone the index should be converted to. See the notes for more information.

Returns:

pandas.DataFrame with converted time index.

Notes:

1.) The input pandas.DataFrame must be indexed via datetime objects which are timezone aware.

2.) You can find a complete list of available timezones via:

import pytz
pytz.all_timezones

You can also specify 'strax' as timezone, which will convert the time index into a 'strax time' equivalent. The default timezone of strax is UTC.

straxen.test_utils module

straxen.test_utils.download_test_data(test_data='https://raw.githubusercontent.com/XENONnT/strax_auxiliary_files/353b2c60a01e96f67e4ba544ce284bd91241964d/strax_files/strax_test_data_straxv1.1.0.tar')[source]

Downloads strax test data to strax_test_data in the current directory.

straxen.units module

Define unit system for pax (i.e., seconds, etc.)

This sets up variables for the various unit abbreviations, ensuring we always have a ‘consistent’ unit system. There are almost no cases that you should change this without talking with a maintainer.

straxen.url_config module

class straxen.url_config.URLConfig(cache=0, **kwargs)[source]

Bases: Config

Dispatch on URL protocol.

An unrecognized protocol returns the identity. Inspired by dask's Dispatch and fsspec's fs protocols.

NAMESPACE_SEP = '.'
PLUGIN_ATTR_PREFIX = 'plugin.'
QUERY_SEP = '?'
SCHEME_SEP = '://'
classmethod are_equal(first, second)[source]

Return whether two URLs are equivalent (have equal ASTs)

classmethod ast_to_url(protocol: str | tuple, arg: str | tuple | None = None, kwargs: dict | None = None)[source]

Convert a protocol abstract syntax tree to a valid URL.

property cache
classmethod deref_ast(protocol, arg, kwargs, **namespace)[source]

Dereference an AST by looking up values in namespace.

classmethod eval(protocol: str, arg: str | tuple | None = None, kwargs: dict | None = None)[source]

Evaluate a URL/AST by recursively dispatching protocols by name with argument arg and keyword arguments kwargs, and return the value.

If protocol does not exist, returns arg.

Parameters:
  • protocol – name of the protocol or a URL

  • arg – argument to pass to protocol, can be another (sub-protocol, arg, kwargs) tuple, in which case sub-protocol will be evaluated and passed to protocol

  • kwargs – keyword arguments to be passed to the protocol

Returns:

(Any) The return value of the protocol on these arguments

classmethod evaluate_dry(url: str, **kwargs)[source]

Utility function to quickly test and evaluate URL configs, without the initialization of plugins (so no plugin attributes). Plugin attributes can be passed as keyword arguments.

example:

from straxen import URLConfig
url_string='cmt://electron_drift_velocity?run_id=027000&version=v3'
URLConfig.evaluate_dry(url_string)

# or similarly
url_string='cmt://electron_drift_velocity?run_id=plugin.run_id&version=v3'
URLConfig.evaluate_dry(url_string, run_id='027000')

Please note that this has to be done outside of the plugin, so any attributes of the plugin are not yet known to this dry evaluation of the url-string.

Parameters:

url – URL to evaluate, see above for example.

Keyword:

any additional kwargs are passed to self.dispatch (see example)

Returns:

evaluated value of the URL.

fetch(plugin)[source]

Override the Config.fetch method; this is called when the attribute is accessed from within the Plugin instance.

classmethod format_url_kwargs(url, **kwargs)[source]

Add keyword arguments to a URL.

Sorts all arguments by key for hash consistency
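That sorted-by-key behavior can be sketched with stdlib tools (format_url_kwargs_sketch is a hypothetical illustration, not the straxen implementation):

```python
from urllib.parse import urlencode

def format_url_kwargs_sketch(url, **kwargs):
    """Append keyword arguments to a URL, sorted by key so that
    equivalent configs always serialize (and hash) identically."""
    sep = '&' if '?' in url else '?'
    query = urlencode(sorted(kwargs.items()))
    return url + sep + query if query else url
```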

classmethod kwarg_from_url(url: str, key: str)[source]
classmethod lookup_value(value, **namespace)[source]

Optionally fetch an attribute from namespace. If value is a string with cls.NAMESPACE_SEP in it, the string is split; the first part is used to look up an object in namespace and the second part is used to look up the value in that object.

If the value is not a string or the target object is not in the namespace, the value is returned as is.

classmethod preprocessor(func=None, precedence=0)[source]

Register a new processor to modify the config values before they are used.

classmethod preprocessor_descr()[source]
classmethod print_preprocessors()[source]
classmethod print_protocols()[source]
classmethod print_summary()[source]
classmethod protocol_descr()[source]
classmethod register(protocol, func=None)[source]

Register dispatch of func on urls starting with protocol name protocol

classmethod split_url_kwargs(url)[source]

Split a url into path and kwargs.
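The path/kwargs split can be sketched with urllib (split_url_kwargs_sketch is a hypothetical illustration, not the straxen implementation):

```python
from urllib.parse import parse_qs

def split_url_kwargs_sketch(url):
    """Split 'scheme://path?a=1&b=2' into the path and a kwargs dict.
    Repeated keys are kept as lists, single values unwrapped."""
    path, _, query = url.partition('?')
    kwargs = {k: v[0] if len(v) == 1 else v
              for k, v in parse_qs(query).items()}
    return path, kwargs
```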

classmethod url_to_ast(url, **kwargs)[source]

Convert a URL to a protocol abstract syntax tree.

validate(config, run_id=None, run_defaults=None, set_defaults=True)[source]

This method is called by the context on plugin initialization. At this stage, the run_id and context config are already known but the config values are not yet set on the plugin.

Therefore it's the perfect place to run any preprocessors on the config values to make any needed changes before the configs are hashed.

validate_type(value)[source]

Validate the type of a value against its intended type.

straxen.url_config.clear_config_caches()[source]
straxen.url_config.config_cache_size_mb()[source]

Module contents