API Reference Documentation

Contents

class pymetrics.configuration.Configuration(version: int, publishers: List[pymetrics.publishers.base.MetricsPublisher] = NOTHING, error_logger_name: Optional[str] = None, enable_meta_metrics: bool = False)[source]

Bases: object

__init__(version: int, publishers: List[pymetrics.publishers.base.MetricsPublisher] = NOTHING, error_logger_name: Optional[str] = None, enable_meta_metrics: bool = False)[source]

Initialize self. See help(type(self)) for accurate signature.

CONFIGURATION_SCHEMA = pre-defined Conformity schema pymetrics.configuration.CONFIGURATION_SCHEMA

dictionary whose schema switches based on the value of key version: The configuration schema changes slightly based on which config version you specify.

  • version == 2 - strict dict: (no description)

    • enable_meta_metrics - boolean: If true, meta-metrics will be recorded documenting the performance of PyMetrics itself.

    • error_logger_name - unicode: By default, errors encountered when publishing metrics are suppressed and lost. If this value is truthy, a Logger is created with this name and used to log publication errors.

    • publishers - sequence: The configuration for all publishers.

      values

      dictionary with keys path and kwargs whose kwargs schema switches based on the value of path, dynamically based on class imported from path (see the configuration settings schema documentation for the class named at path). Import path and arguments for a publisher. The imported item at the specified path must be a subclass of pymetrics.publishers.base.MetricsPublisher.

    • version - constant: (no description) (additional information: {'values': [2]})

    Optional keys: enable_meta_metrics, error_logger_name

pymetrics.configuration.create_configuration(config_dict: Dict[str, Any]) → <class 'pymetrics.configuration.Configuration'>[source]

Creates a Configuration object using the provided configuration dictionary. Works in a similar fashion to the standard library logging module's dictionary-based configuration.

Expected format of config is a dict:

{
    'version': 2,
    'error_logger_name': 'pymetrics',  # name of the error logger to use, or ``None`` (the default) to suppress
    'enable_meta_metrics': False,  # whether to enable the collection of meta-metrics
    'publishers': [
        {
            'path': 'path.to.publisher:ClassName',
            'kwargs': {
                ...  # constructor arguments for the publisher
            },
        },
    ],
}

If multiple publishers are specified, metrics will be emitted to each publisher in the order it is specified in the configuration list.
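The path values in the publishers list use a module.path:ClassName form. A minimal sketch of how such a string could be resolved using only the standard library (the resolve_path helper below is hypothetical, not part of PyMetrics, and the real library additionally validates that the imported item is a MetricsPublisher subclass):

```python
import importlib

def resolve_path(path):
    # Split a 'module.path:ClassName' string on the colon and import
    # the named attribute from the named module.
    module_name, _, attribute_name = path.partition(':')
    module = importlib.import_module(module_name)
    return getattr(module, attribute_name)

# Resolve a standard-library class the same way a publisher path would be
ordered_dict_class = resolve_path('collections:OrderedDict')
```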

class pymetrics.instruments.Metric(name, initial_value=0, **tags)[source]

Bases: object

A base metric instrument from which all metric instruments inherit. Cannot be instantiated directly.

__init__(name: str, initial_value: Union[int, float] = 0, **tags: Union[str, bytes, int, float, bool, None]) → None[source]

Construct a metric.

Parameters
  • name – The metric name

  • initial_value – The initial value of this metric, which may be an integer or a float (which will be rounded)

  • tags – The tags associated with this metric (note that not all publishers support tags)

property value[source]

Returns the value of this metric.

Returns

The metric value

record_over_function(f: Callable[..., ~R], *args: Any, **kwargs: Any) → ~R[source]

Records this metric around calling the specified callable with the specified positional and keyword arguments. Not all metric types support this. Raises TypeError when unsupported.

Parameters
  • f – The callable to invoke

  • args – The positional arguments to pass to the callable

  • kwargs – The keyword arguments to pass to the callable

Returns

The value the callable returns, unaltered

class pymetrics.instruments.Counter(name, initial_value=0, **tags)[source]

Bases: pymetrics.instruments.Metric

A counter, for counting the number of times some thing has happened.

__init__(name: str, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → None[source]

Construct a counter.

Parameters
  • name – The counter name

  • initial_value – The initial value of this counter, which must be an integer

  • tags – The tags associated with this counter (note that not all publishers support tags)

increment(amount: int = 1) → int[source]

Increments this counter’s value by the specified amount.

Parameters

amount – The amount by which to increment the counter, defaults to 1

Returns

The new value

reset(value: Optional[int] = None) → int[source]

Resets this counter to the specified value or the initial value if not specified.

Parameters

value – The value to which to reset this counter, which defaults to the initial value if not specified

Returns

The new value

property value[source]

Returns the value of this counter.

Returns

The counter value

record_over_function(f: Callable[..., ~R], *args: Any, **kwargs: Any) → ~R[source]

Increments this counter and then calls the specified callable with the specified positional and keyword arguments.

Parameters
  • f – The callable to invoke

  • args – The positional arguments to pass to the callable

  • kwargs – The keyword arguments to pass to the callable

Returns

The value the callable returns, unaltered
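The counter semantics above (increment, reset, and record_over_function) can be illustrated with a simplified stand-in class. This is an illustration only, not the PyMetrics implementation:

```python
class SimpleCounter:
    """A simplified stand-in illustrating Counter semantics."""

    def __init__(self, name, initial_value=0):
        self.name = name
        self._initial_value = initial_value
        self._value = initial_value

    @property
    def value(self):
        return self._value

    def increment(self, amount=1):
        self._value += amount
        return self._value

    def reset(self, value=None):
        # Reset to the given value, or to the initial value if not specified
        self._value = self._initial_value if value is None else value
        return self._value

    def record_over_function(self, f, *args, **kwargs):
        # Increment first, then invoke the callable, returning its result unaltered
        self.increment()
        return f(*args, **kwargs)

calls = SimpleCounter('example.calls')
result = calls.record_over_function(lambda x: x * 2, 21)  # increments, returns 42
```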

class pymetrics.instruments.Histogram(name, initial_value=0, **tags)[source]

Bases: pymetrics.instruments.Metric

A histogram is a metric for tracking an arbitrary number of something per named activity.

set(value: Union[int, float, None] = None) → int[source]

Sets this histogram to the specified value or the initial value if not specified.

Parameters

value – The value to which to set this histogram, which defaults to the initial value if not specified

Returns

The new value

class pymetrics.instruments.TimerResolution[source]

Bases: enum.IntEnum

An enum controlling the resolution of published timer values.

MILLISECONDS = 1000[source]

The timer value will be multiplied by 1,000 and then rounded to an integer before publication

MICROSECONDS = 1000000[source]

The timer value will be multiplied by 1,000,000 and then rounded to an integer before publication

NANOSECONDS = 1000000000[source]

The timer value will be multiplied by 1,000,000,000 and then rounded to an integer before publication

class pymetrics.instruments.Timer(name, initial_value=0, resolution=<TimerResolution.MILLISECONDS: 1000>, **tags)[source]

Bases: pymetrics.instruments.Histogram

A timer is simply a specialized histogram that tracks the arbitrary number of milliseconds, microseconds, or nanoseconds per named activity. A single timer instance can be restarted and re-stopped repeatedly, and its value will accumulate/increase by the elapsed time each time the timer is stopped. Only if the timer has never been started and stopped will the initial or set value be used for publication.

__init__(name, initial_value=0, resolution=<TimerResolution.MILLISECONDS: 1000>, **tags) → None[source]

Construct a timer.

Parameters
  • name – The timer name

  • initial_value – The initial value of this timer, which may be an integer or a float (which will be rounded)

  • resolution – The resolution of this timer, which if unset defaults to milliseconds, controls how the value is published (it is multiplied by the resolution factor and then rounded to an integer). It does not affect the initial value, only the value recorded with start / stop and context manager usage

  • tags – The tags associated with this timer (note that not all publishers support tags)

start() → None[source]

Starts the timer.

stop() → None[source]

Stops the timer.

property value[source]

If the timer has been started and stopped, this returns the total elapsed time of the timer multiplied by the resolution and rounded to an integer. Otherwise, this returns the set or initial value of the timer, not multiplied, but rounded to an integer.

Returns

The timer value

__enter__() → <class 'pymetrics.instruments.Timer'>[source]

Starts the timer at the start of a with block.

Returns

self

__exit__(exc_type: Any, exc_value: Any, traceback: Any) → typing_extensions.Literal[False][source]

Stops the timer at the end of a with block, regardless of whether an exception occurred.

Parameters
  • exc_type – Ignored

  • exc_value – Ignored

  • traceback – Ignored

Returns

False

record_over_function(f: Callable[..., ~R], *args: Any, **kwargs: Any) → ~R[source]

Starts this timer, calls the specified callable with the specified positional and keyword arguments, and then stops this timer.

Parameters
  • f – The callable to invoke

  • args – The positional arguments to pass to the callable

  • kwargs – The keyword arguments to pass to the callable

Returns

The value the callable returns, unaltered
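The timer's start/stop accumulation and context-manager behavior can be sketched with a simplified stand-in. This is an illustration only, not the PyMetrics implementation; the resolution factor of 1000 mirrors TimerResolution.MILLISECONDS:

```python
import time

class SimpleTimer:
    """A simplified stand-in illustrating Timer context-manager semantics."""

    def __init__(self, name, resolution=1000):  # 1000 mirrors MILLISECONDS
        self.name = name
        self.resolution = resolution
        self._elapsed = 0.0
        self._started = None

    def start(self):
        self._started = time.perf_counter()

    def stop(self):
        # Accumulate elapsed time so repeated start/stop cycles add up
        self._elapsed += time.perf_counter() - self._started
        self._started = None

    @property
    def value(self):
        # Multiply by the resolution factor and round to an integer
        return int(round(self._elapsed * self.resolution))

    def __enter__(self):
        self.start()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Stop even if an exception occurred; return False to not suppress it
        self.stop()
        return False

with SimpleTimer('example.sleep') as timer:
    time.sleep(0.01)
```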

class pymetrics.instruments.Gauge(name, initial_value=0, **tags)[source]

Bases: pymetrics.instruments.Metric

A gauge is a metric for tracking the ongoing state of something, such as number of items waiting in a queue, size of a database or file system, etc.

__init__(name: str, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → None[source]

Construct a gauge.

Parameters
  • name – The gauge name

  • initial_value – The initial value of this gauge, which must be an integer

  • tags – The tags associated with this gauge (note that not all publishers support tags)

set(value: Optional[int] = None) → int[source]

Sets this gauge to the specified value or the initial value if not specified.

Parameters

value – The value to which to set this gauge, which defaults to the initial value if not specified

Returns

The new value

class pymetrics.recorders.base.MetricsRecorder[source]

Bases: object

abstract clear(only_published: bool = False) → None[source]

Clear all metrics that have been recorded. However, if only_published is True, histograms (and timers) that have not been published will not be cleared.

Parameters

only_published – Whether to leave unpublished histograms

abstract counter(name: str, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → <class 'pymetrics.instruments.Counter'>[source]

Creates a new counter and prepares it for publishing. The initial value is 0 if not specified. Increment the counter after it is returned if you do not specify an initial value.

Parameters
  • name – The name of the metric

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created counter.

abstract gauge(name: str, force_new: bool = False, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → <class 'pymetrics.instruments.Gauge'>[source]

Creates a new gauge and prepares it for publishing. The initial value is 0 if not specified. Set the value after it is returned if you do not specify an initial value.

Parameters
  • name – The name of the metric

  • force_new – Whether to force the creation of a new gauge if there is already an unpublished gauge with the same name and tags.

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created gauge.

abstract histogram(name: str, force_new: bool = False, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → <class 'pymetrics.instruments.Histogram'>[source]

Creates a new histogram for recording arbitrary numbers that can be averaged and summed. The initial value is 0 if not specified. Set the value after it is returned if you do not specify an initial value.

Parameters
  • name – The name of the metric

  • force_new – Whether to force the creation of a new histogram if there is already an unpublished histogram with the same name and tags.

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created histogram.

abstract publish_all() → None[source]

Publishes all metrics that have been recorded since the last publish.

abstract publish_if_full_or_old(max_metrics: int = 18, max_age: int = 10) → None[source]

Publishes all metrics if at least this many metrics have been recorded or at least this much time has elapsed from the previous publish.

Parameters
  • max_metrics – If the recorder is holding at least this many metrics, publish now (defaults to 18, which is a likely-safe amount assuming an MTU of 1500, which is the MTU for Docker containers)

  • max_age – If the recorder last published at least this many seconds ago, publish now, even if the recorder isn’t “full” (isn’t holding on to at least max_metrics).
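The publish decision described above can be sketched as a small predicate (the should_publish helper is hypothetical, for illustration only):

```python
import time

def should_publish(metric_count, last_publish_time, max_metrics=18, max_age=10, now=None):
    # Publish when the buffer is "full" (at least max_metrics metrics held)
    # or "old" (at least max_age seconds since the previous publish).
    now = time.time() if now is None else now
    return metric_count >= max_metrics or (now - last_publish_time) >= max_age
```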

abstract throttled_publish_all(delay: int = 10) → None[source]

Publishes all metrics that have been recorded since the last publish unless it has been less than delay seconds since the last publish.

Parameters

delay – The minimum number of seconds between publishes

abstract timer(name, force_new=False, resolution=<TimerResolution.MILLISECONDS: 1000>, initial_value=0, **tags) → <class 'pymetrics.instruments.Timer'>[source]

Creates and starts a new timer, a special type of histogram that is recorded around the passage of time. The initial value is 0 if not specified, and in most cases you should not specify an initial value. The default resolution is milliseconds, which is suitable for most use cases. The returned timer will need to be stopped before it can be published (or it will be ignored on publication). However, the returned timer is also a context manager, so it will stop itself if you wrap the code you wish to measure in a with timer(...) block.

Parameters
  • name – The name of the metric

  • force_new – Whether to force the creation of a new timer if there is already an unpublished timer with the same name and tags.

  • resolution – The resolution at which the timer should record, which defaults to milliseconds

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created and started timer.

pymetrics.recorders.base.metric_decorator(recorder_fetcher: Callable[[], pymetrics.recorders.base.MetricsRecorder], metric_type: str, metric_name: str, *metric_args: Any, **metric_kwargs: Any) → Callable[[Callable[..., ~R]], Callable[..., ~R]][source]

This decorator can be used on a function or method to provide a shorthand for obtaining the current MetricsRecorder and using it to record a metric around the function invocation. This is an abstract decorator and should be wrapped with your own function to fetch the MetricsRecorder in the way you have designed for your application.

Example usage:

def timer(name, *args, **kwargs):
    return metric_decorator(
        some_func_that_returns_a_metrics_recorder,
        'timer',
        name,
        *args,
        **kwargs,
    )

...

@timer('timer_name')
def some_function(...):
    do_things()

Multiple metrics can be chained this way, too:

@timer('timer_name')
@timer('other_timer_name')
@counter('some_counter')
def some_function(...):
    do_things()

If metric_decorator is called with the bool keyword-only argument include_metric, the created Metric instrument will be passed to the wrapped function or method with an extra keyword argument named metric.

Parameters
  • recorder_fetcher – A callable that returns a MetricsRecorder instance

  • metric_type – A string with value timer or counter (other instruments currently do not support being called with a decorator)

  • metric_name – The metric name to use

  • metric_args – The positional arguments passed to the metric

  • metric_kwargs – The keyword arguments passed to the metric
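A minimal sketch of how such a decorator factory could be structured. The fake recorder and metric classes are hypothetical stand-ins for illustration, not the PyMetrics implementation:

```python
import functools

def metric_decorator(recorder_fetcher, metric_type, metric_name, *metric_args, **metric_kwargs):
    # include_metric is consumed here, not forwarded to the metric constructor
    include_metric = metric_kwargs.pop('include_metric', False)

    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            recorder = recorder_fetcher()
            # Look up e.g. recorder.timer or recorder.counter by attribute name
            metric = getattr(recorder, metric_type)(metric_name, *metric_args, **metric_kwargs)
            if include_metric:
                kwargs['metric'] = metric
            # Record the metric around the wrapped call, returning its result unaltered
            return metric.record_over_function(f, *args, **kwargs)
        return wrapper
    return decorator

# Hypothetical stand-ins so the sketch is runnable without a real recorder
class _FakeMetric:
    def record_over_function(self, f, *args, **kwargs):
        return f(*args, **kwargs)

class _FakeRecorder:
    def timer(self, name, *args, **kwargs):
        return _FakeMetric()

@metric_decorator(lambda: _FakeRecorder(), 'timer', 'demo.timer')
def add(a, b):
    return a + b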

class pymetrics.recorders.default.DefaultMetricsRecorder(prefix, config=None)[source]

Bases: pymetrics.recorders.base.MetricsRecorder

Class Configuration Schema

strict dict: The configuration schema for the default metrics recorder constructor arguments. Without the config key, it will not be able to publish metrics.

  • config - dictionary whose schema switches based on the value of key version (nullable): The configuration schema changes slightly based on which config version you specify.

    • version == 2 - strict dict: (no description)

      • enable_meta_metrics - boolean: If true, meta-metrics will be recorded documenting the performance of PyMetrics itself.

      • error_logger_name - unicode: By default, errors encountered when publishing metrics are suppressed and lost. If this value is truthy, a Logger is created with this name and used to log publication errors.

      • publishers - sequence: The configuration for all publishers.

        values

        dictionary with keys path and kwargs whose kwargs schema switches based on the value of path, dynamically based on class imported from path (see the configuration settings schema documentation for the class named at path). Import path and arguments for a publisher. The imported item at the specified path must be a subclass of pymetrics.publishers.base.MetricsPublisher.

      • version - constant: (no description) (additional information: {'values': [2]})

      Optional keys: enable_meta_metrics, error_logger_name

  • prefix - unicode (nullable): An optional prefix for all metrics names (the period delimiter will be added for you)

Optional keys: config

DjangoImproperlyConfigured[source]

alias of DefaultMetricsRecorder.StubImproperlyConfigured

exception StubImproperlyConfigured[source]

Bases: Exception

A stub exception in case Django can’t be imported.

__init__(prefix: Optional[str], config: Union[Dict[str, Any], None] = None) → None[source]

Construct a new recorder.

Parameters
  • prefix – A nullable prefix, which if non-null will be prepended to all metrics names, with a single period separating the prefix and the metrics name.

  • config – The configuration dictionary complying with the official PyMetrics Conformity configuration schema.

attempted_django_exception_import = False[source]
clear(only_published: bool = False) → None[source]

Clear all metrics that have been recorded. However, if only_published is True, histograms (and timers) that have not been published will not be cleared.

Parameters

only_published – Whether to leave unpublished histograms

configure(config: Union[Dict[str, Any], None] = None) → None[source]
counter(name: str, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → <class 'pymetrics.instruments.Counter'>[source]

Creates a new counter and prepares it for publishing. The initial value is 0 if not specified. Increment the counter after it is returned if you do not specify an initial value.

Parameters
  • name – The name of the metric

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created counter.

django_settings = None[source]
gauge(name: str, force_new: bool = False, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → <class 'pymetrics.instruments.Gauge'>[source]

Creates a new gauge and prepares it for publishing. The initial value is 0 if not specified. Set the value after it is returned if you do not specify an initial value.

Parameters
  • name – The name of the metric

  • force_new – Whether to force the creation of a new gauge if there is already an unpublished gauge with the same name and tags.

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created gauge.

get_all_metrics() → List[pymetrics.instruments.Metric][source]
classmethod get_config_from_django() → Union[Dict[str, Any], None][source]

When not in Django context, this is a no-op, returning None. Otherwise, it attempts to return the METRICS setting from Django settings, if it exists, or the metrics setting from the SOA_SERVER_SETTINGS setting from Django settings, if it exists. METRICS is a standard established by this library. SOA_SERVER_SETTINGS is a standard established in PySOA, which uses PyMetrics.

Returns

The configuration dict from Django settings, if it exists, or None

classmethod get_django_settings()[source]
histogram(name: str, force_new: bool = False, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → <class 'pymetrics.instruments.Histogram'>[source]

Creates a new histogram for recording arbitrary numbers that can be averaged and summed. The initial value is 0 if not specified. Set the value after it is returned if you do not specify an initial value.

Parameters
  • name – The name of the metric

  • force_new – Whether to force the creation of a new histogram if there is already an unpublished histogram with the same name and tags.

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created histogram.

property is_configured[source]
publish_all() → None[source]

Publishes all metrics that have been recorded since the last publish.

publish_if_full_or_old(max_metrics: int = 18, max_age: int = 10) → None[source]

Publishes all metrics if at least this many metrics have been recorded or at least this much time has elapsed from the previous publish.

Parameters
  • max_metrics – If the recorder is holding at least this many metrics, publish now (defaults to 18, which is a likely-safe amount assuming an MTU of 1500, which is the MTU for Docker containers)

  • max_age – If the recorder last published at least this many seconds ago, publish now, even if the recorder isn’t “full” (isn’t holding on to at least max_metrics).

throttled_publish_all(delay: int = 10) → None[source]

Publishes all metrics that have been recorded since the last publish unless it has been less than delay seconds since the last publish.

Parameters

delay – The minimum number of seconds between publishes

timer(name, force_new=False, resolution=<TimerResolution.MILLISECONDS: 1000>, initial_value=0, **tags) → <class 'pymetrics.instruments.Timer'>[source]

Creates and starts a new timer, a special type of histogram that is recorded around the passage of time. The initial value is 0 if not specified, and in most cases you should not specify an initial value. The default resolution is milliseconds, which is suitable for most use cases. The returned timer will need to be stopped before it can be published (or it will be ignored on publication). However, the returned timer is also a context manager, so it will stop itself if you wrap the code you wish to measure in a with timer(...) block.

Parameters
  • name – The name of the metric

  • force_new – Whether to force the creation of a new timer if there is already an unpublished timer with the same name and tags.

  • resolution – The resolution at which the timer should record, which defaults to milliseconds

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created and started timer.

class pymetrics.recorders.noop.NonOperationalMetricsRecorder[source]

Bases: pymetrics.recorders.base.MetricsRecorder

A special metrics recorder that ignores configuration and doesn’t keep track of or publish any metrics, useful for testing and defaulting a metrics variable to eliminate conditional metrics recording.

Class Configuration Schema

strict dict: The no-op recorder has no constructor arguments.

No keys permitted.

clear(only_published: bool = False) → None[source]

Does nothing

counter(name: str, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → <class 'pymetrics.instruments.Counter'>[source]

Creates a new counter and prepares it for publishing. The initial value is 0 if not specified. Increment the counter after it is returned if you do not specify an initial value.

Parameters
  • name – The name of the metric

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created counter.

gauge(name: str, force_new: bool = False, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → <class 'pymetrics.instruments.Gauge'>[source]

Creates a new gauge and prepares it for publishing. The initial value is 0 if not specified. Set the value after it is returned if you do not specify an initial value.

Parameters
  • name – The name of the metric

  • force_new – Whether to force the creation of a new gauge if there is already an unpublished gauge with the same name and tags.

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created gauge.

histogram(name: str, force_new: bool = False, initial_value: int = 0, **tags: Union[str, bytes, int, float, bool, None]) → <class 'pymetrics.instruments.Histogram'>[source]

Creates a new histogram for recording arbitrary numbers that can be averaged and summed. The initial value is 0 if not specified. Set the value after it is returned if you do not specify an initial value.

Parameters
  • name – The name of the metric

  • force_new – Whether to force the creation of a new histogram if there is already an unpublished histogram with the same name and tags.

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created histogram.

publish_all() → None[source]

Does nothing

publish_if_full_or_old(max_metrics: int = 18, max_age: int = 10) → None[source]

Does nothing

throttled_publish_all(delay: int = 10) → None[source]

Does nothing

timer(name, force_new=False, resolution=<TimerResolution.MILLISECONDS: 1000>, initial_value=0, **tags) → <class 'pymetrics.instruments.Timer'>[source]

Creates and starts a new timer, a special type of histogram that is recorded around the passage of time. The initial value is 0 if not specified, and in most cases you should not specify an initial value. The default resolution is milliseconds, which is suitable for most use cases. The returned timer will need to be stopped before it can be published (or it will be ignored on publication). However, the returned timer is also a context manager, so it will stop itself if you wrap the code you wish to measure in a with timer(...) block.

Parameters
  • name – The name of the metric

  • force_new – Whether to force the creation of a new timer if there is already an unpublished timer with the same name and tags.

  • resolution – The resolution at which the timer should record, which defaults to milliseconds

  • initial_value – The initial value, which defaults to 0

  • tags – Any additional tags you want associated with this metric

Returns

the created and started timer.

pymetrics.recorders.noop.noop_metrics = <pymetrics.recorders.noop.NonOperationalMetricsRecorder object>[source]

A singleton instance of NonOperationalMetricsRecorder.

class pymetrics.publishers.base.MetricsPublisher[source]

Bases: object

abstract publish(metrics: Iterable[pymetrics.instruments.Metric], error_logger: str = None, enable_meta_metrics: bool = False) → None[source]

Publish the provided metrics in the manner prescribed by the implementation’s documentation.

Parameters
  • metrics – An iterable of all metrics that should be published

  • error_logger – The name of the error logger that should be used if an error occurs (if no error logger name is provided, errors will be suppressed)

  • enable_meta_metrics – If True, metrics about the performance of this publisher will also be recorded (not all publishers will have meta-metrics to record).

class pymetrics.publishers.null.NullPublisher[source]

Bases: pymetrics.publishers.base.MetricsPublisher

Class Configuration Schema

strict dict: (no description)

No keys permitted.

publish(metrics, error_logger=None, enable_meta_metrics=False)[source]

Does nothing

class pymetrics.publishers.logging.LogPublisher(log_name, log_level=20)[source]

Bases: pymetrics.publishers.base.MetricsPublisher

Class Configuration Schema

strict dict: (no description)

  • log_level - any of the types bulleted below: The log level (name or int) for publishing metrics, defaults to logging.INFO

    • constant: (no description) (additional information: {'values': [10, 20, 30, 40, 50]})

    • constant: (no description) (additional information: {'values': ['CRITICAL', 'DEBUG', 'ERROR', 'INFO', 'WARNING']})

  • log_name - unicode: The name of the logger to which to publish metrics

Optional keys: log_level

__init__(log_name: str, log_level: Union[int, str] = 20) → None[source]

Initialize self. See help(type(self)) for accurate signature.

publish(metrics: Iterable[pymetrics.instruments.Metric], error_logger: str = None, enable_meta_metrics: bool = False) → None[source]

Publish the provided metrics in the manner prescribed by the implementation’s documentation.

Parameters
  • metrics – An iterable of all metrics that should be published

  • error_logger – The name of the error logger that should be used if an error occurs (if no error logger name is provided, errors will be suppressed)

  • enable_meta_metrics – If True, metrics about the performance of this publisher will also be recorded (not all publishers will have meta-metrics to record).

class pymetrics.publishers.statsd.StatsdPublisher(host, port, network_timeout=0.5, maximum_packet_size=65000)[source]

Bases: pymetrics.publishers.base.MetricsPublisher

A publisher that emits UDP metrics packets to a Statsd consumer over a network connection.

For Statsd metric type suffixes, see https://github.com/etsy/statsd/blob/master/docs/metric_types.md.
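Each metric is serialized to the Statsd line protocol, <name>:<value>|<type>, before being packed into UDP datagrams. A minimal sketch of that serialization (the format_statsd_datagram helper is hypothetical, not the library's actual formatter):

```python
def format_statsd_datagram(name, value, metric_type):
    # Statsd line protocol: <metric name>:<value>|<type suffix>
    # e.g. 'c' for counters, 'g' for gauges, 'ms' for timers/histograms
    return '{}:{}|{}'.format(name, value, metric_type).encode('utf-8')

counter_datagram = format_statsd_datagram('requests.count', 3, 'c')
timer_datagram = format_statsd_datagram('request.time', 42, 'ms')
```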

Class Configuration Schema

strict dict: (no description)

  • host - unicode: The host name or IP address on which the Statsd server is listening

  • maximum_packet_size - integer: The maximum packet size to send (packets will be fragmented above this limit), defaults to 65000 bytes.

  • network_timeout - any of the types bulleted below: The network timeout

    • float: (no description) (additional information: {'gt': 0.0})

    • integer: (no description) (additional information: {'gt': 0})

  • port - integer: The port number on which the Statsd server is listening

Optional keys: maximum_packet_size, network_timeout

MAXIMUM_PACKET_SIZE = 65000[source]

Maximum size of a localhost UDP packet, used as both the upper bound and the default maximum packet size for this publisher (it can be configured lower to account for a lower MTU).

METRIC_TYPE_COUNTER = b'c'[source]
METRIC_TYPE_GAUGE = b'g'[source]
METRIC_TYPE_HISTOGRAM = b'ms'[source]
METRIC_TYPE_TIMER = b'ms'[source]
__init__(host: str, port: int, network_timeout: Union[int, float] = 0.5, maximum_packet_size: int = 65000) → None[source]

Initialize self. See help(type(self)) for accurate signature.

get_formatted_metrics(metrics: Iterable[pymetrics.instruments.Metric], enable_meta_metrics: bool = False) → List[bytes][source]
publish(metrics: Iterable[pymetrics.instruments.Metric], error_logger: str = None, enable_meta_metrics: bool = False) → None[source]

Publish the provided metrics in the manner prescribed by the implementation’s documentation.

Parameters
  • metrics – An iterable of all metrics that should be published

  • error_logger – The name of the error logger that should be used if an error occurs (if no error logger name is provided, errors will be suppressed)

  • enable_meta_metrics – If True, metrics about the performance of this publisher will also be recorded (not all publishers will have meta-metrics to record).

class pymetrics.publishers.datadog.DogStatsdPublisher(host, port, network_timeout=0.5, global_tags=None, extra_gauge_tags=None, use_distributions=False, maximum_packet_size=8000)[source]

Bases: pymetrics.publishers.statsd.StatsdPublisher

A special version of the Statsd publisher that understands the DataDog extensions to the Statsd protocol (histograms, distributions, and tags).

For DogStatsd metric type suffixes, see https://docs.datadoghq.com/developers/dogstatsd/#datagram-format.

Class Configuration Schema

strict dict: (no description)

  • extra_gauge_tags - flexible dict: Extra datadog tags, in addition to global_tags if applicable, to apply to all published gauges. This is necessary when multiple processes are simultaneously publishing gauges with the same name and you need to create charts or monitors that sum the values of all of these gauges across all processes (because Datadog does not support identical distributed gauge names+tags and will eliminate duplicates).

    keys

    unicode: (no description)

    values

    any of the types bulleted below (nullable): (no description)

    • unicode: (no description)

    • bytes: (no description)

    • integer: (no description)

    • float: (no description)

    • boolean: (no description)

  • global_tags - flexible dict: Datadog tags to apply to all published metrics.

    keys

    unicode: (no description)

    values

    any of the types bulleted below (nullable): (no description)

    • unicode: (no description)

    • bytes: (no description)

    • integer: (no description)

    • float: (no description)

    • boolean: (no description)

  • host - unicode: The host name or IP address on which the Dogstatsd server is listening

  • maximum_packet_size - integer: The maximum packet size to send (packets will be fragmented above this limit), defaults to 8000 bytes.

  • network_timeout - any of the types bulleted below: The network timeout

    • float: (no description) (additional information: {'gt': 0.0})

    • integer: (no description) (additional information: {'gt': 0})

  • port - integer: The port number on which the Dogstatsd server is listening

  • use_distributions - boolean: Whether to publish histograms and timers as Datadog distributions. For more information about Datadog distributions, see https://docs.datadoghq.com/graphing/metrics/distributions/. Defaults to False.

Optional keys: extra_gauge_tags, global_tags, maximum_packet_size, network_timeout, use_distributions
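A `publishers` entry following the schema above might be written as in this sketch; the tag names and values are illustrative placeholders, not library defaults:

```python
# A hypothetical 'publishers' entry for the DogStatsd publisher, matching the
# class configuration schema above; tag names/values are placeholders.
dogstatsd_publisher = {
    'path': 'pymetrics.publishers.datadog.DogStatsdPublisher',
    'kwargs': {
        'host': 'localhost',
        'port': 8125,
        'global_tags': {'environment': 'staging', 'region': 'us-east-1'},
        # Per-process tags keep identically-named gauges from different
        # processes distinct, as described for extra_gauge_tags above.
        'extra_gauge_tags': {'process_id': 12345},
        'use_distributions': True,
    },
}
```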

MAXIMUM_PACKET_SIZE = 8000[source]

Maximum size of the DogStatsd packet buffer, which is used as both the maximum and the default packet size for this publisher (it can be configured lower to account for a lower MTU). See https://github.com/DataDog/dd-agent/issues/2638 and https://github.com/DataDog/dd-agent/blob/e805a9a2022803d832cf7f7d8fa8895fd686945c/dogstatsd.py#L367.

METRIC_TYPE_DISTRIBUTION = b'd'[source]
METRIC_TYPE_HISTOGRAM = b'h'[source]
__init__(host: str, port: int, network_timeout: Union[int, float] = 0.5, global_tags: Dict[str, Union[str, bytes, int, float, bool, None]] = None, extra_gauge_tags: Dict[str, Union[str, bytes, int, float, bool, None]] = None, use_distributions: bool = False, maximum_packet_size: int = 8000) → None[source]

Initialize self. See help(type(self)) for accurate signature.

get_formatted_metrics(metrics: Iterable[pymetrics.instruments.Metric], enable_meta_metrics: bool = False) → List[bytes][source]
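The DogStatsd datagram format referenced above (`name:value|type`, with tags appended after `|#`) can be illustrated with a small stand-alone formatter. This is a sketch of the wire format described at the Datadog documentation link above, not the library's actual `get_formatted_metrics` implementation:

```python
from typing import Dict, Optional


def format_dogstatsd_datagram(name: str, value: int, metric_type: bytes,
                              tags: Optional[Dict[str, str]] = None) -> bytes:
    """Build one DogStatsd-style datagram: b'name:value|type|#k:v,k:v'."""
    datagram = '{}:{}'.format(name, value).encode('utf-8') + b'|' + metric_type
    if tags:
        # Tags are comma-separated key:value pairs after a '|#' separator.
        tag_str = ','.join('{}:{}'.format(k, v) for k, v in sorted(tags.items()))
        datagram += b'|#' + tag_str.encode('utf-8')
    return datagram


# A counter increment tagged with an environment:
print(format_dogstatsd_datagram('jobs.completed', 1, b'c', {'env': 'prod'}))
# → b'jobs.completed:1|c|#env:prod'
# A histogram value using the Datadog-specific 'h' type suffix:
print(format_dogstatsd_datagram('request.size', 512, b'h'))
# → b'request.size:512|h'
```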
class pymetrics.publishers.sql.SqlPublisher[source]

Bases: pymetrics.publishers.base.MetricsPublisher

Abstract base class for publishers that publish to SQL databases of any type. Subclasses should implement all the backend-specific logic.

database_type = None[source]

The name of the database backend type, used when logging errors.

exception_type[source]

alias of builtins.Exception

abstract execute_statement_multiple_times(statement: str, arguments: Generator[Tuple[Any, ...], None, None]) → None[source]

Execute a given statement multiple times (possibly prepared) using the given generator of argument tuples.

Parameters
  • statement – The SQL statement

  • arguments – A generator of tuples of arguments, one tuple for each time the statement should be executed
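This contract maps naturally onto DB-API 2.0's `executemany`. A minimal concrete implementation for Sqlite, sketched here with the standard library only (this is an assumed illustration, not the library's code), could look like:

```python
import sqlite3
from typing import Any, Generator, Tuple


def execute_statement_multiple_times(connection: sqlite3.Connection,
                                     statement: str,
                                     arguments: Generator[Tuple[Any, ...], None, None]) -> None:
    """Run the statement once per argument tuple via DB-API 2.0 executemany."""
    cursor = connection.cursor()
    try:
        cursor.executemany(statement, arguments)
        connection.commit()
    finally:
        cursor.close()


# Demonstration against an in-memory database; the table name is illustrative.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE counters (name TEXT, value INTEGER)')
execute_statement_multiple_times(
    conn,
    'INSERT INTO counters (name, value) VALUES (?, ?)',
    (row for row in [('jobs.started', 3), ('jobs.finished', 2)]),
)
print(conn.execute('SELECT COUNT(*) FROM counters').fetchone()[0])  # → 2
```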

abstract initialize_if_necessary() → None[source]

Initialize the database connection, schema, etc., if necessary.

insert_counters(counters: Iterable[pymetrics.instruments.Counter]) → None[source]
insert_histograms(histograms: Iterable[pymetrics.instruments.Histogram]) → None[source]
insert_or_update_gauges(gauges: Iterable[pymetrics.instruments.Gauge]) → None[source]
insert_timers(timers: Iterable[pymetrics.instruments.Timer]) → None[source]
publish(metrics: Iterable[pymetrics.instruments.Metric], error_logger: str = None, enable_meta_metrics: bool = False) → None[source]

Publish the provided metrics in the manner prescribed by the implementation’s documentation.

Parameters
  • metrics – An iterable of all metrics that should be published

  • error_logger – The name of the error logger that should be used if an error occurs (if no error logger name is provided, errors will be suppressed)

  • enable_meta_metrics – If True, metrics about the performance of this publisher will also be recorded (not all publishers will have meta-metrics to record).

class pymetrics.publishers.sqlite.SqlitePublisher(database_name=':memory:', use_uri=False)[source]

Bases: pymetrics.publishers.sql.SqlPublisher

A publisher that emits metrics to a Sqlite database file or in-memory database. Especially useful for use in tests where you need to actually evaluate your metrics.

Class Configuration Schema

strict dict: (no description)

  • database_name - unicode: The name of the Sqlite database to use

  • use_uri - boolean: Whether the database name should be treated as a URI (Python 3+ only)

Optional keys: database_name, use_uri
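The `use_uri` flag corresponds to the standard library's URI-filename support in `sqlite3.connect(..., uri=True)`, which lets the database name carry query parameters. A stdlib-only sketch of what URI-style names look like (the table name here is a placeholder, not something PyMetrics creates):

```python
import sqlite3

# With uri=True (the analogue of use_uri=True), the database name is parsed
# as a URI. 'file::memory:' names a private in-memory database, equivalent
# to the plain ':memory:' name.
conn = sqlite3.connect('file::memory:', uri=True)
conn.execute('CREATE TABLE pymetrics_counters (name TEXT, value INTEGER)')
row = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchone()
print(row[0])  # → pymetrics_counters
```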

MEMORY_DATABASE_NAME = ':memory:'[source]
__init__(database_name=':memory:', use_uri=False) → None[source]

Initialize self. See help(type(self)) for accurate signature.

classmethod clear_metrics_from_database(connection: sqlite3.Connection) → None[source]
static connection_context(connection: sqlite3.Connection) → Generator[sqlite3.Cursor, None, None][source]
database_context() → Generator[sqlite3.Cursor, None, None][source]
database_type = 'Sqlite'[source]
exception_type[source]

alias of sqlite3.Error

execute_statement_multiple_times(statement: str, arguments: Generator[Tuple[Any, ...], None, None]) → None[source]

Execute a given statement multiple times (possibly prepared) using the given generator of argument tuples.

Parameters
  • statement – The SQL statement

  • arguments – A generator of tuples of arguments, one tuple for each time the statement should be executed

classmethod get_connection(database_name=':memory:', use_uri=False) → pymetrics.publishers.sqlite.Sqlite3Connection[source]
initialize_if_necessary() → None[source]

Initialize the database connection, schema, etc., if necessary.

Copyright © 2019 Eventbrite, freely licensed under Apache License, Version 2.0.

Documentation generated 2019 December 12 04:57 UTC.