
autometrics-rs



A Rust macro that makes it easy to understand the error rate, response time, and production usage of any function in your code.

Jump from your IDE to live Prometheus charts for each HTTP/RPC handler, database method, or other piece of application logic.

(Demo video: autometrics.mp4)

use autometrics::autometrics;

#[autometrics]
pub async fn create_user() {
  // Now this function will be producing metrics!
}
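
The error-rate side of those metrics comes from the function's return value. As a minimal sketch (delete_user and its error type are illustrative, not part of the library), returning a Result is enough for the macro to distinguish successful calls from errors:

use autometrics::autometrics;

// Sketch: functions that return a Result are recorded as successes or errors,
// which is what powers the generated error-rate queries.
#[autometrics]
pub async fn delete_user(id: u64) -> Result<(), String> {
  if id == 0 {
    return Err("unknown user".to_string());
  }
  Ok(())
}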

Features

  • ✨ #[autometrics] macro instruments any function or impl block to track the most useful metrics (see the sketch after this list)
  • 💡 Writes Prometheus queries so you can understand the data generated without knowing PromQL
  • 🔗 Injects links to live Prometheus charts directly into each function's doc comments
  • 📊 Grafana dashboards to visualize the performance of all instrumented functions
  • 🚨 Enable Prometheus alerts using SLO best practices from simple annotations in your code
  • ⚙️ Configurable metric collection library (opentelemetry, prometheus, or metrics)
  • ⚡ Minimal runtime overhead
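
For instance, placing the macro on an impl block instruments every method inside it. A rough sketch (the Database type and method here are made up for illustration):

use autometrics::autometrics;

struct Database;

// Applying #[autometrics] to the impl block instruments all of its methods.
#[autometrics]
impl Database {
  pub async fn get_user(&self, id: u64) -> Option<String> {
    // Hypothetical lookup; every call to this method is now tracked.
    let _ = id;
    None
  }
}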

See Why Autometrics? for more details on the ideas behind autometrics.

Examples

To see autometrics in action:

  1. Install Prometheus locally
  2. Run the complete example:
cargo run -p example-full-api
  3. Hover over the function names to see the generated query links (as in the demo above) and try clicking on them to go straight to that Prometheus chart.

See the other examples for details on how to use the various features and integrations.

Or run the example in Gitpod.

Exporting Prometheus Metrics

Prometheus works by polling a specific HTTP endpoint on your server to collect the current state of all the metrics your application holds in memory.

For projects not currently using Prometheus metrics

Autometrics includes optional helper functions that collect the metrics and encode them in the format Prometheus scrapes.

In your Cargo.toml file, enable the optional prometheus-exporter feature:

autometrics = { version = "*", features = ["prometheus-exporter"] }

Then, call the global_metrics_exporter function in your main function:

pub fn main() {
  // Set up the global metrics exporter (keep the returned value in scope)
  let _exporter = autometrics::global_metrics_exporter();
  // ...
}

And create a route on your API (probably mounted under /metrics) that returns the following:

pub fn get_metrics() -> (http::StatusCode, String) {
  match autometrics::encode_global_metrics() {
    Ok(metrics) => (http::StatusCode::OK, metrics),
    Err(err) => (http::StatusCode::INTERNAL_SERVER_ERROR, format!("{:?}", err))
  }
}
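
As a usage sketch only, assuming the axum web framework and a matching version of the http crate (the tuple returned above already implements axum's IntoResponse), the handler can be mounted like any other route:

use axum::{routing::get, Router};

// Sketch: expose get_metrics at /metrics so Prometheus can scrape it.
fn app() -> Router {
  Router::new().route("/metrics", get(|| async { get_metrics() }))
}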

For projects already using custom Prometheus metrics

Autometrics uses existing metrics libraries (see below) to produce and collect metrics.

If you are already using one of these to collect and export metrics, simply configure autometrics to use the same library and the metrics it produces will be exported alongside yours. You do not need to use the Prometheus exporter functions this library provides and you do not need a separate endpoint for autometrics' metrics.
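
For example, if your project already records metrics with the prometheus crate, enabling the matching feature lets autometrics produce its metrics through the same library so they are exported alongside yours. A sketch of the Cargo.toml entry (adjust the version; disabling the default opentelemetry feature is an assumption here, on the basis that only one backend should be active):

# Sketch: use the prometheus backend instead of the default opentelemetry one
autometrics = { version = "*", default-features = false, features = ["prometheus"] }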

Dashboards

Autometrics provides Grafana dashboards that will work for any project instrumented with the library.

Alerts / SLOs

Autometrics makes it easy to add Prometheus alerts using Service-Level Objectives (SLOs) to a function or group of functions.

This works using pre-defined Prometheus alerting rules. By default, these rules are dormant: they are only activated by specific metric labels that autometrics can attach automatically.

To use autometrics SLOs and alerts, create one or more Objectives based on the functions' success rate and/or latency, as shown below. The Objective can then be passed as an argument to the autometrics macro to include the given function in that objective.

use autometrics::autometrics;
use autometrics::objectives::{Objective, ObjectiveLatency, ObjectivePercentile};

const API_SLO: Objective = Objective::new("api")
    .success_rate(ObjectivePercentile::P99_9)
    .latency(ObjectiveLatency::Ms250, ObjectivePercentile::P99);

#[autometrics(objective = API_SLO)]
pub fn api_handler() {
  // ...
}

Once you've added objectives to your code, you can use the Autometrics Service-Level Objectives (SLO) Dashboard to visualize the current status of your objective(s).

Configuring Autometrics

Custom Prometheus URL

By default, Autometrics creates Prometheus query links that point to http://localhost:9090.

You can configure a custom Prometheus URL using a build-time environment variable set in your build.rs file:

// build.rs

fn main() {
  let prometheus_url = "https://your-prometheus-url.example";
  println!("cargo:rustc-env=PROMETHEUS_URL={prometheus_url}");
}

When using Rust Analyzer, you may need to reload the workspace in order for URL changes to take effect.

Note that the Prometheus URL is only included in function documentation comments, so changing it has no impact on the final compiled binary.

Feature flags

  • prometheus-exporter - exports a Prometheus metrics collector and exporter (compatible with any of the Metrics Libraries)
  • custom-objective-latency - by default, Autometrics only supports a fixed set of latency thresholds for objectives. Enable this to use custom latency thresholds. Note, however, that the custom latency must match one of the buckets configured for your histogram, meaning you will not be able to use the default Prometheus exporter. This is not currently compatible with the prometheus or prometheus-exporter feature.
  • custom-objective-percentile - by default, Autometrics only supports a fixed set of objective percentiles. Enable this to use a custom percentile. Note, however, that using custom percentiles requires generating a different recording and alerting rules file using the CLI + Sloth.

Metrics Libraries

Configure the crate that autometrics will use to produce metrics by using one of the following feature flags:

  • opentelemetry (enabled by default) - use the opentelemetry crate for producing metrics
  • metrics - use the metrics crate for producing metrics
  • prometheus - use the prometheus crate for producing metrics
