# The dipstick handbook

This handbook's purpose is to get you started instrumenting your apps and give an idea of what's possible.

It is not a full-on cookbook (yet) - some reader experimentation may be required!

# Background

Dipstick is a structured metrics library that lets you combine, select from, and switch between multiple metrics backends.

Counters, timers and gauges declared in the code are not tied to a specific implementation. This has multiple benefits:

- Simplified instrumentation:
For example, a single Counter instrument can be used to send metrics to the log and to a network aggregator.
This prevents duplication of instrumentation, which in turn prevents errors.

- Flexible configuration:
Let's say an application defines a "fixed" metrics stack on initialization.
We could switch to a metrics stack defined by a configuration file without altering the instruments in every module.
Or we could make the configuration hot-reloadable, suddenly using different output formats.

- Easier metrics testing:
For example, using a compile-time feature switch, metrics could be collected directly to a hash map at test time
but be sent over the network and written to a file at runtime, as described by external configuration.

# API Overview

Dipstick's API is split between _input_ and _output_ layers.

The input layer provides named metrics such as counters and timers to be used by the application.

The output layer controls how metric values will be recorded and emitted by the configured backend(s).

Input and output layers are decoupled, making code instrumentation independent of output configuration.

Intermediates can also be added between input and output for specific features or performance characteristics.

Although this handbook covers input before output, metrics can certainly be implemented the other way around.

For more details, consult the [docs](https://docs.rs/dipstick/).

## Metrics Input

A metrics library's first job is to help a program collect measurements about its operations.

Dipstick provides a restricted but robust set of _five_ instrument types, taking a stance against
an application's functional code having to pick what statistics should be tracked for each defined metric.

This helps to enforce contracts with downstream metrics systems and keeps code free of configuration elements.

### Counters

Counters count a quantity of elements processed, for example, the number of bytes received in a read operation.

Counters only accept positive values.

### Markers

Markers are counters that can only be incremented by one (i.e. they are _monotonic_ counters).

Markers are useful to count the processing of individual events, or the occurrence of errors.

The default statistics for markers are not the same as those for counters.

Markers offer a safer API than counters, preventing values other than "1" from being passed.

### Timers

Timers measure an operation's duration.

Timers can be used with the `time!` macro, wrapped around a closure, or with explicit calls to `start()` and `stop()`.

```rust
use dipstick::*;

fn main() {
    let metrics = Stream::write_to_stdout().metrics();
    let timer = metrics.timer("my_timer");

    // using macro
    time!(timer, {/* timed code here ... */} );

    // using closure
    timer.time(|| {/* timed code here ... */} );

    // using start/stop
    let handle = timer.start();
    /* timed code here ... */
    timer.stop(handle);

    // directly reporting microseconds
    timer.interval_us(123_456);
}
```

Time intervals are measured in microseconds, and can be scaled down (milliseconds, seconds...) on output.

Internally, timers use nanosecond precision, but their actual accuracy will depend on the platform's OS and hardware.

Note that Dipstick's embedded and always-on nature makes its time measurement goals different from those of a full-fledged profiler.

Simplicity, flexibility and low impact on application performance take precedence over accuracy.

Timers should still offer more than reasonable performance for most I/O and high-level CPU operations.

### Levels

Levels are relative, cumulative counters.

Compared to counters:

- Levels accept positive and negative values.

- A level's aggregation statistics track the minimum and maximum _sum_ of values at any point,
rather than the min/max of individually observed values.

```rust
use dipstick::*;

fn main() {
    let metrics = Stream::write_to_stdout().metrics();
    let queue_length = metrics.level("queue_length");
    queue_length.adjust(-2);
    queue_length.adjust(4);
}
```

Levels are halfway between counters and gauges and may be preferred to either in some situations.

### Gauges

Gauges are used to record instant observations of a resource's value.

Gauge values can be positive or negative, but are non-cumulative.

As such, a gauge's aggregated statistics are simply the mean, max and min values.

Values can be observed for gauges at any moment, like any other metric.

```rust
use dipstick::*;

fn main() {
    let metrics = Stream::write_to_stdout().metrics();
    let uptime = metrics.gauge("uptime");
    uptime.value(2);
}
```

### Observers

The observation of values for any metric can be triggered on schedule or upon publication.

This mechanism can be used for automatic reporting of gauge values:

```rust
use dipstick::*;
use std::time::{Duration, Instant};

fn main() {
    let metrics = Stream::write_to_stdout().metrics();

    // observe a constant value before each flush
    let uptime = metrics.gauge("uptime");
    metrics.observe(uptime, |_| 6).on_flush();

    // observe a function-provided value periodically
    let _handle = metrics
        .observe(metrics.gauge("threads"), thread_count)
        .every(Duration::from_secs(1));
}

fn thread_count(_now: Instant) -> MetricValue {
    6
}
```

Observations triggered `on_flush` take place _before_ metrics are published, allowing last-moment insertion of metric values.

Scheduling could also be used to set up a "heartbeat" metric:

```rust
use dipstick::*;
use std::time::Duration;

fn main() {
    let metrics = Graphite::send_to("localhost:2003")
        .expect("Connected")
        .metrics();
    let heartbeat = metrics.marker("heartbeat");

    // update a metric with a constant value every 5 sec
    let _handle = metrics.observe(heartbeat, |_| 1)
        .every(Duration::from_secs(5));
}
```

Scheduled operations can be cancelled at any time using the returned `CancelHandle`.

Scheduled operations are also cancelled automatically when the metrics input scope they were attached to is `Drop`ped,
making them more useful with persistent, statically declared `metrics!()`.

Observation scheduling is done on a best-effort basis by a simple but efficient internal single-thread scheduler.

Be mindful of measurement callback performance to prevent slippage of subsequent observation tasks.

The scheduler only runs if scheduling is used.

Once started, the scheduler thread will run a low-overhead wait loop until the application is terminated.

### Names

Each metric is given a simple name upon instantiation.

Names are opaque to the application and are used only to identify the metrics upon output.

Names may be prepended with an application namespace shared across all backends.

```rust
use dipstick::*;

fn main() {
    let stdout = Stream::write_to_stdout();
    let metrics = stdout.metrics();

    // plainly name "timer"
    let _timer = metrics.timer("timer");

    // prepend metrics namespace
    let db_metrics = metrics.named("database");

    // qualified name will be "database.counter"
    let _db_counter = db_metrics.counter("counter");
}
```

Names may also be prepended with a namespace by each configured backend.

For example, the metric named `success`, declared under the namespace `request`, could appear under different qualified names:

- logging as `app_module.request.success`
- statsd as `environment.hostname.pid.module.request.success`

Aggregation statistics may also append identifiers to the metric's name, such as `counter_mean` or `marker_rate`.

Names should exclude characters that can interfere with namespaces, separators and output protocols.

A good convention is to stick with lowercase alphanumeric identifiers of less than 12 characters.

Note that highly dynamic elements in metric names are usually better handled using `Labels`.

### Labels

Some backends (such as Prometheus) allow "tagging" the metrics with labels to provide additional context,
such as the URL or HTTP method requested from a web server.

Dipstick offers the thread-local `ThreadLabel` and global `AppLabel` context maps to transparently carry
metadata to the backends configured to use it.

Notes about labels:

- Using labels may incur a significant runtime cost because
of the additional implicit parameter that has to be carried around.

- Labels' runtime cost may be even higher if async queuing is used,
since the current context has to be persisted across threads.

- While internally supported, single metric labels are not yet part of the input API.
If this is important to you, consider using dynamically defined metrics or open a GitHub issue!

### Static vs dynamic metrics

Metric inputs are usually set up statically upon application startup.

```rust
use dipstick::*;

metrics!("my_app" => {
    COUNTER_A: Counter = "counter_a";
});

fn main() {
    Proxy::default_target(Stream::write_to_stdout().metrics());
    COUNTER_A.count(11);
}
```

The static metric definition macro is just a `lazy_static!` wrapper.

### Dynamic metrics

If necessary, metrics can also be defined "dynamically".

This is more flexible but has a higher runtime cost, which may be alleviated with the optional caching mechanism.

```rust
use dipstick::*;

fn main() {
    let user_name = "john_day";
    let app_metrics = Log::to_log().cached(512).metrics();
    app_metrics.gauge(&format!("gauge_for_user_{}", user_name)).value(44);
}
```

Alternatively, you may use `Labels` to output context-dependent metrics.

## Metrics Output

A metrics library's second job is to help a program emit metric values that can be used in further systems.

Dipstick provides an assortment of drivers for network or local metrics output.

Multiple outputs can be used at a time, each with its own configuration.

### Types

These output types are provided; some are extensible, and you may write your own if you need to.

- Stream: Write values to any `Write` trait implementer, including files, stderr and stdout.
- Log: Write values to the log using the `log` crate.
- Map: Insert metric values in a map. Useful for testing or programmatic retrieval of stats.
- Statsd: Send metrics over UDP using the statsd format. Allows sampling of values.
- Graphite: Send metrics over TCP using the graphite format.
- Prometheus: Send metrics to a Prometheus "PushGateway" using the Prometheus 2.0 text format.

### Attributes

Attributes change the outputs' behavior.

#### Prefixes

Outputs can be given prefixes.

Prefixes are prepended to the metric names emitted by this output.

With network outputs, a typical use of prefixes is to identify the network host,
environment and application that metrics originate from.

#### Formatting

Stream and Log outputs have configurable formatting that enables usage of custom templates.

Other outputs, such as Graphite, have a fixed format because they're intended to be processed by a downstream system.

#### Buffering

Most outputs provide optional buffering, which can be used to optimize throughput at the expense of higher latency.

If enabled, buffering is usually a best-effort affair, to safely limit the amount of memory that is used by the metrics.

#### Sampling

Some outputs, such as statsd, also have the ability to sample metrics values.

If enabled, sampling is done using pcg32, a fast random algorithm with reasonable entropy.

```rust
use dipstick::*;

fn main() {
    let _app_metrics = Statsd::send_to("localhost:8125").expect("connected")
        .sampled(Sampling::Random(0.01))
        .metrics();
}
```

## Intermediates

### Proxy

Because the input's actual _implementation_ depends on the output configuration,
it is necessary to create an output channel before defining any metrics.

This is often not possible because metrics configuration could be dynamic (e.g. loaded from a file),
which might happen after the static initialization phase in which metrics are defined.

To get around this catch-22, Dipstick provides the `Proxy`, which acts as an intermediate output,
allowing redirection to the effective output after it has been set up.

A default `Proxy` is defined when using the simplest form of the `metrics!` macro:

```rust
use dipstick::*;

metrics!(
    COUNTER: Counter = "another_counter";
);

fn main() {
    dipstick::Proxy::default_target(Stream::write_to_stdout().metrics());
    COUNTER.count(456);
}
```

On occasion it might be useful to define your own `Proxy` for independent configuration:

```rust
use dipstick::*;

metrics! {
    pub MY_PROXY: Proxy = "my_proxy" => {
        COUNTER: Counter = "some_counter";
    }
}

fn main() {
    MY_PROXY.target(Stream::write_to_stdout().metrics());
    COUNTER.count(456);
}
```

The performance overhead incurred by the proxy's dynamic dispatching of metrics is negligible
in most applications, considering the flexibility and convenience it provides.

### Bucket

The `AtomicBucket` can be used to aggregate metric values.

Bucket aggregation is performed locklessly and is very fast.

The tracked statistics vary across metric types:

|       | Counter | Marker | Level | Gauge | Timer |
|-------|---------|--------|-------|-------|-------|
| count | x       | x      | x     |       | x     |
| sum   | x       |        |       |       | x     |
| min   | x       |        | s     | x     | x     |
| max   | x       |        | s     | x     | x     |
| rate  |         | x      |       |       | x     |
| mean  | x       |        | x     | x     | x     |
Some notes on statistics:
|
|
|
|
|
|
|
|
- The count is the "hit count" - the number of times values were recorded.
|
|
|
|
If no values were recorded, no statistics are emitted for this metric.
|
|
|
|
|
|
|
|
- Markers have no `sum` as it would always be equal to the `count`.
|
|
|
|
|
|
|
|
- The mean is derived from the sum divided by the count of values.
|
|
|
|
Because count and sum are read sequentially but not atomically, there is _very small_ chance that the
|
|
|
|
calculation could be off in scenarios of high concurrency. It is assumed that the amount of data collected
|
|
|
|
in such situations will make up for the slight error.
|
|
|
|
|
|
|
|
- Min and max are for individual values except for level where the sum of values is tracked instead.
|
|
|
|
|
|
|
|
- The Rate is derived from the sum of values divided by the duration of the aggregation.

#### Preset bucket statistics

Published statistics can be selected with presets such as `all_stats`, `summary` and `average`.

#### Custom bucket statistics

For more control over published statistics, you can provide your own strategy.

Consult the `custom_publish` [example](https://github.com/fralalonde/dipstick/blob/master/examples/custom_publish.rs)
to see how this can be done.

#### Scheduled publication

Buffered and aggregated (bucket) metrics can be scheduled to be
[periodically published](https://github.com/fralalonde/dipstick/blob/master/examples/bucket_summary.rs) as a background task.

The schedule runs on a dedicated thread and follows a recurrent `Duration`.

It can be cancelled at any time using the `CancelHandle` returned by the `flush_every()` method.

### Multi

Just like Constructicons, multiple metrics channels can assemble into a unified facade
that transparently dispatches metrics to every constituent.

This can be done using multiple [inputs](https://github.com/fralalonde/dipstick/blob/master/examples/multi_input.rs)
or multiple [outputs](https://github.com/fralalonde/dipstick/blob/master/examples/multi_output.rs).

### Asynchronous Queue

Metrics can be collected asynchronously using a queue.

The async queue uses a Rust channel and a standalone thread.

If the queue ever fills up under heavy load, it reverts to blocking (rather than dropping metrics).

I'm sure [an example](https://github.com/fralalonde/dipstick/blob/master/examples/async_queue.rs) would help.

This is a tradeoff: app latency is lowered by taking metrics I/O off the calling thread, at the cost of increased overall metrics reporting latency.

Using async metrics should not be required if using only aggregated metrics such as an `AtomicBucket`.