Merge branch 'master' of github.com:fralalonde/dipstick

This commit is contained in:
Francis Lalonde 2019-06-22 08:52:10 -04:00
commit 3147ca034f
7 changed files with 109 additions and 11 deletions


@@ -1,7 +1,11 @@
# Latest changes + history
## version 0.7.9
- Prometheus uses HTTP POST, not GET
- Add proxy_multi_output example
## version 0.7.8
-- Fixed Prometheus output https://github.com/fralalonde/dipstick/issues/70
+- Fix Prometheus output https://github.com/fralalonde/dipstick/issues/70
## version 0.7.6
- Move to Rust 2018 using `cargo fix --edition` and some manual help
@@ -9,7 +13,7 @@
- Give each flush listener a unique id
## version 0.7.5
- Fix leak on observers when registering same metric twice.
- Add `metric_id()` on `InputMetric`
## version 0.7.4


@@ -1,6 +1,6 @@
[package]
name = "dipstick"
version = "0.7.8-alpha.0"
version = "0.7.9"
authors = ["Francis Lalonde <fralalonde@gmail.com>"]
description = """A fast, all-purpose metrics library decoupling instrumentation from reporting backends.


@@ -331,9 +331,41 @@ Because the input's actual _implementation_ depends on the output configuration,
it is necessary to create an output channel before defining any metrics.
This is often not possible because the metrics configuration may be dynamic (e.g. loaded from a file)
and thus only known after the static initialization phase in which metrics are defined.
-To get around this catch-22, Dipstick provides a Proxy which acts as intermediate output,
+To get around this catch-22, Dipstick provides `Proxy`, which acts as an intermediate output,
allowing redirection to the effective output after it has been set up.
A default `Proxy` is defined when using the simplest form of the `metrics!` macro:
```rust
use dipstick::*;

metrics!(
    COUNTER: Counter = "another_counter";
);

fn main() {
    dipstick::Proxy::default_target(Stream::to_stdout().metrics());
    COUNTER.count(456);
}
```
On occasion it might be useful to define your own `Proxy` for independent configuration:
```rust
use dipstick::*;

metrics! {
    pub MY_PROXY: Proxy = "my_proxy" => {
        COUNTER: Counter = "some_counter";
    }
}

fn main() {
    MY_PROXY.target(Stream::to_stdout().metrics());
    COUNTER.count(456);
}
```
In most applications, the performance overhead of the proxy's dynamic dispatch of metrics
will be negligible compared with the flexibility and convenience it provides.
### Bucket
The `AtomicBucket` can be used to aggregate metric values.
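To make that concrete, here is a minimal sketch assembled only from calls that appear elsewhere in this handbook and in the `proxy_multi_output` example (`AtomicBucket::new`, `drain`, `counter`, `flush`); the metric name and the stdout drain are illustrative:
```rust
use dipstick::*;

fn main() {
    // Values accumulate in memory as aggregated scores...
    let bucket = AtomicBucket::new();

    // ...and are published to the drain on flush
    // (stdout here, but any output works).
    bucket.drain(Stream::to_stdout());

    let counter = bucket.counter("requests");
    counter.count(32);
    counter.count(10);

    // Publish the aggregated scores to the drain.
    bucket.flush().expect("flush");
}
```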
@@ -396,3 +428,4 @@ I'm sure [an example](https://github.com/fralalonde/dipstick/blob/master/example
This is a tradeoff: taking metrics I/O off the calling thread lowers application latency, but it increases overall metrics reporting latency.
Async metrics should not be necessary when using only aggregated metrics such as an `AtomicBucket`.
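For illustration, a rough sketch of what a queued (asynchronous) setup could look like; note that the `queued(max_in_flight)` adapter and the queue size of 256 are assumptions here, not confirmed by this section, so check the linked example for the exact API:
```rust
use dipstick::*;

fn main() {
    // Assumption: `queued(n)` wraps the output so that metric writes are
    // handed off to a background thread instead of blocking the caller.
    let metrics = Stream::to_stdout().queued(256).metrics();

    // Returns as soon as the value is enqueued.
    metrics.counter("requests").count(1);
}
```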


@@ -77,7 +77,7 @@ To use Dipstick in your project, add the following line to your `Cargo.toml`
in the `[dependencies]` section:
```toml
dipstick = "0.7.8"
dipstick = "0.7.9"
```
## External features


@@ -0,0 +1,60 @@
//! Use the proxy to pipeline metrics to multiple outputs

extern crate dipstick;

/// Create a pipeline that branches out...
/// The key here is to use an AtomicBucket to read
/// from the proxy, then aggregate and flush metrics
///
/// Proxy
///     -> AtomicBucket
///         -> MultiOutput
///             -> Prometheus
///             -> Statsd
///             -> stdout
use dipstick::*;
use std::time::Duration;

metrics! {
    pub PROXY: Proxy = "my_proxy" => {}
}

fn main() {
    // Placeholder to collect output targets.
    // This will prefix all metrics with "my_stats"
    // before flushing them.
    let mut targets = MultiOutput::new().named("my_stats");

    // Skip the metrics here... we just use this for the output.
    // Follow the same pattern for Statsd, Graphite, etc.
    let prometheus = Prometheus::push_to("http://localhost:9091/metrics/job/dipstick_example")
        .expect("Prometheus Socket");
    targets = targets.add_target(prometheus);

    // Add stdout
    targets = targets.add_target(Stream::to_stdout());

    // Create the bucket and drain to the targets
    let bucket = AtomicBucket::new();
    bucket.drain(targets);

    // Crucial: set the flush interval, otherwise we risk hammering the targets
    bucket.flush_every(Duration::from_secs(3));

    // Now wire up the proxy target with the bucket and you're all set
    let proxy = Proxy::default();
    proxy.target(bucket.clone());

    // Example using the `metrics!` macro proxy sugar
    PROXY.target(bucket.named("global"));

    loop {
        // Using the default proxy
        proxy.counter("beans").count(100);
        proxy.timer("braincells").interval_us(420);

        // Using the macro-defined proxy
        PROXY.counter("my_proxy_counter").count(123);
        PROXY.timer("my_proxy_timer").interval_us(2_000_000);

        std::thread::sleep(Duration::from_millis(100));
    }
}


@@ -153,16 +153,17 @@ impl PrometheusScope {
            return Ok(());
        }
        // println!("{}", buf.as_str());
        // buf.clear();
        // Ok(())
-       match minreq::get(self.push_url.as_str())
+       match minreq::post(self.push_url.as_str())
            .with_body(buf.as_str())
            .send()
        {
-           Ok(_res) => {
+           Ok(http_result) => {
                metrics::PROMETHEUS_SENT_BYTES.count(buf.len());
-               trace!("Sent {} bytes to Prometheus", buf.len());
+               trace!(
+                   "Sent {} bytes to Prometheus (resp status code: {})",
+                   buf.len(),
+                   http_result.status_code
+               );
                buf.clear();
                Ok(())
            }