for great justice

Francis Lalonde 2018-01-08 00:32:32 -05:00
parent aecc1ecc2a
commit 39358e0a39
42 changed files with 1453 additions and 1119 deletions

46
CODE_OF_CONDUCT.md Normal file
View File

@ -0,0 +1,46 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at steve@steveklabnik.com or ashley666ashley@gmail.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

398
Cargo.lock generated
View File

@ -2,7 +2,7 @@
name = "aggregate_print"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
"dipstick 0.6.0-alpha.0",
]
[[package]]
@ -18,6 +18,27 @@ name = "ansi_term"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "backtrace"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"backtrace-sys 0.1.16 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.35 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-demangle 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "backtrace-sys"
version = "0.1.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cc 1.0.4 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.35 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "badlog"
version = "1.0.0"
@ -25,23 +46,60 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ansi_term 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"env_logger 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.35 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.39 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "bitflags"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "bitflags"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "bytecount"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cargo_metadata"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"error-chain 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
"semver 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.9 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "cc"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cfg-if"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "counter_timer_gauge"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
"dipstick 0.6.0-alpha.0",
]
[[package]]
name = "custom_publish"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
"dipstick 0.6.0-alpha.0",
]
[[package]]
@ -56,15 +114,21 @@ dependencies = [
[[package]]
name = "dipstick"
version = "0.5.2-alpha.0"
version = "0.6.0-alpha.0"
dependencies = [
"derivative 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
"num 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"num 0.1.41 (registry+https://github.com/rust-lang/crates.io-index)",
"skeptic 0.13.2 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.39 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "dtoa"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "either"
version = "1.4.0"
@ -75,16 +139,43 @@ name = "env_logger"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "error-chain"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"backtrace 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "fuchsia-zircon"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bitflags 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "fuchsia-zircon-sys"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "glob"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "graphite"
version = "0.0.0"
dependencies = [
"badlog 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"dipstick 0.5.2-alpha.0",
"dipstick 0.6.0-alpha.0",
]
[[package]]
@ -95,6 +186,11 @@ dependencies = [
"either 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "itoa"
version = "0.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "kernel32-sys"
version = "0.2.2"
@ -109,39 +205,55 @@ name = "lazy_static"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "lazy_static"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "libc"
version = "0.2.33"
version = "0.2.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "log"
version = "0.3.8"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "log"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "memchr"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.35 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "multi_print"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
"dipstick 0.6.0-alpha.0",
]
[[package]]
name = "num"
version = "0.1.40"
version = "0.1.41"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-integer 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)",
"num-iter 0.1.34 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.41 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@ -149,7 +261,7 @@ name = "num-integer"
version = "0.1.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.41 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@ -158,19 +270,27 @@ version = "0.1.34"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-integer 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.41 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "num-traits"
version = "0.1.40"
version = "0.1.41"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "pulldown-cmark"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bitflags 0.9.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "queued"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
"dipstick 0.6.0-alpha.0",
]
[[package]]
@ -178,47 +298,141 @@ name = "quote"
version = "0.3.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "rand"
version = "0.3.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.35 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "raw_sink"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
"dipstick 0.6.0-alpha.0",
]
[[package]]
name = "redox_syscall"
version = "0.1.32"
version = "0.1.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "regex"
version = "0.2.3"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"aho-corasick 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex-syntax"
version = "0.4.1"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "rustc-demangle"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "same-file"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "sampling"
version = "0.0.0"
dependencies = [
"dipstick 0.6.0-alpha.0",
]
[[package]]
name = "scoped_print"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
"dipstick 0.6.0-alpha.0",
]
[[package]]
name = "semver"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"semver-parser 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "semver-parser"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "serde"
version = "1.0.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "serde_derive"
version = "1.0.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"quote 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_derive_internals 0.19.0 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.11.11 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "serde_derive_internals"
version = "0.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"syn 0.11.11 (registry+https://github.com/rust-lang/crates.io-index)",
"synom 0.11.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "serde_json"
version = "1.0.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"dtoa 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"itoa 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.41 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "skeptic"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bytecount 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
"cargo_metadata 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"error-chain 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
"glob 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"pulldown-cmark 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_json 1.0.9 (registry+https://github.com/rust-lang/crates.io-index)",
"tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"walkdir 1.0.7 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "static_metrics"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
"dipstick 0.6.0-alpha.0",
"lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
]
@ -232,23 +446,48 @@ dependencies = [
]
[[package]]
name = "thread_local"
version = "0.3.4"
name = "syn"
version = "0.11.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)",
"synom 0.11.3 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-xid 0.0.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "synom"
version = "0.11.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"unicode-xid 0.0.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "tempdir"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand 0.3.20 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "thread_local"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "time"
version = "0.1.38"
version = "0.1.39"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.32 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.35 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@ -274,43 +513,106 @@ name = "void"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "walkdir"
version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"same-file 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi-i686-pc-windows-gnu 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi-x86_64-pc-windows-gnu 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi-build"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[metadata]
"checksum aho-corasick 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)" = "d6531d44de723825aa81398a6415283229725a00fa30713812ab9323faa82fc4"
"checksum ansi_term 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "23ac7c30002a5accbf7e8987d0632fa6de155b7c3d39d0067317a391e00a2ef6"
"checksum backtrace 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "ebbbf59b1c43eefa8c3ede390fcc36820b4999f7914104015be25025e0d62af2"
"checksum backtrace-sys 0.1.16 (registry+https://github.com/rust-lang/crates.io-index)" = "44585761d6161b0f57afc49482ab6bd067e4edef48c12a152c237eb0203f7661"
"checksum badlog 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "69ff7ee77081b9de4c7ec6721321c892f02b02e9d29107f469e88b42468740cc"
"checksum bitflags 0.9.1 (registry+https://github.com/rust-lang/crates.io-index)" = "4efd02e230a02e18f92fc2735f44597385ed02ad8f831e7c1c1156ee5e1ab3a5"
"checksum bitflags 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "b3c30d3802dfb7281680d6285f2ccdaa8c2d8fee41f93805dba5c4cf50dc23cf"
"checksum bytecount 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "af27422163679dea46a1a7239dffff64d3dcdc3ba5fe9c49c789fbfe0eb949de"
"checksum cargo_metadata 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "1f56ec3e469bca7c276f2eea015aa05c5e381356febdbb0683c2580189604537"
"checksum cc 1.0.4 (registry+https://github.com/rust-lang/crates.io-index)" = "deaf9ec656256bb25b404c51ef50097207b9cbb29c933d31f92cae5a8a0ffee0"
"checksum cfg-if 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "d4c819a1287eb618df47cc647173c5c4c66ba19d888a6e50d605672aed3140de"
"checksum derivative 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "67b3d6d0e84e53a5bdc263cc59340541877bb541706a191d762bfac6a481bdde"
"checksum dtoa 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "09c3753c3db574d215cba4ea76018483895d7bff25a31b49ba45db21c48e50ab"
"checksum either 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "740178ddf48b1a9e878e6d6509a1442a2d42fd2928aae8e7a6f8a36fb01981b3"
"checksum env_logger 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "3ddf21e73e016298f5cb37d6ef8e8da8e39f91f9ec8b0df44b7deb16a9f8cd5b"
"checksum error-chain 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ff511d5dc435d703f4971bc399647c9bc38e20cb41452e3b9feb4765419ed3f3"
"checksum fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "2e9763c69ebaae630ba35f74888db465e49e259ba1bc0eda7d06f4a067615d82"
"checksum fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "3dcaa9ae7725d12cdb85b3ad99a434db70b468c09ded17e012d86b5c1010f7a7"
"checksum glob 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "8be18de09a56b60ed0edf84bc9df007e30040691af7acd1c41874faac5895bfb"
"checksum itertools 0.5.10 (registry+https://github.com/rust-lang/crates.io-index)" = "4833d6978da405305126af4ac88569b5d71ff758581ce5a987dbfa3755f694fc"
"checksum itoa 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)" = "8324a32baf01e2ae060e9de58ed0bc2320c9a2833491ee36cd3b4c414de4db8c"
"checksum kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7507624b29483431c0ba2d82aece8ca6cdba9382bff4ddd0f7490560c056098d"
"checksum lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "76f033c7ad61445c5b347c7382dd1237847eb1bce590fe50365dcb33d546be73"
"checksum libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)" = "5ba3df4dcb460b9dfbd070d41c94c19209620c191b0340b929ce748a2bcd42d2"
"checksum log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)" = "880f77541efa6e5cc74e76910c9884d9859683118839d6a1dc3b11e63512565b"
"checksum lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c8f31047daa365f19be14b47c29df4f7c3b581832407daabe6ae77397619237d"
"checksum libc 0.2.35 (registry+https://github.com/rust-lang/crates.io-index)" = "96264e9b293e95d25bfcbbf8a88ffd1aedc85b754eba8b7d78012f638ba220eb"
"checksum log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)" = "e19e8d5c34a3e0e2223db8e060f9e8264aeeb5c5fc64a4ee9965c062211c024b"
"checksum log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "89f010e843f2b1a31dbd316b3b8d443758bc634bed37aabade59c686d644e0a2"
"checksum memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "796fba70e76612589ed2ce7f45282f5af869e0fdd7cc6199fa1aa1f1d591ba9d"
"checksum num 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "a311b77ebdc5dd4cf6449d81e4135d9f0e3b153839ac90e648a8ef538f923525"
"checksum num 0.1.41 (registry+https://github.com/rust-lang/crates.io-index)" = "cc4083e14b542ea3eb9b5f33ff48bd373a92d78687e74f4cc0a30caeb754f0ca"
"checksum num-integer 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)" = "d1452e8b06e448a07f0e6ebb0bb1d92b8890eea63288c0b627331d53514d0fba"
"checksum num-iter 0.1.34 (registry+https://github.com/rust-lang/crates.io-index)" = "7485fcc84f85b4ecd0ea527b14189281cf27d60e583ae65ebc9c088b13dffe01"
"checksum num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "99843c856d68d8b4313b03a17e33c4bb42ae8f6610ea81b28abe076ac721b9b0"
"checksum num-traits 0.1.41 (registry+https://github.com/rust-lang/crates.io-index)" = "cacfcab5eb48250ee7d0c7896b51a2c5eec99c1feea5f32025635f5ae4b00070"
"checksum pulldown-cmark 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "a656fdb8b6848f896df5e478a0eb9083681663e37dcb77dd16981ff65329fe8b"
"checksum quote 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)" = "7a6e920b65c65f10b2ae65c831a81a073a89edd28c7cce89475bff467ab4167a"
"checksum redox_syscall 0.1.32 (registry+https://github.com/rust-lang/crates.io-index)" = "ab105df655884ede59d45b7070c8a65002d921461ee813a024558ca16030eea0"
"checksum regex 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "ac6ab4e9218ade5b423358bbd2567d1617418403c7a512603630181813316322"
"checksum regex-syntax 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "ad890a5eef7953f55427c50575c680c42841653abd2b028b68cd223d157f62db"
"checksum rand 0.3.20 (registry+https://github.com/rust-lang/crates.io-index)" = "512870020642bb8c221bf68baa1b2573da814f6ccfe5c9699b1c303047abe9b1"
"checksum redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)" = "0d92eecebad22b767915e4d529f89f28ee96dbbf5a4810d2b844373f136417fd"
"checksum regex 0.2.5 (registry+https://github.com/rust-lang/crates.io-index)" = "744554e01ccbd98fff8c457c3b092cd67af62a555a43bfe97ae8a0451f7799fa"
"checksum regex-syntax 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "8e931c58b93d86f080c734bfd2bce7dd0079ae2331235818133c8be7f422e20e"
"checksum rustc-demangle 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "aee45432acc62f7b9a108cc054142dac51f979e69e71ddce7d6fc7adf29e817e"
"checksum same-file 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)" = "d931a44fdaa43b8637009e7632a02adc4f2b2e0733c08caa4cf00e8da4a117a7"
"checksum semver 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)" = "bee2bc909ab2d8d60dab26e8cad85b25d795b14603a0dcb627b78b9d30b6454b"
"checksum semver-parser 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
"checksum serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)" = "db99f3919e20faa51bb2996057f5031d8685019b5a06139b1ce761da671b8526"
"checksum serde_derive 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)" = "f4ba7591cfe93755e89eeecdbcc668885624829b020050e6aec99c2a03bd3fd0"
"checksum serde_derive_internals 0.19.0 (registry+https://github.com/rust-lang/crates.io-index)" = "6e03f1c9530c3fb0a0a5c9b826bdd9246a5921ae995d75f512ac917fc4dd55b5"
"checksum serde_json 1.0.9 (registry+https://github.com/rust-lang/crates.io-index)" = "c9db7266c7d63a4c4b7fe8719656ccdd51acf1bed6124b174f933b009fb10bcb"
"checksum skeptic 0.13.2 (registry+https://github.com/rust-lang/crates.io-index)" = "c8431f8fca168e2db4be547bd8329eac70d095dff1444fee4b0fa0fabc7df75a"
"checksum syn 0.10.8 (registry+https://github.com/rust-lang/crates.io-index)" = "58fd09df59565db3399efbba34ba8a2fec1307511ebd245d0061ff9d42691673"
"checksum thread_local 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)" = "1697c4b57aeeb7a536b647165a2825faddffb1d3bad386d507709bd51a90bb14"
"checksum time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)" = "d5d788d3aa77bc0ef3e9621256885555368b47bd495c13dd2e7413c89f845520"
"checksum syn 0.11.11 (registry+https://github.com/rust-lang/crates.io-index)" = "d3b891b9015c88c576343b9b3e41c2c11a51c219ef067b264bd9c8aa9b441dad"
"checksum synom 0.11.3 (registry+https://github.com/rust-lang/crates.io-index)" = "a393066ed9010ebaed60b9eafa373d4b1baac186dd7e008555b0f702b51945b6"
"checksum tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "87974a6f5c1dfb344d733055601650059a3363de2a6104819293baff662132d6"
"checksum thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "279ef31c19ededf577bfd12dfae728040a21f635b06a24cd670ff510edd38963"
"checksum time 0.1.39 (registry+https://github.com/rust-lang/crates.io-index)" = "a15375f1df02096fb3317256ce2cee6a1f42fc84ea5ad5fc8c421cfe40c73098"
"checksum unicode-xid 0.0.4 (registry+https://github.com/rust-lang/crates.io-index)" = "8c1f860d7d29cf02cb2f3f359fd35991af3d30bac52c57d265a3c461074cb4dc"
"checksum unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "382810877fe448991dfc7f0dd6e3ae5d58088fd0ea5e35189655f84e6814fa56"
"checksum utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "662fab6525a98beff2921d7f61a39e7d59e0b425ebc7d0d9e66d316e55124122"
"checksum void 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "6a02e4885ed3bc0f2de90ea6dd45ebcbb66dacffe03547fadbb0eeae2770887d"
"checksum walkdir 1.0.7 (registry+https://github.com/rust-lang/crates.io-index)" = "bb08f9e670fab86099470b97cd2b252d6527f0b3cc1401acdb595ffc9dd288ff"
"checksum winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "167dc9d6949a9b857f3451275e911c3f44255842c1f7a76f33c55103a909087a"
"checksum winapi 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "b09fb3b6f248ea4cd42c9a65113a847d612e17505d6ebd1f7357ad68a8bf8693"
"checksum winapi-build 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "2d315eee3b34aca4797b2da6b13ed88266e6d612562a0c46390af8299fc699bc"
"checksum winapi-i686-pc-windows-gnu 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "ec6667f60c23eca65c561e63a13d81b44234c2e38a6b6c959025ee907ec614cc"
"checksum winapi-x86_64-pc-windows-gnu 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "98f12c52b2630cd05d2c3ffd8e008f7f48252c042b4871c72aed9dc733b96668"

View File

@ -10,11 +10,12 @@ members = [
"examples/graphite/",
"examples/static_metrics/",
"examples/scoped_print/",
"examples/sampling/",
]
[package]
name = "dipstick"
version = "0.5.2-alpha.0"
version = "0.6.0-alpha.0"
authors = ["Francis Lalonde <fralalonde@gmail.com>"]
description = """A fast and modular metrics library decoupling app instrumentation from reporting backend.
@ -27,6 +28,7 @@ repository = "https://github.com/fralalonde/dipstick"
readme = "README.md"
keywords = ["metrics", "statsd", "graphite", "timer", "monitoring"]
license = "MIT/Apache-2.0"
build = "build.rs"
[badges]
travis-ci = { repository = "fralalonde/dipstick", branch = "master" }
@ -34,12 +36,25 @@ travis-ci = { repository = "fralalonde/dipstick", branch = "master" }
[dependencies]
log = { version = "0.3" }
time = "0.1"
lazy_static = "0.2.9"
lazy_static = "0.2"
derivative = "1.0"
[build-dependencies]
skeptic = "0.13"
[dev-dependencies]
skeptic = "0.13"
[dependencies.num]
default-features = false
version = "0.1"
[features]
bench = []
[package.metadata.release]
#sign-commit = true
#upload-doc = true
pre-release-replacements = [
{file="README.md", search="^dipstick = \"[a-z0-9\\.-]+\"", replace="dipstick = \"{{version}}\""}
]

View File

@ -1,28 +0,0 @@
# Customizing release-flow
[tasks.default]
description = "GAGAGA"
alias = "GAGA-flow"
[tasks.pre-publish]
dependencies = [
"update-readme",
"git-add",
]
[tasks.update-readme]
description = "Update the README using cargo readme."
script = [
"#!/usr/bin/env bash",
"cargo readme > README.md",
]
[tasks.post-publish]
dependencies = [
"update-examples",
]
[tasks.update-examples]
description = "Update the examples dependencies."
command = "cargo"
args = ["update", "-p", "dipstick"]

154
README.md
View File

@ -1,12 +1,17 @@
A quick, modular metrics toolkit for Rust applications; similar to popular logging frameworks,
# dipstick
A quick, modular metrics toolkit for Rust applications of all types. Similar to popular logging frameworks,
but with counters, markers, gauges and timers.
[![Build Status](https://travis-ci.org/fralalonde/dipstick.svg?branch=master)](https://travis-ci.org/fralalonde/dipstick)
[![crates.io](https://img.shields.io/crates/v/dipstick.svg)](https://crates.io/crates/dipstick)
Dipstick's main attraction is the ability to send metrics to multiple customized outputs.
For example, captured metrics can be written immediately to the log _and_
For example, metrics could be written immediately to the log _and_
sent over the network after a period of aggregation.
Dipstick builds on stable Rust with minimal dependencies
and is published as a [crate](https://crates.io/crates/dipstick).
Dipstick promotes structured metrics for clean, safe code and good performance.
Dipstick builds on stable Rust with minimal dependencies.
## Features
@ -18,80 +23,102 @@ and is published as a [crate.](https://crates.io/crates/dipstick)
- Customizable output statistics and formatting
- Global or scoped (e.g. per request) metrics
- Per-application and per-output metric namespaces
- Predefined or ad-hoc metrics
## Examples
## Cookbook
For complete applications see the [examples](https://github.com/fralalonde/dipstick/tree/master/examples).
Dipstick is easy to add to your code:
```rust
use dipstick::*;
let app_metrics = metrics(to_graphite("host.com:2003"));
app_metrics.counter("my_counter").count(3);
To use Dipstick in your project, add the following line to your `Cargo.toml`
in the `[dependencies]` section:
```toml
dipstick = "0.4.18"
```
Metrics can be sent to multiple outputs at the same time:
```rust
let app_metrics = metrics((to_stdout(), to_statsd("localhost:8125", "app1.host.")));
Then add it to your code:
```rust,skt-fail,no_run
let metrics = app_metrics(to_graphite("host.com:2003")?);
let counter = metrics.counter("my_counter");
counter.count(3);
```
Send metrics to multiple outputs:
```rust,skt-fail,no_run
let _app_metrics = app_metrics((
to_stdout(),
to_statsd("localhost:8125")?.with_namespace(&["my", "app"])
));
```
Since instruments are decoupled from the backend, outputs can be swapped easily.
Metrics can be aggregated and scheduled to be published periodically in the background:
```rust
Aggregate metrics and schedule them to be published periodically in the background:
```rust,skt-run
use std::time::Duration;
let (to_aggregate, from_aggregate) = aggregate();
publish_every(Duration::from_secs(10), from_aggregate, to_log("last_ten_secs:"), all_stats);
let app_metrics = metrics(to_aggregate);
let app_metrics = app_metrics(aggregate(all_stats, to_stdout()));
app_metrics.flush_every(Duration::from_secs(3));
```
Aggregation is performed locklessly and is very fast.
Count, sum, min, max and average are tracked where they make sense.
Published statistics can be selected with presets such as `all_stats` (see previous example),
`summary`, `average`.
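For instance, the `summary` preset can be swapped in where `all_stats` was used above. A minimal sketch, assuming the `aggregate(stat_fn, output)` form and the `summary` preset shown in the surrounding examples:
```rust
extern crate dipstick;
use dipstick::*;
use std::time::Duration;

fn main() {
    // Publish only summarized statistics instead of `all_stats`.
    let app_metrics = app_metrics(aggregate(summary, to_stdout()));
    app_metrics.flush_every(Duration::from_secs(3));
    app_metrics.counter("my_counter").count(3);
}
```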
For more control over published statistics, a custom filter can be provided:
```rust
let (_to_aggregate, from_aggregate) = aggregate();
publish(from_aggregate, to_log("my_custom_stats:"),
|metric_kind, metric_name, metric_score|
match metric_score {
HitCount(hit_count) => Some((Counter, vec![metric_name, ".per_thousand"], hit_count / 1000)),
For more control over published statistics, provide your own strategy:
```rust,skt-run
app_metrics(aggregate(
|_kind, name, score|
match score {
ScoreType::Count(count) =>
Some((Kind::Counter, vec![name, ".per_thousand"], count / 1000)),
_ => None
});
},
to_log()));
```
Metrics can be statistically sampled:
```rust
let app_metrics = metrics(sample(0.001, to_statsd("server:8125", "app.sampled.")));
Apply statistical sampling to metrics:
```rust,skt-fail
let _app_metrics = app_metrics(to_statsd("server:8125")?.with_sampling_rate(0.01));
```
A fast random algorithm is used to pick samples.
Outputs can use the sample rate to expand or format the published data.
Metrics can be recorded asynchronously:
```rust
let app_metrics = metrics(async(48, to_stdout()));
```rust,skt-run
let _app_metrics = app_metrics(to_stdout()).with_async_queue(64);
```
The async queue uses a Rust channel and a standalone thread.
The current behavior is to block when full.
Metric definitions can be cached to make using _ad-hoc metrics_ faster:
```rust
let app_metrics = metrics(cache(512, to_log()));
```rust,skt-run
let app_metrics = app_metrics(to_log().with_cache(512));
app_metrics.gauge(format!("my_gauge_{}", 34)).value(44);
```
The preferred way is to _predefine metrics_,
possibly in a [lazy_static!](https://crates.io/crates/lazy_static) block:
```rust
#[macro_use] external crate lazy_static;
```rust,skt-plain
#[macro_use] extern crate lazy_static;
extern crate dipstick;
use dipstick::*;
lazy_static! {
pub static ref METRICS: GlobalMetrics<String> = metrics(to_stdout());
pub static ref COUNTER_A: Counter<Aggregate> = METRICS.counter("counter_a");
pub static ref METRICS: AppMetrics<String> = app_metrics(to_stdout());
pub static ref COUNTER_A: AppCounter<String> = METRICS.counter("counter_a");
}
fn main() {
COUNTER_A.count(11);
}
COUNTER_A.count(11);
```
Timers can be used in multiple ways:
```rust
```rust,skt-run
let app_metrics = app_metrics(to_stdout());
let timer = app_metrics.timer("my_timer");
time!(timer, {/* slow code here */} );
timer.time(|| {/* slow code here */} );
@ -104,47 +131,14 @@ timer.interval_us(123_456);
```
Related metrics can share a namespace:
```rust
let db_metrics = app_metrics.with_prefix("database.");
let db_timer = db_metrics.timer("db_timer");
let db_counter = db_metrics.counter("db_counter");
```rust,skt-run
let app_metrics = app_metrics(to_stdout());
let db_metrics = app_metrics.with_prefix("database");
let _db_timer = db_metrics.timer("db_timer");
let _db_counter = db_metrics.counter("db_counter");
```
## License
## Design
Dipstick's design goals are to:
- support as many metrics backends as possible while favoring none
- support all types of applications, from embedded to servers
- promote metrics conventions that facilitate app monitoring and maintenance
- stay out of the way in the code and at runtime (ergonomic, fast, resilient)
Dipstick is licensed under the terms of the Apache 2.0 and MIT license.
## Performance
Predefined timers use a bit more code but are generally faster because their initialization cost is only paid once.
Ad-hoc timers are redefined "inline" on each use. They are more flexible, but have more overhead because their init cost is paid on each use.
Using a metric `cache()` reduces that cost for recurring metrics.
Run benchmarks with `cargo +nightly bench --features bench`.
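As a rough sketch of the trade-off, assuming the `with_cache` combinator and timer calls shown elsewhere in this change (metric names are illustrative):
```rust
extern crate dipstick;
use dipstick::*;

fn main() {
    let app_metrics = app_metrics(to_stdout().with_cache(512));

    // Predefined: the metric is resolved once up front and reused cheaply.
    let request_timer = app_metrics.timer("requests");
    request_timer.interval_us(123);

    // Ad-hoc: the name is resolved on every call; the cache above keeps
    // repeated definitions of the same name inexpensive.
    app_metrics.timer(format!("requests_{}", "adhoc")).interval_us(456);
}
```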
## TODO
Although already usable, Dipstick is still under heavy development and makes no guarantees
of any kind at this point. See the following list for any potential caveats :
- META turn TODOs into GitHub issues
- generic publisher / sources
- feature flags
- time measurement units in metric kind (us, ms, etc.) for naming & scaling
- heartbeat metric on publish
- logger templates
- configurable aggregation (?)
- non-aggregating buffers
- framework glue (rocket, iron, gotham, indicatif, etc.)
- more tests & benchmarks
- complete doc / inline samples
- more example apps
- A cool logo
- method annotation processors `#[timer("name")]`
- fastsinks (M / &M) vs. safesinks (Arc<M>)
- `static_metric!` macro to replace `lazy_static!` blocks and handle generics boilerplate.
License: MIT/Apache-2.0
_this file was generated using [cargo readme](https://github.com/livioribeiro/cargo-readme)_

30
README.md.skt.md Normal file
View File

@ -0,0 +1,30 @@
Templates
```rust,skt-run
extern crate dipstick;
use dipstick::*;
fn main() {{
{}
}}
```
```rust,skt-fail
extern crate dipstick;
use dipstick::*;
use std::result::Result;
fn main() {{
run().ok();
}}
fn run() -> Result<(), Error> {{
{}
Ok(())
}}
```
```rust,skt-plain
{}
```
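Skeptic treats each template as a `format!`-style skeleton: a README block tagged with the template name is substituted for the `{}` placeholder, and doubled braces become literal braces. As a rough illustration, the `skt-fail` graphite snippet from the README would expand to something like:
```rust
extern crate dipstick;
use dipstick::*;
use std::result::Result;

fn main() {
    run().ok();
}

fn run() -> Result<(), Error> {
    // Body of the README snippet, spliced in place of `{}`:
    let metrics = app_metrics(to_graphite("host.com:2003")?);
    let counter = metrics.counter("my_counter");
    counter.count(3);
    Ok(())
}
```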

View File

@ -1,40 +0,0 @@
{{readme}}
## Design
Dipstick's design goals are to:
- support as many metrics backends as possible while favoring none
- support all types of applications, from embedded to servers
- promote metrics conventions that facilitate app monitoring and maintenance
- stay out of the way in the code and at runtime (ergonomic, fast, resilient)
## Performance
Predefined timers use a bit more code but are generally faster because their initialization cost is only paid once.
Ad-hoc timers are redefined "inline" on each use. They are more flexible, but have more overhead because their init cost is paid on each use.
Defining a metric `cache()` reduces that cost for recurring metrics.
Run benchmarks with `cargo +nightly bench --features bench`.
## TODO
Although already usable, Dipstick is still under heavy development and makes no guarantees
of any kind at this point. See the following list for any potential caveats :
- META turn TODOs into GitHub issues
- fix rustdocs https://docs.rs/dipstick/0.4.15/dipstick/
- generic publisher / sources
- add feature flags
- time measurement units in metric kind (us, ms, etc.) for naming & scaling
- heartbeat metric on publish
- logger templates
- configurable aggregation (?)
- non-aggregating buffers
- framework glue (rocket, iron, gotham, indicatif, etc.)
- more tests & benchmarks
- complete doc / inline samples
- more example apps
- A cool logo
- method annotation processors `#[timer("name")]`
- fastsinks (M / &M) vs. safesinks (Arc<M>)
- `static_metric!` macro to replace `lazy_static!` blocks and handle generics boilerplate.
License: {{license}}
_this file was generated using [cargo readme](https://github.com/livioribeiro/cargo-readme)_

6
build.rs Normal file
View File

@ -0,0 +1,6 @@
extern crate skeptic;
fn main() {
// generates doc tests for `README.md`.
skeptic::generate_doc_tests(&["README.md"]);
}

View File

@ -7,9 +7,9 @@ use std::time::Duration;
use dipstick::*;
fn main() {
let to_quick_aggregate = aggregate(16, summary, to_stdout());
let to_aggregate = aggregate(summary, to_stdout());
let app_metrics = global_metrics(to_quick_aggregate);
let app_metrics = app_metrics(to_aggregate);
app_metrics.flush_every(Duration::from_secs(3));
@ -34,5 +34,4 @@ fn main() {
marker.mark();
}
}

View File

@ -8,8 +8,7 @@ use std::time::Duration;
use dipstick::*;
fn main() {
let metrics = global_metrics(async(0, to_stdout()));
let metrics = app_metrics(to_stdout()).with_async_queue(0);
let counter = metrics.counter("counter_a");
let timer = metrics.timer("timer_b");
@ -29,5 +28,4 @@ fn main() {
time!(timer, sleep(Duration::from_millis(5)));
timer.time(|| sleep(Duration::from_millis(5)));
}
}

View File

@ -9,7 +9,7 @@ use dipstick::*;
fn main() {
// for this demo, print metric values to the console
let app_metrics = global_metrics(to_stdout());
let app_metrics = app_metrics(to_stdout());
// metrics can be predefined by type and name
let counter = app_metrics.counter("counter_a");
@ -41,5 +41,4 @@ fn main() {
let start_time = timer.start();
Duration::from_millis(5);
timer.stop(start_time);
}

View File

@ -7,15 +7,21 @@ use std::time::Duration;
use dipstick::*;
fn main() {
fn custom_statistics(kind: Kind, name: &str, score:ScoreType) -> Option<(Kind, Vec<&str>, Value)> {
fn custom_statistics(
kind: Kind,
name: &str,
score: ScoreType,
) -> Option<(Kind, Vec<&str>, Value)> {
match (kind, score) {
// do not export gauge scores
(Kind::Gauge, _) => None,
// prepend and append to metric name
(_, ScoreType::Count(count)) => Some((Kind::Counter,
vec!["name customized_with_prefix:", &name, " and a suffix: "], count)),
(_, ScoreType::Count(count)) => Some((
Kind::Counter,
vec!["name customized_with_prefix:", &name, " and a suffix: "],
count,
)),
// scaling the score value and appending unit to name
(kind, ScoreType::Sum(sum)) => Some((kind, vec![&name, "_millisecond"], sum * 1000)),
@ -29,9 +35,9 @@ fn main() {
}
// send application metrics to aggregator
let to_aggregate = aggregate(16, custom_statistics, to_stdout());
let to_aggregate = aggregate(custom_statistics, to_stdout());
let app_metrics = global_metrics(to_aggregate);
let app_metrics = app_metrics(to_aggregate);
// schedule aggregated metrics to be printed every 3 seconds
app_metrics.flush_every(Duration::from_secs(3));
@ -44,5 +50,4 @@ fn main() {
timer.interval_us(654654);
gauge.value(3534);
}
}

View File

@ -1,7 +1,7 @@
//! A sample application sending ad-hoc counter values both to statsd _and_ to stdout.
extern crate dipstick;
extern crate badlog;
extern crate dipstick;
use dipstick::*;
use std::time::Duration;
@ -9,9 +9,9 @@ use std::time::Duration;
fn main() {
badlog::init(Some("info"));
let metrics = global_metrics(
let metrics = app_metrics(
to_graphite("localhost:2003").expect("Connect to graphite")
);
.with_namespace(&["my", "app"]));
loop {
metrics.counter("counter_a").count(123);

View File

@ -6,27 +6,24 @@ use dipstick::*;
use std::time::Duration;
fn main() {
let different_type_metrics = app_metrics((
// combine metrics of different types in a tuple
to_statsd("localhost:8125").expect("connect"),
to_stdout(),
));
let metrics1 = global_metrics(
(
// use tuples to combine metrics of different types
to_statsd("localhost:8125").expect("connect"),
to_stdout()
)
);
let metrics2 = global_metrics(
&[
let same_type_metrics = app_metrics(
&[
// use slices to combine multiple metrics of the same type
prefix("yeah.", to_stdout()),
prefix("ouch.", to_stdout()),
prefix("nooo.", to_stdout()),
][..]
to_stdout().with_prefix("yeah"),
to_stdout().with_prefix("ouch"),
to_stdout().with_sampling_rate(0.5),
][..],
);
loop {
metrics1.counter("counter_a").count(123);
metrics2.timer("timer_a").interval_us(2000000);
different_type_metrics.counter("counter_a").count(123);
same_type_metrics.timer("timer_a").interval_us(2000000);
std::thread::sleep(Duration::from_millis(40));
}
}

View File

@ -15,5 +15,5 @@ pub fn raw_write() {
// define and send metrics using raw channel API
let counter = metrics_log.define_metric(Kind::Counter, "count_a", FULL_SAMPLING_RATE);
metrics_log.open_scope(true)(ScopeCmd::Write(&counter, 1));
metrics_log.open_scope(true).write(&counter, 1);
}

View File

@ -0,0 +1,7 @@
[package]
name = "sampling"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }

View File

@ -0,0 +1,17 @@
//! An app demonstrating statistical sampling of metrics.
//! Defines a marker and prints only a small sampled fraction of its values to the console.
extern crate dipstick;
use dipstick::*;
fn main() {
// print only 1 out of every 10000 metrics recorded
let app_metrics = app_metrics(to_stdout().with_sampling_rate(0.0001));
let marker = app_metrics.marker("marker_a");
loop {
marker.mark();
}
}

View File

@ -9,7 +9,7 @@ use std::thread::sleep;
use dipstick::*;
fn main() {
let metrics = scoped_metrics(to_stdout());
let metrics = to_stdout();
let counter = metrics.counter("counter_a");
let timer = metrics.timer("timer_a");
@ -19,7 +19,8 @@ fn main() {
loop {
// add counts forever, non-stop
println!("\n------- open scope");
let ref mut scope = metrics.open_scope();
let ref mut scope = metrics.open_scope(true);
counter.count(scope, 11);
counter.count(scope, 12);
@ -39,9 +40,9 @@ fn main() {
sleep(Duration::from_millis(1000));
println!("------- close scope: ")
println!("------- close scope: ");
// scope metrics are printed at the end of every cycle as scope is dropped
// use scope.flush_on_drop(false) and scope.flush() to control flushing if required
}
}

View File

@ -14,10 +14,10 @@ pub mod metric {
// This makes it uglier than it should be when working with generics...
// and is even more work because IDEs such as IntelliJ cannot yet see through macro blocks :(
lazy_static! {
pub static ref METRICS: GlobalMetrics<String> = global_metrics(to_stdout());
pub static ref METRICS: AppMetrics<String> = app_metrics(to_stdout());
pub static ref COUNTER_A: Counter<String> = METRICS.counter("counter_a");
pub static ref TIMER_B: Timer<String> = METRICS.timer("timer_b");
pub static ref COUNTER_A: AppCounter<String> = METRICS.counter("counter_a");
pub static ref TIMER_B: AppTimer<String> = METRICS.timer("timer_b");
}
}

View File

@ -18,28 +18,33 @@ use std::sync::{Arc, RwLock};
/// metrics.marker("my_event").mark();
/// metrics.marker("my_event").mark();
/// ```
pub fn aggregate<E, M>(capacity: usize, stat_fn: E, to_chain: Chain<M>) -> Chain<Aggregate>
where
E: Fn(Kind, &str, ScoreType) -> Option<(Kind, Vec<&str>, Value)> + Send + Sync + 'static,
M: Clone + Send + Sync + Debug + 'static,
pub fn aggregate<E, M>(stat_fn: E, to_chain: Chain<M>) -> Chain<Aggregate>
where
E: Fn(Kind, &str, ScoreType) -> Option<(Kind, Vec<&str>, Value)> + Send + Sync + 'static,
M: Clone + Send + Sync + Debug + 'static,
{
let metrics = Arc::new(RwLock::new(HashMap::with_capacity(capacity)));
let metrics = Arc::new(RwLock::new(HashMap::new()));
let metrics0 = metrics.clone();
let publish = Arc::new(Publisher::new(stat_fn, to_chain));
Chain::new(
move |kind, name, _rate| {
metrics.write().unwrap()
metrics
.write()
.unwrap()
.entry(name.to_string())
.or_insert_with(|| Arc::new(Scoreboard::new(kind, name.to_string()))
).clone()
.or_insert_with(|| Arc::new(Scoreboard::new(kind, name.to_string())))
.clone()
},
move |_auto_flush| {
move |_buffered| {
let metrics = metrics0.clone();
let publish = publish.clone();
Arc::new(move |cmd| match cmd {
ScopeCmd::Write(metric, value) => metric.update(value),
ControlScopeFn::new(move |cmd| match cmd {
ScopeCmd::Write(metric, value) => {
let metric: &Aggregate = metric;
metric.update(value)
},
ScopeCmd::Flush => {
let metrics = metrics.read().expect("Lock scoreboards for a snapshot.");
let snapshot = metrics.values().flat_map(|score| score.reset()).collect();
@ -60,7 +65,6 @@ pub struct Aggregator {
}
impl Aggregator {
/// Build a new metric aggregation point with specified initial capacity of metrics to aggregate.
pub fn with_capacity(size: usize, publish: Arc<Publish>) -> Aggregator {
Aggregator {
@ -78,13 +82,13 @@ impl Aggregator {
.collect();
if !orphans.is_empty() {
let mut remover = self.metrics.write().unwrap();
orphans.iter().for_each(|k| {remover.remove(k);});
orphans.iter().for_each(|k| {
remover.remove(k);
});
}
}
}
/// The type of metric created by the Aggregator.
pub type Aggregate = Arc<Scoreboard>;
@ -96,34 +100,32 @@ mod bench {
use core::Kind::*;
use output::*;
#[bench]
fn time_bench_write_event(b: &mut test::Bencher) {
let sink = aggregate(4, summary, to_void());
let sink = aggregate(summary, to_void());
let metric = sink.define_metric(Marker, "event_a", 1.0);
let scope = sink.open_scope(false);
b.iter(|| test::black_box(scope(ScopeCmd::Write(&metric, 1))));
b.iter(|| test::black_box(scope.write(&metric, 1)));
}
#[bench]
fn time_bench_write_count(b: &mut test::Bencher) {
let sink = aggregate(4, summary, to_void());
let sink = aggregate(summary, to_void());
let metric = sink.define_metric(Counter, "count_a", 1.0);
let scope = sink.open_scope(false);
b.iter(|| test::black_box(scope(ScopeCmd::Write(&metric, 1))));
b.iter(|| test::black_box(scope.write(&metric, 1)));
}
#[bench]
fn time_bench_read_event(b: &mut test::Bencher) {
let sink = aggregate(4, summary, to_void());
let sink = aggregate(summary, to_void());
let metric = sink.define_metric(Marker, "marker_a", 1.0);
b.iter(|| test::black_box(metric.reset()));
}
#[bench]
fn time_bench_read_count(b: &mut test::Bencher) {
let sink = aggregate(4, summary, to_void());
let sink = aggregate(summary, to_void());
let metric = sink.define_metric(Counter, "count_a", 1.0);
b.iter(|| test::black_box(metric.reset()));
}

View File

@ -5,10 +5,14 @@
//! Compared to [ScopeMetrics], static metrics are easier to use and provide satisfactory metrics
//! in many applications.
//!
//! If multiple [GlobalMetrics] are defined, they'll each have their scope.
//! If multiple [AppMetrics] are defined, they'll each have their scope.
//!
use core::*;
use core::ScopeCmd::*;
use namespace::*;
use cache::*;
use async_queue::*;
use sample::*;
use core::Kind::*;
use std::sync::Arc;
use std::time::Duration;
@ -18,15 +22,27 @@ use schedule::*;
pub use num::ToPrimitive;
/// Wrap the metrics backend to provide an application-friendly interface.
pub fn global_metrics<M, IC>(chain: IC) -> GlobalMetrics<M>
/// Open a metric scope to share across the application.
#[deprecated(since = "0.5.0", note = "Use `app_metrics` instead.")]
pub fn metrics<M, IC>(chain: IC) -> AppMetrics<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
app_metrics(chain)
}
/// Wrap the metrics backend to provide an application-friendly interface.
/// Open a metric scope to share across the application.
pub fn app_metrics<M, IC>(chain: IC) -> AppMetrics<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
let static_scope = chain.open_scope(true);
GlobalMetrics {
prefix: "".to_string(),
let static_scope = chain.open_scope(false);
AppMetrics {
scope: static_scope,
chain: Arc::new(chain),
}
@ -37,54 +53,51 @@ where
/// preventing programming errors.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct Marker<M> {
pub struct AppMarker<M> {
metric: M,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
#[derivative(Debug = "ignore")] scope: ControlScopeFn<M>,
}
impl<M> Marker<M> {
impl<M> AppMarker<M> {
/// Record a single event occurrence.
pub fn mark(&self) {
self.scope.as_ref()(Write(&self.metric, 1));
self.scope.write(&self.metric, 1);
}
}
/// A counter that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct Counter<M> {
pub struct AppCounter<M> {
metric: M,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
#[derivative(Debug = "ignore")] scope: ControlScopeFn<M>,
}
impl<M> Counter<M> {
impl<M> AppCounter<M> {
/// Record a value count.
pub fn count<V>(&self, count: V)
where
V: ToPrimitive,
{
self.scope.as_ref()(Write(&self.metric, count.to_u64().unwrap()));
self.scope.write(&self.metric, count.to_u64().unwrap());
}
}
/// A gauge that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct Gauge<M> {
pub struct AppGauge<M> {
metric: M,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
#[derivative(Debug = "ignore")] scope: ControlScopeFn<M>,
}
impl<M> Gauge<M> {
impl<M> AppGauge<M> {
/// Record a value point for this gauge.
pub fn value<V>(&self, value: V)
where
V: ToPrimitive,
{
self.scope.as_ref()(Write(&self.metric, value.to_u64().unwrap()));
self.scope.write(&self.metric, value.to_u64().unwrap());
}
}
@ -96,20 +109,19 @@ impl<M> Gauge<M> {
/// - with the interval_us() method, providing an externally determined microsecond interval
#[derive(Derivative)]
#[derivative(Debug)]
pub struct Timer<M> {
pub struct AppTimer<M> {
metric: M,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
#[derivative(Debug = "ignore")] scope: ControlScopeFn<M>,
}
impl<M> Timer<M> {
impl<M> AppTimer<M> {
/// Record a microsecond interval for this timer
/// Can be used in place of start()/stop() if an external time interval source is used
pub fn interval_us<V>(&self, interval_us: V) -> V
where
V: ToPrimitive,
{
self.scope.as_ref()(Write(&self.metric, interval_us.to_u64().unwrap()));
self.scope.write(&self.metric, interval_us.to_u64().unwrap());
interval_us
}
@ -147,119 +159,99 @@ impl<M> Timer<M> {
/// Variations of this should also provide control of the metric recording scope.
#[derive(Derivative, Clone)]
#[derivative(Debug)]
pub struct GlobalMetrics<M> {
prefix: String,
pub struct AppMetrics<M> {
chain: Arc<Chain<M>>,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
#[derivative(Debug = "ignore")] scope: ControlScopeFn<M>,
}
impl<M> GlobalMetrics<M>
impl<M> AppMetrics<M>
where
M: Clone + Send + Sync + 'static,
{
fn qualified_name<AS>(&self, name: AS) -> String
where
AS: Into<String> + AsRef<str>,
{
// FIXME is there a way to return <S> in both cases?
if self.prefix.is_empty() {
return name.into();
}
let mut buf: String = self.prefix.clone();
buf.push_str(name.as_ref());
buf.to_string()
}
/// Get an event counter of the provided name.
pub fn marker<AS>(&self, name: AS) -> Marker<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Marker,
&self.qualified_name(name),
1.0,
);
Marker {
pub fn marker<AS: AsRef<str>>(&self, name: AS) -> AppMarker<M> {
let metric = self.chain.define_metric(Marker, name.as_ref(), 1.0);
AppMarker {
metric,
scope: self.scope.clone(),
}
}
/// Get a counter of the provided name.
pub fn counter<AS>(&self, name: AS) -> Counter<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Counter,
&self.qualified_name(name),
1.0,
);
Counter {
pub fn counter<AS: AsRef<str>>(&self, name: AS) -> AppCounter<M> {
let metric = self.chain.define_metric(Counter, name.as_ref(), 1.0);
AppCounter {
metric,
scope: self.scope.clone(),
}
}
/// Get a timer of the provided name.
pub fn timer<AS>(&self, name: AS) -> Timer<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Timer,
&self.qualified_name(name),
1.0,
);
Timer {
pub fn timer<AS: AsRef<str>>(&self, name: AS) -> AppTimer<M> {
let metric = self.chain.define_metric(Timer, name.as_ref(), 1.0);
AppTimer {
metric,
scope: self.scope.clone(),
}
}
/// Get a gauge of the provided name.
pub fn gauge<AS>(&self, name: AS) -> Gauge<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Gauge,
&self.qualified_name(name),
1.0,
);
Gauge {
pub fn gauge<AS: AsRef<str>>(&self, name: AS) -> AppGauge<M> {
let metric = self.chain.define_metric(Gauge, name.as_ref(), 1.0);
AppGauge {
metric,
scope: self.scope.clone(),
}
}
/// Prepend the metrics name with a prefix.
/// Does not affect metrics that were already obtained.
pub fn with_prefix<IS>(&self, prefix: IS) -> Self
where
IS: Into<String>,
{
GlobalMetrics {
prefix: prefix.into(),
chain: self.chain.clone(),
scope: self.scope.clone(),
}
}
/// Forcefully flush the backing metrics scope.
/// This is usually not required since static metrics use auto flushing scopes.
/// The effect, if any, of this method depends on the selected metrics backend.
pub fn flush(&self) {
self.scope.as_ref()(Flush);
self.scope.flush();
}
/// Schedule the metrics aggregated or buffered by downstream metrics sinks to be
/// sent out at regular intervals.
pub fn flush_every(&self, period: Duration) -> CancelHandle {
let scope = self.scope.clone();
schedule(period, move || (scope)(Flush))
schedule(period, move || scope.flush())
}
}
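For illustration, scheduling a periodic flush might look like the sketch below; as the doc comment notes, the effect depends on the selected backend (an unbuffered `to_log()` chain will mostly ignore it).

```rust
use dipstick::*;
use std::time::Duration;

fn main() {
    let metrics = app_metrics(to_log());
    // Ask downstream sinks to flush whatever they buffered every 10 seconds.
    let _flush_handle = metrics.flush_every(Duration::from_secs(10));
    metrics.counter("jobs").count(1);
}
```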
impl<M: Send + Sync + Clone + 'static> WithNamespace for AppMetrics<M> {
fn with_namespace(&self, nspace: &[&str]) -> Self {
AppMetrics {
chain: Arc::new(self.chain.with_namespace(nspace)),
scope: self.scope.clone(),
}
}
}
impl<M: Send + Sync + Clone + 'static> WithCache for AppMetrics<M> {
fn with_cache(&self, cache_size: usize) -> Self {
AppMetrics {
chain: Arc::new(self.chain.with_cache(cache_size)),
scope: self.scope.clone(),
}
}
}
impl<M: Send + Sync + Clone + 'static> WithSamplingRate for AppMetrics<M> {
fn with_sampling_rate(&self, sampling_rate: Rate) -> Self {
AppMetrics {
chain: Arc::new(self.chain.with_sampling_rate(sampling_rate)),
scope: self.scope.clone(),
}
}
}
impl<M: Send + Sync + Clone + 'static> WithAsyncQueue for AppMetrics<M> {
fn with_async_queue(&self, queue_size: usize) -> Self {
AppMetrics {
chain: Arc::new(self.chain.with_async_queue(queue_size)),
scope: self.scope.clone(),
}
}
}
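These four impls let the chain combinators be applied directly at the application level; a hedged sketch of how they might compose:

```rust
use dipstick::*;

fn main() {
    // Combinators return a new AppMetrics, so they chain fluently.
    let metrics = app_metrics(to_stdout())
        .with_prefix("web")
        .with_cache(512);
    metrics.counter("requests").count(1); // defined as "web.requests"
}
```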
@ -271,8 +263,8 @@ mod bench {
#[bench]
fn time_bench_direct_dispatch_event(b: &mut test::Bencher) {
let sink = aggregate(5, summary, to_void());
let metrics = global_metrics(sink);
let sink = aggregate(summary, to_void());
let metrics = app_metrics(sink);
let marker = metrics.marker("aaa");
b.iter(|| test::black_box(marker.mark()));
}


@ -1,79 +0,0 @@
//! Queue metrics for write on a separate thread,
//! Metrics definitions are still synchronous.
//! If queue size is exceeded, calling code reverts to blocking.
//!
use core::*;
use self_metrics::*;
use std::sync::Arc;
use std::sync::mpsc;
use std::thread;
/// Cache metrics to prevent them from being re-defined on every use.
/// Use of this should be transparent, this has no effect on the values.
/// Stateful sinks (i.e. Aggregate) may naturally cache their definitions.
pub fn async<M, IC>(queue_size: usize, chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
chain.mod_scope(|next| {
// setup channel
let (sender, receiver) = mpsc::sync_channel::<QueueCommand<M>>(queue_size);
// start queue processor thread
thread::spawn(move || loop {
while let Ok(cmd) = receiver.recv() {
match cmd {
QueueCommand { cmd: Some((metric, value)), next_scope } =>
next_scope(ScopeCmd::Write(&metric, value)),
QueueCommand { cmd: None, next_scope } =>
next_scope(ScopeCmd::Flush),
}
}
});
Arc::new(move |auto_flush| {
// open next scope, make it Arc to move across queue
let next_scope: Arc<ControlScopeFn<M>> = Arc::from(next(auto_flush));
let sender = sender.clone();
// forward any scope command through the channel
Arc::new(move |cmd| {
let send_cmd = match cmd {
ScopeCmd::Write(metric, value) => Some((metric.clone(), value)),
ScopeCmd::Flush => None,
};
sender
.send(QueueCommand {
cmd: send_cmd,
next_scope: next_scope.clone(),
})
.unwrap_or_else(|e| {
SEND_FAILED.mark();
trace!("Async metrics could not be sent: {}", e);
})
})
})
})
}
lazy_static! {
static ref QUEUE_METRICS: GlobalMetrics<Aggregate> = SELF_METRICS.with_prefix("async.");
static ref SEND_FAILED: Marker<Aggregate> = QUEUE_METRICS.marker("send_failed");
}
/// Carry the scope command over the queue, from the sender, to be executed by the receiver.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct QueueCommand<M> {
/// If Some(), the metric and value to write.
/// If None, flush the scope
cmd: Option<(M, Value)>,
/// The scope to write the metric to
#[derivative(Debug = "ignore")]
next_scope: Arc<ControlScopeFn<M>>,
}

src/async_queue.rs Normal file

@ -0,0 +1,98 @@
//! Queue metrics for write on a separate thread,
//! Metrics definitions are still synchronous.
//! If queue size is exceeded, calling code reverts to blocking.
//!
use core::*;
use self_metrics::*;
use std::sync::Arc;
use std::sync::mpsc;
use std::thread;
/// Enqueue collected metrics for dispatch on background thread.
pub trait WithAsyncQueue
where
Self: Sized,
{
/// Enqueue collected metrics for dispatch on background thread.
fn with_async_queue(&self, queue_size: usize) -> Self;
}
impl<M: Send + Sync + Clone + 'static> WithAsyncQueue for Chain<M> {
fn with_async_queue(&self, queue_size: usize) -> Self {
self.mod_scope(|next| {
// setup channel
let (sender, receiver) = mpsc::sync_channel::<QueueCommand<M>>(queue_size);
// start queue processor thread
thread::spawn(move || loop {
while let Ok(cmd) = receiver.recv() {
match cmd {
QueueCommand {
cmd: Some((metric, value)),
next_scope,
} => next_scope.write(&metric, value),
QueueCommand {
cmd: None,
next_scope,
} => next_scope.flush(),
}
}
});
Arc::new(move |buffered| {
// open next scope, make it Arc to move across queue
let next_scope: ControlScopeFn<M> = next(buffered);
let sender = sender.clone();
// forward any scope command through the channel
ControlScopeFn::new(move |cmd| {
let send_cmd = match cmd {
ScopeCmd::Write(metric, value) => {
let metric: &M = metric;
Some((metric.clone(), value))
},
ScopeCmd::Flush => None,
};
sender
.send(QueueCommand {
cmd: send_cmd,
next_scope: next_scope.clone(),
})
.unwrap_or_else(|e| {
SEND_FAILED.mark();
trace!("Async metrics could not be sent: {}", e);
})
})
})
})
}
}
/// Enqueue collected metrics for dispatch on background thread.
#[deprecated(since = "0.5.0", note = "Use `with_async_queue` instead.")]
pub fn async<M, IC>(queue_size: usize, chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
chain.with_async_queue(queue_size)
}
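A sketch of the replacement call pattern suggested by the deprecation note (queue size and output chosen arbitrarily):

```rust
use dipstick::*;

fn main() {
    // Writes are handed to a background thread; callers block only when the
    // 64-command queue is full, as described in the module docs above.
    let queued = to_log().with_async_queue(64);
    let metrics = app_metrics(queued);
    metrics.marker("events").mark();
}
```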
lazy_static! {
static ref QUEUE_METRICS: AppMetrics<Aggregate> = SELF_METRICS.with_prefix("async");
static ref SEND_FAILED: AppMarker<Aggregate> = QUEUE_METRICS.marker("send_failed");
}
/// Carry the scope command over the queue, from the sender, to be executed by the receiver.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct QueueCommand<M> {
/// If Some(), the metric and value to write.
/// If None, flush the scope
cmd: Option<(M, Value)>,
/// The scope to write the metric to
#[derivative(Debug = "ignore")]
next_scope: ControlScopeFn<M>,
}


@ -7,27 +7,47 @@ use lru_cache::LRUCache;
/// Cache metrics to prevent them from being re-defined on every use.
/// Use of this should be transparent, this has no effect on the values.
/// Stateful sinks (i.e. Aggregate) may naturally cache their definitions.
pub fn cache<M, IC>(size: usize, chain: IC) -> Chain<M>
pub trait WithCache
where
Self: Sized,
{
/// Cache metrics to prevent them from being re-defined on every use.
fn with_cache(&self, cache_size: usize) -> Self;
}
// TODO add selfmetrics cache stats
impl<M: Send + Sync + Clone + 'static> WithCache for Chain<M> {
fn with_cache(&self, cache_size: usize) -> Self {
self.mod_metric(|next| {
let cache: RwLock<LRUCache<String, M>> =
RwLock::new(LRUCache::with_capacity(cache_size));
Arc::new(move |kind, name, rate| {
let mut cache = cache.write().expect("Lock metric cache");
let name_str = String::from(name);
// FIXME lookup should use straight &str
if let Some(value) = cache.get(&name_str) {
return value.clone();
}
let new_value = (next)(kind, name, rate).clone();
cache.insert(name_str, new_value.clone());
new_value
})
})
}
}
/// Cache metrics to prevent them from being re-defined on every use.
/// Use of this should be transparent, this has no effect on the values.
/// Stateful sinks (i.e. Aggregate) may naturally cache their definitions.
#[deprecated(since = "0.5.0", note = "Use `with_cache` instead.")]
pub fn cache<M, IC>(cache_size: usize, chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
chain.mod_metric(|next| {
let cache: RwLock<LRUCache<String, M>> = RwLock::new(LRUCache::with_capacity(size));
Arc::new(move |kind, name, rate| {
let mut cache = cache.write().expect("Lock metric cache");
let name_str = String::from(name);
// FIXME lookup should use straight &str
if let Some(value) = cache.get(&name_str) {
return value.clone()
}
let new_value = (next)(kind, name, rate).clone();
cache.insert(name_str, new_value.clone());
new_value
})
})
chain.with_cache(cache_size)
}
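Likewise, the new combinator form of the deprecated `cache` call, sketched with an arbitrary capacity and an ad-hoc gauge name:

```rust
use dipstick::*;

fn main() {
    // Keep up to 512 ad-hoc metric definitions in the LRU cache.
    let cached = to_log().with_cache(512);
    let metrics = app_metrics(cached);
    metrics.gauge(format!("queue_depth_{}", 7)).value(3);
}
```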


@ -2,9 +2,15 @@
//! This is mostly centered around the backend.
//! Application-facing types are in the `app` module.
use self::Kind::*;
use self::ScopeCmd::*;
use time;
use std::sync::Arc;
// TODO define an 'AsValue' trait + impl for supported number types, then drop 'num' crate
pub use num::ToPrimitive;
/// Base type for recorded metric values.
// TODO should this be f64? f32?
pub type Value = u64;
@ -50,7 +56,6 @@ pub enum Kind {
Timer,
}
/// Dynamic metric definition function.
/// Metrics can be defined from any thread, concurrently (Fn is Sync).
/// The resulting metrics themselves can be also be safely shared across threads (<M> is Send + Sync).
@ -69,7 +74,14 @@ pub type OpenScopeFn<M> = Arc<Fn(bool) -> ControlScopeFn<M> + Send + Sync>;
/// Complex applications may define a new scope for each operation or request.
/// Scopes can be moved across threads (Send) but are not required to be thread-safe (Sync).
/// Some implementations _may_ be 'Sync', otherwise queue()ing or threadlocal() can be used.
pub type ControlScopeFn<M> = Arc<Fn(ScopeCmd<M>) + Send + Sync>;
#[derive(Clone)]
pub struct ControlScopeFn<M> {
flush_on_drop: bool,
scope_fn: Arc<Fn(ScopeCmd<M>)>,
}
unsafe impl<M> Sync for ControlScopeFn<M> {}
unsafe impl<M> Send for ControlScopeFn<M> {}
/// A method-dispatching command enum used to manipulate metric scopes.
/// Replaces a potential `Writer` trait that would have methods `write` and `flush`.
@ -83,20 +95,79 @@ pub enum ScopeCmd<'a, M: 'a> {
Flush,
}
impl<M> ControlScopeFn<M> {
/// Create a new metric scope based on the provided scope function.
///
/// ```rust
/// use dipstick::ControlScopeFn;
/// let ref mut scope: ControlScopeFn<String> = ControlScopeFn::new(|_cmd| { /* match cmd {} */ });
/// ```
///
pub fn new<F>(scope_fn: F) -> Self
where F: Fn(ScopeCmd<M>) + 'static
{
ControlScopeFn {
flush_on_drop: true,
scope_fn: Arc::new(scope_fn)
}
}
/// Write a value to this scope.
///
/// ```rust
/// let ref mut scope = dipstick::to_log().open_scope(false);
/// scope.write(&"counter".to_string(), 6);
/// ```
///
#[inline]
pub fn write(&self, metric: &M, value: Value) {
(self.scope_fn)(Write(metric, value))
}
/// Flush this scope, if buffered.
///
/// ```rust
/// let ref mut scope = dipstick::to_log().open_scope(true);
/// scope.flush();
/// ```
///
#[inline]
pub fn flush(&self) {
(self.scope_fn)(Flush)
}
/// If scope is buffered, controls whether to flush the scope one last time when it is dropped.
/// The default is true.
///
/// ```rust
/// let ref mut scope = dipstick::to_log().open_scope(true).flush_on_drop(false);
/// ```
///
pub fn flush_on_drop(mut self, enable: bool) -> Self {
self.flush_on_drop = enable;
self
}
}
impl<M> Drop for ControlScopeFn<M> {
fn drop(&mut self) {
if self.flush_on_drop {
self.flush()
}
}
}
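To make the scope-function design concrete, a standalone sketch of a hand-rolled scope; it assumes `ScopeCmd` is re-exported at the crate root like `ControlScopeFn` is in the doc examples above.

```rust
use dipstick::{ControlScopeFn, ScopeCmd};

fn main() {
    // A scope that just prints the commands it receives (illustrative only).
    let scope: ControlScopeFn<String> = ControlScopeFn::new(|cmd| match cmd {
        ScopeCmd::Write(metric, value) => println!("write {} = {}", metric, value),
        ScopeCmd::Flush => println!("flush"),
    });
    scope.write(&"requests".to_string(), 1);
    scope.flush();
    // Because flush_on_drop defaults to true, dropping the scope flushes once more.
}
```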
/// A pair of functions composing a twin "chain of command".
/// This is the building block for the metrics backend.
#[derive(Derivative, Clone)]
#[derivative(Debug)]
pub struct Chain<M> {
#[derivative(Debug = "ignore")]
define_metric_fn: DefineMetricFn<M>,
#[derivative(Debug = "ignore")] define_metric_fn: DefineMetricFn<M>,
#[derivative(Debug = "ignore")]
scope_metric_fn: OpenScopeFn<M>,
#[derivative(Debug = "ignore")] scope_metric_fn: OpenScopeFn<M>,
}
impl<M: Send + Sync> Chain<M> {
impl<M: Send + Sync + Clone + 'static> Chain<M> {
/// Define a new metric.
#[allow(unused_variables)]
pub fn define_metric(&self, kind: Kind, name: &str, sampling: Rate) -> M {
@ -104,16 +175,43 @@ impl<M: Send + Sync> Chain<M> {
}
/// Open a new metric scope.
#[allow(unused_variables)]
pub fn open_scope(&self, auto_flush: bool) -> ControlScopeFn<M> {
(self.scope_metric_fn)(auto_flush)
/// Scope metrics allow an application to emit per-operation statistics,
/// for example producing a per-request performance log.
///
/// Although scope metrics can be predefined as in [`AppMetrics`], the application needs to
/// create a scope that will be passed back when reporting scoped metric values.
///
/// ```rust
/// use dipstick::*;
/// let scope_metrics = to_log();
/// let request_counter = scope_metrics.counter("scope_counter");
/// {
/// let ref mut request_scope = scope_metrics.open_scope(true);
/// request_counter.count(request_scope, 42);
/// }
/// ```
///
pub fn open_scope(&self, buffered: bool) -> ControlScopeFn<M> {
(self.scope_metric_fn)(buffered)
}
/// Open a buffered scope.
#[inline]
pub fn buffered_scope(&self) -> ControlScopeFn<M> {
self.open_scope(true)
}
/// Open an unbuffered scope.
#[inline]
pub fn unbuffered_scope(&self) -> ControlScopeFn<M> {
self.open_scope(false)
}
/// Create a new metric chain with the provided metric definition and scope creation functions.
pub fn new<MF, WF>(make_metric: MF, make_scope: WF) -> Self
where
MF: Fn(Kind, &str, Rate) -> M + Send + Sync + 'static,
WF: Fn(bool) -> ControlScopeFn<M> + Send + Sync + 'static,
where
MF: Fn(Kind, &str, Rate) -> M + Send + Sync + 'static,
WF: Fn(bool) -> ControlScopeFn<M> + Send + Sync + 'static,
{
Chain {
// capture the provided closures in Arc to provide cheap clones
@ -122,6 +220,30 @@ impl<M: Send + Sync> Chain<M> {
}
}
/// Get an event counter of the provided name.
pub fn marker<AS: AsRef<str>>(&self, name: AS) -> ScopeMarker<M> {
let metric = self.define_metric(Marker, name.as_ref(), 1.0);
ScopeMarker { metric }
}
/// Get a counter of the provided name.
pub fn counter<AS: AsRef<str>>(&self, name: AS) -> ScopeCounter<M> {
let metric = self.define_metric(Counter, name.as_ref(), 1.0);
ScopeCounter { metric }
}
/// Get a timer of the provided name.
pub fn timer<AS: AsRef<str>>(&self, name: AS) -> ScopeTimer<M> {
let metric = self.define_metric(Timer, name.as_ref(), 1.0);
ScopeTimer { metric }
}
/// Get a gauge of the provided name.
pub fn gauge<AS: AsRef<str>>(&self, name: AS) -> ScopeGauge<M> {
let metric = self.define_metric(Gauge, name.as_ref(), 1.0);
ScopeGauge { metric }
}
/// Intercept metric definition without changing the metric type.
pub fn mod_metric<MF>(&self, mod_fn: MF) -> Chain<M>
where
@ -129,7 +251,7 @@ impl<M: Send + Sync> Chain<M> {
{
Chain {
define_metric_fn: mod_fn(self.define_metric_fn.clone()),
scope_metric_fn: self.scope_metric_fn.clone()
scope_metric_fn: self.scope_metric_fn.clone(),
}
}
@ -139,10 +261,11 @@ impl<M: Send + Sync> Chain<M> {
MF: Fn(DefineMetricFn<M>, OpenScopeFn<M>) -> (DefineMetricFn<N>, OpenScopeFn<N>),
N: Clone + Send + Sync,
{
let (metric_fn, scope_fn) = mod_fn(self.define_metric_fn.clone(), self.scope_metric_fn.clone());
let (metric_fn, scope_fn) =
mod_fn(self.define_metric_fn.clone(), self.scope_metric_fn.clone());
Chain {
define_metric_fn: metric_fn,
scope_metric_fn: scope_fn
scope_metric_fn: scope_fn,
}
}
@ -153,7 +276,119 @@ impl<M: Send + Sync> Chain<M> {
{
Chain {
define_metric_fn: self.define_metric_fn.clone(),
scope_metric_fn: mod_fn(self.scope_metric_fn.clone())
scope_metric_fn: mod_fn(self.scope_metric_fn.clone()),
}
}
}
/// A monotonic counter metric.
/// Since value is only ever increased by one, no value parameter is provided,
/// preventing programming errors.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeMarker<M> {
metric: M,
}
impl<M> ScopeMarker<M> {
/// Record a single event occurrence.
#[inline]
pub fn mark(&self, scope: &mut ControlScopeFn<M>) {
scope.write(&self.metric, 1);
}
}
/// A counter that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeCounter<M> {
metric: M,
}
impl<M> ScopeCounter<M> {
/// Record a value count.
#[inline]
pub fn count<V>(&self, scope: &mut ControlScopeFn<M>, count: V)
where
V: ToPrimitive,
{
scope.write(&self.metric, count.to_u64().unwrap());
}
}
/// A gauge that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeGauge<M> {
metric: M,
}
impl<M: Clone> ScopeGauge<M> {
/// Record a value point for this gauge.
#[inline]
pub fn value<V>(&self, scope: &mut ControlScopeFn<M>, value: V)
where
V: ToPrimitive,
{
scope.write(&self.metric, value.to_u64().unwrap());
}
}
/// A timer that sends values to the metrics backend
/// Timers can record time intervals in multiple ways:
/// - with the time! macro, which wraps an expression or block with start() and stop() calls.
/// - with the time(Fn) method, which wraps a closure with start() and stop() calls.
/// - with start() and stop() methods, wrapping around the operation to time
/// - with the interval_us() method, providing an externally determined microsecond interval
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeTimer<M> {
metric: M,
}
impl<M: Clone> ScopeTimer<M> {
/// Record a microsecond interval for this timer
/// Can be used in place of start()/stop() if an external time interval source is used
#[inline]
pub fn interval_us<V>(&self, scope: &mut ControlScopeFn<M>, interval_us: V) -> V
where
V: ToPrimitive,
{
scope.write(&self.metric, interval_us.to_u64().unwrap());
interval_us
}
/// Obtain an opaque handle to the current time.
/// The handle is passed back to the stop() method to record a time interval.
/// This is actually a convenience method to the TimeHandle::now()
/// Beware, handles obtained here are not bound to this specific timer instance
/// _for now_ but might be in the future for safety.
/// If you require safe multi-timer handles, get them through TimeType::now()
#[inline]
pub fn start(&self) -> TimeHandle {
TimeHandle::now()
}
/// Record the time elapsed since the start_time handle was obtained.
/// This call can be performed multiple times using the same handle,
/// reporting distinct time intervals each time.
/// Returns the microsecond interval value that was recorded.
#[inline]
pub fn stop(&self, scope: &mut ControlScopeFn<M>, start_time: TimeHandle) -> u64 {
let elapsed_us = start_time.elapsed_us();
self.interval_us(scope, elapsed_us)
}
/// Record the time taken to execute the provided closure
#[inline]
pub fn time<F, R>(&self, scope: &mut ControlScopeFn<M>, operations: F) -> R
where
F: FnOnce() -> R,
{
let start_time = self.start();
let value: R = operations();
self.stop(scope, start_time);
value
}
}
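A hedged end-to-end sketch of the scope instruments defined above: the metric is predefined on the chain, and the open scope is passed back for each recorded value.

```rust
use dipstick::*;

fn main() {
    let chain = to_log();
    let db_timer = chain.timer("db_query");
    let mut request_scope = chain.open_scope(true); // buffered per-request scope
    let _rows = db_timer.time(&mut request_scope, || {
        // ... the operation being timed ...
        42
    });
    request_scope.flush();
}
```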


@ -43,4 +43,3 @@ impl From<io::Error> for Error {
IO(err)
}
}


@ -23,7 +23,7 @@ where
Ok(Chain::new(
move |kind, name, rate| {
let mut prefix = String::with_capacity(32);
prefix.push_str(name.as_ref());
prefix.push_str(name);
prefix.push(' ');
let mut scale = match kind {
@ -35,27 +35,25 @@ where
if rate < FULL_SAMPLING_RATE {
// graphite does not do sampling, so we'll upsample before sending
let upsample = (1.0 / rate).round() as u64;
warn!("Metric {:?} '{}' being sampled at rate {} will be upsampled \
by a factor of {} when sent to graphite.", kind, name, rate, upsample);
warn!(
"Metric {:?} '{}' being sampled at rate {} will be upsampled \
by a factor of {} when sent to graphite.",
kind, name, rate, upsample
);
scale = scale * upsample;
}
Graphite {
prefix,
scale,
}
Graphite { prefix, scale }
},
move |auto_flush| {
move |buffered| {
let buf = ScopeBuffer {
buffer: Arc::new(RwLock::new(String::new())),
socket: socket.clone(),
auto_flush,
buffered,
};
Arc::new(move |cmd| {
match cmd {
ScopeCmd::Write(metric, value) => buf.write(metric, value),
ScopeCmd::Flush => buf.flush(),
}
ControlScopeFn::new(move |cmd| match cmd {
ScopeCmd::Write(metric, value) => buf.write(metric, value),
ScopeCmd::Flush => buf.flush(),
})
},
))
@ -63,13 +61,13 @@ where
/// It's hard to see how a single scope could get more metrics than this.
// TODO make configurable?
const BUFFER_FLUSH_THRESHOLD: usize = 65536;
const BUFFER_FLUSH_THRESHOLD: usize = 65_536;
lazy_static! {
static ref GRAPHITE_METRICS: GlobalMetrics<Aggregate> = SELF_METRICS.with_prefix("graphite.");
static ref SEND_ERR: Marker<Aggregate> = GRAPHITE_METRICS.marker("send_failed");
static ref SENT_BYTES: Counter<Aggregate> = GRAPHITE_METRICS.counter("sent_bytes");
static ref TRESHOLD_EXCEEDED: Marker<Aggregate> = GRAPHITE_METRICS.marker("bufsize_exceeded");
static ref GRAPHITE_METRICS: AppMetrics<Aggregate> = SELF_METRICS.with_prefix("graphite");
static ref SEND_ERR: AppMarker<Aggregate> = GRAPHITE_METRICS.marker("send_failed");
static ref SENT_BYTES: AppCounter<Aggregate> = GRAPHITE_METRICS.counter("sent_bytes");
static ref TRESHOLD_EXCEEDED: AppMarker<Aggregate> = GRAPHITE_METRICS.marker("bufsize_exceeded");
}
/// Key of a graphite metric.
@ -84,7 +82,7 @@ pub struct Graphite {
struct ScopeBuffer {
buffer: Arc<RwLock<String>>,
socket: Arc<RwLock<RetrySocket>>,
auto_flush: bool,
buffered: bool,
}
/// Any remaining buffered data is flushed on Drop.
@ -95,7 +93,7 @@ impl Drop for ScopeBuffer {
}
impl ScopeBuffer {
fn write (&self, metric: &Graphite, value: Value) {
fn write(&self, metric: &Graphite, value: Value) {
let scaled_value = value / metric.scale;
let value_str = scaled_value.to_string();
@ -112,17 +110,19 @@ impl ScopeBuffer {
if buf.len() > BUFFER_FLUSH_THRESHOLD {
TRESHOLD_EXCEEDED.mark();
warn!("Flushing metrics scope buffer to graphite because its size exceeds \
the threshold of {} bytes. ", BUFFER_FLUSH_THRESHOLD);
warn!(
"Flushing metrics scope buffer to graphite because its size exceeds \
the threshold of {} bytes. ",
BUFFER_FLUSH_THRESHOLD
);
self.flush_inner(&mut buf);
} else if self.auto_flush {
} else if !self.buffered {
self.flush_inner(&mut buf);
}
},
}
Err(e) => {
warn!("Could not compute epoch timestamp. {}", e);
},
}
};
}
@ -148,7 +148,6 @@ impl ScopeBuffer {
fn flush(&self) {
let mut buf = self.buffer.write().expect("Lock graphite buffer.");
self.flush_inner(&mut buf);
}
}
@ -164,7 +163,7 @@ mod bench {
let timer = sd.define_metric(Kind::Timer, "timer", 1000000.0);
let scope = sd.open_scope(false);
b.iter(|| test::black_box(scope.as_ref()(ScopeCmd::Write(&timer, 2000))));
b.iter(|| test::black_box(scope.write(&timer, 2000)));
}
}


@ -1,116 +1,9 @@
/*!
A quick, modular metrics toolkit for Rust applications; similar to popular logging frameworks,
but with counters, markers, gauges and timers.
Dipstick builds on stable Rust with minimal dependencies
and is published as a [crate.](https://crates.io/crates/dipstick)
# Features
- Send metrics to stdout, log, statsd or graphite (one or many)
- Synchronous, asynchronous or mixed operation
- Optional fast random statistical sampling
- Immediate propagation or local aggregation of metrics (count, sum, average, min/max)
- Periodic or programmatic publication of aggregated metrics
- Customizable output statistics and formatting
- Global or scoped (e.g. per request) metrics
- Per-application and per-output metric namespaces
- Predefined or ad-hoc metrics
# Cookbook
Dipstick is easy to add to your code:
```rust
use dipstick::*;
let app_metrics = metrics(to_graphite("host.com:2003"));
app_metrics.counter("my_counter").count(3);
```
Metrics can be sent to multiple outputs at the same time:
```rust
let app_metrics = metrics((to_stdout(), to_statsd("localhost:8125", "app1.host.")));
```
Since instruments are decoupled from the backend, outputs can be swapped easily.
Metrics can be aggregated and scheduled to be published periodically in the background:
```rust
use std::time::Duration;
let (to_aggregate, from_aggregate) = aggregate();
publish_every(Duration::from_secs(10), from_aggregate, to_log("last_ten_secs:"), all_stats);
let app_metrics = metrics(to_aggregate);
```
Aggregation is performed locklessly and is very fast.
Count, sum, min, max and average are tracked where they make sense.
Published statistics can be selected with presets such as `all_stats` (see previous example),
`summary`, `average`.
For more control over published statistics, a custom filter can be provided:
```rust
let (_to_aggregate, from_aggregate) = aggregate();
publish(from_aggregate, to_log("my_custom_stats:"),
|metric_kind, metric_name, metric_score|
match metric_score {
HitCount(hit_count) => Some((Counter, vec![metric_name, ".per_thousand"], hit_count / 1000)),
_ => None
});
```
Metrics can be statistically sampled:
```rust
let app_metrics = metrics(sample(0.001, to_statsd("server:8125", "app.sampled.")));
```
A fast random algorithm is used to pick samples.
Outputs can use sample rate to expand or format published data.
Metrics can be recorded asynchronously:
```rust
let app_metrics = metrics(async(48, to_stdout()));
```
The async queue uses a Rust channel and a standalone thread.
The current behavior is to block when full.
Metric definitions can be cached to make using _ad-hoc metrics_ faster:
```rust
let app_metrics = metrics(cache(512, to_log()));
app_metrics.gauge(format!("my_gauge_{}", 34)).value(44);
```
The preferred way is to _predefine metrics_,
possibly in a [lazy_static!](https://crates.io/crates/lazy_static) block:
```rust
#[macro_use] extern crate lazy_static;
lazy_static! {
pub static ref METRICS: GlobalMetrics<String> = metrics(to_stdout());
pub static ref COUNTER_A: Counter<Aggregate> = METRICS.counter("counter_a");
}
COUNTER_A.count(11);
```
Timers can be used multiple ways:
```rust
let timer = app_metrics.timer("my_timer");
time!(timer, {/* slow code here */} );
timer.time(|| {/* slow code here */} );
let start = timer.start();
/* slow code here */
timer.stop(start);
timer.interval_us(123_456);
```
Related metrics can share a namespace:
```rust
let db_metrics = app_metrics.with_prefix("database.");
let db_timer = db_metrics.timer("db_timer");
let db_counter = db_metrics.counter("db_counter");
```
A quick, modular metrics toolkit for Rust applications.
*/
#![cfg_attr(feature = "bench", feature(test))]
#![warn(
missing_copy_implementations,
missing_docs,
@ -128,12 +21,12 @@ extern crate test;
#[macro_use]
extern crate log;
extern crate time;
extern crate num;
#[macro_use]
extern crate lazy_static;
#[macro_use]
extern crate derivative;
#[macro_use]
extern crate lazy_static;
extern crate num;
extern crate time;
mod pcg32;
mod lru_cache;
@ -149,11 +42,8 @@ pub mod macros;
mod output;
pub use output::*;
mod global_metrics;
pub use global_metrics::*;
mod scope_metrics;
pub use scope_metrics::*;
mod app_metrics;
pub use app_metrics::*;
mod sample;
pub use sample::*;
@ -167,6 +57,9 @@ pub use aggregate::*;
mod publish;
pub use publish::*;
//mod with_buffer;
//pub use with_buffer::*;
mod statsd;
pub use statsd::*;
@ -185,8 +78,8 @@ pub use cache::*;
mod multi;
pub use multi::*;
mod async;
pub use async::*;
mod async_queue;
pub use async_queue::*;
mod schedule;
pub use schedule::*;


@ -22,7 +22,7 @@
//! A fixed-size cache with LRU expiration criteria.
//! Adapted from https://github.com/cwbriones/lrucache
//!
use std::hash::Hash;
use std::collections::HashMap;
@ -38,8 +38,8 @@ pub struct LRUCache<K, V> {
table: HashMap<K, usize>,
entries: Vec<CacheEntry<K, V>>,
first: Option<usize>,
last: Option<usize>,
capacity: usize
last: Option<usize>,
capacity: usize,
}
impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
@ -76,7 +76,7 @@ impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
key: key.clone(),
value: Some(value),
next: self.first,
prev: None
prev: None,
});
self.first = Some(idx);
self.last = self.last.or(self.first);
@ -89,9 +89,9 @@ impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
/// without promoting it.
pub fn peek(&mut self, key: &K) -> Option<&V> {
let entries = &self.entries;
self.table.get(key).and_then(move |i| {
entries[*i].value.as_ref()
})
self.table
.get(key)
.and_then(move |i| entries[*i].value.as_ref())
}
/// Retrieves a reference to the item associated with `key` from the cache.
@ -109,7 +109,7 @@ impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
/// Promotes the specified key to the top of the cache.
fn access(&mut self, key: &K) {
let i = *self.table.get(key).unwrap();
let i = self.table[key];
self.remove_from_list(i);
self.first = Some(i);
}
@ -133,15 +133,15 @@ impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
}
let second = &mut self.entries[k];
second.prev = prev;
},
}
// Item was at the end of the list
(Some(j), None) => {
let first = &mut self.entries[j];
first.next = None;
self.last = prev;
},
}
// Item was at front
_ => ()
_ => (),
}
}


@ -1,39 +1,39 @@
//! Dispatch metrics to multiple sinks.
use core::*;
use std::sync::Arc;
/// Two chains of different types can be combined in a tuple.
/// The chains will act as one, each receiving calls in the order they appear in the tuple.
/// For more than two types, make tuples of tuples, "Yo Dawg" style.
impl<M1, M2> From<(Chain<M1>, Chain<M2>)> for Chain<(M1, M2)>
where
M1: 'static + Clone + Send + Sync,
M2: 'static + Clone + Send + Sync,
where
M1: 'static + Clone + Send + Sync,
M2: 'static + Clone + Send + Sync,
{
fn from(combo: (Chain<M1>, Chain<M2>)) -> Chain<(M1, M2)> {
let combo0 = combo.0.clone();
let combo1 = combo.1.clone();
Chain::new(
move |kind, name, rate| (
combo.0.define_metric(kind, name, rate),
combo.1.define_metric(kind, &name, rate),
),
move |kind, name, rate| {
(
combo.0.define_metric(kind, name, rate),
combo.1.define_metric(kind, &name, rate),
)
},
move |buffered| {
let scope0 = combo0.open_scope(buffered);
let scope1 = combo1.open_scope(buffered);
move |auto_flush| {
let scope0 = combo0.open_scope(auto_flush);
let scope1 = combo1.open_scope(auto_flush);
Arc::new(move |cmd| match cmd {
ControlScopeFn::new(move |cmd| match cmd {
ScopeCmd::Write(metric, value) => {
scope0(ScopeCmd::Write(&metric.0, value));
scope1(ScopeCmd::Write(&metric.1, value));
let metric: &(M1, M2) = metric;
scope0.write(&metric.0, value);
scope1.write(&metric.1, value);
}
ScopeCmd::Flush => {
scope0(ScopeCmd::Flush);
scope1(ScopeCmd::Flush);
scope0.flush();
scope1.flush();
}
})
},
@ -44,11 +44,10 @@ impl<M1, M2> From<(Chain<M1>, Chain<M2>)> for Chain<(M1, M2)>
/// Multiple chains of the same type can be combined in a slice.
/// The chains will act as one, each receiving calls in the order they appear in the slice.
impl<'a, M> From<&'a [Chain<M>]> for Chain<Box<[M]>>
where
M: 'static + Clone + Send + Sync,
where
M: 'static + Clone + Send + Sync,
{
fn from(chains: &'a [Chain<M>]) -> Chain<Box<[M]>> {
let chains = chains.to_vec();
let chains2 = chains.clone();
@ -60,24 +59,22 @@ impl<'a, M> From<&'a [Chain<M>]> for Chain<Box<[M]>>
}
metric.into_boxed_slice()
},
move |auto_flush| {
move |buffered| {
let mut scopes = Vec::with_capacity(chains2.len());
for chain in &chains2 {
scopes.push(chain.open_scope(auto_flush));
scopes.push(chain.open_scope(buffered));
}
Arc::new(move |cmd| match cmd {
ControlScopeFn::new(move |cmd| match cmd {
ScopeCmd::Write(metric, value) => {
let metric: &Box<[M]> = metric;
for (i, scope) in scopes.iter().enumerate() {
(scope)(ScopeCmd::Write(&metric[i], value))
scope.write(&metric[i], value)
}
}
ScopeCmd::Flush => {
for scope in &scopes {
(scope)(ScopeCmd::Flush)
}
}
},
ScopeCmd::Flush => for scope in &scopes {
scope.flush()
},
})
},
)
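Dispatching to several outputs then reduces to building one of these combined chains; a sketch using the tuple form described above, with two string-keyed outputs (names and wiring are illustrative):

```rust
use dipstick::*;

fn main() {
    // Both chains receive every definition and every write, in tuple order.
    let combined: Chain<(String, String)> = (to_stdout(), to_log()).into();
    let metrics = app_metrics(combined);
    metrics.counter("requests").count(1);
}
```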


@ -1,8 +1,37 @@
//! Metrics name manipulation functions.
use core::*;
use std::sync::Arc;
/// Insert prefix in newly defined metrics.
/// Prepend metric names with custom prefix.
pub trait WithNamespace
where
Self: Sized,
{
/// Insert prefix in newly defined metrics.
fn with_prefix(&self, prefix: &str) -> Self {
self.with_namespace(&[prefix])
}
/// Join namespace and prepend in newly defined metrics.
fn with_namespace(&self, names: &[&str]) -> Self;
}
impl<M: Send + Sync + Clone + 'static> WithNamespace for Chain<M> {
fn with_namespace(&self, names: &[&str]) -> Self {
self.mod_metric(|next| {
let nspace = names.join(".");
Arc::new(move |kind, name, rate| {
let name = [nspace.as_ref(), name].join(".");
(next)(kind, name.as_ref(), rate)
})
})
}
}
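A sketch of the resulting naming behaviour, assuming the `to_stdout` and `app_metrics` pieces from elsewhere in this commit:

```rust
use dipstick::*;

fn main() {
    // Every metric defined through this chain is prefixed with "app.db."
    let db_chain = to_stdout().with_namespace(&["app", "db"]);
    let metrics = app_metrics(db_chain);
    metrics.counter("queries").count(3); // reported as "app.db.queries"
}
```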
/// Deprecated: use with_prefix(), omitting any previously supplied separator.
#[deprecated(since = "0.5.0",
note = "Use `with_prefix` instead, omitting any previously supplied separator.")]
pub fn prefix<M, IC>(prefix: &str, chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
@ -17,15 +46,3 @@ where
})
})
}
/// Join namespace and prepend in newly defined metrics.
pub fn namespace<M, IC>(names: &[&str], chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
let nspace = names.join(".");
prefix(nspace.as_ref(), chain)
}


@ -1,28 +1,70 @@
//! Standard stateless metric outputs.
// TODO parameterize templates
use core::*;
use std::sync::Arc;
use std::sync::RwLock;
/// Write metric values to stdout using `println!`.
pub fn to_stdout() -> Chain<String> {
Chain::new(
|_kind, name, _rate| String::from(name),
|_auto_flush| Arc::new(|cmd| if let ScopeCmd::Write(m, v) = cmd { println!("{}: {}", m, v) })
|buffered| {
if !buffered {
ControlScopeFn::new(|cmd| {
if let ScopeCmd::Write(m, v) = cmd {
println!("{}: {}", m, v)
}
})
} else {
let buf = RwLock::new(String::new());
ControlScopeFn::new(move |cmd| {
let mut buf = buf.write().expect("Lock string buffer.");
match cmd {
ScopeCmd::Write(metric, value) => buf.push_str(format!("{}: {}\n", metric, value).as_ref()),
ScopeCmd::Flush => {
println!("{}", buf);
buf.clear();
}
}
})
}
},
)
}
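The `buffered` flag decides whether each write prints immediately or accumulates until a flush; a sketch of the buffered case using the scope instruments defined in `core`:

```rust
use dipstick::*;

fn main() {
    let out = to_stdout();
    let pings = out.marker("pings");
    let mut scope = out.open_scope(true); // buffered: nothing prints until flush (or drop)
    pings.mark(&mut scope);
    pings.mark(&mut scope);
    scope.flush(); // prints both lines at once
}
```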
/// Write metric values to the standard log using `info!`.
// TODO parameterize log level
pub fn to_log() -> Chain<String> {
Chain::new(
|_kind, name, _rate| String::from(name),
|_auto_flush| Arc::new(|cmd| if let ScopeCmd::Write(m, v) = cmd { info!("{}: {}", m, v) })
|buffered| {
if !buffered {
ControlScopeFn::new(|cmd| {
if let ScopeCmd::Write(m, v) = cmd {
info!("{}: {}", m, v)
}
})
} else {
let buf = RwLock::new(String::new());
ControlScopeFn::new(move |cmd| {
let mut buf = buf.write().expect("Lock string buffer.");
match cmd {
ScopeCmd::Write(metric, value) => buf.push_str(format!("{}: {}\n", metric, value).as_ref()),
ScopeCmd::Flush => {
info!("{}", buf);
buf.clear();
}
}
})
}
},
)
}
/// Special sink that discards all metric values sent to it.
/// Discard all metric values sent to it.
pub fn to_void() -> Chain<String> {
Chain::new(
move |_kind, name, _rate| String::from(name),
|_auto_flush| Arc::new(|_cmd| {})
|_buffered| ControlScopeFn::new(|_cmd| {}),
)
}
@ -34,21 +76,21 @@ mod test {
fn sink_print() {
let c = super::to_stdout();
let m = c.define_metric(Kind::Marker, "test", 1.0);
c.open_scope(true)(ScopeCmd::Write(&m, 33));
c.open_scope(true).write(&m, 33);
}
#[test]
fn test_to_log() {
let c = super::to_log();
let m = c.define_metric(Kind::Marker, "test", 1.0);
c.open_scope(true)(ScopeCmd::Write(&m, 33));
c.open_scope(true).write(&m, 33);
}
#[test]
fn test_to_void() {
let c = super::to_void();
let m = c.define_metric(Kind::Marker, "test", 1.0);
c.open_scope(true)(ScopeCmd::Write(&m, 33));
c.open_scope(true).write(&m, 33);
}
}


@ -9,9 +9,8 @@ fn seed() -> u64 {
let seed = seed.wrapping_mul(6364136223846793005)
.wrapping_add(1442695040888963407)
.wrapping_add(time::precise_time_ns());
seed.wrapping_mul(6364136223846793005).wrapping_add(
1442695040888963407,
)
seed.wrapping_mul(6364136223846793005)
.wrapping_add(1442695040888963407)
}
/// quickly return a random int
@ -23,9 +22,9 @@ fn pcg32_random() -> u32 {
PCG32_STATE.with(|state| {
let oldstate: u64 = *state.borrow();
// XXX could generate the increment from the thread ID
*state.borrow_mut() = oldstate.wrapping_mul(6364136223846793005).wrapping_add(
1442695040888963407,
);
*state.borrow_mut() = oldstate
.wrapping_mul(6364136223846793005)
.wrapping_add(1442695040888963407);
((((oldstate >> 18) ^ oldstate) >> 27) as u32).rotate_right((oldstate >> 59) as u32)
})
}


@ -21,13 +21,12 @@
use core::*;
use core::Kind::*;
use scores::{ScoreType, ScoreSnapshot};
use scores::{ScoreSnapshot, ScoreType};
use scores::ScoreType::*;
use std::fmt::Debug;
/// A trait to publish metrics.
pub trait Publish: Send + Sync + Debug {
/// Publish the provided metrics data downstream.
fn publish(&self, scores: Vec<ScoreSnapshot>);
}
@ -38,15 +37,14 @@ pub trait Publish: Send + Sync + Debug {
#[derive(Derivative, Clone)]
#[derivative(Debug)]
pub struct Publisher<E, M> {
#[derivative(Debug = "ignore")]
statistics: Box<E>,
#[derivative(Debug = "ignore")] statistics: Box<E>,
target_chain: Chain<M>,
}
impl<E, M> Publisher<E, M>
where
E: Fn(Kind, &str, ScoreType) -> Option<(Kind, Vec<&str>, Value)> + Send + Sync + 'static,
M: Clone + Send + Sync + 'static,
where
E: Fn(Kind, &str, ScoreType) -> Option<(Kind, Vec<&str>, Value)> + Send + Sync + 'static,
M: Clone + Send + Sync + 'static,
{
/// Define a new metrics publishing strategy, from a transformation
/// function and a target metric chain.
@ -60,7 +58,7 @@ impl<E, M> Publisher<E, M>
impl<E, M> Publish for Publisher<E, M>
where
M: Clone + Send + Sync + Debug,
M: Clone + Send + Sync + Debug + 'static,
E: Fn(Kind, &str, ScoreType) -> Option<(Kind, Vec<&str>, Value)> + Send + Sync + 'static,
{
fn publish(&self, snapshot: Vec<ScoreSnapshot>) {
@ -72,9 +70,10 @@ where
} else {
for metric in snapshot {
for score in metric.2 {
if let Some(ex) = self.statistics.as_ref()(metric.0, metric.1.as_ref(), score) {
let temp_metric = self.target_chain.define_metric(ex.0, &ex.1.concat(), 1.0);
publish_scope_fn(ScopeCmd::Write(&temp_metric, ex.2));
if let Some(ex) = (self.statistics)(metric.0, metric.1.as_ref(), score) {
let temp_metric =
self.target_chain.define_metric(ex.0, &ex.1.concat(), 1.0);
publish_scope_fn.write(&temp_metric, ex.2);
}
}
}
@ -82,7 +81,7 @@ where
// TODO parameterize whether to keep ad-hoc metrics after publish
// source.cleanup();
publish_scope_fn(ScopeCmd::Flush)
publish_scope_fn.flush()
}
}
@ -95,7 +94,7 @@ pub fn all_stats(kind: Kind, name: &str, score: ScoreType) -> Option<(Kind, Vec<
Mean(mean) => Some((kind, vec![name, ".mean"], mean.round() as Value)),
Max(max) => Some((Gauge, vec![name, ".max"], max)),
Min(min) => Some((Gauge, vec![name, ".min"], min)),
Rate(rate) => Some((Gauge, vec![name, ".rate"], rate.round() as Value))
Rate(rate) => Some((Gauge, vec![name, ".rate"], rate.round() as Value)),
}
}
@ -105,18 +104,14 @@ pub fn all_stats(kind: Kind, name: &str, score: ScoreType) -> Option<(Kind, Vec<
/// and so exported stats copy their metric's name.
pub fn average(kind: Kind, name: &str, score: ScoreType) -> Option<(Kind, Vec<&str>, Value)> {
match kind {
Marker => {
match score {
Count(count) => Some((Counter, vec![name], count)),
_ => None,
}
}
_ => {
match score {
Mean(avg) => Some((Gauge, vec![name], avg.round() as Value)),
_ => None,
}
}
Marker => match score {
Count(count) => Some((Counter, vec![name], count)),
_ => None,
},
_ => match score {
Mean(avg) => Some((Gauge, vec![name], avg.round() as Value)),
_ => None,
},
}
}
@ -128,23 +123,17 @@ pub fn average(kind: Kind, name: &str, score: ScoreType) -> Option<(Kind, Vec<&s
/// and so exported stats copy their metric's name.
pub fn summary(kind: Kind, name: &str, score: ScoreType) -> Option<(Kind, Vec<&str>, Value)> {
match kind {
Marker => {
match score {
Count(count) => Some((Counter, vec![name], count)),
_ => None,
}
}
Counter | Timer => {
match score {
Sum(sum) => Some((kind, vec![name], sum)),
_ => None,
}
}
Gauge => {
match score {
Mean(mean) => Some((Gauge, vec![name], mean.round() as Value)),
_ => None,
}
}
Marker => match score {
Count(count) => Some((Counter, vec![name], count)),
_ => None,
},
Counter | Timer => match score {
Sum(sum) => Some((kind, vec![name], sum)),
_ => None,
},
Gauge => match score {
Mean(mean) => Some((Gauge, vec![name], mean.round() as Value)),
_ => None,
},
}
}
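Custom selections follow the same shape as `all_stats`, `average` and `summary`; below is a sketch of one more such function as it might sit alongside them in `publish.rs` (the `hits_only` name and the ".hits" suffix are made up for illustration):

```rust
use core::*;
use core::Kind::*;
use scores::ScoreType;
use scores::ScoreType::*;

/// Forward only hit counts, renaming them with a ".hits" suffix (illustrative).
pub fn hits_only(_kind: Kind, name: &str, score: ScoreType) -> Option<(Kind, Vec<&str>, Value)> {
    match score {
        Count(count) => Some((Counter, vec![name, ".hits"], count)),
        _ => None,
    }
}
```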


@ -2,47 +2,61 @@
use core::*;
use pcg32;
use std::sync::Arc;
/// The metric sampling key also holds the sampling rate to apply to it.
#[derive(Debug, Clone)]
pub struct Sample<M> {
target: M,
int_sampling_rate: u32,
/// Apply statistical sampling to collected metrics data.
pub trait WithSamplingRate
where
Self: Sized,
{
/// Perform random sampling of values according to the specified rate.
fn with_sampling_rate(&self, sampling_rate: Rate) -> Self;
}
impl<M: Send + Sync + 'static + Clone> WithSamplingRate for Chain<M> {
fn with_sampling_rate(&self, sampling_rate: Rate) -> Self {
let int_sampling_rate = pcg32::to_int_rate(sampling_rate);
self.mod_both(|metric_fn, scope_fn| {
(
Arc::new(move |kind, name, rate| {
// TODO override only if FULL_SAMPLING else warn!()
if rate != FULL_SAMPLING_RATE {
info!(
"Metric {} will be downsampled again {}, {}",
name, rate, sampling_rate
);
}
let new_rate = sampling_rate * rate;
metric_fn(kind, name, new_rate)
}),
Arc::new(move |buffered| {
let next_scope = scope_fn(buffered);
ControlScopeFn::new(move |cmd| {
match cmd {
ScopeCmd::Write(metric, value) => {
if pcg32::accept_sample(int_sampling_rate) {
next_scope.write(metric, value)
}
},
ScopeCmd::Flush => next_scope.flush()
}
})
}),
)
})
}
}
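A sketch of applying the combinator, with an arbitrary 1% rate; as the code above shows, the effective rate is also folded into the metric definition for downstream outputs:

```rust
use dipstick::*;

fn main() {
    // Keep roughly one value in a hundred; the rest are dropped before dispatch.
    let sampled = to_log().with_sampling_rate(0.01);
    let metrics = app_metrics(sampled);
    metrics.marker("cache_miss").mark();
}
```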
/// Perform random sampling of values according to the specified rate.
pub fn sample<M, IC>(sampling_rate: Rate, chain: IC) -> Chain<Sample<M>>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
#[deprecated(since = "0.5.0", note = "Use `with_sampling_rate` instead.")]
pub fn sample<M, IC>(sampling_rate: Rate, chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
chain.mod_both(|metric_fn, scope_fn|
(Arc::new(move |kind, name, rate| {
// TODO override only if FULL_SAMPLING else warn!()
if rate != FULL_SAMPLING_RATE {
info!("Metric {} will be downsampled again {}, {}", name, rate, sampling_rate);
}
let new_rate = sampling_rate * rate;
Sample {
target: metric_fn(kind, name, new_rate),
int_sampling_rate: pcg32::to_int_rate(new_rate),
}
}),
Arc::new(move |auto_flush| {
let next_scope = scope_fn(auto_flush);
Arc::new(move |cmd| {
if let ScopeCmd::Write(metric, value) = cmd {
if pcg32::accept_sample(metric.int_sampling_rate) {
next_scope(ScopeCmd::Write(&metric.target, value))
}
}
next_scope(ScopeCmd::Flush)
})
}))
)
chain.with_sampling_rate(sampling_rate)
}


@ -1,291 +0,0 @@
//! Scope metrics allow an application to emit per-operation statistics,
//! like generating a per-request performance log.
//!
//! Although the scope metrics can be predefined like in [GlobalMetrics], the application needs to
//! create a scope that will be passed back when reporting scoped metric values.
/*!
Per-operation metrics can be recorded and published using `scope_metrics`:
```rust
let scope_metrics = scope_metrics(to_log());
let request_counter = scope_metrics.counter("scope_counter");
{
let request_scope = scope_metrics.open_scope();
request_counter.count(request_scope, 42);
request_counter.count(request_scope, 42);
}
```
*/
use core::*;
use core::ScopeCmd::*;
use std::sync::{Arc, RwLock};
// TODO define an 'AsValue' trait + impl for supported number types, then drop 'num' crate
pub use num::ToPrimitive;
/// Wrap the metrics backend to provide an application-friendly interface.
/// When reporting a value, scoped metrics also need to be passed a [Scope].
pub fn scoped_metrics<M>(chain: Chain<M>) -> ScopedMetrics<M>
where
M: 'static + Clone + Send + Sync,
{
ScopedMetrics {
prefix: "".to_string(),
chain: Arc::new(chain),
}
}
/// A monotonic counter metric.
/// Since value is only ever increased by one, no value parameter is provided,
/// preventing programming errors.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeMarker<M> {
metric: M,
}
impl<M> ScopeMarker<M> {
/// Record a single event occurrence.
pub fn mark(&self, scope: &mut ControlScopeFn<M>) {
(scope)(Write(&self.metric, 1));
}
}
/// A counter that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeCounter<M> {
metric: M,
}
impl<M> ScopeCounter<M> {
/// Record a value count.
pub fn count<V>(&self, scope: &mut ControlScopeFn<M>, count: V)
where
V: ToPrimitive,
{
(scope)(Write(&self.metric, count.to_u64().unwrap()));
}
}
/// A gauge that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeGauge<M> {
metric: M,
}
impl<M: Clone> ScopeGauge<M> {
/// Record a value point for this gauge.
pub fn value<V>(&self, scope: &mut ControlScopeFn<M>, value: V)
where
V: ToPrimitive,
{
(scope)(Write(&self.metric, value.to_u64().unwrap()));
}
}
/// A timer that sends values to the metrics backend
/// Timers can record time intervals in multiple ways:
/// - with the time! macro, which wraps an expression or block with start() and stop() calls.
/// - with the time(Fn) method, which wraps a closure with start() and stop() calls.
/// - with start() and stop() methods, wrapping around the operation to time
/// - with the interval_us() method, providing an externally determined microsecond interval
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeTimer<M> {
metric: M,
}
impl<M: Clone> ScopeTimer<M> {
/// Record a microsecond interval for this timer
/// Can be used in place of start()/stop() if an external time interval source is used
pub fn interval_us<V>(&self, scope: &mut ControlScopeFn<M>, interval_us: V) -> V
where
V: ToPrimitive,
{
(scope)(Write(&self.metric, interval_us.to_u64().unwrap()));
interval_us
}
/// Obtain an opaque handle to the current time.
/// The handle is passed back to the stop() method to record a time interval.
/// This is actually a convenience method to the TimeHandle::now()
/// Beware, handles obtained here are not bound to this specific timer instance
/// _for now_ but might be in the future for safety.
/// If you require safe multi-timer handles, get them through TimeType::now()
pub fn start(&self) -> TimeHandle {
TimeHandle::now()
}
/// Record the time elapsed since the start_time handle was obtained.
/// This call can be performed multiple times using the same handle,
/// reporting distinct time intervals each time.
/// Returns the microsecond interval value that was recorded.
pub fn stop(&self, scope: &mut ControlScopeFn<M>, start_time: TimeHandle) -> u64 {
let elapsed_us = start_time.elapsed_us();
self.interval_us(scope, elapsed_us)
}
/// Record the time taken to execute the provided closure
pub fn time<F, R>(&self, scope: &mut ControlScopeFn<M>, operations: F) -> R
where
F: FnOnce() -> R,
{
let start_time = self.start();
let value: R = operations();
self.stop(scope, start_time);
value
}
}
/// Variations of this should also provide control of the metric recording scope.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopedMetrics<M> {
prefix: String,
chain: Arc<Chain<M>>,
}
impl<M> ScopedMetrics<M>
where
M: 'static + Clone + Send + Sync,
{
fn qualified_name<AS>(&self, name: AS) -> String
where
AS: Into<String> + AsRef<str>,
{
// FIXME is there a way to return <S> in both cases?
if self.prefix.is_empty() {
return name.into();
}
let mut buf: String = self.prefix.clone();
buf.push_str(name.as_ref());
buf.to_string()
}
/// Get an event counter of the provided name.
pub fn marker<AS>(&self, name: AS) -> ScopeMarker<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Marker,
&self.qualified_name(name),
1.0,
);
ScopeMarker { metric }
}
/// Get a counter of the provided name.
pub fn counter<AS>(&self, name: AS) -> ScopeCounter<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Counter,
&self.qualified_name(name),
1.0,
);
ScopeCounter { metric }
}
/// Get a timer of the provided name.
pub fn timer<AS>(&self, name: AS) -> ScopeTimer<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Timer,
&self.qualified_name(name),
1.0,
);
ScopeTimer { metric }
}
/// Get a gauge of the provided name.
pub fn gauge<AS>(&self, name: AS) -> ScopeGauge<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Gauge,
&self.qualified_name(name),
1.0,
);
ScopeGauge { metric }
}
/// Prepend the metrics name with a prefix.
/// Does not affect metrics that were already obtained.
pub fn with_prefix<IS>(&self, prefix: IS) -> Self
where
IS: Into<String>,
{
ScopedMetrics {
prefix: prefix.into(),
chain: self.chain.clone(),
}
}
/// Create a new scope to report metric values.
pub fn open_scope(&self) -> ControlScopeFn<M> {
let scope_buffer = RwLock::new(ScopeBuffer {
buffer: Vec::new(),
scope: self.chain.open_scope(false),
});
Arc::new(move |cmd: ScopeCmd<M>| {
let mut buf = scope_buffer.write().expect("Lock metric scope.");
match cmd {
Write(metric, value) => {
buf.buffer.push(ScopeCommand {
metric: (*metric).clone(),
value,
})
}
Flush => buf.flush(),
}
})
}
}
/// Save the metrics for delivery upon scope close.
struct ScopeCommand<M> {
metric: M,
value: Value,
}
struct ScopeBuffer<M: Clone> {
buffer: Vec<ScopeCommand<M>>,
scope: ControlScopeFn<M>,
}
impl<M: Clone> ScopeBuffer<M> {
fn flush(&mut self) {
for cmd in self.buffer.drain(..) {
(self.scope)(Write(&cmd.metric, cmd.value))
}
(self.scope)(Flush)
}
}
impl<M: Clone> Drop for ScopeBuffer<M> {
fn drop(&mut self) {
self.flush()
}
}
#[cfg(feature = "bench")]
mod bench {
use ::*;
use test;
#[bench]
fn time_bench_direct_dispatch_event(b: &mut test::Bencher) {
let sink = aggregate(5, summary, to_stdout());
let metrics = global_metrics(sink);
let marker = metrics.marker("aaa");
b.iter(|| test::black_box(marker.mark()));
}
}


@ -21,9 +21,9 @@ pub enum ScoreType {
Max(u64),
/// Smallest value reported.
Min(u64),
/// Approximative average value (hit count / sum, non-atomic)
/// Average value (hit count / sum, non-atomic)
Mean(f64),
/// Approximative mean rate (hit count / period length in seconds, non-atomic)
/// Mean rate (hit count / period length in seconds, non-atomic)
Rate(f64),
}
@ -40,7 +40,7 @@ pub struct Scoreboard {
/// The metric's name.
name: String,
scores: [AtomicUsize; 5]
scores: [AtomicUsize; 5],
}
impl Scoreboard {
@ -50,7 +50,7 @@ impl Scoreboard {
Scoreboard {
kind,
name,
scores: unsafe { mem::transmute(Scoreboard::blank(now)) }
scores: unsafe { mem::transmute(Scoreboard::blank(now)) },
}
}
@ -65,7 +65,7 @@ impl Scoreboard {
let value = value as usize;
self.scores[1].fetch_add(1, Acquire);
match self.kind {
Marker => {},
Marker => {}
_ => {
// optimization - these fields are unused for Marker stats
self.scores[2].fetch_add(value, Acquire);
@ -77,16 +77,19 @@ impl Scoreboard {
/// Reset scores to zero, return previous values
fn snapshot(&self, now: usize, scores: &mut [usize; 5]) -> bool {
// SNAPSHOT OF ATOMICS IN PROGRESS, HANG TIGHT
// NOTE copy timestamp, count AND sum _before_ testing for data to reduce concurrent discrepancies
scores[0] = self.scores[0].swap(now, Release);
scores[1] = self.scores[1].swap(0, Release);
scores[2] = self.scores[2].swap(0, Release);
scores[3] = self.scores[3].swap(usize::MIN, Release);
scores[4] = self.scores[4].swap(usize::MAX, Release);
// SNAPSHOT COMPLETE, YOU CAN RELAX NOW
// if hit count is zero, then no values were recorded.
scores[1] != 0
if scores[1] == 0 {
return false;
}
scores[3] = self.scores[3].swap(usize::MIN, Release);
scores[4] = self.scores[4].swap(usize::MAX, Release);
true
}
/// Map raw scores (if any) to applicable statistics
@ -101,12 +104,12 @@ impl Scoreboard {
Marker => {
snapshot.push(Count(scores[1] as u64));
snapshot.push(Rate(scores[2] as f64 / duration_seconds))
},
}
Gauge => {
snapshot.push(Max(scores[3] as u64));
snapshot.push(Min(scores[4] as u64));
snapshot.push(Mean(scores[2] as f64 / scores[1] as f64));
},
}
Timer | Counter => {
snapshot.push(Count(scores[1] as u64));
snapshot.push(Sum(scores[2] as u64));
@ -115,15 +118,13 @@ impl Scoreboard {
snapshot.push(Min(scores[4] as u64));
snapshot.push(Mean(scores[2] as f64 / scores[1] as f64));
snapshot.push(Rate(scores[2] as f64 / duration_seconds))
},
}
}
Some((self.kind, self.name.clone(), snapshot))
} else {
None
}
}
}
/// Spinlock until success or clear loss to concurrent update.
@ -131,7 +132,9 @@ impl Scoreboard {
fn swap_if_more(counter: &AtomicUsize, new_value: usize) {
let mut current = counter.load(Acquire);
while current < new_value {
if counter.compare_and_swap(current, new_value, Release) == new_value { break }
if counter.compare_and_swap(current, new_value, Release) == new_value {
break;
}
current = counter.load(Acquire);
}
}
@ -141,12 +144,13 @@ fn swap_if_more(counter: &AtomicUsize, new_value: usize) {
fn swap_if_less(counter: &AtomicUsize, new_value: usize) {
let mut current = counter.load(Acquire);
while current > new_value {
if counter.compare_and_swap(current, new_value, Release) == new_value { break }
if counter.compare_and_swap(current, new_value, Release) == new_value {
break;
}
current = counter.load(Acquire);
}
}
#[cfg(feature = "bench")]
mod bench {
@ -159,7 +163,6 @@ mod bench {
b.iter(|| test::black_box(metric.update(1)));
}
#[bench]
fn bench_score_update_count(b: &mut test::Bencher) {
let metric = Scoreboard::new(Counter, "event_a".to_string());
@ -174,5 +177,4 @@ mod bench {
b.iter(|| test::black_box(metric.snapshot(now, &mut scores)));
}
}
}


@ -2,15 +2,17 @@
//! Collect statistics about various metrics modules at runtime.
//! Stats can be obtained for publication from `selfstats::SOURCE`.
pub use global_metrics::*;
pub use app_metrics::*;
pub use aggregate::*;
pub use publish::*;
pub use scores::*;
pub use core::*;
pub use namespace::*;
use output::to_void;
fn build_aggregator() -> Chain<Aggregate> {
aggregate(32, summary, to_void())
aggregate(summary, to_void())
}
/// Capture a snapshot of Dipstick's internal metrics since the last snapshot.
@ -18,10 +20,9 @@ pub fn snapshot() -> Vec<ScoreSnapshot> {
vec![]
}
fn build_self_metrics() -> GlobalMetrics<Aggregate> {
fn build_self_metrics() -> AppMetrics<Aggregate> {
// TODO send to_map() when snapshot() is called
// let agg = aggregate(summary, to_void());
global_metrics(AGGREGATOR.clone()).with_prefix("dipstick.")
app_metrics(AGGREGATOR.clone()).with_prefix("dipstick")
}
lazy_static! {
@ -29,6 +30,6 @@ lazy_static! {
static ref AGGREGATOR: Chain<Aggregate> = build_aggregator();
/// Application metrics are collected to the aggregator
pub static ref SELF_METRICS: GlobalMetrics<Aggregate> = build_self_metrics();
pub static ref SELF_METRICS: AppMetrics<Aggregate> = build_self_metrics();
}


@ -1,5 +1,5 @@
use std::net::TcpStream;
use std::net::{ToSocketAddrs, SocketAddr};
use std::net::{SocketAddr, ToSocketAddrs};
use std::io;
use std::time::{Duration, Instant};
use std::fmt::{Debug, Formatter};
@ -7,7 +7,7 @@ use std::fmt;
use std::io::Write;
const MIN_RECONNECT_DELAY_MS: u64 = 50;
const MAX_RECONNECT_DELAY_MS: u64 = 10000;
const MAX_RECONNECT_DELAY_MS: u64 = 10_000;
/// A socket that retries
pub struct RetrySocket {
@ -25,7 +25,6 @@ impl Debug for RetrySocket {
}
impl RetrySocket {
/// Create a new socket that will retry
pub fn new<A: ToSocketAddrs>(addresses: A) -> io::Result<Self> {
// FIXME instead of collecting addresses early, store ToSocketAddrs as trait object
@ -35,7 +34,7 @@ impl RetrySocket {
retries: 0,
next_try: Instant::now() - Duration::from_millis(MIN_RECONNECT_DELAY_MS),
addresses,
socket: None
socket: None,
};
// try early connect
@ -64,35 +63,37 @@ impl RetrySocket {
self.socket = None;
self.retries += 1;
let delay = MAX_RECONNECT_DELAY_MS.min(MIN_RECONNECT_DELAY_MS << self.retries);
warn!("Could not connect to {:?} after {} trie(s). Backing off reconnection by {}ms. {}",
self.addresses, self.retries, delay, e);
warn!(
"Could not connect to {:?} after {} trie(s). Backing off reconnection by {}ms. {}",
self.addresses, self.retries, delay, e
);
self.next_try = Instant::now() + Duration::from_millis(delay);
e
}
fn with_socket<F, T>(&mut self, operation: F) -> io::Result<T>
where
F: FnOnce(&mut TcpStream) -> io::Result<T>,
{
if let Err(e) = self.try_connect() {
return Err(self.backoff(e));
}
let opres = if let Some(ref mut socket) = self.socket {
operation(socket)
} else {
// still not connected and within the backoff window; stay quiescent
return Err(io::Error::from(io::ErrorKind::NotConnected));
};
match opres {
Ok(r) => Ok(r),
Err(e) => Err(self.backoff(e)),
}
}
}
impl Write for RetrySocket {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.with_socket(|sock| sock.write(buf))
}
@ -100,5 +101,4 @@ impl Write for RetrySocket {
fn flush(&mut self) -> io::Result<()> {
self.with_socket(|sock| sock.flush())
}
}
}
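A usage sketch, not taken from the repository: because `RetrySocket` implements `Write`, callers simply write and flush, while reconnection and the exponential backoff above (the 50 ms base delay doubles with each retry, capped at 10 s) happen inside `with_socket`. The address and payload below are made up.

```rust
use std::io::Write;

fn send_report() -> std::io::Result<()> {
    // Hypothetical endpoint; RetrySocket resolves it and connects eagerly if it can.
    let mut socket = RetrySocket::new("localhost:2003")?;
    // A failed write triggers backoff; later writes retry the connection
    // once the backoff deadline (next_try) has passed.
    socket.write_all(b"some.metric 42 1515371552\n")?;
    socket.flush()
}
```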
View File
@ -21,7 +21,7 @@ where
Ok(Chain::new(
move |kind, name, rate| {
let mut prefix = String::with_capacity(32);
prefix.push_str(name);
prefix.push(':');
let mut suffix = String::with_capacity(16);
@ -49,13 +49,13 @@ where
scale,
}
},
move |buffered| {
let buf = RwLock::new(ScopeBuffer {
buffer: String::with_capacity(MAX_UDP_PAYLOAD),
socket: socket.clone(),
buffered,
});
ControlScopeFn::new(move |cmd| {
if let Ok(mut buf) = buf.write() {
match cmd {
ScopeCmd::Write(metric, value) => buf.write(metric, value),
@ -68,9 +68,9 @@ where
}
lazy_static! {
static ref STATSD_METRICS: AppMetrics<Aggregate> = SELF_METRICS.with_prefix("statsd.");
static ref SEND_ERR: AppMarker<Aggregate> = STATSD_METRICS.marker("send_failed");
static ref SENT_BYTES: AppCounter<Aggregate> = STATSD_METRICS.counter("sent_bytes");
}
/// Key of a statsd metric.
@ -90,7 +90,7 @@ const MAX_UDP_PAYLOAD: usize = 576;
struct ScopeBuffer {
buffer: String,
socket: Arc<UdpSocket>,
buffered: bool,
}
/// Any remaining buffered data is flushed on Drop.
@ -101,7 +101,7 @@ impl Drop for ScopeBuffer {
}
impl ScopeBuffer {
fn write (&mut self, metric: &Statsd, value: Value) {
fn write(&mut self, metric: &Statsd, value: Value) {
let scaled_value = value / metric.scale;
let value_str = scaled_value.to_string();
let entry_len = metric.prefix.len() + value_str.len() + metric.suffix.len();
@ -124,7 +124,7 @@ impl ScopeBuffer {
self.buffer.push_str(&value_str);
self.buffer.push_str(&metric.suffix);
}
if self.buffered {
self.flush();
}
}
@ -158,7 +158,7 @@ mod bench {
let timer = sd.define_metric(Kind::Timer, "timer", 1000000.0);
let scope = sd.open_scope(false);
b.iter(|| test::black_box(scope.write(&timer, 2000)));
}
}
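For reference, a sketch of the wire format that the `prefix`/`suffix` strings above are combined into. The helper below is illustrative only and not part of the crate; it assumes the conventional statsd line layout `name:value|type[|@rate]` and the integer scaling performed in `ScopeBuffer::write`.

```rust
// Builds one statsd entry the way ScopeBuffer::write combines the key's
// prefix ("name:") and suffix ("|type" plus optional "|@rate") with a scaled value.
fn statsd_entry(prefix: &str, suffix: &str, value: u64, scale: u64) -> String {
    let scaled = value / scale; // same integer division as `value / metric.scale`
    format!("{}{}{}", prefix, scaled, suffix)
}

// e.g. statsd_entry("app.request_time:", "|ms|@0.1", 2_000_000, 1_000)
// yields "app.request_time:2000|ms|@0.1"
```

Entries accumulate in `ScopeBuffer` up to the 576-byte `MAX_UDP_PAYLOAD` budget before a datagram goes out.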
56
src/with_buffer.rs Normal file
View File
@ -0,0 +1,56 @@
use core::*;
use core::ScopeCmd::*;
use std::sync::{Arc, RwLock};
/// Extension trait that adds per-scope write buffering to a metrics chain.
pub trait WithBuffer {
/// Return a copy of this chain whose scopes buffer writes until flushed.
fn with_buffered_scopes(&self) -> Self;
}
impl<M: Send + Sync + Clone + 'static> WithBuffer for Chain<M> {
/// Wrap this chain's scopes so that writes are buffered and only forwarded downstream on flush (or when the scope is dropped).
fn with_buffered_scopes(&self) -> Self {
self.mod_scope(|next| {
let scope_buffer = RwLock::new(ScopeBuffer {
buffer: Vec::new(),
scope: self.open_scope(false),
});
Arc::new(move |cmd: ScopeCmd<M>| {
let mut buf = scope_buffer.write().expect("Lock metric scope.");
match cmd {
Write(metric, value) => buf.buffer.push(ScopeCommand {
metric: (*metric).clone(),
value,
}),
Flush => buf.flush(),
}
})
})
}
}
/// A buffered metric write, saved for delivery when its scope is flushed or closed.
struct ScopeCommand<M> {
metric: M,
value: Value,
}
struct ScopeBuffer<M: Clone> {
buffer: Vec<ScopeCommand<M>>,
scope: ControlScopeFn<M>,
}
impl<M: Clone> ScopeBuffer<M> {
fn flush(&mut self) {
for cmd in self.buffer.drain(..) {
(self.scope)(Write(&cmd.metric, cmd.value))
}
(self.scope)(Flush)
}
}
impl<M: Clone> Drop for ScopeBuffer<M> {
fn drop(&mut self) {
self.flush()
}
}
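A usage sketch of the new extension trait. The import paths and the choice of `to_void()` as the wrapped chain are assumptions (its use in `build_aggregator` earlier in this diff suggests it yields a `Chain`); any output chain would do.

```rust
use output::to_void;
use with_buffer::WithBuffer;

fn buffered_example() {
    // Wrap an output chain so that each of its scopes queues writes.
    let chain = to_void().with_buffered_scopes();
    // Writes issued through this scope accumulate in the ScopeBuffer and are
    // only forwarded downstream when the scope is flushed or dropped.
    let _scope = chain.open_scope(false);
}
```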
1
tests/skeptic.rs Normal file
View File
@ -0,0 +1 @@
include!(concat!(env!("OUT_DIR"), "/skeptic-tests.rs"));
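The included file is generated by the `skeptic` crate, which extracts fenced Rust blocks from the crate's markdown documents and compiles them as tests. A typical build script producing it might look like the sketch below; the exact list of documents is an assumption.

```rust
// build.rs (sketch): writes OUT_DIR/skeptic-tests.rs at build time.
extern crate skeptic;

fn main() {
    // Each listed markdown file has its Rust code blocks turned into tests,
    // which the include! above then pulls into the test crate.
    skeptic::generate_doc_tests(&["README.md"]);
}
```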