Compare commits

...

242 Commits

Author SHA1 Message Date
Francis Lalonde 87af3724f1 Justfile replaces Makefile. Clippy fixes: socket retry exponential backoff calculation 2023-02-15 16:45:21 -05:00
Francis Lalonde 37c7d806ea remove some 'as' casts and unused returned timeval 2023-01-26 15:25:12 -05:00
Francis Lalonde 38c1315f31 v0.9.1 2023-01-16 11:00:13 -05:00
Francis Lalonde f24a285944
Merge pull request #86 from RafalGoslawski/expose_metricid
Exposed attributes::MetricId to make extending new outputs possible
2022-12-16 08:27:07 -05:00
RafalGoslawski 9333c73783
Expose Attributes and WithAttributes trait as well 2022-12-16 13:19:29 +00:00
RafalGoslawski ed90c33f4a
Tagalong: Fixed warnings in examples 2022-12-15 17:20:46 +00:00
RafalGoslawski a29991a938
Exposed attributes::MetricId 2022-12-15 17:18:49 +00:00
Francis Lalonde e7a00d9cf3 Clippy fixes 2022-07-12 14:11:39 -04:00
Francis Lalonde e3e82f86a9
Merge pull request #81 from fralalonde/no-custom-result
v0.9.0
2020-05-25 09:13:56 -04:00
Francis Lalonde a5939a794e Clippy + Format 2020-05-25 09:12:24 -04:00
Francis Lalonde 46241ad439 v0.9.0 2020-05-22 23:53:18 -04:00
Francis Lalonde a63ddbeab7 0.8.0 SUCH REJOICING 2020-05-15 00:28:40 -04:00
Francis Lalonde 27fe143e6c Fix --no-default-features build, bump to v0.7.13 2020-04-23 15:37:41 -04:00
Francis Lalonde ecd1a86da0 Bump version to 0.7.12 2020-04-17 07:55:36 -04:00
Francis Lalonde 8025b9da5c Fix for BorrowMut Panic #78 2020-04-17 07:52:46 -04:00
Francis Lalonde 9774ea7343 Prepare v0.7.11, bump parking_lot to v0.9 2019-08-30 06:40:49 -04:00
Francis Lalonde ac9903a413
Merge pull request #77 from vorner/on-flush-send-sync
Make the OnFlushCancel Send and Sync
2019-08-30 06:28:49 -04:00
Michal 'vorner' Vaner 159c422afa
Make the OnFlushCancel Send and Sync
It is useless otherwise in most multithreaded applications.
2019-08-30 11:59:52 +02:00
Francis Lalonde fc4d7e8058
Merge pull request #76 from vorner/cancel-guard
A cancel guard
2019-07-12 15:39:15 -04:00
Michal 'vorner' Vaner 9fdbb4839d
fixup! A cancel guard 2019-07-12 21:32:16 +02:00
Michal 'vorner' Vaner 35572454d4
A cancel guard
Allows cancelling something by dropping a guard, in RAII manner, instead
of manually calling .cancel();

Closes #74.
2019-07-12 20:04:13 +02:00
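The RAII idiom this PR describes can be sketched in plain Rust. The `CancelGuard` and `spawn_task` names below are illustrative stand-ins, not dipstick's actual types:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Hypothetical guard: cancels the associated task when dropped,
// instead of requiring an explicit .cancel() call.
struct CancelGuard {
    cancelled: Arc<AtomicBool>,
}

impl Drop for CancelGuard {
    fn drop(&mut self) {
        // RAII: cancellation happens automatically when the guard
        // goes out of scope (or is dropped explicitly).
        self.cancelled.store(true, Ordering::SeqCst);
    }
}

// Returns the shared cancellation flag plus the guard that flips it.
fn spawn_task() -> (Arc<AtomicBool>, CancelGuard) {
    let flag = Arc::new(AtomicBool::new(false));
    (flag.clone(), CancelGuard { cancelled: flag })
}

fn main() {
    let (flag, guard) = spawn_task();
    assert!(!flag.load(Ordering::SeqCst));
    drop(guard); // leaving the enclosing scope has the same effect
    assert!(flag.load(Ordering::SeqCst));
}
```

The advantage over a manual `.cancel()` is that cancellation cannot be forgotten on early returns or panics.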
Francis Lalonde 1f656fc62e
Merge pull request #75 from vorner/warn-unused-observe-when
Use must_use on the ObserveWhen
2019-07-12 09:58:11 -04:00
Michal 'vorner' Vaner ea96d5c6a0
Use must_use on the ObserveWhen
Otherwise, nothing would get observed, so better warn on it.
2019-07-11 18:39:51 +02:00
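The `must_use` mechanism works like this; the `ObserveWhen` below is a simplified stand-in for dipstick's real type:

```rust
// If a caller discards the returned value, the compiler emits a
// warning with this message, hinting that nothing will be observed.
#[must_use = "the observation does nothing until this value is used"]
struct ObserveWhen {
    armed: bool,
}

fn observe() -> ObserveWhen {
    ObserveWhen { armed: true }
}

fn main() {
    // Binding the value counts as using it; a bare `observe();`
    // statement would trigger the unused_must_use warning.
    let o = observe();
    assert!(o.armed);
}
```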
Francis Lalonde 787874fee4 Use clippy stable 2019-07-11 11:55:28 -04:00
Francis Lalonde 945fe7949a Update changelist 2019-07-11 11:50:48 -04:00
Francis Lalonde 314764987b Add dyn keyword to dyn traits as per clippy's advice 2019-07-11 11:48:14 -04:00
Francis Lalonde 5b466b6dcc
Merge pull request #73 from vorner/on-flush-cancel
Make OnFlushCancel public
2019-07-11 11:24:23 -04:00
Michal 'vorner' Vaner 2ebf6719b9
Make OnFlushCancel public
Otherwise it can't be directly stored into a struct, because the type
can't be named.
2019-07-11 17:19:59 +02:00
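The "type can't be named" problem is a common one: a struct field must spell out its type, so any type that appears in a public API has to be publicly nameable. A minimal sketch (names are illustrative, not dipstick's actual definitions):

```rust
mod metrics {
    // If this type were private, callers could receive values of it
    // from `on_flush()` but could never write it as a field type.
    // Making it public lets it be named and stored.
    pub struct OnFlushCancel(pub u64);

    pub fn on_flush() -> OnFlushCancel {
        OnFlushCancel(1)
    }
}

// Storing the cancellation handle in a struct requires naming its type.
struct App {
    _flush_handle: metrics::OnFlushCancel,
}

fn main() {
    let app = App {
        _flush_handle: metrics::on_flush(),
    };
    assert_eq!(app._flush_handle.0, 1);
}
```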
Francis Lalonde 3147ca034f Merge branch 'master' of github.com:fralalonde/dipstick 2019-06-22 08:53:32 -04:00
Francis Lalonde e800701072
Merge pull request #71 from grippy/grippy/prometheus
Prometheus Fixes
2019-06-22 07:51:05 -04:00
grippy 645cf15427 forgot cargo fmt 2019-06-21 20:06:12 -07:00
grippy 33a3fba4af Fixes issue with missing linefeed and uses post for prom/pushgateway call. Also adds an example on using Proxy, AtomicBucket, and MultiOutput 2019-06-21 20:05:24 -07:00
Francis Lalonde 622c3ba667 Bumped to 0.7.8, changes, fix EOL prometheus 2019-06-21 20:55:11 -04:00
Francis Lalonde 18457a792d (cargo-release) start next development iteration 0.7.8-alpha.0 2019-06-21 20:36:57 -04:00
Francis Lalonde aff4a47149 Bumped to 0.7.7 2019-06-21 20:36:26 -04:00
Francis Lalonde 84a5e4c4d3 Fixed prometheus buffer issue 2019-06-21 19:42:57 -04:00
Francis Lalonde 6d6e26d8b2 Remove spurious files from root 2019-05-23 11:35:52 -04:00
Francis Lalonde b229f3ce9a Make ID generator static (not const), updates CHANGES.md 2019-05-21 22:35:15 -04:00
Francis Lalonde c281012b5d Quick move to Rust 2018 using cargo fix 2019-05-21 22:23:31 -04:00
Francis Lalonde acdf2ddd36 Prepare 0.7.6 release 2019-05-21 22:01:24 -04:00
Francis Lalonde c523dc966b Fix nightly's 'acceptable regression' 2019-05-21 21:45:30 -04:00
Francis Lalonde ab818afe53
Merge pull request #68 from fralalonde/listener_ids
Give each flush listener its own id
2019-05-19 07:08:15 -04:00
Francis Lalonde 4182504f76 Give each flush listener its own id 2019-05-17 15:48:51 -04:00
Francis Lalonde 58ef0797d2
Merge pull request #66 from fralalonde/unique-metrics
Fix potential observer leak with unique metric IDs
2019-05-14 20:40:14 -04:00
Francis Lalonde c88cc8f7f8 Observer on_flush PR fixes 2019-05-14 20:07:49 -04:00
Francis Lalonde d2c9d19397
Merge pull request #67 from vorner/observe-foreign
Allow delegating the Observe trait to other type
2019-05-10 18:44:20 -04:00
Michal 'vorner' Vaner 8974fc5fa7
Allow delegating the Observe trait to other type
When the result was fixed to ObserveWhen<Self, _>, it was impossible to
simply delegate the trait into an wrapped type. This allows the
possibility to return ObserveWhen<Whatever, _> (which can be Self in the
case of the default implementation).

Fixes #65.
2019-05-10 22:11:13 +02:00
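The delegation problem described above can be illustrated with a simplified trait. When the return type is hard-wired to `Self`, a newtype wrapper cannot forward the call, because the inner value produces a result typed over the inner type. Parameterizing the return type (here via an associated type; the names are stand-ins for dipstick's real API) restores delegation:

```rust
struct ObserveWhen<T> {
    inner: T,
}

trait Observe {
    type Inner;
    // Returning ObserveWhen<Self::Inner> instead of ObserveWhen<Self>
    // lets implementors choose what the result is typed over.
    fn observe(&self) -> ObserveWhen<Self::Inner>;
}

struct Metrics;
impl Observe for Metrics {
    type Inner = u32;
    fn observe(&self) -> ObserveWhen<u32> {
        ObserveWhen { inner: 42 }
    }
}

// A wrapper can now delegate by simply forwarding to the wrapped value.
struct Wrapper(Metrics);
impl Observe for Wrapper {
    type Inner = u32;
    fn observe(&self) -> ObserveWhen<u32> {
        self.0.observe()
    }
}

fn main() {
    assert_eq!(Wrapper(Metrics).observe().inner, 42);
}
```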
Francis Lalonde d50e38308d Fix potential observer leak with unique metric IDs 2019-05-10 15:16:21 -04:00
Francis Lalonde ac19241a58
Update README.md 2019-05-09 16:33:43 -04:00
Francis Lalonde 8c6ba4ea45 Bump version to 0.7.4 before release 2019-05-09 15:51:41 -04:00
Francis Lalonde 3fac0c2187
Merge pull request #62 from mixalturek/public-ObserveWhen
Reexport `ObserveWhen` to make it public.
2019-05-09 14:27:10 -04:00
Michal Turek 55fe491df5 Reexport `ObserveWhen` to make it public.
- `ObserveWhen` is a return value of public `Observe.observe()` but it is unfortunately private.
2019-05-07 15:42:31 +02:00
Francis Lalonde 577f3c72b7 Updated changelist and added build instructions 2019-04-16 17:44:15 -04:00
Francis Lalonde dbd659a91d Apply formatting 2019-04-16 17:30:44 -04:00
Francis Lalonde f13948e43f Sanitize with clippy 2019-04-16 17:29:55 -04:00
Francis Lalonde 7bb718c1b4 Expanded handbook - stats, metrics, schduler 2019-04-11 00:41:44 -04:00
Francis Lalonde 88a807435b (cargo-release) start next development iteration 0.7.3-alpha.0 2019-04-10 21:04:40 -04:00
Francis Lalonde b2c674814e v0.7.2 2019-04-10 21:04:12 -04:00
Francis Lalonde 2e067691d2
Update README.md 2019-04-10 11:57:24 -04:00
Francis Lalonde 11cc138065 Observe any metric type, pass now to Oberver as param 2019-04-10 11:00:46 -04:00
Francis Lalonde 018662faf5 0.7.2 changelist 2019-04-10 09:05:14 -04:00
Francis Lalonde ff5c291d28
Merge pull request #50 from fralalonde/feature/observers
Feature/observers
2019-04-09 16:10:10 -04:00
Francis Lalonde 7028ae9ffc Add back to_new_file changes 2019-04-09 08:53:20 -04:00
Francis Lalonde 6a205337e5 Document observers in handbook 2019-04-09 08:43:14 -04:00
Francis Lalonde bb9b174d17 Merge branch 'master' into feature/observers 2019-04-09 07:56:42 -04:00
Francis Lalonde 0070f26d6d Applied format 2019-04-09 07:55:15 -04:00
Francis Lalonde e5c74de9ed Fix shared flush listeners & schedule handles 2019-04-09 07:46:25 -04:00
Francis Lalonde 23dc724a4b Ergonomize listeners 2019-04-08 22:49:39 -04:00
Francis Lalonde 5c5ec504f5 New single thread scheduler 2019-04-07 23:42:34 -04:00
Francis Lalonde 7593eea5a0 Merge branch 'feature/observers' of github.com:fralalonde/dipstick into feature/observers 2019-04-04 22:46:51 -04:00
Francis Lalonde 5f9aa17b71
Merge pull request #59 from vorner/to-file-fixes
output::Stream::to_file should append
2019-03-29 15:34:57 -04:00
Michal 'vorner' Vaner 8c82012448
output::Stream::to_file should append
The `to_file` should append. Add `to_new_file` that overwrites (or
errors, depending on parameter).

Piggy-back: make the methods take AsRef<Path>, to allow taking `&str`
and `String` and such too ‒ just like file manipulation routines do in
std (that is fully backwards compatible change).
2019-03-29 20:19:51 +01:00
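The two behaviors this PR distinguishes map directly onto `std::fs::OpenOptions`. A sketch of the distinction, with hypothetical helper names (the real dipstick methods build a `Stream`, not a bare `File`):

```rust
use std::fs::{File, OpenOptions};
use std::io::Write;
use std::path::Path;

// `to_file`-style behavior: append, creating the file if needed.
// AsRef<Path> lets callers pass &str, String, or Path, like std does.
fn open_appending<P: AsRef<Path>>(path: P) -> std::io::Result<File> {
    OpenOptions::new().create(true).append(true).open(path)
}

// `to_new_file`-style behavior: overwrite, or error if the file
// already exists, depending on the `clobber` parameter.
fn open_new<P: AsRef<Path>>(path: P, clobber: bool) -> std::io::Result<File> {
    let mut opts = OpenOptions::new();
    opts.write(true);
    if clobber {
        opts.create(true).truncate(true);
    } else {
        opts.create_new(true); // fails with AlreadyExists
    }
    opts.open(path)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("dipstick_append_demo.log");
    let _ = std::fs::remove_file(&path);
    writeln!(open_appending(&path)?, "one")?;
    writeln!(open_appending(&path)?, "two")?; // appended, not overwritten
    assert_eq!(std::fs::read_to_string(&path)?, "one\ntwo\n");
    assert!(open_new(&path, false).is_err()); // refuses to clobber
    std::fs::remove_file(&path)
}
```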
Francis Lalonde 2e7f6f01dc
Merge pull request #55 from fralalonde/rust-format
Applied rustfmt
2019-03-17 21:44:17 -04:00
Francis Lalonde 028f6d996d Applied rustfmt 2019-03-17 21:42:00 -04:00
Francis Lalonde 8c764d0d71 WIP - Single thread scheduler 2019-03-17 09:34:42 -04:00
Francis Lalonde c0a1e4cb7a
Merge pull request #53 from vorner/stream-to-file
Stream::to_file: Open for writing
2019-03-12 17:08:45 -04:00
Michal 'vorner' Vaner 147d99745b Stream::to_file: Open for writing
And create if necessary.
2019-03-12 21:56:58 +01:00
Francis Lalonde 22a7e47c2c Abandon Observer struct 2019-03-11 14:28:55 -04:00
Francis Lalonde 90626748ef Abandon Observer struct 2019-03-11 14:22:55 -04:00
Francis Lalonde 8aeb216b5b
Merge pull request #51 from vorner/unused-generic
Unused generic
2019-03-10 13:40:33 -04:00
Michal 'vorner' Vaner 8199157909 cleanup: () is the default return type
No need to write it explicitly
2019-03-10 18:38:02 +01:00
Michal 'vorner' Vaner b009542183 Get rid of unused generic param
It makes it cumbersome to call and serves no purpose.
2019-03-10 18:37:29 +01:00
Francis Lalonde bb2780e0ce Added OnFlush & Gauge Observers 2019-03-08 16:59:51 -05:00
Michal Turek f279fa6790 #42 pub(crate) -> pub as requested. 2019-03-07 15:38:42 -05:00
Michal Turek fb8fe633cb #42 Improve API, remove Arc that wraps the closure, rename method. 2019-03-07 15:37:48 -05:00
Michal Turek 4a6227122d #42 Improve API, remove Arc that wraps the closure. 2019-03-07 15:37:48 -05:00
Michal Turek 5e35e680b7 #42 Fix memory and performance leak.
- If multiple callbacks are registered under the same conflicting key, only the last one will survive.
- It would be especially bad if the callback was re-registered periodically. The memory would increase and even all historical callbacks would be executed during each iteration.
2019-03-07 15:31:00 -05:00
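The leak pattern described here, and the later "give each flush listener its own id" fix, can be sketched as follows. Keying callbacks by a shared name means re-registration either silently replaces earlier callbacks or piles them up forever; handing out a unique id per registration lets each listener be stored and cancelled independently (types and names below are illustrative):

```rust
use std::collections::HashMap;

// Hypothetical listener registry keyed by a unique per-registration id.
struct Listeners {
    next_id: u64,
    callbacks: HashMap<u64, Box<dyn Fn()>>,
}

impl Listeners {
    fn new() -> Self {
        Listeners { next_id: 0, callbacks: HashMap::new() }
    }

    // Each registration gets a fresh id, so two listeners never
    // collide even if registered from the same call site.
    fn register(&mut self, f: Box<dyn Fn()>) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.callbacks.insert(id, f);
        id
    }

    // Cancelling by id removes exactly one listener; nothing leaks.
    fn cancel(&mut self, id: u64) {
        self.callbacks.remove(&id);
    }

    fn flush(&self) {
        for f in self.callbacks.values() {
            f();
        }
    }

    fn count(&self) -> usize {
        self.callbacks.len()
    }
}

fn main() {
    let mut l = Listeners::new();
    let a = l.register(Box::new(|| ()));
    let _b = l.register(Box::new(|| ()));
    assert_eq!(l.count(), 2);
    l.cancel(a);
    assert_eq!(l.count(), 1);
    l.flush();
}
```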
Michal Turek 210b37f5fb #42 Minor improvements.
- Use `MetricValue` instead of `isize`.
- Cancel handle in example.
- Configurable thread name.
2019-03-07 15:23:23 -05:00
Michal Turek b2d6b94469 #42 Observer of gauge values. 2019-03-07 15:23:23 -05:00
Francis Lalonde 2ffafff929
Merge pull request #49 from fralalonde/crossbeam-etc
Crossbeam etc
2019-02-19 16:05:37 -05:00
Francis Lalonde a1a6d0593d Use parking_lot for all mods 2019-02-19 15:36:35 -05:00
Francis Lalonde 352371049b Use parking_lot for Proxy and AtomicBucket 2019-02-19 14:56:13 -05:00
Francis Lalonde 9700ac6489
Update HANDBOOK.md 2019-02-19 09:55:50 -05:00
Francis Lalonde 3785981f82 Remove unused protobuf schema 2019-02-06 09:10:27 -05:00
Francis Lalonde 09e54bbd37 Non-option sampling & buf, per-metric-sample example, default features fix 2019-01-26 11:53:38 -05:00
Francis Lalonde 9c50fe0ffc Fix unused macro warnings 2019-01-26 08:10:53 -05:00
Francis Lalonde 54acee9f62
Merge pull request #38 from fralalonde/fix/api-cleanup-named-etc
Rename + deprecate a bunch of methods for API consistency
2019-01-26 08:04:21 -05:00
Francis Lalonde 9eb41aebeb Final method rename PR changes - named() / add_name() 2019-01-25 15:45:46 -05:00
Francis Lalonde bae0fa6b5a Merge branch 'master' into fix/api-cleanup-named-etc 2019-01-23 12:58:13 -05:00
Francis Lalonde 44ebcdc2b5 PR #38 changes #1 2019-01-23 12:53:46 -05:00
Francis Lalonde fb06a53dae
Merge pull request #46 from fralalonde/fix/syntax-highlighting-skeptic
Get rid of skeptic templates for better code highlighting
2019-01-23 12:15:16 -05:00
Francis Lalonde f15841aa73
Merge pull request #40 from mixalturek/doc-fixes
Fix doc comments.
2019-01-23 12:11:42 -05:00
Francis Lalonde 770b7e498d Get rid of skeptic templates for better code highlighting 2019-01-23 09:07:32 -05:00
Francis Lalonde bc334ded3f
Merge pull request #37 from fralalonde/fix/send-sync-error
Make boxed errors Send + Sync
2019-01-22 08:42:57 -05:00
Michal Turek 0c8ff6a316 Fix doc comments. 2019-01-22 13:15:56 +01:00
Francis Lalonde 2122729efb Rename + deprecate a bunch of methods for API consistency 2019-01-21 16:53:01 -05:00
Francis Lalonde b13bba7b5e Make boxed errors Send + Sync 2019-01-21 14:27:46 -05:00
Francis Lalonde 528f30c1c5 (cargo-release) start next development iteration 0.7.2-alpha.0 2019-01-13 12:50:15 -05:00
Francis Lalonde c0e789abcf (cargo-release) version 0.7.1 2019-01-13 12:49:57 -05:00
Francis Lalonde 914ee1e7e6 Fix: Export Missing Level 2019-01-13 12:49:24 -05:00
Francis Lalonde 5a4353e9db Bit of doc 2019-01-12 18:21:47 -05:00
Francis Lalonde 75d5487d78 (cargo-release) start next development iteration 0.7.1-alpha.0 2019-01-11 19:47:07 -05:00
Francis Lalonde 3dbe61e7fa
Merge pull request #34 from fralalonde/gagaga 2019-01-08 10:25:48 -05:00
Francis Lalonde 7ba1a7687e
Merge pull request #33 from mixalturek/name-threads
Name all threads spawned by dipstick.
2019-01-08 10:20:05 -05:00
Michal Turek e2ed2e6712 Name all threads spawned by dipstick.
- Well behaved code should always name all spawned threads. It's to simplify debugging (`pstree -t PID`), the names are written to logs etc.
- `.unwrap()` may cause panic which is bad, but the same one was already in the original `thread::spawn()`. Fix would require API change which is out of scope of this update.
- Examples weren't touched to make them short and simple.
2019-01-07 07:57:51 +01:00
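Naming spawned threads is done through `std::thread::Builder` rather than the bare `thread::spawn`. A minimal sketch (the thread name is hypothetical, and the `expect` carries the same panic caveat the commit message acknowledges):

```rust
use std::thread;

// Spawn a named thread so it shows up usefully in debuggers,
// `pstree -t PID`, and log output.
fn spawn_named(work: impl FnOnce() + Send + 'static) -> thread::JoinHandle<()> {
    thread::Builder::new()
        .name("dipstick-flush".into()) // illustrative name
        .spawn(work)
        .expect("failed to spawn thread")
}

fn main() {
    let handle = spawn_named(|| {
        // The name is visible from inside the thread too.
        assert_eq!(thread::current().name(), Some("dipstick-flush"));
    });
    handle.join().unwrap();
}
```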
Francis Lalonde 963dfc0fd2 Level Log Target 2018-12-31 17:03:34 -05:00
Francis Lalonde 456f096981 Tested Level counters OK 2018-12-20 16:19:03 -05:00
Francis Lalonde b449578e86 Merge branch 'cleanup' into gagaga 2018-12-20 07:35:51 -05:00
Francis Lalonde 547458f240 Some cleanup 2018-12-20 07:35:31 -05:00
Francis Lalonde 5a39b111c8 gskdjghdk 2018-12-20 07:26:19 +00:00
Francis Lalonde 0158c5c698 Using examples for handbook 2018-12-19 01:29:19 +00:00
Francis Lalonde 5d67e096ee Switch to isize for MetricValue 2018-12-18 16:06:03 -05:00
Francis Lalonde f55640cdd6
Merge pull request #31 from fralalonde/use-prometheus-crate
Use prometheus crate
2018-12-18 19:24:01 +00:00
Francis Lalonde a4fb022da4 Merge branch 'master' of github.com:fralalonde/dipstick 2018-12-18 14:18:57 -05:00
Francis Lalonde 1eb2c01e01 Drop custom errors 2018-12-18 14:17:12 -05:00
Francis Lalonde b5c7d923b3 Set theme jekyll-theme-tactile 2018-12-17 08:07:26 -05:00
Francis Lalonde ce3273bafa Use prometheus-rust crate rather than reinventing it 2018-11-13 09:27:06 -05:00
Francis Lalonde 0b66ffdbfd Merge pull request #29 from fralalonde/superdoc
Superdoc
2018-11-12 08:55:50 -05:00
Francis Lalonde b904fb795a superdocc 2018-11-12 08:41:00 -05:00
Francis Lalonde ea812eb9a0 superdoc 2018-10-26 01:20:47 +00:00
Francis Lalonde 3360ba7b9e Merge pull request #26 from fralalonde/labels
Labels
2018-10-18 14:25:00 +00:00
Francis Lalonde 851463b193 Skeptic fix 2018-10-18 10:22:38 -04:00
Francis Lalonde 0b61c01550 template from vec 2018-10-18 09:31:53 -04:00
Francis Lalonde 40bef5115c Formatting trait 2018-10-18 08:54:41 -04:00
Francis Lalonde 5e9aebbb22 Save scopes 2018-10-17 16:43:08 -04:00
Francis Lalonde ad4168e977 Labels wrapped in Option 2018-10-12 13:26:01 -04:00
Francis Lalonde 7d354ba999 Labels 2018-10-11 17:01:12 -04:00
Francis Lalonde 253a28ed2a Templates 2018-10-10 14:29:30 -04:00
Francis Lalonde 25bc816b0d Add output templates 2018-10-09 16:46:15 -04:00
Francis Lalonde fc57e4de8f Fix sterr() and stdout() usability, rename log module 2018-10-09 10:05:40 -04:00
Francis Lalonde 8bf1b7b6c9 Rename naming fns, still more linting 2018-10-05 10:26:33 -04:00
Francis Lalonde 7ac4908cba Remove more lint, bump ton num 0.2 2018-10-04 17:28:25 -04:00
Francis Lalonde 79cc69fd0b Applied some clippy recommendations 2018-10-04 13:13:55 -04:00
Francis Lalonde 97a02caafe Merge pull request #23 from fralalonde/split-name-namespace
Split Name and Namespace to separate module
2018-10-03 18:03:01 +00:00
Francis Lalonde 969f0ed66c Split Name and Namespace to separate module 2018-10-03 14:01:53 -04:00
Francis Lalonde 306c404191 Horizontal dipstick pic 2018-09-21 01:01:08 -04:00
Francis Lalonde 1556cdd184 Expand handbook 2018-09-20 09:07:17 -04:00
Francis Lalonde 8f9a2b8885 Switch logo, shields on one line 2018-09-19 17:01:24 -04:00
Francis Lalonde 0293b8b826 Merge pull request #22 from fralalonde/exp/reimpl
Exp/reimpl
2018-09-19 16:51:41 -04:00
Francis Lalonde 6fc4330af1 Merge branch 'master' into exp/reimpl 2018-09-19 16:47:49 -04:00
Francis Lalonde 68dbe3aca4 Merge branch 'exp/reimpl' of github.com:fralalonde/dipstick into exp/reimpl 2018-09-19 16:46:54 -04:00
Francis Lalonde dd3be7871c Updating README 2018-09-19 16:40:08 -04:00
Francis Lalonde dc50862ed0 Updating README 2018-09-19 16:38:17 -04:00
Francis Lalonde 35c90336b7 Move to subs, break core 2018-09-19 14:27:55 -04:00
Francis Lalonde 6706f18872 Pre-split core 2018-09-11 08:27:10 -04:00
Francis Lalonde 147e89195b input(), output(), ::send_to() 2018-07-03 12:10:55 -04:00
Francis Lalonde 52b2a79f05 README intro update 2018-06-28 17:16:16 -04:00
Francis Lalonde e90891bcd7 Queue & Bucket benchmarks 2018-06-27 10:18:05 -04:00
Francis Lalonde f32a1777b0 Multi output raw 2018-06-26 15:51:49 -04:00
Francis Lalonde 69ec3a7014 Input renamed back to Scope 2018-06-26 14:28:20 -04:00
Francis Lalonde 4e4307234b Clean up constructors 2018-06-26 11:46:55 -04:00
Francis Lalonde d1f2b02761 Adding protobuf prometheus 2018-06-24 09:54:11 -04:00
Francis Lalonde 6cae3b5d4c Beautiful tree macros? 2018-06-22 16:34:55 -04:00
Francis Lalonde 83d9ecb508 update macros 2018-06-22 01:08:04 -04:00
Francis Lalonde 252eba5d48 Raw Bridge 2018-06-21 16:28:25 -04:00
Francis Lalonde 537fc102ad Raw Graphite 2018-06-20 15:04:04 -04:00
Francis Lalonde 1ca1ea2654 Raw Metrics 2018-06-20 14:18:51 -04:00
Francis Lalonde faf5e5bc67 Remove Flush 2018-06-20 07:59:21 -04:00
Francis Lalonde a35e6dd57a Everyday I'm Buffering 2018-06-19 16:05:04 -04:00
Francis Lalonde a7856bce93 Readd cache 2018-06-16 22:59:51 -04:00
Francis Lalonde 0cea9bef11 Async Ouptut 2018-06-16 19:11:29 -04:00
Francis Lalonde b35c014e60 Renaming to Bucket, Proxy, Name, Input, Output... 2018-06-15 17:33:28 -04:00
Francis Lalonde c72c66edc9 Reenable async 2018-06-15 15:26:01 -04:00
Francis Lalonde 7ff46604ad Add Attributes 2018-06-14 12:58:56 -04:00
Francis Lalonde 34a4c929d4 Major Cleanup reporting for duty, Sir! 2018-06-12 17:09:22 -04:00
Francis Lalonde 4ebcbedc2a (cargo-release) start next development iteration 0.6.12-alpha.0 2018-06-08 14:21:29 -04:00
Francis Lalonde 193217220d (cargo-release) version 0.6.11 2018-06-08 14:21:09 -04:00
Francis Lalonde ffca49078a Merge pull request #21 from fralalonde/no-flush-on-drop
Stop flushing on scope drop
2018-06-08 14:19:58 -04:00
Francis Lalonde 6e6a2eaa7a Stop flushing on scope drop 2018-06-08 12:59:09 -04:00
Francis Lalonde adb56eeb59 Make ROOT_DISPATCH public 2018-05-31 14:24:06 -04:00
Francis Lalonde 09baab8c42 Fix scope namespace concat 2018-05-28 22:58:58 -04:00
Francis Lalonde d764cb5826 Put back some 0.6 compat fn 2018-05-25 17:10:45 -04:00
Francis Lalonde 838c121fcc TLS Mock Clock for tests 2018-05-25 12:28:01 -04:00
Francis Lalonde c3c7329c3d Merge branch 'global_dispatch' of github.com:fralalonde/dipstick into global_dispatch 2018-05-25 11:18:30 -04:00
Francis Lalonde 1a6ac869bd Removed MetricWriter
Add mock_clock feature
2018-05-24 22:43:16 -04:00
Francis Lalonde 1af38005e5 Removed MetricWriter
Add mock_clock feature
2018-05-24 18:19:41 -04:00
Francis Lalonde d6bc8e22b4 Fuse &str name into Namespace 2018-05-22 17:18:39 -04:00
Francis Lalonde 124d35d5f7 (cargo-release) start next development iteration 0.6.11-alpha.0 2018-05-18 11:02:52 -04:00
Francis Lalonde 37a2d1d116 Use a single timestamp for all of an Aggregator scores Use std::time::Instant for all timestamps and timers 2018-05-18 09:57:33 -04:00
Francis Lalonde c2d9a9a735 Use a single timestamp for all of an Aggregator scores
Use std::time::Instant for all timestamps and timers
2018-05-17 12:44:39 -04:00
Francis Lalonde a3e9c69d0c Bump log and lazy_static versions 2018-05-16 11:53:03 -04:00
Francis Lalonde 0255604bc8 Reenable feature gated skpetic 2018-05-10 13:06:36 -04:00
Francis Lalonde d9da53c804 Make skeptic entirely optional for reduced transitive deps 2018-05-10 10:30:37 -04:00
Francis Lalonde 9a921612b1 Bump chrono to v0.4 2018-05-10 09:47:16 -04:00
Francis Lalonde 6d1f93e6a0 Added namespaces 2018-05-04 13:38:33 -04:00
Francis Lalonde 490ba54908 Merge branch 'master' of github.com:fralalonde/dipstick 2018-05-01 12:36:33 -04:00
Francis Lalonde c9208c415e Dispatch tree 2018-04-30 14:10:44 -04:00
Francis Lalonde 511e9aa9c9 Merge pull request #19 from fralalonde/upkeep/add-logging
Add loggging to publish operation
2018-04-27 11:58:01 -04:00
Francis Lalonde 8d57419ead Bump version to 0.6.7 2018-04-27 11:57:13 -04:00
Francis Lalonde c13df9fca9 Add loggging to publish operation 2018-04-27 11:53:47 -04:00
Francis Lalonde f06b49ecae Use slow or imprecise clock 2018-04-10 12:37:29 -04:00
Francis Lalonde 91e75380db Less types macros 2018-04-06 23:47:43 -04:00
Francis Lalonde ede2033e02 Fabtastic unified API 2018-04-06 17:22:00 -04:00
Francis Lalonde ef4356f8c2 Use chrono Local now 2018-04-06 12:23:19 -04:00
Francis Lalonde bf4ea29fad AcqRel 2018-04-03 07:51:53 -04:00
Francis Lalonde b5cd355286 Merge min and max swap fn 2018-03-31 07:42:31 -04:00
Francis Lalonde 65dc1cfc87 Added scores & now 2018-03-31 00:39:03 -04:00
Francis Lalonde 6f89bb09d9 Stabilized dispatch and aggregate to default only for now 2018-03-29 16:29:39 -04:00
Francis Lalonde 1e32ea1383 Much rejoicing 2018-03-29 14:43:07 -04:00
Francis Lalonde 4ccd19bd49 Much rejoicing 2018-03-29 14:22:43 -04:00
Francis Lalonde d055270d84 Registry dissolved 2018-03-28 17:00:35 -04:00
Francis Lalonde e40bf74d88 Temp check on averages 2018-03-28 10:19:11 -04:00
Francis Lalonde 2681c7b342 AppMetrics -> Metrics, fix send_metrics() 2018-03-26 16:44:04 -04:00
Francis Lalonde 0143cea743 Context -> Scope 2018-03-26 12:57:40 -04:00
Francis Lalonde 4c4b82a9da Shorten Send/Recv delegate 2018-03-22 10:11:02 -04:00
Francis Lalonde d4274cf49c Fix format + naming + a few tests 2018-03-13 09:54:12 -04:00
Francis Lalonde 9b84a1ac42 Rich & Delegate Macros 2018-03-08 17:22:35 -05:00
Francis Lalonde 178a9eb319 Global Dispatch 2018-02-06 07:23:15 -05:00
Francis Lalonde 00e6505dd4 Dispatcher done 2018-01-26 16:33:11 -05:00
Francis Lalonde 928704233d Bump REAME crate version 2018-01-17 12:37:38 -05:00
Francis Lalonde 04d1b93ecd Fix README tests 2018-01-17 11:24:25 -05:00
Francis Lalonde 9cf6bb8a74 (cargo-release) start next development iteration 0.6.6-alpha.0 2018-01-16 16:57:33 -05:00
Francis Lalonde 66f51c14e4 (cargo-release) version 0.6.5 2018-01-16 16:56:41 -05:00
Francis Lalonde e97cf70f88 Add app_metric! forms for tuples and arrays 2018-01-16 16:56:34 -05:00
Francis Lalonde 5c5f5dc3b6 (cargo-release) start next development iteration 0.6.5-alpha.0 2018-01-16 15:15:55 -05:00
Francis Lalonde 119abe505a (cargo-release) version 0.6.4 2018-01-16 15:14:57 -05:00
Francis Lalonde 5afa5ad919 Reintroduce old macros, same form (fix) 2018-01-16 15:14:53 -05:00
Francis Lalonde 6d27df2e1e (cargo-release) start next development iteration 0.6.4-alpha.0 2018-01-16 15:11:16 -05:00
Francis Lalonde 38b0b41f79 (cargo-release) version 0.6.3 2018-01-16 15:10:22 -05:00
Francis Lalonde b5fd801ca4 Reintroduce old macros, deprecated 2018-01-16 15:10:10 -05:00
Francis Lalonde fce1c72965 Fix marker rate, add Namespace 2018-01-16 15:00:47 -05:00
Francis Lalonde a3cd89cca9 (cargo-release) start next development iteration 0.6.3-alpha.0 2018-01-15 15:34:03 -05:00
Francis Lalonde 61b6a11b08 (cargo-release) version 0.6.2 2018-01-15 15:33:06 -05:00
Francis Lalonde 0da03669bb Update app_metric macro syntax 2018-01-15 15:32:57 -05:00
Francis Lalonde 2ea2dbd4b6 (cargo-release) start next development iteration 0.6.2-alpha.0 2018-01-15 13:15:58 -05:00
Francis Lalonde 0c1892a787 (cargo-release) version 0.6.1 2018-01-15 13:15:02 -05:00
Francis Lalonde ed3ce00212 Add app_* types and mod_* macros 2018-01-15 13:14:08 -05:00
Francis Lalonde 8d5ee95688 Merge branch 'master' of github.com:fralalonde/dipstick 2018-01-10 14:17:37 -05:00
Francis Lalonde 3d31f2d671 (cargo-release) start next development iteration -1.6.1-alpha.0 2018-01-10 09:21:04 -05:00
Francis Lalonde 17bd5ed8ef (cargo-release) start next development iteration 0.6.1-alpha.0 2018-01-09 16:46:24 -05:00
Francis Lalonde c52fa0e8a4 (cargo-release) version 0.6.0 2018-01-09 16:45:55 -05:00
Francis Lalonde 39358e0a39 for great justice 2018-01-09 16:43:32 -05:00
98 changed files with 6314 additions and 3092 deletions

3
.gitignore vendored Normal file → Executable file


@ -1,3 +1,6 @@
target/
**/*.rs.bk
Cargo.lock
src/prometheus_proto.rs
.idea
cmake-*

6
.travis.yml deleted

@ -1,6 +0,0 @@
language: rust
rust:
- stable
cache: cargo
script:
- cargo test

84
CHANGES.md Normal file

@ -0,0 +1,84 @@
# Latest changes + history
## version 0.9.1
- Fix sleep in `basic` example (@RafalGoslawski)
- Expose attributes::MetricId+Attributes to make extending new outputs possible (@RafalGoslawski #86)
- Various clippy fixes
- Dropped boilerplate-y CoC and contribution documents. Just be nice, mkay?
- Dropped .travis.yml. Trav's dead, baby.
- Revalidated full build after overdue `cargo update`
## version 0.9.1
- Fix bugs
## version 0.9.0
- Abandon custom Result type and error module in favor
of io::Result usage across all API. (Based on @rtyler's comment in #80)
- Update all dependencies to latest versions
- Move Void module to output (internal change)
- Examples no longer declare `extern crate dipstick;`
## version 0.8.0 - ("SUCH REJOICING")
- THIS VERSION HAS BEEN YANKED - API broke (again) for 0.9.0 and 0.8.0 hadn't been out long enough.
- Abandon non-threadsafe "Output"s in exchange for a simpler, more consistent API.
Everything is now threadsafe and thus all "Output" have been promoted to Inputs.
No significant performance loss was observed (using parking_lot locks).
Some client code (custom output classes, etc.) rework might be necessary.
- Flattened internal project structure down to only two modules, including root.
## version 0.7.13
- Fixed statsd & graphite panic when running on async threadpool.
## version 0.7.11
- Make OnFlushCancel Send + Sync (@vorner)
## version 0.7.10
- Make OnFlushCancel public
- Add dyn keyword to dyn traits
## version 0.7.9
- Prometheus uses HTTP POST, not GET
- Add proxy_multi_output example
## version 0.7.8
- Fix Prometheus output https://github.com/fralalonde/dipstick/issues/70
## version 0.7.6
- Move to Rust 2018 using cargo fix --edition and some manual help
- Fix nightly's 'acceptable regression' https://github.com/rust-lang/rust/pull/59825
- Give each flush listener a unique id
## version 0.7.5
- Fix leak on observers when registering same metric twice.
- Add `metric_id()` on `InputMetric`
## version 0.7.4
- Reexport `ObserveWhen` to make it public
## version 0.7.3
- Fixed / shushed a bunch of `clippy` warnings
- Made `clippy` part of `make` checks
## version 0.7.2
### features
- Observe gauge On Flush
- Observe gauge Periodically
- Stream::write_to_new_file()
- Level
### Enhancement
- Use crossbeam channels & parking_lot locks by default
- Single thread scheduler
## version 0.7.1
- API changes, some methods renamed / deprecated for clarity
- Logging output now have selectable level
## version 0.7.0
- Add `Delegate` mechanism to allow runtime (re)configuration of metrics
- Enhance macros to allow metrics of different types within a single block
- Additional pre-typed 'delegate' and 'aggregate' macros
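The 0.9.0 entry's move from a custom Result type to `io::Result` can be sketched as follows; the `send_metric` function is an illustrative stand-in, not dipstick's API:

```rust
use std::io;

// Returning std::io::Result instead of a crate-specific Result means
// fallible calls compose with `?` and std's I/O machinery directly,
// with no error-conversion boilerplate for callers.
fn send_metric(value: i64) -> io::Result<()> {
    if value < 0 {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "negative metric value",
        ));
    }
    Ok(())
}

fn main() -> io::Result<()> {
    send_metric(42)?; // propagates like any other io::Result
    assert!(send_metric(-1).is_err());
    Ok(())
}
```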

41
CONTRIBUTING.md deleted

@ -1,41 +0,0 @@
# Contributing to Dipstick
Thank you for your interest in contributing to Dipstick!
The project is fairly new so there's no hard development process in place (yet).
If you found a bug, create a GitHub issue so it can be tracked.
Include any relevant data to help solving the issue.
If you have an idea, create a GitHub issue so it can be discussed.
Highlight how your idea fits with Dipstick's design goals from the README.
If you have code, create a GitHub issue first and attach it to your pull request.
The issue will be used as a space for non-code related discussion.
# Contributor Code of Conduct
(from https://contributor-covenant.org/version/1/3/0/)
As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, or nationality.
Examples of unacceptable behavior by participants include:
The use of sexualized language or imagery
Personal attacks
Trolling or insulting/derogatory comments
Public or private harassment
Publishing other's private information, such as physical or electronic addresses, without explicit permission
Other unethical or unprofessional conduct
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
By adopting this Code of Conduct, project maintainers commit themselves to fairly and consistently applying these principles to every aspect of managing this project. Project maintainers who do not follow or enforce the Code of Conduct may be permanently removed from the project team.
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a project maintainer at [INSERT EMAIL ADDRESS]. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. Maintainers are obligated to maintain confidentiality with regard to the reporter of an incident.
This Code of Conduct is adapted from the Contributor Covenant, version 1.3.0, available from http://contributor-covenant.org/version/1/3/0/

316
Cargo.lock generated

@ -1,316 +0,0 @@
[[package]]
name = "aggregate_print"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
]
[[package]]
name = "aho-corasick"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ansi_term"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "badlog"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ansi_term 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"env_logger 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "counter_timer_gauge"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
]
[[package]]
name = "custom_publish"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
]
[[package]]
name = "derivative"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"itertools 0.5.10 (registry+https://github.com/rust-lang/crates.io-index)",
"quote 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)",
"syn 0.10.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "dipstick"
version = "0.5.2-alpha.0"
dependencies = [
"derivative 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
"num 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "either"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "env_logger"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "graphite"
version = "0.0.0"
dependencies = [
"badlog 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"dipstick 0.5.2-alpha.0",
]
[[package]]
name = "itertools"
version = "0.5.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"either 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "kernel32-sys"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi-build 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "lazy_static"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "libc"
version = "0.2.33"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "log"
version = "0.3.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "memchr"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "multi_print"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
]
[[package]]
name = "num"
version = "0.1.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-integer 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)",
"num-iter 0.1.34 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "num-integer"
version = "0.1.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "num-iter"
version = "0.1.34"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-integer 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "num-traits"
version = "0.1.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "queued"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
]
[[package]]
name = "quote"
version = "0.3.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "raw_sink"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
]
[[package]]
name = "redox_syscall"
version = "0.1.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "regex"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"aho-corasick 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex-syntax"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "scoped_print"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
]
[[package]]
name = "static_metrics"
version = "0.0.0"
dependencies = [
"dipstick 0.5.2-alpha.0",
"lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "syn"
version = "0.10.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"quote 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-xid 0.0.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "thread_local"
version = "0.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "time"
version = "0.1.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.32 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "unicode-xid"
version = "0.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unreachable"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"void 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "utf8-ranges"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "void"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi-build"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[metadata]
"checksum aho-corasick 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)" = "d6531d44de723825aa81398a6415283229725a00fa30713812ab9323faa82fc4"
"checksum ansi_term 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "23ac7c30002a5accbf7e8987d0632fa6de155b7c3d39d0067317a391e00a2ef6"
"checksum badlog 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "69ff7ee77081b9de4c7ec6721321c892f02b02e9d29107f469e88b42468740cc"
"checksum derivative 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "67b3d6d0e84e53a5bdc263cc59340541877bb541706a191d762bfac6a481bdde"
"checksum either 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "740178ddf48b1a9e878e6d6509a1442a2d42fd2928aae8e7a6f8a36fb01981b3"
"checksum env_logger 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)" = "3ddf21e73e016298f5cb37d6ef8e8da8e39f91f9ec8b0df44b7deb16a9f8cd5b"
"checksum itertools 0.5.10 (registry+https://github.com/rust-lang/crates.io-index)" = "4833d6978da405305126af4ac88569b5d71ff758581ce5a987dbfa3755f694fc"
"checksum kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7507624b29483431c0ba2d82aece8ca6cdba9382bff4ddd0f7490560c056098d"
"checksum lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "76f033c7ad61445c5b347c7382dd1237847eb1bce590fe50365dcb33d546be73"
"checksum libc 0.2.33 (registry+https://github.com/rust-lang/crates.io-index)" = "5ba3df4dcb460b9dfbd070d41c94c19209620c191b0340b929ce748a2bcd42d2"
"checksum log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)" = "880f77541efa6e5cc74e76910c9884d9859683118839d6a1dc3b11e63512565b"
"checksum memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "796fba70e76612589ed2ce7f45282f5af869e0fdd7cc6199fa1aa1f1d591ba9d"
"checksum num 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "a311b77ebdc5dd4cf6449d81e4135d9f0e3b153839ac90e648a8ef538f923525"
"checksum num-integer 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)" = "d1452e8b06e448a07f0e6ebb0bb1d92b8890eea63288c0b627331d53514d0fba"
"checksum num-iter 0.1.34 (registry+https://github.com/rust-lang/crates.io-index)" = "7485fcc84f85b4ecd0ea527b14189281cf27d60e583ae65ebc9c088b13dffe01"
"checksum num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "99843c856d68d8b4313b03a17e33c4bb42ae8f6610ea81b28abe076ac721b9b0"
"checksum quote 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)" = "7a6e920b65c65f10b2ae65c831a81a073a89edd28c7cce89475bff467ab4167a"
"checksum redox_syscall 0.1.32 (registry+https://github.com/rust-lang/crates.io-index)" = "ab105df655884ede59d45b7070c8a65002d921461ee813a024558ca16030eea0"
"checksum regex 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "ac6ab4e9218ade5b423358bbd2567d1617418403c7a512603630181813316322"
"checksum regex-syntax 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "ad890a5eef7953f55427c50575c680c42841653abd2b028b68cd223d157f62db"
"checksum syn 0.10.8 (registry+https://github.com/rust-lang/crates.io-index)" = "58fd09df59565db3399efbba34ba8a2fec1307511ebd245d0061ff9d42691673"
"checksum thread_local 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)" = "1697c4b57aeeb7a536b647165a2825faddffb1d3bad386d507709bd51a90bb14"
"checksum time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)" = "d5d788d3aa77bc0ef3e9621256885555368b47bd495c13dd2e7413c89f845520"
"checksum unicode-xid 0.0.4 (registry+https://github.com/rust-lang/crates.io-index)" = "8c1f860d7d29cf02cb2f3f359fd35991af3d30bac52c57d265a3c461074cb4dc"
"checksum unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "382810877fe448991dfc7f0dd6e3ae5d58088fd0ea5e35189655f84e6814fa56"
"checksum utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "662fab6525a98beff2921d7f61a39e7d59e0b425ebc7d0d9e66d316e55124122"
"checksum void 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "6a02e4885ed3bc0f2de90ea6dd45ebcbb66dacffe03547fadbb0eeae2770887d"
"checksum winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "167dc9d6949a9b857f3451275e911c3f44255842c1f7a76f33c55103a909087a"
"checksum winapi-build 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "2d315eee3b34aca4797b2da6b13ed88266e6d612562a0c46390af8299fc699bc"

View File

@ -1,45 +1,52 @@
[workspace]
members = [
"./",
"examples/counter_timer_gauge/",
"examples/multi_outs/",
"examples/aggregate_print/",
"examples/async_print/",
"examples/custom_publish/",
"examples/raw_log/",
"examples/graphite/",
"examples/static_metrics/",
"examples/scoped_print/",
]
[package]
name = "dipstick"
version = "0.5.2-alpha.0"
version = "0.9.1"
authors = ["Francis Lalonde <fralalonde@gmail.com>"]
description = """A fast and modular metrics library decoupling app instrumentation from reporting backend.
Similar to popular logging frameworks, but with counters and timers.
Can be configured for combined outputs (log + statsd), random sampling, local aggregation of metrics, recurrent background publication, etc."""
description = """Fast, all-purpose metrics library decoupling instrumentation from reporting backends.
Like logging frameworks but with counters, timers and gauges.
Supports combined outputs (e.g. log + graphite), sampling, aggregation, scheduled push, etc."""
documentation = "https://docs.rs/dipstick"
homepage = "https://github.com/fralalonde/dipstick"
repository = "https://github.com/fralalonde/dipstick"
readme = "README.md"
keywords = ["metrics", "statsd", "graphite", "timer", "monitoring"]
keywords = ["metrics", "statsd", "graphite", "timer", "prometheus"]
license = "MIT/Apache-2.0"
edition = "2021"
[badges]
travis-ci = { repository = "fralalonde/dipstick", branch = "master" }
[dependencies]
log = { version = "0.3" }
time = "0.1"
lazy_static = "0.2.9"
derivative = "1.0"
log = "0.4"
lazy_static = "1"
atomic_refcell = "0.1"
skeptic = { version = "0.13", optional = true }
num = { version = "0.2", default-features = false }
crossbeam-channel = { version = "0.4", optional = true }
parking_lot = { version = "0.10", optional = true }
[dependencies.num]
default-features = false
version = "0.1"
# FIXME required only for random seed for sampling
time = "0.1"
minreq = { version = "2" }
# optional dep for standalone http pull metrics
tiny_http = { version = "0.7", optional = true }
[build-dependencies]
skeptic = { version = "0.13", optional = true }
[features]
default = [ "self_metrics", "crossbeam-channel", "parking_lot" ]
bench = []
self_metrics = []
tokio = []
[package.metadata.release]
#sign-commit = true
#upload-handbook = true
pre-release-replacements = [
{file="README.md", search="dipstick = \"[a-z0-9\\.-]+\"", replace="dipstick = \"{{version}}\""}
]

410
HANDBOOK.md Executable file
View File

@ -0,0 +1,410 @@
# The dipstick handbook
This handbook's purpose is to get you started instrumenting your apps and give an idea of what's possible.
It is not a full-on cookbook (yet) - some reader experimentation may be required!
# Background
Dipstick is a structured metrics library that makes it possible to combine, select from, and switch between multiple metrics backends.
Counters, timers and gauges declared in the code are not tied to a specific implementation. This has multiple benefits.
- Simplified instrumentation:
For example, a single Counter instrument can be used to send metrics to the log and to a network aggregator.
This prevents duplication of instrumentation which in turn prevents errors.
- Flexible configuration:
Let's say we had an application defining a "fixed" metrics stack on initialization.
We could upgrade to a configuration file defined metrics stack without altering the instruments in every module.
Or we could make the configuration hot-reloadable, suddenly using different output formats.
- Easier metrics testing:
For example, using a compile-time feature switch, metrics could be collected directly to hash map at test time
but be sent over the network and written to a file at runtime, as described by external configuration.
# API Overview
Dipstick's API is split between _input_ and _output_ layers.
The input layer provides named metrics such as counters and timers to be used by the application.
The output layer controls how metric values will be recorded and emitted by the configured backend(s).
Input and output layers are decoupled, making code instrumentation independent of output configuration.
Intermediates can also be added between input and output for specific features or performance characteristics.
Although this handbook covers input before output, metrics can certainly be set up the other way around.
For more details, consult the [docs](https://docs.rs/dipstick/).
## Metrics Input
A metrics library's first job is to help a program collect measurements about its operations.
Dipstick provides a restricted but robust set of _five_ instrument types, taking a stance against
an application's functional code having to pick what statistics should be tracked for each defined metric.
This helps to enforce contracts with downstream metrics systems and keeps code free of configuration elements.
### Counters
Counters count a quantity of elements processed, for example, the number of bytes received in a read operation.
Counters accept only positive values.
### Markers
Markers are counters that can only be incremented by one (i.e. they are _monotonic_ counters).
Markers are useful to count the processing of individual events, or the occurrence of errors.
The default statistics for markers are not the same as those for counters.
Markers offer a safer API than counters, preventing values other than "1" from being passed.
### Timers
Timers measure an operation's duration.
Timers can be used in code with the `time!` macro, wrap around a closure or with explicit calls to `start()` and `stop()`.
```rust
use dipstick::*;
fn main() {
let metrics = Stream::write_to_stdout().metrics();
let timer = metrics.timer("my_timer");
// using macro
time!(timer, {/* timed code here ... */} );
// using closure
timer.time(|| {/* timed code here ... */} );
// using start/stop
let handle = timer.start();
/* timed code here ... */
timer.stop(handle);
// directly reporting microseconds
timer.interval_us(123_456);
}
```
Time intervals are measured in microseconds, and can be scaled down (milliseconds, seconds...) on output.
Internally, timers use nanosecond precision, but their actual accuracy will depend on the platform's OS and hardware.
Note that Dipstick's embedded and always-on nature make its time measurement goals different from those of a full-fledged profiler.
Simplicity, flexibility and low impact on application performance take precedence over accuracy.
Timers should still offer more than reasonable performance for most I/O and high-level CPU operations.
### Levels
Levels are relative, cumulative counters.
Compared to counters:
- Levels accept positive and negative values.
- A level's aggregation statistics will track the minimum and maximum _sum_ of values at any point,
rather than the min/max of individually observed values.
```rust
use dipstick::*;
fn main() {
let metrics = Stream::write_to_stdout().metrics();
let queue_length = metrics.level("queue_length");
queue_length.adjust(-2);
queue_length.adjust(4);
}
```
Levels are halfway between counters and gauges and may be preferred to either in some situations.
### Gauges
Gauges are used to record an instant observation of a resource's value.
Gauge values can be positive or negative, but are non-cumulative.
As such, a gauge's aggregated statistics are simply the mean, max and min values.
Values can be observed for gauges at any moment, like any other metric.
```rust
use dipstick::*;
fn main() {
let metrics = Stream::write_to_stdout().metrics();
let uptime = metrics.gauge("uptime");
uptime.value(2);
}
```
### Observers
The observation of values for any metric can be triggered on schedule or upon publication.
This mechanism can be used for automatic reporting of gauge values:
```rust
use dipstick::*;
use std::time::{Duration, Instant};
fn main() {
let metrics = Stream::write_to_stdout().metrics();
// observe a constant value before each flush
let uptime = metrics.gauge("uptime");
metrics.observe(uptime, |_| 6).on_flush();
// observe a function-provided value periodically
let _handle = metrics
.observe(metrics.gauge("threads"), thread_count)
.every(Duration::from_secs(1));
}
fn thread_count(_now: Instant) -> MetricValue {
6
}
```
Observations triggered `on_flush` take place _before_ metrics are published, allowing last-moment insertion of metric values.
Scheduling can also be used to set up a "heartbeat" metric:
```rust
use dipstick::*;
use std::time::{Duration};
fn main() {
let metrics = Graphite::send_to("localhost:2003")
.expect("Connected")
.metrics();
let heartbeat = metrics.marker("heartbeat");
// update a metric with a constant value every 5 sec
let _handle = metrics.observe(heartbeat, |_| 1)
.every(Duration::from_secs(5));
}
```
Scheduled operations can be cancelled at any time using the returned `CancelHandle`.
Also, scheduled operations are canceled automatically when the metrics input scope they were attached to is `Drop`ped,
making them more useful with persistent, statically declared `metrics!()`.
Observation scheduling is done on a best-effort basis by a simple but efficient internal single-thread scheduler.
Be mindful of measurement callback performance to prevent slippage of following observation tasks.
The scheduler only runs if scheduling is used.
Once started, the scheduler thread will run a low-overhead wait loop until the application is terminated.
### Names
Each metric is given a simple name upon instantiation.
Names are opaque to the application and are used only to identify the metrics upon output.
Names may be prepended with an application namespace shared across all backends.
```rust
use dipstick::*;
fn main() {
let stdout = Stream::write_to_stdout();
let metrics = stdout.metrics();
// plainly name "timer"
let _timer = metrics.timer("timer");
// prepend metrics namespace
let db_metrics = metrics.named("database");
// qualified name will be "database.counter"
let _db_counter = db_metrics.counter("counter");
}
```
Names may also be prepended with a namespace by each configured backend.
For example, the metric named `success`, declared under the namespace `request` could appear under different qualified names:
- logging as `app_module.request.success`
- statsd as `environment.hostname.pid.module.request.success`
Aggregation statistics may also append identifiers to the metric's name, such as `counter_mean` or `marker_rate`.
Names should exclude characters that can interfere with namespaces, separators and output protocols.
A good convention is to stick with lowercase alphanumeric identifiers of less than 12 characters.
Note that highly dynamic elements in metric names are usually better handled using `Labels`.
### Labels
Some backends (such as Prometheus) allow "tagging" the metrics with labels to provide additional context,
such as the URL or HTTP method requested from a web server.
Dipstick offers the thread-local `ThreadLabel` and global `AppLabel` context maps to transparently carry
metadata to the backends configured to use them.
Notes about labels:
- Using labels may incur a significant runtime cost because
of the additional implicit parameter that has to be carried around.
- Label runtime costs may be even higher if async queuing is used
since current context has to be persisted across threads.
- While internally supported, single metric labels are not yet part of the input API.
If this is important to you, consider using dynamically defined metrics or open a GitHub issue!
### Static vs dynamic metrics
Metric inputs are usually set up statically upon application startup.
```rust
use dipstick::*;
metrics!("my_app" => {
COUNTER_A: Counter = "counter_a";
});
fn main() {
Proxy::default_target(Stream::write_to_stdout().metrics());
COUNTER_A.count(11);
}
```
The static metric definition macro is just a `lazy_static!` wrapper.
### Dynamic metrics
If necessary, metrics can also be defined "dynamically".
This is more flexible but has a higher runtime cost, which may be alleviated with the optional caching mechanism.
```rust
use dipstick::*;
fn main() {
let user_name = "john_day";
let app_metrics = Log::to_log().cached(512).metrics();
app_metrics.gauge(&format!("gauge_for_user_{}", user_name)).value(44);
}
```
Alternatively, you may use `Labels` to output context-dependent metrics.
## Metrics Output
A metrics library's second job is to help a program emit metric values that can be used by downstream systems.
Dipstick provides an assortment of drivers for network or local metrics output.
Multiple outputs can be used at a time, each with its own configuration.
### Types
The following output types are provided; some are extensible, and you may write your own if you need to.
- Stream: Write values to any Write trait implementer, including files, stderr and stdout.
- Log: Write values to the log using the `log` crate.
- Map: Insert metric values in a map. Useful for testing or programmatic retrieval of stats.
- Statsd: Send metrics over UDP using the statsd format. Allows sampling of values.
- Graphite: Send metrics over TCP using the graphite format.
- Prometheus: Send metrics to a Prometheus "PushGateway" using the Prometheus 2.0 text format.
### Attributes
Attributes change an output's behavior.
#### Prefixes
Outputs can be given prefixes, which are prepended to the metric names emitted by that output.
With network outputs, a typical use of Prefixes is to identify the network host,
environment and application that metrics originate from.
#### Formatting
Stream and Log outputs have configurable formatting that enables usage of custom templates.
Other outputs, such as Graphite, have a fixed format because they're intended to be processed by a downstream system.
#### Buffering
Most outputs provide optional buffering, which can be used to optimize throughput at the expense of higher latency.
If enabled, buffering is usually a best-effort affair, to safely limit the amount of memory that is used by the metrics.
#### Sampling
Some outputs, such as statsd, also have the ability to sample metric values.
If enabled, sampling is done using pcg32, a fast random algorithm with reasonable entropy.
```rust
use dipstick::*;
fn main() {
let _app_metrics = Statsd::send_to("localhost:8125").expect("connected")
.sampled(Sampling::Random(0.01))
.metrics();
}
```
## Intermediates
### Proxy
Because the input's actual _implementation_ depends on the output configuration,
it is necessary to create an output channel before defining any metrics.
This is often not possible because metrics configuration could be dynamic (e.g. loaded from a file),
which might happen after the static initialization phase in which metrics are defined.
To get around this catch-22, Dipstick provides `Proxy` which acts as intermediate output,
allowing redirection to the effective output after it has been set up.
A default `Proxy` is defined when using the simplest form of the `metrics!` macro:
```rust
use dipstick::*;
metrics!(
COUNTER: Counter = "another_counter";
);
fn main() {
dipstick::Proxy::default_target(Stream::write_to_stdout().metrics());
COUNTER.count(456);
}
```
On occasion it might be useful to define your own `Proxy` for independent configuration:
```rust
use dipstick::*;
metrics! {
pub MY_PROXY: Proxy = "my_proxy" => {
COUNTER: Counter = "some_counter";
}
}
fn main() {
MY_PROXY.target(Stream::write_to_stdout().metrics());
COUNTER.count(456);
}
```
The performance overhead incurred by the proxy's dynamic dispatch of metrics is negligible
in most applications, given the flexibility and convenience it provides.
### Bucket
The `AtomicBucket` can be used to aggregate metric values.
Bucket aggregation is performed locklessly and is very fast.
The tracked statistics vary across metric types:
| |Counter|Marker | Level | Gauge | Timer |
|-------|-------|--- |--- |--- |--- |
| count | x | x | x | | x |
| sum | x | | | | x |
| min | x | | s | x | x |
| max | x | | s | x | x |
| rate | | x | | | x |
| mean | x | | x | x | x |
Some notes on statistics:
- The count is the "hit count" - the number of times values were recorded.
If no values were recorded, no statistics are emitted for this metric.
- Markers have no `sum` as it would always be equal to the `count`.
- The mean is derived from the sum divided by the count of values.
Because count and sum are read sequentially but not atomically, there is a _very small_ chance that the
calculation could be off in scenarios of high concurrency. It is assumed that the amount of data collected
in such situations will make up for the slight error.
- Min and max are for individual values except for level where the sum of values is tracked instead.
- The rate is derived from the sum of values divided by the duration of the aggregation.
#### Preset bucket statistics
Published statistics can be selected with presets such as `all_stats`, `summary`, `average`.
#### Custom bucket statistics
For more control over published statistics, you can provide your own strategy.
Consult the `custom_publish` [example](https://github.com/fralalonde/dipstick/blob/master/examples/custom_publish.rs)
to see how this can be done.
#### Scheduled publication
Buffered and aggregated (bucket) metrics can be scheduled to be
[periodically published](https://github.com/fralalonde/dipstick/blob/master/examples/bucket_summary.rs) as a background task.
The schedule runs on a dedicated thread and follows a recurrent `Duration`.
It can be cancelled at any time using the `CancelHandle` returned by the `flush_every()` method.
### Multi
Just like Constructicons, multiple metrics channels can assemble, creating a unified facade
that transparently dispatches metrics to every constituent.
This can be done using multiple [inputs](https://github.com/fralalonde/dipstick/blob/master/examples/multi_input.rs)
or multiple [outputs](https://github.com/fralalonde/dipstick/blob/master/examples/multi_output.rs)
### Asynchronous Queue
Metrics can be collected asynchronously using a queue.
The async queue uses a Rust channel and a standalone thread.
If the queue ever fills up under heavy load, it reverts to blocking (rather than dropping metrics).
I'm sure [an example](https://github.com/fralalonde/dipstick/blob/master/examples/async_queue.rs) would help.
This is a tradeoff: it lowers application latency by taking metrics I/O off the calling thread, at the cost of higher overall metrics reporting latency.
Using async metrics should not be required if using only aggregated metrics such as an `AtomicBucket`.

View File

@ -1,28 +0,0 @@
# Customizing release-flow
[tasks.default]
description = "GAGAGA"
alias = "GAGA-flow"
[tasks.pre-publish]
dependencies = [
"update-readme",
"git-add",
]
[tasks.update-readme]
description = "Update the README using cargo readme."
script = [
"#!/usr/bin/env bash",
"cargo readme > README.md",
]
[tasks.post-publish]
dependencies = [
"update-examples",
]
[tasks.update-examples]
description = "Update the examples dependencies."
command = "cargo"
args = ["update", "-p", "dipstick"]

203
README.md Normal file → Executable file
View File

@ -1,150 +1,99 @@
A quick, modular metrics toolkit for Rust applications; similar to popular logging frameworks,
but with counters, markers, gauges and timers.
[![crates.io](https://img.shields.io/crates/v/dipstick.svg)](https://crates.io/crates/dipstick)
[![docs.rs](https://docs.rs/dipstick/badge.svg)](https://docs.rs/dipstick)
[![Build Status](https://travis-ci.org/fralalonde/dipstick.svg?branch=master)](https://travis-ci.org/fralalonde/dipstick)
Dipstick's main attraction is the ability to send metrics to multiple customized outputs.
For example, captured metrics can be written immediately to the log _and_
sent over the network after a period of aggregation.
Dipstick builds on stable Rust with minimal dependencies
and is published as a [crate.](https://crates.io/crates/dipstick)
# dipstick ![a dipstick picture](https://raw.githubusercontent.com/fralalonde/dipstick/master/assets/dipstick_single_ok_horiz_transparent_small.png)
A one-stop shop metrics library for Rust applications with lots of features,
minimal impact on applications and a choice of output to downstream systems.
## Features
Dipstick is a toolkit that helps all sorts of applications collect and send out metrics.
As such, it needs a bit of set up to suit one's needs.
Skimming through the [handbook](https://github.com/fralalonde/dipstick/tree/master/HANDBOOK.md)
and many [examples](https://github.com/fralalonde/dipstick/tree/master/examples)
should help you get an idea of the possible configurations.
- Send metrics to stdout, log, statsd or graphite (one or many)
- Synchronous, asynchronous or mixed operation
- Optional fast random statistical sampling
- Immediate propagation or local aggregation of metrics (count, sum, average, min/max)
- Periodic or programmatic publication of aggregated metrics
- Customizable output statistics and formatting
- Global or scoped (e.g. per request) metrics
- Per-application and per-output metric namespaces
- Predefined or ad-hoc metrics
In short, dipstick-enabled apps _can_:
## Cookbook
- Send metrics to console, log, statsd, graphite or prometheus (one or many)
- Locally aggregate the count, sum, mean, min, max and rate of metric values
- Publish aggregated metrics, on schedule or programmatically
- Customize output statistics and formatting
- Define global or scoped (e.g. per request) metrics
- Statistically sample metrics (statsd)
- Choose between sync or async operation
- Choose between buffered or immediate output
- Switch between metric backends at runtime
For convenience, dipstick builds on stable Rust with minimal, feature-gated dependencies.
Performance, safety and ergonomics are also prime concerns.
### Non-goals
Dipstick's focus is on metrics collection (input) and forwarding (output).
Although it will happily aggregate base statistics, for the sake of simplicity and performance Dipstick will not
- plot graphs
- send alerts
- track histograms
These are all best done by downstream timeseries visualization and monitoring tools.
## Show me the code!
Dipstick is easy to add to your code:
```rust
use dipstick::*;
let app_metrics = metrics(to_graphite("host.com:2003"));
app_metrics.counter("my_counter").count(3);
```
Metrics can be sent to multiple outputs at the same time:
```rust
let app_metrics = metrics((to_stdout(), to_statsd("localhost:8125", "app1.host.")));
```
Since instruments are decoupled from the backend, outputs can be swapped easily.
Metrics can be aggregated and scheduled to be published periodically in the background:
```rust
use std::time::Duration;
let (to_aggregate, from_aggregate) = aggregate();
publish_every(Duration::from_secs(10), from_aggregate, to_log("last_ten_secs:"), all_stats);
let app_metrics = metrics(to_aggregate);
```
Aggregation is performed locklessly and is very fast.
Count, sum, min, max and average are tracked where they make sense.
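"Locklessly" here means atomic operations rather than mutexes. As a simplified, hypothetical sketch of how such a score accumulator can track count, sum, max and average with `std::sync::atomic` (dipstick's real aggregator is more elaborate):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

// Simplified sketch of a lockless score accumulator
// (hypothetical, not dipstick's actual internals).
struct Scores {
    count: AtomicI64,
    sum: AtomicI64,
    max: AtomicI64,
}

impl Scores {
    fn new() -> Self {
        Scores {
            count: AtomicI64::new(0),
            sum: AtomicI64::new(0),
            max: AtomicI64::new(i64::MIN),
        }
    }

    // Record a value without taking any lock.
    fn update(&self, value: i64) {
        self.count.fetch_add(1, Ordering::Relaxed);
        self.sum.fetch_add(value, Ordering::Relaxed);
        self.max.fetch_max(value, Ordering::Relaxed);
    }

    // Snapshot (count, sum, average, max).
    fn snapshot(&self) -> (i64, i64, i64, i64) {
        let count = self.count.load(Ordering::Relaxed);
        let sum = self.sum.load(Ordering::Relaxed);
        let avg = if count > 0 { sum / count } else { 0 };
        (count, sum, avg, self.max.load(Ordering::Relaxed))
    }
}

fn main() {
    let scores = Scores::new();
    for v in [11, 12, 13] {
        scores.update(v);
    }
    assert_eq!(scores.snapshot(), (3, 36, 12, 13));
}
```

Because each update is a handful of atomic adds, many threads can record values concurrently without contention on a shared lock.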
Published statistics can be selected with presets such as `all_stats` (see previous example),
`summary`, `average`.
For more control over published statistics, a custom filter can be provided:
```rust
let (_to_aggregate, from_aggregate) = aggregate();
publish(from_aggregate, to_log("my_custom_stats:"),
|metric_kind, metric_name, metric_score|
match metric_score {
HitCount(hit_count) => Some((Counter, vec![metric_name, ".per_thousand"], hit_count / 1000)),
_ => None
});
```
Metrics can be statistically sampled:
```rust
let app_metrics = metrics(sample(0.001, to_statsd("server:8125", "app.sampled.")));
```
A fast random algorithm is used to pick samples.
Outputs can use sample rate to expand or format published data.
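The sampling decision itself is just a cheap random draw compared against the configured rate. A hedged sketch using a tiny xorshift PRNG (dipstick's actual algorithm may differ):

```rust
// Hypothetical sketch of rate-based sampling with a cheap xorshift
// PRNG; dipstick's internal algorithm may differ.
struct Sampler {
    rate: f64, // e.g. 0.001 keeps roughly 1 in 1000 values
    state: u64,
}

impl Sampler {
    fn new(rate: f64, seed: u64) -> Self {
        Sampler { rate, state: seed.max(1) } // xorshift state must be nonzero
    }

    // xorshift64: a fast, non-cryptographic pseudo-random step.
    fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }

    // Keep a sample when the draw falls under the rate threshold.
    fn should_sample(&mut self) -> bool {
        (self.next_u64() as f64 / u64::MAX as f64) < self.rate
    }
}

fn main() {
    let mut sampler = Sampler::new(0.1, 42);
    let kept = (0..100_000).filter(|_| sampler.should_sample()).count();
    // roughly 10% of events should be kept
    assert!(kept > 8_000 && kept < 12_000);
}
```

The per-event cost is a few shifts and xors, which is why sampling can be applied on very hot code paths.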
Metrics can be recorded asynchronously:
```rust
let app_metrics = metrics(async(48, to_stdout()));
```
The async queue uses a Rust channel and a standalone thread.
The current behavior is to block when full.
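The same block-when-full behavior can be seen with a bounded std channel plus a writer thread, which is roughly what such an async queue amounts to (a sketch under those assumptions, not dipstick's code):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Sketch of an async metrics queue: a bounded channel and a dedicated
// writer thread. Senders block when the queue is full.
fn main() {
    let (tx, rx) = sync_channel::<(String, i64)>(48);

    let writer = thread::spawn(move || {
        let mut written = 0;
        // Drain metric values away from the application's hot path.
        for (name, value) in rx {
            println!("{} {}", name, value);
            written += 1;
        }
        written
    });

    for i in 0..100 {
        // send() blocks if all 48 slots are taken, applying backpressure
        tx.send(("my_counter".to_string(), i)).unwrap();
    }
    drop(tx); // close the channel so the writer loop ends

    assert_eq!(writer.join().unwrap(), 100);
}
```

Blocking keeps every value at the cost of possibly stalling the sender; an alternative policy would be to drop values when the queue is full.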
Metric definitions can be cached to make using _ad-hoc metrics_ faster:
```rust
let app_metrics = metrics(cache(512, to_log()));
app_metrics.gauge(format!("my_gauge_{}", 34)).value(44);
```
The preferred way is to _predefine metrics_,
possibly in a [lazy_static!](https://crates.io/crates/lazy_static) block:
```rust
#[macro_use] extern crate lazy_static;
lazy_static! {
    pub static ref METRICS: GlobalMetrics<String> = metrics(to_stdout());
    pub static ref COUNTER_A: Counter<Aggregate> = METRICS.counter("counter_a");
}
COUNTER_A.count(11);
```
Timers can be used multiple ways:
```rust
let timer = app_metrics.timer("my_timer");
time!(timer, {/* slow code here */} );
timer.time(|| {/* slow code here */} );

let start = timer.start();
/* slow code here */
timer.stop(start);

timer.interval_us(123_456);
```
Persistent apps wanting to declare static metrics will prefer using the `metrics!` macro:
```rust
use dipstick::*;
metrics! { METRICS = "my_app" => {
        pub COUNTER: Counter = "my_counter";
    }
}
fn main() {
    METRICS.target(Graphite::send_to("localhost:2003").expect("connected").metrics());
    COUNTER.count(32);
}
```
Related metrics can share a namespace:
```rust
let db_metrics = app_metrics.with_prefix("database.");
let db_timer = db_metrics.timer("db_timer");
let db_counter = db_metrics.counter("db_counter");
```
For sample applications see the [examples](https://github.com/fralalonde/dipstick/tree/master/examples).
For documentation see the [handbook](https://github.com/fralalonde/dipstick/tree/master/HANDBOOK.md).
To use Dipstick in your project, add the following line to your `Cargo.toml`
in the `[dependencies]` section:
```toml
dipstick = "0.9.0"
```
## External features
Configuring dipstick from a text file is possible using
the [spirit-dipstick](https://crates.io/crates/spirit-dipstick) crate.
## Design
Dipstick's design goals are to:
- support as many metrics backends as possible while favoring none
- support all types of applications, from embedded to servers
- promote metrics conventions that facilitate app monitoring and maintenance
- stay out of the way in the code and at runtime (ergonomic, fast, resilient)
## Performance
Predefined timers use a bit more code but are generally faster because their initialization cost is paid only once.
Ad-hoc timers are redefined "inline" on each use. They are more flexible, but have more overhead because their initialization cost is paid on each use.
Wrapping an output in `cache()` reduces that cost for recurring ad-hoc metrics.
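Such a cache is essentially a bounded map from metric name to its already-defined instrument, so the definition cost is paid once per name. A hypothetical sketch (not dipstick's actual cache implementation):

```rust
use std::collections::HashMap;

// Hypothetical sketch of a bounded metric-definition cache
// (not dipstick's actual cache implementation).
struct MetricCache {
    max_size: usize,
    defined: HashMap<String, u32>, // name -> handle of the defined metric
    next_handle: u32,
    misses: u32,
}

impl MetricCache {
    fn new(max_size: usize) -> Self {
        MetricCache {
            max_size,
            defined: HashMap::new(),
            next_handle: 0,
            misses: 0,
        }
    }

    // Return the cached handle, paying the definition cost only on first use.
    fn define(&mut self, name: &str) -> u32 {
        if let Some(&handle) = self.defined.get(name) {
            return handle; // hit: skip the expensive definition step
        }
        self.misses += 1;
        let handle = self.next_handle;
        self.next_handle += 1;
        if self.defined.len() >= self.max_size {
            // naive eviction; a real cache would use LRU or similar
            self.defined.clear();
        }
        self.defined.insert(name.to_string(), handle);
        handle
    }
}

fn main() {
    let mut cache = MetricCache::new(512);
    let first = cache.define("my_gauge_34");
    let second = cache.define("my_gauge_34"); // same name: cache hit
    assert_eq!(first, second);
    assert_eq!(cache.misses, 1);
}
```

The bound keeps memory predictable when ad-hoc names are generated dynamically, at the cost of occasional re-definition after eviction.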
## Building
When building the crate prior to PR or release, just run plain old `make`.
This will in turn run `cargo` a few times to run tests, benchmarks, lints, etc.
Unfortunately, nightly Rust is still required to run `bench` and `clippy`.
Run benchmarks with `cargo +nightly bench --features bench`.
## TODO / Missing / Weak points
- Prometheus support is still primitive (read: untested). Only the push gateway approach is supported for now.
- No backend for "pull" metrics yet. Should at least provide tiny-http listener capability.
- No quick integration feature with common frameworks (Actix, etc.) is provided yet.
- Thread Local buckets could be nice.
- "Rolling" aggregators would be nice for pull metrics. Current bucket impl resets after flush.
## TODO
Although already usable, Dipstick is still under heavy development and makes no guarantees
of any kind at this point. See the following list for potential caveats:
- META turn TODOs into GitHub issues
- generic publisher / sources
- feature flags
- time measurement units in metric kind (us, ms, etc.) for naming & scaling
- heartbeat metric on publish
- logger templates
- configurable aggregation (?)
- non-aggregating buffers
- framework glue (rocket, iron, gotham, indicatif, etc.)
- more tests & benchmarks
- complete doc / inline samples
- more example apps
- A cool logo
- method annotation processors `#[timer("name")]`
- fastsinks (M / &M) vs. safesinks (Arc<M>)
- `static_metric!` macro to replace `lazy_static!` blocks and handle generics boilerplate.
## License
Dipstick is licensed under the terms of the Apache 2.0 and MIT licenses.
_this file was generated using [cargo readme](https://github.com/livioribeiro/cargo-readme)_


@ -1,40 +0,0 @@
{{readme}}
## Design
Dipstick's design goals are to:
- support as many metrics backends as possible while favoring none
- support all types of applications, from embedded to servers
- promote metrics conventions that facilitate app monitoring and maintenance
- stay out of the way in the code and at runtime (ergonomic, fast, resilient)
## Performance
Predefined timers use a bit more code but are generally faster because their initialization cost is is only paid once.
Ad-hoc timers are redefined "inline" on each use. They are more flexible, but have more overhead because their init cost is paid on each use.
Defining a metric `cache()` reduces that cost for recurring metrics.
Run benchmarks with `cargo +nightly bench --features bench`.
## TODO
Although already usable, Dipstick is still under heavy development and makes no guarantees
of any kind at this point. See the following list for any potential caveats :
- META turn TODOs into GitHub issues
- fix rustdocs https://docs.rs/dipstick/0.4.15/dipstick/
- generic publisher / sources
- add feature flags
- time measurement units in metric kind (us, ms, etc.) for naming & scaling
- heartbeat metric on publish
- logger templates
- configurable aggregation (?)
- non-aggregating buffers
- framework glue (rocket, iron, gotham, indicatif, etc.)
- more tests & benchmarks
- complete doc / inline samples
- more example apps
- A cool logo
- method annotation processors `#[timer("name")]`
- fastsinks (M / &M) vs. safesinks (Arc<M>)
- `static_metric!` macro to replace `lazy_static!` blocks and handle generics boilerplate.
License: {{license}}
_this file was generated using [cargo readme](https://github.com/livioribeiro/cargo-readme)_

build.rs Executable file

@ -0,0 +1,8 @@
#[cfg(feature = "skeptic")]
extern crate skeptic;
fn main() {
// generates documentation tests.
#[cfg(feature = "skeptic")]
skeptic::generate_doc_tests(&["README.md", "HANDBOOK.md"]);
}


@ -1,7 +0,0 @@
[package]
name = "aggregate_print"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }


@ -1,7 +0,0 @@
[package]
name = "queued"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }


@ -1,33 +0,0 @@
//! A sample application asynchronously printing metrics to stdout.
#[macro_use]
extern crate dipstick;
use std::thread::sleep;
use std::time::Duration;
use dipstick::*;
fn main() {
let metrics = global_metrics(async(0, to_stdout()));
let counter = metrics.counter("counter_a");
let timer = metrics.timer("timer_b");
let subsystem_metrics = metrics.with_prefix("subsystem.");
let event = subsystem_metrics.marker("event_c");
let gauge = subsystem_metrics.gauge("gauge_d");
loop {
// report some metric values from our "application" loop
counter.count(11);
gauge.value(22);
metrics.counter("ad_hoc").count(4);
event.mark();
time!(timer, sleep(Duration::from_millis(5)));
timer.time(|| sleep(Duration::from_millis(5)));
}
}

examples/async_queue.rs Executable file

@ -0,0 +1,22 @@
//! A sample application asynchronously printing metrics to stdout.
use dipstick::{Input, InputScope, QueuedInput, Stream};
use std::thread;
use std::thread::sleep;
use std::time::Duration;
fn main() {
let async_metrics = Stream::write_to_stdout().queued(100).metrics();
let counter = async_metrics.counter("counter_a");
for _ in 0..4 {
let counter = counter.clone();
thread::spawn(move || {
loop {
// report some metric values from our "application" loop
counter.count(11);
sleep(Duration::from_millis(50));
}
});
}
sleep(Duration::from_secs(5000));
}


@ -1,35 +1,41 @@
//! An app demonstrating the basics of the metrics front-end.
//! Defines metrics of each kind and use them to print values to the console in multiple ways.
#[macro_use]
extern crate dipstick;
use dipstick::{time, Input, InputScope, Prefixed, Stream};
use std::io;
use std::thread::sleep;
use std::time::Duration;
use dipstick::*;
fn main() {
// for this demo, print metric values to the console
let app_metrics = global_metrics(to_stdout());
let app_metrics = Stream::write_to(io::stdout()).metrics();
// metrics can be predefined by type and name
let counter = app_metrics.counter("counter_a");
let level = app_metrics.level("level_a");
let timer = app_metrics.timer("timer_b");
// metrics can also be declared and used ad-hoc
// metrics can also be declared and used ad-hoc (use output.cache() if this happens often)
app_metrics.counter("just_once").count(4);
// metric names can be prepended with a common prefix
let prefixed_metrics = app_metrics.with_prefix("subsystem.");
let prefixed_metrics = app_metrics.named("subsystem");
let event = prefixed_metrics.marker("event_c");
let gauge = prefixed_metrics.gauge("gauge_d");
// each kind of metric has a different method name to prevent errors
counter.count(11);
level.adjust(-4);
level.adjust(5);
gauge.value(22);
gauge.value(-24);
event.mark();
timer.interval_us(35573);
// time can be measured multiple equivalent ways:
// in microseconds (used internally)
timer.interval_us(35573);
// using the time! macro
time!(timer, sleep(Duration::from_millis(5)));
@ -37,9 +43,8 @@ fn main() {
// using a closure
timer.time(|| sleep(Duration::from_millis(5)));
// using a start time handle
// using a "time handle"
let start_time = timer.start();
Duration::from_millis(5);
sleep(Duration::from_millis(5));
timer.stop(start_time);
}

examples/bench_bucket.rs Executable file

@ -0,0 +1,30 @@
//! A sample application asynchronously printing metrics to stdout.
use dipstick::{stats_all, AtomicBucket, Input, InputScope, Stream};
use std::env::args;
use std::str::FromStr;
use std::thread;
use std::thread::sleep;
use std::time::Duration;
fn main() {
let bucket = AtomicBucket::new();
let event = bucket.marker("a");
let args = &mut args();
args.next();
let tc: u8 = u8::from_str(&args.next().unwrap()).unwrap();
for _ in 0..tc {
let event = event.clone();
thread::spawn(move || {
loop {
// report some metric values from our "application" loop
event.mark();
}
});
}
sleep(Duration::from_secs(5));
bucket.stats(stats_all);
bucket
.flush_to(&Stream::write_to_stdout().metrics())
.unwrap();
}

examples/bench_bucket_proxy.rs Executable file

@ -0,0 +1,33 @@
//! A sample application asynchronously printing metrics to stdout.
use dipstick::{AtomicBucket, Input, InputScope, Proxy, Stream};
use std::env::args;
use std::str::FromStr;
use std::thread;
use std::thread::sleep;
use std::time::Duration;
fn main() {
let event = Proxy::default().marker("a");
let bucket = AtomicBucket::new();
Proxy::default().target(bucket.clone());
let args = &mut args();
args.next();
let tc: u8 = u8::from_str(&args.next().unwrap()).unwrap();
for _ in 0..tc {
let event = event.clone();
thread::spawn(move || {
loop {
// report some metric values from our "application" loop
event.mark();
}
});
}
sleep(Duration::from_secs(5));
bucket
.flush_to(&Stream::write_to_stdout().metrics())
.unwrap();
}

examples/bench_queue.rs Executable file

@ -0,0 +1,31 @@
//! A sample application asynchronously printing metrics to stdout.
use dipstick::{AtomicBucket, Input, InputQueueScope, InputScope, Stream};
use std::env::args;
use std::str::FromStr;
use std::thread;
use std::thread::sleep;
use std::time::Duration;
fn main() {
let bucket = AtomicBucket::new();
// NOTE: Wrapping an AtomicBucket with a Queue probably useless, as it is very fast and performs no I/O.
let queue = InputQueueScope::wrap(bucket.clone(), 10000);
let event = queue.marker("a");
let args = &mut args();
args.next();
let tc: u8 = u8::from_str(&args.next().unwrap()).unwrap();
for _ in 0..tc {
let event = event.clone();
thread::spawn(move || {
loop {
// report some metric values from our "application" loop
event.mark();
}
});
}
sleep(Duration::from_secs(5));
bucket
.flush_to(&Stream::write_to_stdout().metrics())
.unwrap();
}

examples/bucket2graphite.rs Executable file

@ -0,0 +1,43 @@
//! A sample application continuously aggregating metrics,
//! printing the summary stats every three seconds
use dipstick::*;
use std::time::Duration;
fn main() {
// adding a name to the stats
let bucket = AtomicBucket::new().named("test");
// adding two names to Graphite output
// metrics will be prefixed with "machine1.application.test"
bucket.drain(
Graphite::send_to("localhost:2003")
.expect("Socket")
.named("machine1")
.add_name("application"),
);
bucket.flush_every(Duration::from_secs(3));
let counter = bucket.counter("counter_a");
let timer = bucket.timer("timer_a");
let gauge = bucket.gauge("gauge_a");
let marker = bucket.marker("marker_a");
loop {
// add counts forever, non-stop
counter.count(11);
counter.count(12);
counter.count(13);
timer.interval_us(11_000_000);
timer.interval_us(12_000_000);
timer.interval_us(13_000_000);
gauge.value(11);
gauge.value(12);
gauge.value(13);
marker.mark();
}
}

examples/bucket2stdout.rs Executable file

@ -0,0 +1,35 @@
//! A sample application continuously aggregating metrics,
//! printing the summary stats every three seconds
use dipstick::*;
use std::time::Duration;
fn main() {
let metrics = AtomicBucket::new().named("test");
metrics.drain(Stream::write_to_stdout());
metrics.flush_every(Duration::from_secs(3));
let counter = metrics.counter("counter_a");
let timer = metrics.timer("timer_a");
let gauge = metrics.gauge("gauge_a");
let marker = metrics.marker("marker_a");
loop {
// add counts forever, non-stop
counter.count(11);
counter.count(12);
counter.count(13);
timer.interval_us(11_000_000);
timer.interval_us(12_000_000);
timer.interval_us(13_000_000);
gauge.value(11);
gauge.value(12);
gauge.value(13);
marker.mark();
}
}

examples/bucket_cleanup.rs Executable file

@ -0,0 +1,27 @@
//! Transient metrics are not retained by buckets after flushing.
use dipstick::*;
use std::thread::sleep;
use std::time::Duration;
fn main() {
let bucket = AtomicBucket::new();
AtomicBucket::default_drain(Stream::write_to_stdout());
let persistent_marker = bucket.marker("persistent");
let mut i = 0;
loop {
i += 1;
let transient_marker = bucket.marker(&format!("marker_{}", i));
transient_marker.mark();
persistent_marker.mark();
bucket.flush().unwrap();
sleep(Duration::from_secs(1));
}
}


@ -1,15 +1,12 @@
//! A sample application continuously aggregating metrics,
//! printing the summary stats every three seconds
extern crate dipstick;
use std::time::Duration;
use dipstick::*;
use std::time::Duration;
fn main() {
let to_quick_aggregate = aggregate(16, summary, to_stdout());
let app_metrics = global_metrics(to_quick_aggregate);
let app_metrics = AtomicBucket::new();
app_metrics.drain(Stream::write_to_stdout());
app_metrics.flush_every(Duration::from_secs(3));
@ -34,5 +31,4 @@ fn main() {
marker.mark();
}
}


@ -0,0 +1,22 @@
//! Metrics are printed at the end of every cycle as scope is dropped
use std::thread::sleep;
use std::time::Duration;
use dipstick::*;
fn main() {
let input = Stream::write_to_stdout().buffered(Buffering::Unlimited);
loop {
println!("\n------- open scope");
let metrics = input.metrics();
metrics.marker("marker_a").mark();
sleep(Duration::from_millis(1000));
println!("------- close scope: ");
}
}

examples/cache.rs Executable file

@ -0,0 +1,21 @@
//! A sample application asynchronously printing metrics to stdout.
use dipstick::*;
use std::io;
use std::thread::sleep;
use std::time::Duration;
fn main() {
let metrics = Stream::write_to(io::stdout())
.cached(5)
.metrics()
.named("cache");
loop {
// report some ad-hoc metric values from our "application" loop
metrics.counter("blorf").count(1134);
metrics.marker("burg").mark();
sleep(Duration::from_millis(500));
}
}

examples/clopwizard.rs Normal file

@ -0,0 +1,39 @@
//! A dropwizard-like configuration using three buckets
//! aggregating one, five and fifteen minutes of data.
use dipstick::*;
use std::thread::sleep;
use std::time::Duration;
metrics! {
APP = "application" => {
pub COUNTER: Counter = "counter";
}
}
fn main() {
let one_minute = AtomicBucket::new();
one_minute.flush_every(Duration::from_secs(60));
let five_minutes = AtomicBucket::new();
five_minutes.flush_every(Duration::from_secs(300));
let fifteen_minutes = AtomicBucket::new();
fifteen_minutes.flush_every(Duration::from_secs(900));
let all_buckets = MultiInputScope::new()
.add_target(one_minute)
.add_target(five_minutes)
.add_target(fifteen_minutes)
.named("machine_name");
// send application metrics to aggregator
Proxy::default().target(all_buckets);
AtomicBucket::default_drain(Stream::write_to_stdout());
AtomicBucket::default_stats(stats_all);
loop {
COUNTER.count(17);
sleep(Duration::from_secs(3));
}
}


@ -1,7 +0,0 @@
[package]
name = "counter_timer_gauge"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }

examples/custom_publish.rs Executable file

@ -0,0 +1,59 @@
//! A demonstration of customization of exported aggregated metrics.
//! Using match on origin metric kind or score type to alter publication output.
use dipstick::*;
use std::time::Duration;
fn main() {
fn custom_statistics(
kind: InputKind,
mut name: MetricName,
score: ScoreType,
) -> Option<(InputKind, MetricName, MetricValue)> {
match (kind, score) {
// do not export gauge scores
(InputKind::Gauge, _) => None,
// prepend and append to metric name
(_, ScoreType::Count(count)) => {
if let Some(last) = name.pop_back() {
Some((
InputKind::Counter,
name.append("customized_add_prefix")
.append(format!("{}_and_a_suffix", last)),
count,
))
} else {
None
}
}
// scaling the score value and appending unit to name
(kind, ScoreType::Sum(sum)) => Some((kind, name.append("per_thousand"), sum / 1000)),
// using the unmodified metric name
(kind, ScoreType::Mean(avg)) => Some((kind, name, avg.round() as MetricValue)),
// do not export min and max
_ => None,
}
}
// send application metrics to aggregator
AtomicBucket::default_drain(Stream::write_to_stderr());
AtomicBucket::default_stats(custom_statistics);
let app_metrics = AtomicBucket::new();
// schedule aggregated metrics to be printed every 3 seconds
app_metrics.flush_every(Duration::from_secs(3));
let counter = app_metrics.counter("counter_a");
let timer = app_metrics.timer("timer_b");
let gauge = app_metrics.gauge("gauge_c");
loop {
counter.count(11);
timer.interval_us(654654);
gauge.value(3534);
}
}


@ -1,8 +0,0 @@
[package]
name = "custom_publish"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }


@ -1,48 +0,0 @@
//! A demonstration of customization of exported aggregated metrics.
//! Using match on origin metric kind or score type to alter publication output.
extern crate dipstick;
use std::time::Duration;
use dipstick::*;
fn main() {
fn custom_statistics(kind: Kind, name: &str, score:ScoreType) -> Option<(Kind, Vec<&str>, Value)> {
match (kind, score) {
// do not export gauge scores
(Kind::Gauge, _) => None,
// prepend and append to metric name
(_, ScoreType::Count(count)) => Some((Kind::Counter,
vec!["name customized_with_prefix:", &name, " and a suffix: "], count)),
// scaling the score value and appending unit to name
(kind, ScoreType::Sum(sum)) => Some((kind, vec![&name, "_millisecond"], sum * 1000)),
// using the unmodified metric name
(kind, ScoreType::Mean(avg)) => Some((kind, vec![&name], avg.round() as u64)),
// do not export min and max
_ => None,
}
}
// send application metrics to aggregator
let to_aggregate = aggregate(16, custom_statistics, to_stdout());
let app_metrics = global_metrics(to_aggregate);
// schedule aggregated metrics to be printed every 3 seconds
app_metrics.flush_every(Duration::from_secs(3));
let counter = app_metrics.counter("counter_a");
let timer = app_metrics.timer("timer_b");
let gauge = app_metrics.gauge("gauge_c");
loop {
counter.count(11);
timer.interval_us(654654);
gauge.value(3534);
}
}

examples/graphite.rs Normal file

@ -0,0 +1,17 @@
//! A sample application sending ad-hoc metrics to graphite.
use dipstick::*;
use std::time::Duration;
fn main() {
let metrics = Graphite::send_to("localhost:2003")
.expect("Connected")
.named("my_app")
.metrics();
loop {
metrics.counter("counter_a").count(123);
metrics.timer("timer_a").interval_us(2000000);
std::thread::sleep(Duration::from_millis(40));
}
}


@ -1,8 +0,0 @@
[package]
name = "graphite"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }
badlog = "1.0"


@ -1,21 +0,0 @@
//! A sample application sending ad-hoc counter values both to statsd _and_ to stdout.
extern crate dipstick;
extern crate badlog;
use dipstick::*;
use std::time::Duration;
fn main() {
badlog::init(Some("info"));
let metrics = global_metrics(
to_graphite("localhost:2003").expect("Connect to graphite")
);
loop {
metrics.counter("counter_a").count(123);
metrics.timer("timer_a").interval_us(2000000);
std::thread::sleep(Duration::from_millis(40));
}
}

examples/macro_proxy.rs Executable file

@ -0,0 +1,45 @@
//! A sample application sending ad-hoc counter values both to statsd _and_ to stdout.
use dipstick::*;
use std::time::Duration;
// undeclared root (un-prefixed) metrics
metrics! {
// create counter "root_counter"
pub ROOT_COUNTER: Counter = "root_counter";
// create gauge "root_gauge"
pub ROOT_GAUGE: Gauge = "root_gauge";
// create timer "root_timer"
pub ROOT_TIMER: Timer = "root_timer";
}
// public source
metrics!(pub PUB_METRICS = "pub_lib_prefix" => {
// create counter "pub_lib_prefix.some_counter"
pub PUB_COUNTER: Counter = "some_counter";
});
// declare mod source
metrics!(pub LIB_METRICS = "mod_lib_prefix" => {
// create counter "mod_lib_prefix.some_counter"
pub SOME_COUNTER: Counter = "some_counter";
});
// reuse declared source
metrics!(LIB_METRICS => {
// create counter "mod_lib_prefix.another_counter"
ANOTHER_COUNTER: Counter = "another_counter";
});
fn main() {
dipstick::Proxy::default_target(Stream::write_to_stdout().metrics());
loop {
ROOT_COUNTER.count(123);
ANOTHER_COUNTER.count(456);
ROOT_TIMER.interval_us(2000000);
ROOT_GAUGE.value(34534);
std::thread::sleep(Duration::from_millis(40));
}
}

examples/multi_input.rs Normal file

@ -0,0 +1,25 @@
//! A sample application sending ad-hoc counter values both to statsd _and_ to stdout.
use dipstick::{Graphite, Input, InputScope, MultiInput, Prefixed, Stream};
use std::time::Duration;
fn main() {
// will output metrics to graphite and to stdout
let different_type_metrics = MultiInput::new()
.add_target(Graphite::send_to("localhost:2003").expect("Connecting"))
.add_target(Stream::write_to_stdout())
.metrics();
// will output metrics twice, once with "both.yeah" prefix and once with "both.ouch" prefix.
let same_type_metrics = MultiInput::new()
.add_target(Stream::write_to_stderr().named("yeah"))
.add_target(Stream::write_to_stderr().named("ouch"))
.named("both")
.metrics();
loop {
different_type_metrics.counter("counter_a").count(123);
same_type_metrics.timer("timer_a").interval_us(2000000);
std::thread::sleep(Duration::from_millis(400));
}
}


@ -1,7 +0,0 @@
[package]
name = "multi_print"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }


@ -1,32 +0,0 @@
//! A sample application sending ad-hoc counter values both to statsd _and_ to stdout.
extern crate dipstick;
use dipstick::*;
use std::time::Duration;
fn main() {
let metrics1 = global_metrics(
(
// use tuples to combine metrics of different types
to_statsd("localhost:8125").expect("connect"),
to_stdout()
)
);
let metrics2 = global_metrics(
&[
// use slices to combine multiple metrics of the same type
prefix("yeah.", to_stdout()),
prefix("ouch.", to_stdout()),
prefix("nooo.", to_stdout()),
][..]
);
loop {
metrics1.counter("counter_a").count(123);
metrics2.timer("timer_a").interval_us(2000000);
std::thread::sleep(Duration::from_millis(40));
}
}

examples/observer.rs Normal file

@ -0,0 +1,49 @@
//!
//! A sample application to demonstrate flush-triggered and scheduled observation of gauge values.
//!
//! This is the expected output:
//!
//! ```
//! cargo run --example observer
//! process.threads 4
//! process.uptime 6
//! process.threads 4
//! process.uptime 6
//! ...
//! ```
//!
use std::time::{Duration, Instant};
use dipstick::*;
fn main() {
let metrics = AtomicBucket::new().named("process");
metrics.drain(Stream::write_to_stdout());
metrics.flush_every(Duration::from_secs(3));
let uptime = metrics.gauge("uptime");
metrics.observe(uptime, |_| 6).on_flush();
// record number of threads in pool every second
let scheduled = metrics
.observe(metrics.gauge("threads"), thread_count)
.every(Duration::from_secs(1));
// "heartbeat" metric
let on_flush = metrics
.observe(metrics.marker("heartbeat"), |_| 1)
.on_flush();
for _ in 0..1000 {
std::thread::sleep(Duration::from_millis(40));
}
on_flush.cancel();
scheduled.cancel();
}
/// Query number of running threads in this process using Linux's /proc filesystem.
fn thread_count(_now: Instant) -> MetricValue {
4
}

examples/per_metric_sampling.rs Executable file

@ -0,0 +1,36 @@
//! A sample application sending ad-hoc marker values both to statsd _and_ to stdout.
use dipstick::*;
use std::time::Duration;
fn main() {
let statsd = Statsd::send_to("localhost:8125")
.expect("Connected")
.named("my_app");
// Sampling::Full is the default
// .sampled(Sampling::Full);
let unsampled_marker = statsd.metrics().marker("marker_a");
let low_freq_marker = statsd
.sampled(Sampling::Random(0.1))
.metrics()
.marker("low_freq_marker");
let hi_freq_marker = statsd
.sampled(Sampling::Random(0.001))
.metrics()
.marker("hi_freq_marker");
loop {
unsampled_marker.mark();
for _i in 0..10 {
low_freq_marker.mark();
}
for _i in 0..1000 {
hi_freq_marker.mark();
}
std::thread::sleep(Duration::from_millis(3000));
}
}


@ -0,0 +1,20 @@
//! A sample application sending ad-hoc metrics to prometheus.
use dipstick::*;
use std::time::Duration;
fn main() {
let metrics = Prometheus::push_to("http://localhost:9091/metrics/job/prometheus_example")
.expect("Prometheus Socket")
.named("my_app")
.metrics();
AppLabel::set("abc", "456");
ThreadLabel::set("xyz", "123");
loop {
metrics.counter("counter_a").count(123);
metrics.timer("timer_a").interval_us(2000000);
std::thread::sleep(Duration::from_millis(40));
}
}

examples/proxy.rs Normal file

@ -0,0 +1,50 @@
//! Use the proxy to dynamically switch the metrics input & names.
use dipstick::{Input, InputScope, Prefixed, Proxy, Stream};
use std::thread::sleep;
use std::time::Duration;
fn main() {
let root_proxy = Proxy::default();
let sub = root_proxy.named("sub");
let count1 = root_proxy.counter("counter_a");
let count2 = sub.counter("counter_b");
loop {
let stdout = Stream::write_to_stdout().metrics();
root_proxy.target(stdout.clone());
count1.count(1);
count2.count(2);
// route every metric from the root to stdout with prefix "root"
root_proxy.target(stdout.named("root"));
count1.count(3);
count2.count(4);
// route metrics from "sub" to stdout with prefix "mutant"
sub.target(stdout.named("mutant"));
count1.count(5);
count2.count(6);
// clear root metrics route, "sub" still appears
root_proxy.unset_target();
count1.count(7);
count2.count(8);
// now no metrics appear
sub.unset_target();
count1.count(9);
count2.count(10);
// go back to initial single un-prefixed route
root_proxy.target(stdout.clone());
count1.count(11);
count2.count(12);
sleep(Duration::from_secs(1));
println!()
}
}

examples/proxy_multi.rs Normal file

@ -0,0 +1,57 @@
//! Use the proxy to send metrics to multiple outputs
/// Create a pipeline that fans out
/// The key here is to use AtomicBucket to read
/// from the proxy and aggregate and flush metrics
///
/// Proxy
/// -> AtomicBucket
/// -> MultiOutput
/// -> Prometheus
/// -> Statsd
/// -> stdout
use dipstick::*;
use std::time::Duration;
metrics! {
pub PROXY: Proxy = "my_proxy" => {}
}
fn main() {
// Placeholder to collect output targets
// This will prefix all metrics with "my_stats"
// before flushing them.
let mut targets = MultiInput::new().named("my_stats");
// Skip the metrics here... we just use this for the output
// Follow the same pattern for Statsd, Graphite, etc.
let prometheus = Prometheus::push_to("http://localhost:9091/metrics/job/dipstick_example")
.expect("Prometheus Socket");
targets = targets.add_target(prometheus);
// Add stdout
targets = targets.add_target(Stream::write_to_stdout());
// Create the stats and drain targets
let bucket = AtomicBucket::new();
bucket.drain(targets);
// Crucial, set the flush interval, otherwise risk hammering targets
bucket.flush_every(Duration::from_secs(3));
// Now wire up the proxy target with the stats and you're all set
let proxy = Proxy::default();
proxy.target(bucket.clone());
// Example using the macro! Proxy sugar
PROXY.target(bucket.named("global"));
loop {
// Using the default proxy
proxy.counter("beans").count(100);
proxy.timer("braincells").interval_us(420);
// global example
PROXY.counter("my_proxy_counter").count(123);
PROXY.timer("my_proxy_timer").interval_us(2000000);
std::thread::sleep(Duration::from_millis(100));
}
}

examples/raw_log/src/main.rs → examples/raw_log.rs Normal file → Executable file

@ -1,9 +1,7 @@
//! Use the metrics backend directly to log a metric value.
//! Applications should use the metrics()-provided instruments instead.
extern crate dipstick;
use dipstick::*;
use dipstick::{labels, Input, InputScope};
fn main() {
raw_write()
@ -11,9 +9,9 @@ fn main() {
pub fn raw_write() {
// setup dual metric channels
let metrics_log = to_log();
let metrics_log = dipstick::Log::to_log().metrics();
// define and send metrics using raw channel API
let counter = metrics_log.define_metric(Kind::Counter, "count_a", FULL_SAMPLING_RATE);
metrics_log.open_scope(true)(ScopeCmd::Write(&counter, 1));
let counter = metrics_log.new_metric("count_a".into(), dipstick::InputKind::Counter);
counter.write(1, labels![]);
}

View File

@ -1,7 +0,0 @@
[package]
name = "raw_sink"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }

View File

@ -1,7 +0,0 @@
[package]
name = "scoped_print"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }

View File

@ -1,47 +0,0 @@
//! A sample application continuously aggregating metrics,
//! printing the summary stats every three seconds
extern crate dipstick;
use std::time::Duration;
use std::thread::sleep;
use dipstick::*;
fn main() {
let metrics = scoped_metrics(to_stdout());
let counter = metrics.counter("counter_a");
let timer = metrics.timer("timer_a");
let gauge = metrics.gauge("gauge_a");
let marker = metrics.marker("marker_a");
loop {
// add counts forever, non-stop
println!("\n------- open scope");
let ref mut scope = metrics.open_scope();
counter.count(scope, 11);
counter.count(scope, 12);
counter.count(scope, 13);
timer.interval_us(scope, 11_000_000);
timer.interval_us(scope, 12_000_000);
timer.interval_us(scope, 13_000_000);
sleep(Duration::from_millis(1000));
gauge.value(scope, 11);
gauge.value(scope, 12);
gauge.value(scope, 13);
marker.mark(scope);
sleep(Duration::from_millis(1000));
println!("------- close scope: ")
// scope metrics are printed at the end of every cycle as scope is dropped
}
}

View File

@ -1,8 +0,0 @@
[package]
name = "static_metrics"
version = "0.0.0"
workspace = "../../"
[dependencies]
dipstick = { path = '../../' }
lazy_static = "0.2.9"

View File

@ -1,28 +0,0 @@
//! Use statically defined app metrics & backend.
//! This pattern is likely to emerge in most real applications.
extern crate dipstick;
#[macro_use]
extern crate lazy_static;
/// The `metric` module should be shared across the crate and contain metrics from all modules.
/// Conventions are easier to uphold and document when all metrics are defined in the same place.
pub mod metric {
use dipstick::*;
// Unfortunately, Rust's `static` assignments still force us to explicitly declare types.
// This makes it uglier than it should be when working with generics...
// and is even more work because IDEs such as IntelliJ cannot yet see through macro blocks :(
lazy_static! {
pub static ref METRICS: GlobalMetrics<String> = global_metrics(to_stdout());
pub static ref COUNTER_A: Counter<String> = METRICS.counter("counter_a");
pub static ref TIMER_B: Timer<String> = METRICS.timer("timer_b");
}
}
fn main() {
// The resulting application code is lean and clean
metric::COUNTER_A.count(11);
metric::TIMER_B.interval_us(654654);
}

View File

@ -0,0 +1,21 @@
//! A sample application sending ad-hoc counter values to statsd.
use dipstick::*;
use std::time::Duration;
fn main() {
let metrics = Statsd::send_to("localhost:8125")
.expect("Connected")
// .with_sampling(Sampling::Random(0.2))
.named("my_app")
.metrics();
let counter = metrics.counter("counter_a");
loop {
for i in 1..11 {
counter.count(i);
}
std::thread::sleep(Duration::from_millis(3000));
}
}

21
examples/statsd_sampling.rs Executable file
View File

@ -0,0 +1,21 @@
//! A sample application sending ad-hoc counter values to statsd, with random sampling.
use dipstick::*;
use std::time::Duration;
fn main() {
let metrics = Statsd::send_to("localhost:8125")
.expect("Connected")
.sampled(Sampling::Random(0.2))
.named("my_app")
.metrics();
let counter = metrics.counter("counter_a");
loop {
for i in 1..11 {
counter.count(i);
}
std::thread::sleep(Duration::from_millis(3000));
}
}

View File

@ -0,0 +1,43 @@
//! Print metrics to stderr with custom formatter including a label.
use dipstick::{
AppLabel, Formatting, Input, InputKind, InputScope, LabelOp, LineFormat, LineOp, LineTemplate,
MetricName, Stream,
};
use std::thread::sleep;
use std::time::Duration;
/// Generates template like "$METRIC $value $label_value["abc"]\n"
struct MyFormat;
impl LineFormat for MyFormat {
fn template(&self, name: &MetricName, _kind: InputKind) -> LineTemplate {
LineTemplate::new(vec![
LineOp::Literal(format!("{} ", name.join(".")).to_uppercase().into()),
LineOp::ValueAsText,
LineOp::Literal(" ".into()),
LineOp::LabelExists(
"abc".into(),
vec![
LabelOp::LabelKey,
LabelOp::Literal(":".into()),
LabelOp::LabelValue,
],
),
LineOp::NewLine,
])
}
}
fn main() {
let counter = Stream::write_to_stderr()
.formatting(MyFormat)
.metrics()
.counter("counter_a");
AppLabel::set("abc", "xyz");
loop {
// report some metric values from our "application" loop
counter.count(11);
sleep(Duration::from_millis(500));
}
}
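The custom formatter above builds a `LineTemplate` from a list of `LineOp`s, which is then rendered once per value. A minimal std-only sketch of that idea (the `Op` enum and `render` function here are simplified illustrations, not the library's types):

```rust
// Illustrative sketch of template-based line formatting:
// a template is a fixed list of operations, rendered once per value.
enum Op {
    Literal(String), // emit fixed text
    Value,           // emit the metric value
    NewLine,
}

fn render(template: &[Op], value: i64) -> String {
    let mut out = String::new();
    for op in template {
        match op {
            Op::Literal(s) => out.push_str(s),
            Op::Value => out.push_str(&value.to_string()),
            Op::NewLine => out.push('\n'),
        }
    }
    out
}

fn main() {
    // Mirrors the "METRIC value\n" shape of the example above.
    let template = vec![
        Op::Literal("COUNTER_A ".into()),
        Op::Value,
        Op::NewLine,
    ];
    assert_eq!(render(&template, 11), "COUNTER_A 11\n");
}
```

Building the template once and replaying it per value is what keeps per-metric formatting cheap on the hot path.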

28
justfile Executable file
View File

@ -0,0 +1,28 @@
#!just
# Sane make-like command runner: https://github.com/casey/just
# Default target
all: format test examples bench lint
# Unit and doc tests
test:
cargo test --no-default-features --features="skeptic"
examples:
cargo build --examples
bench:
cargo +nightly bench --features="bench"
format:
cargo fmt
lint:
cargo clippy
clean:
cargo clean
# Build all and then publish to crates.io.
publish: all
cargo publish

View File

@ -1,131 +0,0 @@
//! Maintain aggregated metrics for deferred reporting,
//!
use core::*;
use scores::*;
use publish::*;
use std::fmt::Debug;
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
/// Aggregate metrics in memory.
/// Depending on the type of metric, count, sum, minimum and maximum of values will be tracked.
/// Needs to be connected to a publisher to be useful.
/// ```
/// use dipstick::*;
/// let sink = aggregate(4, summary, to_stdout());
/// let metrics = global_metrics(sink);
/// metrics.marker("my_event").mark();
/// metrics.marker("my_event").mark();
/// ```
pub fn aggregate<E, M>(capacity: usize, stat_fn: E, to_chain: Chain<M>) -> Chain<Aggregate>
where
E: Fn(Kind, &str, ScoreType) -> Option<(Kind, Vec<&str>, Value)> + Send + Sync + 'static,
M: Clone + Send + Sync + Debug + 'static,
{
let metrics = Arc::new(RwLock::new(HashMap::with_capacity(capacity)));
let metrics0 = metrics.clone();
let publish = Arc::new(Publisher::new(stat_fn, to_chain));
Chain::new(
move |kind, name, _rate| {
metrics.write().unwrap()
.entry(name.to_string())
.or_insert_with(|| Arc::new(Scoreboard::new(kind, name.to_string()))
).clone()
},
move |_auto_flush| {
let metrics = metrics0.clone();
let publish = publish.clone();
Arc::new(move |cmd| match cmd {
ScopeCmd::Write(metric, value) => metric.update(value),
ScopeCmd::Flush => {
let metrics = metrics.read().expect("Lock scoreboards for a snapshot.");
let snapshot = metrics.values().flat_map(|score| score.reset()).collect();
publish.publish(snapshot);
}
})
},
)
}
/// Central aggregation structure.
/// Since `AggregateKey`s themselves contain scores, the aggregator simply maintains
/// a shared list of metrics for enumeration when used as source.
#[derive(Debug, Clone)]
pub struct Aggregator {
metrics: Arc<RwLock<HashMap<String, Arc<Scoreboard>>>>,
publish: Arc<Publish>,
}
impl Aggregator {
/// Build a new metric aggregation point with specified initial capacity of metrics to aggregate.
pub fn with_capacity(size: usize, publish: Arc<Publish>) -> Aggregator {
Aggregator {
metrics: Arc::new(RwLock::new(HashMap::with_capacity(size))),
publish: publish.clone(),
}
}
/// Discard scores for ad-hoc metrics.
pub fn cleanup(&self) {
let orphans: Vec<String> = self.metrics.read().unwrap().iter()
// is aggregator now the sole owner?
.filter(|&(_k, v)| Arc::strong_count(v) == 1)
.map(|(k, _v)| k.to_string())
.collect();
if !orphans.is_empty() {
let mut remover = self.metrics.write().unwrap();
orphans.iter().for_each(|k| {remover.remove(k);});
}
}
}
/// The type of metric created by the Aggregator.
pub type Aggregate = Arc<Scoreboard>;
#[cfg(feature = "bench")]
mod bench {
use super::*;
use test;
use core::Kind::*;
use output::*;
#[bench]
fn time_bench_write_event(b: &mut test::Bencher) {
let sink = aggregate(4, summary, to_void());
let metric = sink.define_metric(Marker, "event_a", 1.0);
let scope = sink.open_scope(false);
b.iter(|| test::black_box(scope(ScopeCmd::Write(&metric, 1))));
}
#[bench]
fn time_bench_write_count(b: &mut test::Bencher) {
let sink = aggregate(4, summary, to_void());
let metric = sink.define_metric(Counter, "count_a", 1.0);
let scope = sink.open_scope(false);
b.iter(|| test::black_box(scope(ScopeCmd::Write(&metric, 1))));
}
#[bench]
fn time_bench_read_event(b: &mut test::Bencher) {
let sink = aggregate(4, summary, to_void());
let metric = sink.define_metric(Marker, "marker_a", 1.0);
b.iter(|| test::black_box(metric.reset()));
}
#[bench]
fn time_bench_read_count(b: &mut test::Bencher) {
let sink = aggregate(4, summary, to_void());
let metric = sink.define_metric(Counter, "count_a", 1.0);
b.iter(|| test::black_box(metric.reset()));
}
}

View File

@ -1,79 +0,0 @@
//! Queue metrics for write on a separate thread,
//! Metrics definitions are still synchronous.
//! If queue size is exceeded, calling code reverts to blocking.
//!
use core::*;
use self_metrics::*;
use std::sync::Arc;
use std::sync::mpsc;
use std::thread;
/// Queue metric writes for processing on a dedicated thread.
/// Use of this should be transparent, this has no effect on the values.
/// If the queue fills up, the calling thread blocks until space is available.
pub fn async<M, IC>(queue_size: usize, chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
chain.mod_scope(|next| {
// setup channel
let (sender, receiver) = mpsc::sync_channel::<QueueCommand<M>>(queue_size);
// start queue processor thread
thread::spawn(move || loop {
while let Ok(cmd) = receiver.recv() {
match cmd {
QueueCommand { cmd: Some((metric, value)), next_scope } =>
next_scope(ScopeCmd::Write(&metric, value)),
QueueCommand { cmd: None, next_scope } =>
next_scope(ScopeCmd::Flush),
}
}
});
Arc::new(move |auto_flush| {
// open next scope, make it Arc to move across queue
let next_scope: Arc<ControlScopeFn<M>> = Arc::from(next(auto_flush));
let sender = sender.clone();
// forward any scope command through the channel
Arc::new(move |cmd| {
let send_cmd = match cmd {
ScopeCmd::Write(metric, value) => Some((metric.clone(), value)),
ScopeCmd::Flush => None,
};
sender
.send(QueueCommand {
cmd: send_cmd,
next_scope: next_scope.clone(),
})
.unwrap_or_else(|e| {
SEND_FAILED.mark();
trace!("Async metrics could not be sent: {}", e);
})
})
})
})
}
lazy_static! {
static ref QUEUE_METRICS: GlobalMetrics<Aggregate> = SELF_METRICS.with_prefix("async.");
static ref SEND_FAILED: Marker<Aggregate> = QUEUE_METRICS.marker("send_failed");
}
/// Carry the scope command over the queue, from the sender, to be executed by the receiver.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct QueueCommand<M> {
/// If Some(), the metric and value to write.
/// If None, flush the scope
cmd: Option<(M, Value)>,
/// The scope to write the metric to
#[derivative(Debug = "ignore")]
next_scope: Arc<ControlScopeFn<M>>,
}
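The removed async module's protocol can be sketched with std alone: commands travel over a bounded channel to a worker thread, where `Some((metric, value))` means write and `None` means flush, mirroring the `Option`-based `QueueCommand` above. Names are illustrative:

```rust
use std::sync::mpsc;
use std::thread;

// Queue command: Some((name, value)) = write, None = flush.
type Cmd = Option<(String, i64)>;

fn spawn_worker(queue_size: usize) -> (mpsc::SyncSender<Cmd>, thread::JoinHandle<Vec<String>>) {
    // sync_channel gives a bounded queue: senders block when it is full,
    // which is the "calling code reverts to blocking" behavior described above.
    let (tx, rx) = mpsc::sync_channel::<Cmd>(queue_size);
    let handle = thread::spawn(move || {
        let mut log = Vec::new();
        while let Ok(cmd) = rx.recv() {
            match cmd {
                Some((name, value)) => log.push(format!("write {name}={value}")),
                None => log.push("flush".into()),
            }
        }
        log // channel closed: all senders dropped
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = spawn_worker(16);
    tx.send(Some(("count_a".into(), 1))).unwrap();
    tx.send(None).unwrap();
    drop(tx); // close the channel so the worker exits
    let log = handle.join().unwrap();
    assert_eq!(log, vec!["write count_a=1", "flush"]);
}
```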

593
src/atomic.rs Executable file
View File

@ -0,0 +1,593 @@
//! Maintain aggregated metrics for deferred reporting.
use crate::attributes::{Attributes, MetricId, OnFlush, Prefixed, WithAttributes};
use crate::clock::TimeHandle;
use crate::input::{Input, InputDyn, InputKind, InputMetric, InputScope};
use crate::name::MetricName;
use crate::stats::ScoreType::*;
use crate::stats::{stats_summary, ScoreType};
use crate::{Flush, MetricValue, Void};
use std::borrow::Borrow;
use std::collections::BTreeMap;
use std::isize;
use std::mem;
use std::sync::atomic::AtomicIsize;
use std::sync::atomic::Ordering::*;
use std::sync::Arc;
use std::{fmt, io};
#[cfg(not(feature = "parking_lot"))]
use std::sync::RwLock;
#[cfg(feature = "parking_lot")]
use parking_lot::RwLock;
/// A function type to transform aggregated scores into publishable statistics.
pub type Stat = Option<(InputKind, MetricName, MetricValue)>;
pub type StatsFn = dyn Fn(InputKind, MetricName, ScoreType) -> Stat + Send + Sync + 'static;
fn initial_stats() -> &'static StatsFn {
&stats_summary
}
fn initial_drain() -> Arc<dyn InputDyn + Send + Sync> {
Arc::new(Void::new())
}
lazy_static! {
static ref DEFAULT_AGGREGATE_STATS: RwLock<Arc<StatsFn>> =
RwLock::new(Arc::new(initial_stats()));
static ref DEFAULT_AGGREGATE_INPUT: RwLock<Arc<dyn InputDyn + Send + Sync>> =
RwLock::new(initial_drain());
}
/// Central aggregation structure.
/// Maintains a list of metrics for enumeration when used as source.
#[derive(Debug, Clone, Default)]
pub struct AtomicBucket {
attributes: Attributes,
inner: Arc<RwLock<InnerAtomicBucket>>,
}
#[derive(Default)]
struct InnerAtomicBucket {
metrics: BTreeMap<MetricName, Arc<AtomicScores>>,
period_start: TimeHandle,
stats: Option<Arc<StatsFn>>,
drain: Option<Arc<dyn InputDyn + Send + Sync + 'static>>,
publish_metadata: bool,
}
impl fmt::Debug for InnerAtomicBucket {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "metrics: {:?}", self.metrics)?;
write!(f, "period_start: {:?}", self.period_start)
}
}
lazy_static! {
static ref PERIOD_LENGTH: MetricName = "_period_length".into();
}
impl InnerAtomicBucket {
fn flush(&mut self) -> io::Result<()> {
let pub_scope: Arc<dyn InputScope> = match self.drain {
Some(ref out) => out.input_dyn(),
None => read_lock!(DEFAULT_AGGREGATE_INPUT).input_dyn(),
};
self.flush_to(pub_scope.borrow())?;
// all metrics published!
// purge: if the bucket is the last owner of a metric, remove it
// TODO parameterize whether to keep ad-hoc metrics after publish
let mut purged = self.metrics.clone();
self.metrics
.iter()
.filter(|&(_k, v)| Arc::strong_count(v) == 1)
.map(|(k, _v)| k)
.for_each(|k| {
purged.remove(k);
});
self.metrics = purged;
Ok(())
}
/// Take a snapshot of aggregated values and reset them.
/// Compute stats on captured values using assigned or default stats function.
/// Write stats to assigned or default output.
fn flush_to(&mut self, target: &dyn InputScope) -> io::Result<()> {
let now = TimeHandle::now();
let duration_seconds = self.period_start.elapsed_us() as f64 / 1_000_000.0;
self.period_start = now;
let mut snapshot: Vec<(&MetricName, InputKind, Vec<ScoreType>)> = self
.metrics
.iter()
.flat_map(|(name, scores)| {
scores
.reset(duration_seconds)
.map(|values| (name, scores.metric_kind(), values))
})
.collect();
if snapshot.is_empty() {
// no data was collected for this period
// TODO repeat previous frame min/max ?
// TODO update some canary metric ?
Ok(())
} else {
// TODO add switch for metadata such as PERIOD_LENGTH
if self.publish_metadata {
snapshot.push((
&PERIOD_LENGTH,
InputKind::Timer,
vec![Sum((duration_seconds * 1000.0) as isize)],
));
}
let stats_fn = match self.stats {
Some(ref stats_fn) => stats_fn.clone(),
None => read_lock!(DEFAULT_AGGREGATE_STATS).clone(),
};
for metric in snapshot {
for score in metric.2 {
let filtered = stats_fn(metric.1, metric.0.clone(), score);
if let Some((kind, name, value)) = filtered {
let metric: InputMetric = target.new_metric(name, kind);
// TODO provide some stats context through labels?
metric.write(value, labels![])
}
}
}
target.flush()
}
}
}
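The purge step in `flush` above drops any ad-hoc metric whose `Arc` the bucket alone still owns (`strong_count == 1`), keeping metrics that application code still holds a handle to. A std-only sketch of that ownership check:

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

// Remove entries that nothing outside the map still references.
fn purge(metrics: &mut BTreeMap<String, Arc<i64>>) {
    let orphans: Vec<String> = metrics
        .iter()
        .filter(|(_k, v)| Arc::strong_count(v) == 1) // map is the sole owner
        .map(|(k, _v)| k.clone())
        .collect();
    for k in &orphans {
        metrics.remove(k);
    }
}

fn main() {
    let mut metrics = BTreeMap::new();
    metrics.insert("ad_hoc".to_string(), Arc::new(1));
    let kept = Arc::new(2);
    metrics.insert("held".to_string(), kept.clone()); // external handle survives
    purge(&mut metrics);
    assert!(!metrics.contains_key("ad_hoc"));
    assert!(metrics.contains_key("held"));
}
```

Collecting the orphan keys first, then removing them, avoids mutating the map while iterating it, the same two-phase shape the bucket uses.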
impl<S: AsRef<str>> From<S> for AtomicBucket {
fn from(name: S) -> AtomicBucket {
AtomicBucket::new().named(name.as_ref())
}
}
impl AtomicBucket {
/// Build a new atomic bucket.
pub fn new() -> AtomicBucket {
AtomicBucket {
attributes: Attributes::default(),
inner: Arc::new(RwLock::new(InnerAtomicBucket {
metrics: BTreeMap::new(),
period_start: TimeHandle::now(),
stats: None,
drain: None,
// TODO add API toggle for metadata publish
publish_metadata: false,
})),
}
}
/// Set the default aggregated metrics statistics generator.
pub fn default_stats<F>(func: F)
where
F: Fn(InputKind, MetricName, ScoreType) -> Option<(InputKind, MetricName, MetricValue)>
+ Send
+ Sync
+ 'static,
{
*write_lock!(DEFAULT_AGGREGATE_STATS) = Arc::new(func)
}
/// Revert the default aggregated metrics statistics generator to the default `stats_summary`.
pub fn unset_default_stats() {
*write_lock!(DEFAULT_AGGREGATE_STATS) = Arc::new(initial_stats())
}
/// Set the default flush output for aggregated metrics.
pub fn default_drain(default_config: impl Input + Send + Sync + 'static) {
*write_lock!(DEFAULT_AGGREGATE_INPUT) = Arc::new(default_config);
}
/// Revert the default flush output for aggregated metrics.
pub fn unset_default_drain() {
*write_lock!(DEFAULT_AGGREGATE_INPUT) = initial_drain()
}
/// Set this bucket's statistics generator.
#[deprecated(since = "0.7.2", note = "Use stats()")]
pub fn set_stats<F>(&self, func: F)
where
F: Fn(InputKind, MetricName, ScoreType) -> Option<(InputKind, MetricName, MetricValue)>
+ Send
+ Sync
+ 'static,
{
self.stats(func)
}
/// Set this bucket's statistics generator.
pub fn stats<F>(&self, func: F)
where
F: Fn(InputKind, MetricName, ScoreType) -> Option<(InputKind, MetricName, MetricValue)>
+ Send
+ Sync
+ 'static,
{
write_lock!(self.inner).stats = Some(Arc::new(func))
}
/// Revert this bucket's statistics generator to the default.
pub fn unset_stats(&self) {
write_lock!(self.inner).stats = None
}
/// Set this bucket's aggregated metrics flush output.
#[deprecated(since = "0.7.2", note = "Use drain()")]
pub fn set_drain(&self, new_drain: impl Input + Send + Sync + 'static) {
self.drain(new_drain)
}
/// Set this bucket's aggregated metrics flush output.
pub fn drain(&self, new_drain: impl Input + Send + Sync + 'static) {
write_lock!(self.inner).drain = Some(Arc::new(new_drain))
}
/// Revert this bucket's flush target to the default output.
pub fn unset_drain(&self) {
write_lock!(self.inner).drain = None
}
/// Immediately flush the bucket's metrics to the specified scope, using the assigned or default stats.
pub fn flush_to(&self, publish_scope: &dyn InputScope) -> io::Result<()> {
let mut inner = write_lock!(self.inner);
inner.flush_to(publish_scope)
}
}
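The `stats`/`drain` setters above follow one pattern: a per-instance `Option` override that falls back to a swappable global default (the `DEFAULT_AGGREGATE_*` statics). A std-only sketch of that lookup order, with illustrative names:

```rust
use std::sync::{Arc, RwLock};

// Swappable global default (illustrative of the DEFAULT_AGGREGATE_* statics).
static DEFAULT: RwLock<Option<Arc<String>>> = RwLock::new(None);

struct Component {
    // Per-instance override; None falls back to the global default.
    local: Option<Arc<String>>,
}

impl Component {
    fn effective(&self) -> Arc<String> {
        match &self.local {
            Some(v) => v.clone(),
            None => DEFAULT
                .read()
                .unwrap()
                .clone()
                // Neither set: fall back to a built-in value.
                .unwrap_or_else(|| Arc::new("builtin".to_string())),
        }
    }
}

fn main() {
    let c = Component { local: None };
    assert_eq!(*c.effective(), "builtin");
    *DEFAULT.write().unwrap() = Some(Arc::new("global".to_string()));
    assert_eq!(*c.effective(), "global");
    let c2 = Component { local: Some(Arc::new("mine".to_string())) };
    assert_eq!(*c2.effective(), "mine");
}
```

Resolving the default at flush time (rather than at construction) is what lets `default_drain`/`default_stats` retroactively affect buckets that never set their own.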
impl InputScope for AtomicBucket {
/// Look up or create scores for the requested metric.
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let scores = write_lock!(self.inner)
.metrics
.entry(self.prefix_append(name.clone()))
.or_insert_with(|| Arc::new(AtomicScores::new(kind)))
.clone();
InputMetric::new(MetricId::forge("stats", name), move |value, _labels| {
scores.update(value)
})
}
}
impl Flush for AtomicBucket {
/// Collect and reset aggregated data.
/// Publish statistics.
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
let mut inner = write_lock!(self.inner);
inner.flush()
}
}
impl WithAttributes for AtomicBucket {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
const HIT: usize = 0;
const SUM: usize = 1;
const MAX: usize = 2;
const MIN: usize = 3;
const SCORES_LEN: usize = 4;
/// A metric that holds aggregated values.
#[derive(Debug)]
struct AtomicScores {
/// The kind of metric
kind: InputKind,
/// The actual recorded metric scores
scores: [AtomicIsize; SCORES_LEN],
}
impl AtomicScores {
/// Create new scores to track summary values of a metric
pub fn new(kind: InputKind) -> Self {
AtomicScores {
kind,
scores: unsafe { mem::transmute(AtomicScores::blank()) },
}
}
/// Returns the metric's kind.
pub fn metric_kind(&self) -> InputKind {
self.kind
}
#[inline]
fn blank() -> [isize; SCORES_LEN] {
[0, 0, isize::MIN, isize::MAX]
}
/// Update scores with new value
pub fn update(&self, value: MetricValue) {
// TODO detect & report any concurrent updates / resets for measurement of contention
// Count is tracked for all metrics
self.scores[HIT].fetch_add(1, Relaxed);
match self.kind {
InputKind::Marker => {}
InputKind::Level => {
// Level min & max apply to the _sum_ of values
// fetch_add only returns the previous sum, so min & max trail behind by one operation
// instead, pickup the slack by comparing again with the final sum upon `snapshot`
// this is to avoid making an extra load() on every value
let prev_sum = self.scores[SUM].fetch_add(value, Relaxed);
swap_if(&self.scores[MAX], prev_sum, |new, current| new > current);
swap_if(&self.scores[MIN], prev_sum, |new, current| new < current);
}
InputKind::Counter | InputKind::Timer | InputKind::Gauge => {
// gauges are non cumulative, but we keep the sum to compute the mean
// TODO use #![feature(atomic_min_max)] when stabilized
self.scores[SUM].fetch_add(value, Relaxed);
swap_if(&self.scores[MAX], value, |new, current| new > current);
swap_if(&self.scores[MIN], value, |new, current| new < current);
}
}
}
/// Reset scores to zero, return previous values
fn snapshot(&self, scores: &mut [isize; 4]) -> bool {
// NOTE copy count AND sum _before_ testing for data to reduce concurrent discrepancies
scores[HIT] = self.scores[HIT].swap(0, AcqRel);
scores[SUM] = self.scores[SUM].swap(0, AcqRel);
// if hit count is zero, no values were recorded.
if scores[HIT] == 0 {
return false;
}
scores[MAX] = self.scores[MAX].swap(isize::MIN, AcqRel);
scores[MIN] = self.scores[MIN].swap(isize::MAX, AcqRel);
if self.kind == InputKind::Level {
// fetch_add only returns the previous sum, so min & max trail behind by one operation
// pickup the slack by comparing one last time against the final sum
if scores[SUM] > scores[MAX] {
scores[MAX] = scores[SUM];
}
if scores[SUM] < scores[MIN] {
scores[MIN] = scores[SUM];
}
}
true
}
/// Map raw scores (if any) to applicable statistics
pub fn reset(&self, duration_seconds: f64) -> Option<Vec<ScoreType>> {
let mut scores = AtomicScores::blank();
if self.snapshot(&mut scores) {
let mut snapshot = Vec::new();
match self.kind {
InputKind::Marker => {
snapshot.push(Count(scores[HIT]));
snapshot.push(Rate(scores[HIT] as f64 / duration_seconds))
}
InputKind::Gauge => {
snapshot.push(Max(scores[MAX]));
snapshot.push(Min(scores[MIN]));
snapshot.push(Mean(scores[SUM] as f64 / scores[HIT] as f64));
}
InputKind::Timer => {
snapshot.push(Count(scores[HIT]));
snapshot.push(Sum(scores[SUM]));
snapshot.push(Max(scores[MAX]));
snapshot.push(Min(scores[MIN]));
snapshot.push(Mean(scores[SUM] as f64 / scores[HIT] as f64));
// timer rate uses the COUNT of timer calls per second (not SUM)
snapshot.push(Rate(scores[HIT] as f64 / duration_seconds))
}
InputKind::Counter => {
snapshot.push(Count(scores[HIT]));
snapshot.push(Sum(scores[SUM]));
snapshot.push(Max(scores[MAX]));
snapshot.push(Min(scores[MIN]));
snapshot.push(Mean(scores[SUM] as f64 / scores[HIT] as f64));
// counter rate uses the SUM of values per second (e.g. to get bytes/s)
snapshot.push(Rate(scores[SUM] as f64 / duration_seconds))
}
InputKind::Level => {
snapshot.push(Count(scores[HIT]));
snapshot.push(Sum(scores[SUM]));
snapshot.push(Max(scores[MAX]));
snapshot.push(Min(scores[MIN]));
snapshot.push(Mean(scores[SUM] as f64 / scores[HIT] as f64));
// level rate uses the SUM of adjustments per second
snapshot.push(Rate(scores[SUM] as f64 / duration_seconds))
}
}
Some(snapshot)
} else {
None
}
}
}
/// Spin until the update succeeds or is clearly lost to a concurrent update.
#[inline]
fn swap_if(counter: &AtomicIsize, new_value: isize, compare: fn(isize, isize) -> bool) {
let mut current = counter.load(Acquire);
while compare(new_value, current) {
if counter
.compare_exchange(current, new_value, Relaxed, Relaxed)
.is_ok()
{
// update successful
break;
}
// race detected, retry
current = counter.load(Acquire);
}
}
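The compare-exchange loop in `swap_if` implements a lock-free running maximum (or minimum, depending on the comparator). A standalone demonstration of the same technique; this variant reuses the current value returned by a failed `compare_exchange` instead of re-loading:

```rust
use std::sync::atomic::{AtomicIsize, Ordering::*};

// Lock-free "store if greater": retry the compare-exchange until the
// new value is installed or is no longer greater than the current one.
fn store_max(counter: &AtomicIsize, new_value: isize) {
    let mut current = counter.load(Acquire);
    while new_value > current {
        match counter.compare_exchange(current, new_value, Relaxed, Relaxed) {
            Ok(_) => break,                   // update landed
            Err(actual) => current = actual,  // lost a race: re-check against the winner
        }
    }
}

fn main() {
    // Seed with isize::MIN so the first value always wins,
    // matching the blank() scores above.
    let max = AtomicIsize::new(isize::MIN);
    for v in [3, 7, 2, 9, 4] {
        store_max(&max, v);
    }
    assert_eq!(max.load(Acquire), 9);
}
```

The loop terminates either by a successful swap or because a concurrent writer installed something at least as large, so no lock is ever taken on the write path.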
#[cfg(feature = "bench")]
mod bench {
use super::*;
#[bench]
fn update_marker(b: &mut test::Bencher) {
let metric = AtomicScores::new(InputKind::Marker);
b.iter(|| test::black_box(metric.update(1)));
}
#[bench]
fn update_count(b: &mut test::Bencher) {
let metric = AtomicScores::new(InputKind::Counter);
b.iter(|| test::black_box(metric.update(4)));
}
#[bench]
fn empty_snapshot(b: &mut test::Bencher) {
let metric = AtomicScores::new(InputKind::Counter);
let scores = &mut AtomicScores::blank();
b.iter(|| test::black_box(metric.snapshot(scores)));
}
#[bench]
fn aggregate_marker(b: &mut test::Bencher) {
let sink = AtomicBucket::new();
let metric = sink.new_metric("event_a".into(), InputKind::Marker);
b.iter(|| test::black_box(metric.write(1, labels![])));
}
#[bench]
fn aggregate_counter(b: &mut test::Bencher) {
let sink = AtomicBucket::new();
let metric = sink.new_metric("count_a".into(), InputKind::Counter);
b.iter(|| test::black_box(metric.write(1, labels![])));
}
}
#[cfg(test)]
mod mtest {
use super::*;
use crate::stats::{stats_all, stats_average, stats_summary};
use crate::clock::{mock_clock_advance, mock_clock_reset};
use crate::output::map::StatsMapScope;
use std::collections::BTreeMap;
use std::time::Duration;
fn make_stats(stats_fn: &'static StatsFn) -> BTreeMap<String, MetricValue> {
mock_clock_reset();
let metrics = AtomicBucket::new().named("test");
metrics.stats(stats_fn);
let counter = metrics.counter("counter_a");
let counter_b = metrics.counter("counter_b");
let timer = metrics.timer("timer_a");
let gauge = metrics.gauge("gauge_a");
let level = metrics.level("level_a");
let marker = metrics.marker("marker_a");
marker.mark();
marker.mark();
marker.mark();
counter.count(10);
counter.count(20);
counter_b.count(9);
counter_b.count(18);
counter_b.count(3);
timer.interval_us(10_000_000);
timer.interval_us(20_000_000);
gauge.value(10);
gauge.value(20);
level.adjust(789);
level.adjust(-7789);
level.adjust(77788);
mock_clock_advance(Duration::from_secs(3));
let map = StatsMapScope::default();
metrics.flush_to(&map).unwrap();
map.into()
}
#[test]
fn external_aggregate_all_stats() {
let map = make_stats(&stats_all);
assert_eq!(map["test.counter_a.count"], 2);
assert_eq!(map["test.counter_a.sum"], 30);
assert_eq!(map["test.counter_a.mean"], 15);
assert_eq!(map["test.counter_a.min"], 10);
assert_eq!(map["test.counter_a.max"], 20);
assert_eq!(map["test.counter_a.rate"], 10);
assert_eq!(map["test.counter_b.count"], 3);
assert_eq!(map["test.counter_b.sum"], 30);
assert_eq!(map["test.counter_b.mean"], 10);
assert_eq!(map["test.counter_b.min"], 3);
assert_eq!(map["test.counter_b.max"], 18);
assert_eq!(map["test.counter_b.rate"], 10);
assert_eq!(map["test.timer_a.count"], 2);
assert_eq!(map["test.timer_a.sum"], 30_000_000);
assert_eq!(map["test.timer_a.min"], 10_000_000);
assert_eq!(map["test.timer_a.max"], 20_000_000);
assert_eq!(map["test.timer_a.mean"], 15_000_000);
assert_eq!(map["test.timer_a.rate"], 1);
assert_eq!(map["test.gauge_a.mean"], 15);
assert_eq!(map["test.gauge_a.min"], 10);
assert_eq!(map["test.gauge_a.max"], 20);
assert_eq!(map["test.level_a.mean"], 23596);
assert_eq!(map["test.level_a.min"], -7000);
assert_eq!(map["test.level_a.max"], 70788);
assert_eq!(map["test.marker_a.count"], 3);
assert_eq!(map["test.marker_a.rate"], 1);
}
#[test]
fn external_aggregate_summary() {
let map = make_stats(&stats_summary);
assert_eq!(map["test.counter_a"], 30);
assert_eq!(map["test.counter_b"], 30);
assert_eq!(map["test.level_a"], 23596);
assert_eq!(map["test.timer_a"], 30_000_000);
assert_eq!(map["test.gauge_a"], 15);
assert_eq!(map["test.marker_a"], 3);
}
#[test]
fn external_aggregate_average() {
let map = make_stats(&stats_average);
assert_eq!(map["test.counter_a"], 15);
assert_eq!(map["test.counter_b"], 10);
assert_eq!(map["test.level_a"], 23596);
assert_eq!(map["test.timer_a"], 15_000_000);
assert_eq!(map["test.gauge_a"], 15);
assert_eq!(map["test.marker_a"], 3);
}
}

360
src/attributes.rs Executable file
View File

@ -0,0 +1,360 @@
use std::collections::HashMap;
use std::default::Default;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use std::sync::Arc;
use crate::name::{MetricName, NameParts};
use crate::scheduler::{Cancel, SCHEDULER};
use crate::{CancelHandle, Flush, InputMetric, InputScope, MetricValue};
use std::fmt;
use std::time::{Duration, Instant};
#[cfg(not(feature = "parking_lot"))]
use std::sync::RwLock;
use crate::Labels;
#[cfg(feature = "parking_lot")]
use parking_lot::RwLock;
use std::ops::Deref;
/// The actual distribution (random, fixed-cycle, etc.) depends on the selected sampling method.
#[derive(Debug, Clone, Copy)]
pub enum Sampling {
/// Record every collected value.
/// Effectively disable sampling.
Full,
/// Floating point sampling rate
/// - 1.0+ records everything
/// - 0.5 records one of two values
/// - 0.0 records nothing
Random(f64),
}
impl Default for Sampling {
fn default() -> Sampling {
Sampling::Full
}
}
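`Sampling::Random(rate)` keeps roughly `rate` of the collected values. A std-only sketch of that decision using a tiny deterministic generator (purely illustrative; the library would use a proper RNG):

```rust
// Deterministic pseudo-random sampler (tiny LCG), std-only.
// Illustrates Sampling::Random semantics only.
struct Sampler {
    rate: f64,
    state: u64,
}

impl Sampler {
    fn new(rate: f64) -> Self {
        Sampler { rate, state: 0x2545F4914F6CDD1D }
    }
    // LCG step (Knuth's MMIX constants), top bits mapped to [0, 1).
    fn roll(&mut self) -> f64 {
        self.state = self
            .state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.state >> 11) as f64 / (1u64 << 53) as f64
    }
    // Decide whether to record this value: rate 1.0+ keeps everything,
    // 0.0 keeps nothing, matching the enum docs above.
    fn accept(&mut self) -> bool {
        self.rate >= 1.0 || self.roll() < self.rate
    }
}

fn main() {
    let mut s = Sampler::new(0.5);
    let kept = (0..10_000).filter(|_| s.accept()).count();
    // Roughly half the values pass at rate 0.5.
    assert!(kept > 3_000 && kept < 7_000);
}
```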
/// A metrics buffering strategy.
/// All strategies other than `Unbuffered` are applied on a best-effort basis: the buffer
/// may be flushed at any moment before reaching the limit, for any reason or none.
#[derive(Debug, Clone, Copy, Eq, PartialEq)]
pub enum Buffering {
/// Do not buffer output.
Unbuffered,
/// A buffer of maximum specified size is used.
BufferSize(usize),
/// Buffer as much as possible.
Unlimited,
}
impl Default for Buffering {
fn default() -> Buffering {
Buffering::Unbuffered
}
}
/// A metrics identifier
#[derive(Clone, Debug, Hash, Eq, PartialOrd, PartialEq)]
pub struct MetricId(String);
impl MetricId {
/// Return a MetricId based on output type and metric name
pub fn forge(out_type: &str, name: MetricName) -> Self {
let id: String = name.join("/");
MetricId(format!("{}:{}", out_type, id))
}
}
pub type Shared<T> = Arc<RwLock<T>>;
pub struct Listener {
listener_id: usize,
listener_fn: Arc<dyn Fn(Instant) + Send + Sync + 'static>,
}
/// Attributes common to metric components.
/// Not all attributes are used by all components.
#[derive(Clone, Default)]
pub struct Attributes {
naming: NameParts,
sampling: Sampling,
buffering: Buffering,
flush_listeners: Shared<HashMap<MetricId, Listener>>,
tasks: Shared<Vec<CancelHandle>>,
}
impl fmt::Debug for Attributes {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "naming: {:?}", self.naming)?;
write!(f, "sampling: {:?}", self.sampling)?;
write!(f, "buffering: {:?}", self.buffering)
}
}
/// This trait should not be exposed outside the crate.
pub trait WithAttributes: Clone {
/// Return attributes of component.
fn get_attributes(&self) -> &Attributes;
/// Return attributes of component for mutation.
// TODO replace with fields-in-traits if ever stabilized (https://github.com/nikomatsakis/fields-in-traits-rfc)
fn mut_attributes(&mut self) -> &mut Attributes;
/// Clone the component and mutate its attributes at once.
fn with_attributes<F: Fn(&mut Attributes)>(&self, edit: F) -> Self {
let mut cloned = self.clone();
(edit)(cloned.mut_attributes());
cloned
}
}
/// Register and notify scope-flush listeners
pub trait OnFlush {
/// Notify registered listeners of an impending flush.
fn notify_flush_listeners(&self);
}
impl<T> OnFlush for T
where
T: Flush + WithAttributes,
{
fn notify_flush_listeners(&self) {
let now = Instant::now();
for listener in read_lock!(self.get_attributes().flush_listeners).values() {
(listener.listener_fn)(now)
}
}
}
/// When to observe a recurring task.
pub struct ObserveWhen<'a, T, F> {
target: &'a T,
metric: InputMetric,
operation: Arc<F>,
}
static ID_GENERATOR: AtomicUsize = AtomicUsize::new(0);
/// A handle to cancel a flush observer.
pub struct OnFlushCancel(Arc<dyn Fn() + Send + Sync>);
impl Cancel for OnFlushCancel {
fn cancel(&self) {
(self.0)()
}
}
impl<'a, T, F> ObserveWhen<'a, T, F>
where
F: Fn(Instant) -> MetricValue + Send + Sync + 'static,
T: InputScope + WithAttributes + Send + Sync,
{
/// Observe the metric's value upon flushing the scope.
pub fn on_flush(self) -> OnFlushCancel {
let gauge = self.metric;
let metric_id = gauge.metric_id().clone();
let op = self.operation;
let listener_id = ID_GENERATOR.fetch_add(1, Ordering::Relaxed);
write_lock!(self.target.get_attributes().flush_listeners).insert(
metric_id.clone(),
Listener {
listener_id,
listener_fn: Arc::new(move |now| gauge.write(op(now), Labels::default())),
},
);
let flush_listeners = self.target.get_attributes().flush_listeners.clone();
OnFlushCancel(Arc::new(move || {
let mut listeners = write_lock!(flush_listeners);
let installed_listener_id = listeners.get(&metric_id).map(|v| v.listener_id);
if let Some(id) = installed_listener_id {
if id == listener_id {
listeners.remove(&metric_id);
}
}
}))
}
/// Observe the metric's value periodically.
pub fn every(self, period: Duration) -> CancelHandle {
let gauge = self.metric;
let op = self.operation;
let handle = SCHEDULER.schedule(period, move |now| gauge.write(op(now), Labels::default()));
write_lock!(self.target.get_attributes().tasks).push(handle.clone());
handle
}
}
/// Schedule a recurring task
pub trait Observe {
/// The inner type for the [`ObserveWhen`].
///
/// The observation can be delegated to a different type than `Self`, though the latter is
/// more common.
type Inner;
/// Provide a source for a metric's values.
#[must_use = "must specify when to observe"]
fn observe<F>(
&self,
metric: impl Deref<Target = InputMetric>,
operation: F,
) -> ObserveWhen<Self::Inner, F>
where
F: Fn(Instant) -> MetricValue + Send + Sync + 'static,
Self: Sized;
}
impl<T: InputScope + WithAttributes> Observe for T {
type Inner = Self;
fn observe<F>(
&self,
metric: impl Deref<Target = InputMetric>,
operation: F,
) -> ObserveWhen<Self, F>
where
F: Fn(Instant) -> MetricValue + Send + Sync + 'static,
Self: Sized,
{
ObserveWhen {
target: self,
metric: (*metric).clone(),
operation: Arc::new(operation),
}
}
}
impl Drop for Attributes {
fn drop(&mut self) {
let mut tasks = write_lock!(self.tasks);
for task in tasks.drain(..) {
task.cancel()
}
}
}
/// Name operations support.
pub trait Prefixed {
/// Returns namespace of component.
fn get_prefixes(&self) -> &NameParts;
/// Append a name to the existing names.
/// Return a clone of the component with the updated names.
#[deprecated(since = "0.7.2", note = "Use named() or add_name()")]
fn add_prefix<S: Into<String>>(&self, name: S) -> Self;
/// Append a name to the existing names.
/// Return a clone of the component with the updated names.
fn add_name<S: Into<String>>(&self, name: S) -> Self;
/// Replace any existing names with a single name.
/// Return a clone of the component with the new name.
/// If multiple names are required, `add_name` may also be used.
fn named<S: Into<String>>(&self, name: S) -> Self;
/// Append any name parts to the name's namespace.
fn prefix_append<S: Into<MetricName>>(&self, name: S) -> MetricName {
name.into().append(self.get_prefixes().clone())
}
/// Prepend any name parts to the name's namespace.
fn prefix_prepend<S: Into<MetricName>>(&self, name: S) -> MetricName {
name.into().prepend(self.get_prefixes().clone())
}
}
/// Name operations support.
pub trait Label {
/// Return the namespace of the component.
fn get_label(&self) -> &Arc<HashMap<String, String>>;
/// Join namespace and prepend in newly defined metrics.
fn label(&self, name: &str) -> Self;
}
impl<T: WithAttributes> Prefixed for T {
/// Returns namespace of component.
fn get_prefixes(&self) -> &NameParts {
&self.get_attributes().naming
}
/// Append a name to the existing names.
/// Return a clone of the component with the updated names.
fn add_prefix<S: Into<String>>(&self, name: S) -> Self {
self.add_name(name)
}
/// Append a name to the existing names.
/// Return a clone of the component with the updated names.
fn add_name<S: Into<String>>(&self, name: S) -> Self {
let name = name.into();
self.with_attributes(|new_attr| new_attr.naming.push_back(name.clone()))
}
/// Replace any existing names with a single name.
/// Return a clone of the component with the new name.
/// If multiple names are required, `add_name` may also be used.
fn named<S: Into<String>>(&self, name: S) -> Self {
let parts = NameParts::from(name);
self.with_attributes(|new_attr| new_attr.naming = parts.clone())
}
}
/// Apply statistical sampling to collected metrics data.
pub trait Sampled: WithAttributes {
/// Perform random sampling of values according to the specified rate.
fn sampled(&self, sampling: Sampling) -> Self {
self.with_attributes(|new_attr| new_attr.sampling = sampling)
}
/// Get the sampling strategy for this component, if any.
fn get_sampling(&self) -> Sampling {
self.get_attributes().sampling
}
}
/// Determine scope buffering strategy, if supported by output.
/// Changing this only affects scopes opened afterwards.
/// Buffering is done on best effort, meaning flush will occur if buffer capacity is exceeded.
pub trait Buffered: WithAttributes {
/// Return a clone with the specified buffering set.
fn buffered(&self, buffering: Buffering) -> Self {
self.with_attributes(|new_attr| new_attr.buffering = buffering)
}
/// Return the current buffering strategy.
fn get_buffering(&self) -> Buffering {
self.get_attributes().buffering
}
/// Returns false if the current buffering strategy is `Buffering::Unbuffered`.
/// Returns true otherwise.
fn is_buffered(&self) -> bool {
self.get_attributes().buffering != Buffering::Unbuffered
}
}
#[cfg(test)]
mod test {
use crate::attributes::*;
use crate::input::Input;
use crate::input::*;
use crate::output::map::StatsMap;
use crate::Flush;
use crate::StatsMapScope;
#[test]
fn on_flush() {
let metrics: StatsMapScope = StatsMap::default().metrics();
let gauge = metrics.gauge("my_gauge");
metrics.observe(gauge, |_| 4).on_flush();
metrics.flush().unwrap();
assert_eq!(Some(&4), metrics.into_map().get("my_gauge"))
}
}
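Behind `observe(...).on_flush()` sits a registry of callbacks that `notify_flush_listeners` walks before each flush. A simplified, self-contained sketch of that registry, without the locking macros, listener ids, or cancellation of the real implementation:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

type Listener = Arc<dyn Fn() + Send + Sync>;

#[derive(Default, Clone)]
struct Scope {
    listeners: Arc<Mutex<HashMap<String, Listener>>>,
}

impl Scope {
    // Register a callback to run before every flush, keyed by metric name.
    fn on_flush<F: Fn() + Send + Sync + 'static>(&self, id: &str, f: F) {
        self.listeners.lock().unwrap().insert(id.into(), Arc::new(f));
    }

    // Notify every registered listener; a real scope would then flush its buffer.
    fn flush(&self) {
        for listener in self.listeners.lock().unwrap().values() {
            listener();
        }
    }
}

fn main() {
    let scope = Scope::default();
    let hits = Arc::new(AtomicUsize::new(0));
    let h = hits.clone();
    scope.on_flush("my_gauge", move || {
        h.fetch_add(1, Ordering::Relaxed);
    });
    scope.flush();
    scope.flush();
    // The listener ran once per flush.
    assert_eq!(hits.load(Ordering::Relaxed), 2);
}
```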

src/cache.rs Normal file → Executable file

@ -1,33 +1,108 @@
//! Cache metric definitions.
use core::*;
use std::sync::{Arc, RwLock};
use lru_cache::LRUCache;
/// Cache metrics to prevent them from being re-defined on every use.
/// Use of this should be transparent, this has no effect on the values.
/// Stateful sinks (i.e. Aggregate) may naturally cache their definitions.
pub fn cache<M, IC>(size: usize, chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
chain.mod_metric(|next| {
let cache: RwLock<LRUCache<String, M>> = RwLock::new(LRUCache::with_capacity(size));
Arc::new(move |kind, name, rate| {
let mut cache = cache.write().expect("Lock metric cache");
let name_str = String::from(name);
// FIXME lookup should use straight &str
if let Some(value) = cache.get(&name_str) {
return value.clone()
}
let new_value = (next)(kind, name, rate).clone();
cache.insert(name_str, new_value.clone());
new_value
})
})
}
//! Metric input scope caching.
use crate::attributes::{Attributes, OnFlush, Prefixed, WithAttributes};
use crate::input::{Input, InputDyn, InputKind, InputMetric, InputScope};
use crate::lru_cache as lru;
use crate::name::MetricName;
use crate::Flush;
use std::sync::Arc;
#[cfg(not(feature = "parking_lot"))]
use std::sync::RwLock;
#[cfg(feature = "parking_lot")]
use parking_lot::RwLock;
use std::io;
/// Wrap an input with a metric definition cache.
/// This can provide performance benefits for metrics that are dynamically defined at runtime on each access.
/// Caching is useless if all metrics are statically declared
/// or instantiated programmatically in advance and referenced by a long living variable.
pub trait CachedInput: Input + Send + Sync + 'static + Sized {
/// Wrap an input with a metric definition cache.
/// This can provide performance benefits for metrics that are dynamically defined at runtime on each access.
/// Caching is useless if all metrics are statically declared
/// or instantiated programmatically in advance and referenced by a long living variable.
fn cached(self, max_size: usize) -> InputCache {
InputCache::wrap(self, max_size)
}
}
/// Output wrapper caching frequently defined metrics
#[derive(Clone)]
pub struct InputCache {
attributes: Attributes,
target: Arc<dyn InputDyn + Send + Sync + 'static>,
cache: Arc<RwLock<lru::LRUCache<MetricName, InputMetric>>>,
}
impl InputCache {
/// Wrap scopes with an asynchronous metric write & flush dispatcher.
fn wrap<OUT: Input + Send + Sync + 'static>(target: OUT, max_size: usize) -> InputCache {
InputCache {
attributes: Attributes::default(),
target: Arc::new(target),
cache: Arc::new(RwLock::new(lru::LRUCache::with_capacity(max_size))),
}
}
}
impl WithAttributes for InputCache {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Input for InputCache {
type SCOPE = InputScopeCache;
fn metrics(&self) -> Self::SCOPE {
let target = self.target.input_dyn();
InputScopeCache {
attributes: self.attributes.clone(),
target,
cache: self.cache.clone(),
}
}
}
/// Input wrapper caching frequently defined metrics
#[derive(Clone)]
pub struct InputScopeCache {
attributes: Attributes,
target: Arc<dyn InputScope + Send + Sync + 'static>,
cache: Arc<RwLock<lru::LRUCache<MetricName, InputMetric>>>,
}
impl WithAttributes for InputScopeCache {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl InputScope for InputScopeCache {
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let name = self.prefix_append(name);
let lookup = { write_lock!(self.cache).get(&name).cloned() };
lookup.unwrap_or_else(|| {
let new_metric = self.target.new_metric(name.clone(), kind);
// FIXME (perf) having to take another write lock for a cache miss
write_lock!(self.cache).insert(name, new_metric.clone());
new_metric
})
}
}
impl Flush for InputScopeCache {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
self.target.flush()
}
}
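`InputScopeCache::new_metric` uses a read-then-insert memoization scheme, taking a second lock on a cache miss (the FIXME above). The same scheme sketched with a plain `HashMap` standing in for the crate's LRU (illustrative; a real LRU would also evict old entries):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

struct CachedFactory {
    cache: Arc<RwLock<HashMap<String, Arc<String>>>>,
}

impl CachedFactory {
    fn get_or_create(&self, name: &str) -> Arc<String> {
        // Fast path: check the cache under a read lock.
        if let Some(hit) = self.cache.read().unwrap().get(name).cloned() {
            return hit;
        }
        // Miss: build the value, then take a write lock to insert it.
        let created = Arc::new(format!("metric:{}", name));
        self.cache
            .write()
            .unwrap()
            .insert(name.to_string(), created.clone());
        created
    }
}

fn main() {
    let f = CachedFactory {
        cache: Arc::new(RwLock::new(HashMap::new())),
    };
    let a = f.get_or_create("requests");
    let b = f.get_or_create("requests");
    // The second lookup returns the cached Arc, not a new allocation.
    assert!(Arc::ptr_eq(&a, &b));
}
```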

src/clock.rs Executable file

@ -0,0 +1,84 @@
#[cfg(test)]
use std::ops::Add;
#[cfg(test)]
use std::time::Duration;
use std::time::Instant;
use crate::MetricValue;
#[derive(Debug, Copy, Clone)]
/// A handle to the start time of a counter.
/// Wrapped so it may be changed safely later.
pub struct TimeHandle(Instant);
impl TimeHandle {
/// Get a handle on current time.
/// Used by the TimerMetric start_time() method.
pub fn now() -> TimeHandle {
TimeHandle(now())
}
/// Get the elapsed time in microseconds since TimeHandle was obtained.
pub fn elapsed_us(self) -> u64 {
let duration = now() - self.0;
(duration.as_secs() * 1_000_000) + u64::from(duration.subsec_micros())
}
/// Get the elapsed time in milliseconds since TimeHandle was obtained.
pub fn elapsed_ms(self) -> MetricValue {
(self.elapsed_us() / 1000) as isize
}
}
impl Default for TimeHandle {
fn default() -> Self {
TimeHandle::now()
}
}
/// The mock clock is thread local so that tests can run in parallel without affecting each other.
use std::cell::RefCell;
thread_local! {
static MOCK_CLOCK: RefCell<Instant> = RefCell::new(Instant::now());
}
/// Set the mock clock to the current time.
/// Enables writing reproducible metrics tests in combination with #mock_clock_advance()
/// Should be called at beginning of test, before the metric scope is created.
/// Not feature-gated so it stays visible to outside crates but may not be used outside of tests.
#[cfg(test)]
pub fn mock_clock_reset() {
if cfg!(not(test)) {
warn!("Mock clock used outside of cfg(test) has no effect")
}
MOCK_CLOCK.with(|now| {
*now.borrow_mut() = Instant::now();
})
}
/// Advance the mock clock by a certain amount of time.
/// Enables writing reproducible metrics tests in combination with #mock_clock_reset()
/// Should be after metrics have been produced but before they are published.
/// Not feature-gated so it stays visible to outside crates but may not be used outside of tests.
#[cfg(test)]
pub fn mock_clock_advance(period: Duration) {
MOCK_CLOCK.with(|now| {
let mut now = now.borrow_mut();
*now = now.add(period);
})
}
#[cfg(not(test))]
fn now() -> Instant {
Instant::now()
}
#[cfg(test)]
/// Metrics mock_clock enabled!
/// thread::sleep will have no effect on metrics.
/// Use advance_time() to simulate passing time.
fn now() -> Instant {
MOCK_CLOCK.with(|now| *now.borrow())
}
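The thread-local mock clock lets tests advance time explicitly instead of sleeping, so timing assertions become deterministic. The technique in isolation:

```rust
use std::cell::RefCell;
use std::time::{Duration, Instant};

thread_local! {
    // Each test thread gets its own frozen clock.
    static MOCK_NOW: RefCell<Instant> = RefCell::new(Instant::now());
}

// Read the mock clock instead of the real one.
fn now() -> Instant {
    MOCK_NOW.with(|t| *t.borrow())
}

// Move the mock clock forward; nothing actually waits.
fn advance(by: Duration) {
    MOCK_NOW.with(|t| *t.borrow_mut() += by);
}

fn main() {
    let start = now();
    advance(Duration::from_millis(250));
    // Time moved only because we advanced it; no sleeping involved.
    assert_eq!(now() - start, Duration::from_millis(250));
}
```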


@ -1,159 +0,0 @@
//! Dipstick metrics core types and traits.
//! This is mostly centered around the backend.
//! Application-facing types are in the `app` module.
use time;
use std::sync::Arc;
/// Base type for recorded metric values.
// TODO should this be f64? f32?
pub type Value = u64;
#[derive(Debug, Copy, Clone)]
/// A handle to the start time of a counter.
/// Wrapped so it may be changed safely later.
pub struct TimeHandle(u64);
impl TimeHandle {
/// Get a handle on current time.
/// Used by the TimerMetric start_time() method.
pub fn now() -> TimeHandle {
TimeHandle(time::precise_time_ns())
}
/// Get the elapsed time in microseconds since TimeHandle was obtained.
pub fn elapsed_us(self) -> Value {
(TimeHandle::now().0 - self.0) / 1_000
}
}
/// Base type for sampling rate.
/// - 1.0 records everything
/// - 0.5 records one of two values
/// - 0.0 records nothing
/// The actual distribution (random, fixed-cycled, etc) depends on selected sampling method.
pub type Rate = f64;
/// Do not sample, use all data.
pub const FULL_SAMPLING_RATE: Rate = 1.0;
/// Used to differentiate between metric kinds in the backend.
#[derive(Debug, Copy, Clone)]
pub enum Kind {
/// Handling one item at a time.
Marker,
/// Handling quantities or multiples.
Counter,
/// Reporting instant measurement of a resource at a point in time.
Gauge,
/// Measuring a time interval, internal to the app or provided by an external source.
Timer,
}
/// Dynamic metric definition function.
/// Metrics can be defined from any thread, concurrently (Fn is Sync).
/// The resulting metrics themselves can be also be safely shared across threads (<M> is Send + Sync).
/// Concurrent usage of a metric is done using threaded scopes.
/// Shared concurrent scopes may be provided by some backends (aggregate).
pub type DefineMetricFn<M> = Arc<Fn(Kind, &str, Rate) -> M + Send + Sync>;
/// A function trait that opens a new metric capture scope.
pub type OpenScopeFn<M> = Arc<Fn(bool) -> ControlScopeFn<M> + Send + Sync>;
/// Returns a callback function to send commands to the metric scope.
/// Writes can be performed by passing Some((&Metric, Value))
/// Flushes can be performed by passing None
/// Used to write values to the scope or flush the scope buffer (if applicable).
/// Simple applications may use only one scope.
/// Complex applications may define a new scope for each operation or request.
/// Scopes can be moved across threads (Send) but are not required to be thread-safe (Sync).
/// Some implementations _may_ be 'Sync', otherwise queue()ing or threadlocal() can be used.
pub type ControlScopeFn<M> = Arc<Fn(ScopeCmd<M>) + Send + Sync>;
/// A command enum dispatched to manipulate metric scopes.
/// Replaces a potential `Writer` trait that would have methods `write` and `flush`.
/// Using a command pattern allows buffering, async queuing and inline definition of writers.
pub enum ScopeCmd<'a, M: 'a> {
/// Write the value for the metric.
/// Takes a reference to minimize overhead in single-threaded scenarios.
Write(&'a M, Value),
/// Flush the scope buffer, if applicable.
Flush,
}
/// A pair of functions composing a twin "chain of command".
/// This is the building block for the metrics backend.
#[derive(Derivative, Clone)]
#[derivative(Debug)]
pub struct Chain<M> {
#[derivative(Debug = "ignore")]
define_metric_fn: DefineMetricFn<M>,
#[derivative(Debug = "ignore")]
scope_metric_fn: OpenScopeFn<M>,
}
impl<M: Send + Sync> Chain<M> {
/// Define a new metric.
#[allow(unused_variables)]
pub fn define_metric(&self, kind: Kind, name: &str, sampling: Rate) -> M {
(self.define_metric_fn)(kind, name, sampling)
}
/// Open a new metric scope.
#[allow(unused_variables)]
pub fn open_scope(&self, auto_flush: bool) -> ControlScopeFn<M> {
(self.scope_metric_fn)(auto_flush)
}
/// Create a new metric chain with the provided metric definition and scope creation functions.
pub fn new<MF, WF>(make_metric: MF, make_scope: WF) -> Self
where
MF: Fn(Kind, &str, Rate) -> M + Send + Sync + 'static,
WF: Fn(bool) -> ControlScopeFn<M> + Send + Sync + 'static,
{
Chain {
// capture the provided closures in Arc to provide cheap clones
define_metric_fn: Arc::new(make_metric),
scope_metric_fn: Arc::new(make_scope),
}
}
/// Intercept metric definition without changing the metric type.
pub fn mod_metric<MF>(&self, mod_fn: MF) -> Chain<M>
where
MF: Fn(DefineMetricFn<M>) -> DefineMetricFn<M>,
{
Chain {
define_metric_fn: mod_fn(self.define_metric_fn.clone()),
scope_metric_fn: self.scope_metric_fn.clone()
}
}
/// Intercept both metric definition and scope creation, possibly changing the metric type.
pub fn mod_both<MF, N>(&self, mod_fn: MF) -> Chain<N>
where
MF: Fn(DefineMetricFn<M>, OpenScopeFn<M>) -> (DefineMetricFn<N>, OpenScopeFn<N>),
N: Clone + Send + Sync,
{
let (metric_fn, scope_fn) = mod_fn(self.define_metric_fn.clone(), self.scope_metric_fn.clone());
Chain {
define_metric_fn: metric_fn,
scope_metric_fn: scope_fn
}
}
/// Intercept scope creation.
pub fn mod_scope<MF>(&self, mod_fn: MF) -> Self
where
MF: Fn(OpenScopeFn<M>) -> OpenScopeFn<M>,
{
Chain {
define_metric_fn: self.define_metric_fn.clone(),
scope_metric_fn: mod_fn(self.scope_metric_fn.clone())
}
}
}


@ -1,46 +0,0 @@
//! Error-chain like mechanism, without the error-chain dependency.
use std::io;
use std::error;
use std::fmt::{self, Display, Formatter};
use std::result;
use self::Error::*;
/// Any error that may result from dipstick usage.
#[derive(Debug)]
pub enum Error {
/// A generic I/O error.
IO(io::Error),
}
impl Display for Error {
fn fmt(&self, formatter: &mut Formatter) -> result::Result<(), fmt::Error> {
match *self {
IO(ref err) => err.fmt(formatter),
}
}
}
impl error::Error for Error {
fn description(&self) -> &str {
match *self {
IO(ref err) => err.description(),
}
}
fn cause(&self) -> Option<&error::Error> {
match *self {
IO(ref err) => Some(err),
}
}
}
/// The result type for dipstick operations that may fail.
pub type Result<T> = result::Result<T, Error>;
impl From<io::Error> for Error {
fn from(err: io::Error) -> Self {
IO(err)
}
}
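The removed error module relied on `From<io::Error>` so that the `?` operator could convert I/O failures into the crate's error type automatically. That conversion pattern, sketched standalone (the `read_config` function and its path are illustrative):

```rust
use std::fmt;
use std::io;

#[derive(Debug)]
enum Error {
    /// A generic I/O error.
    IO(io::Error),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Error::IO(e) => write!(f, "I/O error: {}", e),
        }
    }
}

// `From` is what lets `?` convert an io::Error into our Error automatically.
impl From<io::Error> for Error {
    fn from(err: io::Error) -> Self {
        Error::IO(err)
    }
}

fn read_config() -> Result<String, Error> {
    // The `?` operator applies `From<io::Error>` on failure.
    let data = std::fs::read_to_string("/definitely/not/a/real/path")?;
    Ok(data)
}

fn main() {
    match read_config() {
        Err(Error::IO(e)) => assert_eq!(e.kind(), io::ErrorKind::NotFound),
        Ok(_) => panic!("expected a missing-file error"),
    }
}
```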


@ -1,280 +0,0 @@
//! Static metrics are used to define metrics that share a single persistent metrics scope.
//! Because the scope never changes (it is "global"), all that needs to be provided by the
//! application is the metrics values.
//!
//! Compared to [ScopeMetrics], static metrics are easier to use and provide satisfactory metrics
//! in many applications.
//!
//! If multiple [GlobalMetrics] are defined, they'll each have their own scope.
//!
use core::*;
use core::ScopeCmd::*;
use std::sync::Arc;
use std::time::Duration;
use schedule::*;
// TODO define an 'AsValue' trait + impl for supported number types, then drop 'num' crate
pub use num::ToPrimitive;
/// Wrap the metrics backend to provide an application-friendly interface.
pub fn global_metrics<M, IC>(chain: IC) -> GlobalMetrics<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
let static_scope = chain.open_scope(true);
GlobalMetrics {
prefix: "".to_string(),
scope: static_scope,
chain: Arc::new(chain),
}
}
/// A monotonic counter metric.
/// Since value is only ever increased by one, no value parameter is provided,
/// preventing programming errors.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct Marker<M> {
metric: M,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
}
impl<M> Marker<M> {
/// Record a single event occurrence.
pub fn mark(&self) {
self.scope.as_ref()(Write(&self.metric, 1));
}
}
/// A counter that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct Counter<M> {
metric: M,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
}
impl<M> Counter<M> {
/// Record a value count.
pub fn count<V>(&self, count: V)
where
V: ToPrimitive,
{
self.scope.as_ref()(Write(&self.metric, count.to_u64().unwrap()));
}
}
/// A gauge that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct Gauge<M> {
metric: M,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
}
impl<M> Gauge<M> {
/// Record a value point for this gauge.
pub fn value<V>(&self, value: V)
where
V: ToPrimitive,
{
self.scope.as_ref()(Write(&self.metric, value.to_u64().unwrap()));
}
}
/// A timer that sends values to the metrics backend
/// Timers can record time intervals in multiple ways :
/// - with the time! macro, which wraps an expression or block with start() and stop() calls.
/// - with the time(Fn) method, which wraps a closure with start() and stop() calls.
/// - with start() and stop() methods, wrapping around the operation to time
/// - with the interval_us() method, providing an externally determined microsecond interval
#[derive(Derivative)]
#[derivative(Debug)]
pub struct Timer<M> {
metric: M,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
}
impl<M> Timer<M> {
/// Record a microsecond interval for this timer
/// Can be used in place of start()/stop() if an external time interval source is used
pub fn interval_us<V>(&self, interval_us: V) -> V
where
V: ToPrimitive,
{
self.scope.as_ref()(Write(&self.metric, interval_us.to_u64().unwrap()));
interval_us
}
/// Obtain an opaque handle to the current time.
/// The handle is passed back to the stop() method to record a time interval.
/// This is actually a convenience method for TimeHandle::now()
/// Beware, handles obtained here are not bound to this specific timer instance
/// _for now_ but might be in the future for safety.
/// If you require safe multi-timer handles, get them through TimeHandle::now()
pub fn start(&self) -> TimeHandle {
TimeHandle::now()
}
/// Record the time elapsed since the start_time handle was obtained.
/// This call can be performed multiple times using the same handle,
/// reporting distinct time intervals each time.
/// Returns the microsecond interval value that was recorded.
pub fn stop(&self, start_time: TimeHandle) -> u64 {
let elapsed_us = start_time.elapsed_us();
self.interval_us(elapsed_us)
}
/// Record the time taken to execute the provided closure
pub fn time<F, R>(&self, operations: F) -> R
where
F: FnOnce() -> R,
{
let start_time = self.start();
let value: R = operations();
self.stop(start_time);
value
}
}
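`Timer::time` brackets a closure between `start()` and `stop()`. Stripped of the metrics plumbing, the pattern is just:

```rust
use std::time::{Duration, Instant};

// Run `op`, returning its result together with the elapsed wall time.
fn time<F, R>(op: F) -> (R, Duration)
where
    F: FnOnce() -> R,
{
    let start = Instant::now();
    let result = op();
    (result, start.elapsed())
}

fn main() {
    let (sum, elapsed) = time(|| (1u64..=1000).sum::<u64>());
    assert_eq!(sum, 500_500);
    // The interval is measured even though the closure's result is returned unchanged.
    println!("summed in {:?}", elapsed);
}
```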
/// Variations of this should also provide control of the metric recording scope.
#[derive(Derivative, Clone)]
#[derivative(Debug)]
pub struct GlobalMetrics<M> {
prefix: String,
chain: Arc<Chain<M>>,
#[derivative(Debug = "ignore")]
scope: ControlScopeFn<M>,
}
impl<M> GlobalMetrics<M>
where
M: Clone + Send + Sync + 'static,
{
fn qualified_name<AS>(&self, name: AS) -> String
where
AS: Into<String> + AsRef<str>,
{
// FIXME is there a way to return <S> in both cases?
if self.prefix.is_empty() {
return name.into();
}
let mut buf: String = self.prefix.clone();
buf.push_str(name.as_ref());
buf.to_string()
}
/// Get an event counter of the provided name.
pub fn marker<AS>(&self, name: AS) -> Marker<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Marker,
&self.qualified_name(name),
1.0,
);
Marker {
metric,
scope: self.scope.clone(),
}
}
/// Get a counter of the provided name.
pub fn counter<AS>(&self, name: AS) -> Counter<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Counter,
&self.qualified_name(name),
1.0,
);
Counter {
metric,
scope: self.scope.clone(),
}
}
/// Get a timer of the provided name.
pub fn timer<AS>(&self, name: AS) -> Timer<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Timer,
&self.qualified_name(name),
1.0,
);
Timer {
metric,
scope: self.scope.clone(),
}
}
/// Get a gauge of the provided name.
pub fn gauge<AS>(&self, name: AS) -> Gauge<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Gauge,
&self.qualified_name(name),
1.0,
);
Gauge {
metric,
scope: self.scope.clone(),
}
}
/// Prepend the metrics name with a prefix.
/// Does not affect metrics that were already obtained.
pub fn with_prefix<IS>(&self, prefix: IS) -> Self
where
IS: Into<String>,
{
GlobalMetrics {
prefix: prefix.into(),
chain: self.chain.clone(),
scope: self.scope.clone(),
}
}
/// Forcefully flush the backing metrics scope.
/// This is usually not required since static metrics use auto flushing scopes.
/// The effect, if any, of this method depends on the selected metrics backend.
pub fn flush(&self) {
self.scope.as_ref()(Flush);
}
/// Schedule for the metrics aggregated of buffered by downstream metrics sinks to be
/// sent out at regular intervals.
pub fn flush_every(&self, period: Duration) -> CancelHandle {
let scope = self.scope.clone();
schedule(period, move || (scope)(Flush))
}
}
#[cfg(feature = "bench")]
mod bench {
use ::*;
use test;
#[bench]
fn time_bench_direct_dispatch_event(b: &mut test::Bencher) {
let sink = aggregate(5, summary, to_void());
let metrics = global_metrics(sink);
let marker = metrics.marker("aaa");
b.iter(|| test::black_box(marker.mark()));
}
}


@ -1,170 +0,0 @@
//! Send metrics to a graphite server.
use core::*;
use error;
use self_metrics::*;
use std::net::ToSocketAddrs;
use std::sync::{Arc, RwLock};
use std::time::{SystemTime, UNIX_EPOCH};
use std::io::Write;
use std::fmt::Debug;
use socket::RetrySocket;
/// Send metrics to a graphite server at the address and port provided.
pub fn to_graphite<ADDR>(address: ADDR) -> error::Result<Chain<Graphite>>
where
ADDR: ToSocketAddrs + Debug + Clone,
{
debug!("Connecting to graphite {:?}", address);
let socket = Arc::new(RwLock::new(RetrySocket::new(address.clone())?));
Ok(Chain::new(
move |kind, name, rate| {
let mut prefix = String::with_capacity(32);
prefix.push_str(name.as_ref());
prefix.push(' ');
let mut scale = match kind {
// timers are in µs, lets give graphite milliseconds for consistency with statsd
Kind::Timer => 1000,
_ => 1,
};
if rate < FULL_SAMPLING_RATE {
// graphite does not do sampling, so we'll upsample before sending
let upsample = (1.0 / rate).round() as u64;
warn!("Metric {:?} '{}' being sampled at rate {} will be upsampled \
by a factor of {} when sent to graphite.", kind, name, rate, upsample);
scale = scale * upsample;
}
Graphite {
prefix,
scale,
}
},
move |auto_flush| {
let buf = ScopeBuffer {
buffer: Arc::new(RwLock::new(String::new())),
socket: socket.clone(),
auto_flush,
};
Arc::new(move |cmd| {
match cmd {
ScopeCmd::Write(metric, value) => buf.write(metric, value),
ScopeCmd::Flush => buf.flush(),
}
})
},
))
}
/// It's hard to see how a single scope could get more metrics than this.
// TODO make configurable?
const BUFFER_FLUSH_THRESHOLD: usize = 65536;
lazy_static! {
static ref GRAPHITE_METRICS: GlobalMetrics<Aggregate> = SELF_METRICS.with_prefix("graphite.");
static ref SEND_ERR: Marker<Aggregate> = GRAPHITE_METRICS.marker("send_failed");
static ref SENT_BYTES: Counter<Aggregate> = GRAPHITE_METRICS.counter("sent_bytes");
static ref TRESHOLD_EXCEEDED: Marker<Aggregate> = GRAPHITE_METRICS.marker("bufsize_exceeded");
}
/// Key of a graphite metric.
#[derive(Debug, Clone)]
pub struct Graphite {
prefix: String,
scale: u64,
}
/// Wrap string buffer & socket as one.
#[derive(Debug)]
struct ScopeBuffer {
buffer: Arc<RwLock<String>>,
socket: Arc<RwLock<RetrySocket>>,
auto_flush: bool,
}
/// Any remaining buffered data is flushed on Drop.
impl Drop for ScopeBuffer {
fn drop(&mut self) {
self.flush()
}
}
impl ScopeBuffer {
fn write (&self, metric: &Graphite, value: Value) {
let scaled_value = value / metric.scale;
let value_str = scaled_value.to_string();
let start = SystemTime::now();
match start.duration_since(UNIX_EPOCH) {
Ok(timestamp) => {
let mut buf = self.buffer.write().expect("Lock graphite buffer.");
buf.push_str(&metric.prefix);
buf.push_str(&value_str);
buf.push(' ');
buf.push_str(timestamp.as_secs().to_string().as_ref());
buf.push('\n');
if buf.len() > BUFFER_FLUSH_THRESHOLD {
TRESHOLD_EXCEEDED.mark();
warn!("Flushing metrics scope buffer to graphite because its size exceeds \
the threshold of {} bytes. ", BUFFER_FLUSH_THRESHOLD);
self.flush_inner(&mut buf);
} else if self.auto_flush {
self.flush_inner(&mut buf);
}
},
Err(e) => {
warn!("Could not compute epoch timestamp. {}", e);
},
};
}
fn flush_inner(&self, buf: &mut String) {
if !buf.is_empty() {
let mut sock = self.socket.write().expect("Lock graphite socket.");
match sock.write(buf.as_bytes()) {
Ok(size) => {
buf.clear();
SENT_BYTES.count(size);
trace!("Sent {} bytes to graphite", size);
}
Err(e) => {
SEND_ERR.mark();
// still just a best effort, do not warn! for every failure
debug!("Failed to send buffer to graphite: {}", e);
}
};
buf.clear();
}
}
fn flush(&self) {
let mut buf = self.buffer.write().expect("Lock graphite buffer.");
self.flush_inner(&mut buf);
}
}
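`ScopeBuffer` accumulates formatted lines and force-flushes once `BUFFER_FLUSH_THRESHOLD` is crossed. The buffering policy in isolation, writing to any `io::Write` sink (`LineBuffer` and its tiny threshold are illustrative):

```rust
use std::io::{self, Write};

struct LineBuffer<W: Write> {
    buf: String,
    sink: W,
    threshold: usize,
}

impl<W: Write> LineBuffer<W> {
    fn push_line(&mut self, line: &str) -> io::Result<()> {
        self.buf.push_str(line);
        self.buf.push('\n');
        // Best-effort buffering: spill as soon as the threshold is crossed.
        if self.buf.len() > self.threshold {
            self.flush()?;
        }
        Ok(())
    }

    fn flush(&mut self) -> io::Result<()> {
        if !self.buf.is_empty() {
            self.sink.write_all(self.buf.as_bytes())?;
            self.buf.clear();
        }
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut b = LineBuffer {
        buf: String::new(),
        sink: Vec::new(),
        threshold: 8,
    };
    b.push_line("a 1")?; // 4 bytes buffered, under the threshold
    assert!(b.sink.is_empty());
    b.push_line("b 22")?; // 9 bytes total: threshold crossed, buffer spilled
    assert_eq!(b.sink, b"a 1\nb 22\n".to_vec());
    Ok(())
}
```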
#[cfg(feature = "bench")]
mod bench {
use super::*;
use test;
#[bench]
pub fn timer_graphite(b: &mut test::Bencher) {
let sd = to_graphite("localhost:8125").unwrap();
let timer = sd.define_metric(Kind::Timer, "timer", 1000000.0);
let scope = sd.open_scope(false);
b.iter(|| test::black_box(scope.as_ref()(ScopeCmd::Write(&timer, 2000))));
}
}

src/input.rs Executable file

@ -0,0 +1,324 @@
use crate::attributes::MetricId;
use crate::clock::TimeHandle;
use crate::label::Labels;
use crate::name::MetricName;
use crate::{Flush, MetricValue};
use std::fmt;
use std::sync::Arc;
// TODO maybe define an 'AsValue' trait + impl for supported number types, then drop 'num' crate
pub use num::integer;
pub use num::ToPrimitive;
use std::ops::Deref;
/// A function trait that opens a new metric capture scope.
pub trait Input: Send + Sync + 'static + InputDyn {
/// The type of Scope returned by this input.
type SCOPE: InputScope + Send + Sync + 'static;
/// Open a new scope from this input.
fn metrics(&self) -> Self::SCOPE;
/// Open a new scope from this input.
#[deprecated(since = "0.7.2", note = "Use metrics()")]
fn input(&self) -> Self::SCOPE {
self.metrics()
}
/// Open a new scope from this input.
#[deprecated(since = "0.8.0", note = "Use metrics()")]
fn new_scope(&self) -> Self::SCOPE {
self.metrics()
}
}
/// A function trait that opens a new metric capture scope.
pub trait InputDyn: Send + Sync + 'static {
/// Open a new scope from this output.
fn input_dyn(&self) -> Arc<dyn InputScope + Send + Sync + 'static>;
}
/// Blanket impl of dyn input trait
impl<T: Input + Send + Sync + 'static> InputDyn for T {
fn input_dyn(&self) -> Arc<dyn InputScope + Send + Sync + 'static> {
Arc::new(self.metrics())
}
}
/// InputScope
/// Define metrics, write values and flush them.
pub trait InputScope: Flush {
/// Define a generic metric of the specified type.
/// It is preferable to use counter() / marker() / timer() / gauge() methods.
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric;
/// Define a Counter.
fn counter(&self, name: &str) -> Counter {
self.new_metric(name.into(), InputKind::Counter).into()
}
/// Define a Marker.
fn marker(&self, name: &str) -> Marker {
self.new_metric(name.into(), InputKind::Marker).into()
}
/// Define a Timer.
fn timer(&self, name: &str) -> Timer {
self.new_metric(name.into(), InputKind::Timer).into()
}
/// Define a Gauge.
fn gauge(&self, name: &str) -> Gauge {
self.new_metric(name.into(), InputKind::Gauge).into()
}
/// Define a Level.
fn level(&self, name: &str) -> Level {
self.new_metric(name.into(), InputKind::Level).into()
}
}
/// A metric is actually a function that knows to write a metric value to a metric output.
#[derive(Clone)]
pub struct InputMetric {
identifier: MetricId,
inner: Arc<dyn Fn(MetricValue, Labels) + Send + Sync>,
}
impl fmt::Debug for InputMetric {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "InputMetric")
}
}
impl InputMetric {
/// Utility constructor
pub fn new<F: Fn(MetricValue, Labels) + Send + Sync + 'static>(
identifier: MetricId,
metric: F,
) -> InputMetric {
InputMetric {
identifier,
inner: Arc::new(metric),
}
}
/// Collect a new value for this metric.
#[inline]
pub fn write(&self, value: MetricValue, labels: Labels) {
(self.inner)(value, labels)
}
/// Returns the unique identifier of this metric.
pub fn metric_id(&self) -> &MetricId {
&self.identifier
}
}
/// Used to differentiate between metric kinds in the backend.
#[derive(Debug, Copy, Clone, Eq, PartialEq, Hash)]
pub enum InputKind {
/// Monotonic counter
Marker,
/// Strictly positive quantity accumulation
Counter,
/// Cumulative quantity fluctuation
Level,
/// Instant measurement of a resource at a point in time (non-cumulative)
Gauge,
/// Time interval, internal to the app or provided by an external source
Timer,
}
/// Used by the metrics! macro to obtain the InputKind from the stringified type.
impl<'a> From<&'a str> for InputKind {
fn from(s: &str) -> InputKind {
match s {
"Marker" => InputKind::Marker,
"Counter" => InputKind::Counter,
"Gauge" => InputKind::Gauge,
"Timer" => InputKind::Timer,
"Level" => InputKind::Level,
_ => panic!("No InputKind '{}' defined", s),
}
}
}
/// A monotonic counter metric.
/// Since the value is only ever increased by one, no value parameter is provided,
/// preventing programming errors.
#[derive(Debug, Clone)]
pub struct Marker {
inner: InputMetric,
}
impl Marker {
/// Record a single event occurrence.
pub fn mark(&self) {
self.inner.write(1, labels![])
}
}
/// A counter of absolute observed values (non-negative amounts).
/// Used to count things that cannot be undone:
/// - Bytes sent
/// - Records written
/// - Apples eaten
/// For relative (possibly negative) values, the `Level` counter type can be used.
/// If aggregated, minimum and maximum scores will track the collected values, not their sum.
#[derive(Debug, Clone)]
pub struct Counter {
inner: InputMetric,
}
impl Counter {
/// Record a value count.
pub fn count(&self, count: usize) {
self.inner.write(count as isize, labels![])
}
}
/// A counter of fluctuating resources accepting positive and negative values.
/// Can be used as a stateful `Gauge` or as a `Counter` of possibly decreasing amounts.
/// - Size of messages in a queue
/// - Strawberries on a conveyor belt
/// If aggregated, minimum and maximum scores will track the sum of values, not the collected values themselves.
#[derive(Debug, Clone)]
pub struct Level {
inner: InputMetric,
}
impl Level {
/// Record a positive or negative value count
pub fn adjust<V: ToPrimitive>(&self, count: V) {
self.inner.write(count.to_isize().unwrap(), labels![])
}
}
/// A gauge that sends values to the metrics backend
#[derive(Debug, Clone)]
pub struct Gauge {
inner: InputMetric,
}
impl Gauge {
/// Record a value point for this gauge.
pub fn value<V: ToPrimitive>(&self, value: V) {
self.inner.write(value.to_isize().unwrap(), labels![])
}
}
/// A timer that sends values to the metrics backend.
/// Timers can record time intervals in multiple ways:
/// - with the time! macro, which wraps an expression or block with start() and stop() calls.
/// - with the time(Fn) method, which wraps a closure with start() and stop() calls.
/// - with the start() and stop() methods, wrapping around the operation to time.
/// - with the interval_us() method, providing an externally determined microsecond interval.
#[derive(Debug, Clone)]
pub struct Timer {
inner: InputMetric,
}
impl Timer {
/// Record a microsecond interval for this timer
/// Can be used in place of start()/stop() if an external time interval source is used
pub fn interval_us(&self, interval_us: u64) -> u64 {
self.inner.write(interval_us as isize, labels![]);
interval_us
}
/// Obtain an opaque handle to the current time.
/// The handle is passed back to the stop() method to record a time interval.
/// Caveat: Handles obtained are not bound to this specific timer instance (but should be)
pub fn start(&self) -> TimeHandle {
TimeHandle::now()
}
/// Record the time elapsed since the start_time handle was obtained.
/// This call can be performed multiple times using the same handle,
/// reporting distinct time intervals each time.
pub fn stop(&self, start_time: TimeHandle) {
let elapsed_us = start_time.elapsed_us();
self.interval_us(elapsed_us);
}
/// Record the time taken to execute the provided closure
pub fn time<F: FnOnce() -> R, R>(&self, operations: F) -> R {
let start_time = self.start();
let value: R = operations();
self.stop(start_time);
value
}
}
impl From<InputMetric> for Gauge {
fn from(metric: InputMetric) -> Gauge {
Gauge { inner: metric }
}
}
impl From<InputMetric> for Timer {
fn from(metric: InputMetric) -> Timer {
Timer { inner: metric }
}
}
impl From<InputMetric> for Counter {
fn from(metric: InputMetric) -> Counter {
Counter { inner: metric }
}
}
impl From<InputMetric> for Marker {
fn from(metric: InputMetric) -> Marker {
Marker { inner: metric }
}
}
impl From<InputMetric> for Level {
fn from(metric: InputMetric) -> Level {
Level { inner: metric }
}
}
impl Deref for Counter {
type Target = InputMetric;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl Deref for Marker {
type Target = InputMetric;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl Deref for Timer {
type Target = InputMetric;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl Deref for Gauge {
type Target = InputMetric;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl Deref for Level {
type Target = InputMetric;
fn deref(&self) -> &Self::Target {
&self.inner
}
}

src/label.rs (new file)
@@ -0,0 +1,320 @@
use std::cell::RefCell;
use std::collections::HashMap;
use std::sync::Arc;
#[cfg(not(feature = "parking_lot"))]
use std::sync::RwLock;
#[cfg(feature = "parking_lot")]
use parking_lot::RwLock;
/// Label values are immutable but can move around a lot.
type LabelValue = Arc<String>;
/// A reference table of key / value string pairs that may be used on output for additional metric context.
///
/// For concurrency reasons, labels are immutable.
/// All write operations return a mutated clone of the original.
#[derive(Debug, Clone, Default)]
struct LabelScope {
pairs: Option<Arc<HashMap<String, LabelValue>>>,
}
impl LabelScope {
/// Sets the value on a new copy of the map, then returns that copy.
fn set(&self, key: String, value: LabelValue) -> Self {
let mut new_pairs = match self.pairs {
None => HashMap::new(),
Some(ref old_pairs) => old_pairs.as_ref().clone(),
};
new_pairs.insert(key, value);
LabelScope {
pairs: Some(Arc::new(new_pairs)),
}
}
fn unset(&self, key: &str) -> Self {
match self.pairs {
None => self.clone(),
Some(ref old_pairs) => {
let mut new_pairs = old_pairs.as_ref().clone();
if new_pairs.remove(key).is_some() {
if new_pairs.is_empty() {
LabelScope { pairs: None }
} else {
LabelScope {
pairs: Some(Arc::new(new_pairs)),
}
}
} else {
// key wasn't set, labels unchanged
self.clone()
}
}
}
}
fn get(&self, key: &str) -> Option<LabelValue> {
self.pairs.as_ref().and_then(|pairs| pairs.get(key).cloned())
}
fn collect(&self, map: &mut HashMap<String, LabelValue>) {
if let Some(pairs) = &self.pairs {
map.extend(pairs.as_ref().clone().into_iter())
}
}
}
lazy_static! {
static ref APP_LABELS: RwLock<LabelScope> = RwLock::new(LabelScope::default());
}
thread_local! {
static THREAD_LABELS: RefCell<LabelScope> = RefCell::new(LabelScope::default());
}
/// Handle metric labels for the current thread.
/// App scope labels have the lowest lookup priority and serve as a fallback to other scopes.
pub struct ThreadLabel;
impl ThreadLabel {
/// Retrieve a value from the thread scope.
pub fn get(key: &str) -> Option<Arc<String>> {
THREAD_LABELS.with(|map| map.borrow().get(key))
}
/// Set a new value for the thread scope.
/// Replaces any previous value for the key.
pub fn set<S: Into<String>>(key: S, value: S) {
THREAD_LABELS.with(|map| {
let new = { map.borrow().set(key.into(), Arc::new(value.into())) };
*map.borrow_mut() = new;
});
}
/// Unset a value for the thread scope.
/// Has no effect if key was not set.
pub fn unset(key: &str) {
THREAD_LABELS.with(|map| {
let new = { map.borrow().unset(key) };
*map.borrow_mut() = new;
});
}
fn collect(map: &mut HashMap<String, LabelValue>) {
THREAD_LABELS.with(|tls| tls.borrow().collect(map));
}
}
/// Handle metric labels for the whole application (globals).
/// App scope labels have the lowest lookup priority and serve as a fallback to other scopes.
pub struct AppLabel;
impl AppLabel {
/// Retrieve a value from the app scope.
pub fn get(key: &str) -> Option<Arc<String>> {
read_lock!(APP_LABELS).get(key)
}
/// Set a new value for the app scope.
/// Replaces any previous value for the key.
pub fn set<S: Into<String>>(key: S, value: S) {
let b = { read_lock!(APP_LABELS).set(key.into(), Arc::new(value.into())) };
*write_lock!(APP_LABELS) = b;
}
/// Unset a value for the app scope.
/// Has no effect if key was not set.
pub fn unset(key: &str) {
let b = { read_lock!(APP_LABELS).unset(key) };
*write_lock!(APP_LABELS) = b;
}
fn collect(map: &mut HashMap<String, LabelValue>) {
read_lock!(APP_LABELS).collect(map)
}
}
/// Base structure to carry metric labels from the application to the metric backend(s).
/// Can carry both one-off labels and exported context labels (if async metrics are enabled).
/// Used in applications through the labels!() macro.
#[derive(Debug, Clone)]
pub struct Labels {
scopes: Vec<LabelScope>,
}
impl From<HashMap<String, LabelValue>> for Labels {
fn from(map: HashMap<String, LabelValue>) -> Self {
Labels {
scopes: vec![LabelScope {
pairs: Some(Arc::new(map)),
}],
}
}
}
impl Default for Labels {
/// Create empty labels.
/// Only Thread and App labels will be used for lookups.
#[inline]
fn default() -> Self {
Labels { scopes: vec![] }
}
}
impl Labels {
/// Used to save metric context before enqueuing value for async output.
pub fn save_context(&mut self) {
self.scopes
.push(THREAD_LABELS.with(|map| map.borrow().clone()));
self.scopes.push(read_lock!(APP_LABELS).clone());
}
/// Generic label lookup function.
/// Searches provided labels, provided scopes or default scopes.
// TODO needs less magic, add checks?
pub fn lookup(&self, key: &str) -> Option<LabelValue> {
fn lookup_current_context(key: &str) -> Option<LabelValue> {
ThreadLabel::get(key).or_else(|| AppLabel::get(key))
}
match self.scopes.len() {
// no value labels, no saved context labels
// just lookup implicit context
0 => lookup_current_context(key),
// some value labels, no saved context labels
// lookup value label, then lookup implicit context
1 => self.scopes[0]
.get(key)
.or_else(|| lookup_current_context(key)),
// value + saved context labels
// lookup explicit context in turn
_ => {
for src in &self.scopes {
if let Some(label_value) = src.get(key) {
return Some(label_value);
}
}
None
}
}
}
/// Export current state of labels to a map.
/// Note: An iterator would still need to allocate to check for uniqueness of keys.
///
pub fn into_map(mut self) -> HashMap<String, LabelValue> {
let mut map = HashMap::new();
match self.scopes.len() {
// no value labels, no saved context labels
// just lookup implicit context
0 => {
AppLabel::collect(&mut map);
ThreadLabel::collect(&mut map);
}
// some value labels, no saved context labels
// lookup value label, then lookup implicit context
1 => {
AppLabel::collect(&mut map);
ThreadLabel::collect(&mut map);
self.scopes[0].collect(&mut map);
}
// value + saved context labels
// lookup explicit context in turn
_ => {
self.scopes.reverse();
for src in self.scopes {
src.collect(&mut map)
}
}
}
map
}
}
#[cfg(test)]
pub mod test {
use super::*;
use std::sync::Mutex;
lazy_static! {
/// Label tests use the globally shared AppLabels which may make them interfere as tests are run concurrently.
/// We do not want to mandate usage of `RUST_TEST_THREADS=1` which would penalize the whole test suite.
/// Instead we use a local mutex to make sure the label tests run in sequence.
static ref TEST_SEQUENCE: Mutex<()> = Mutex::new(());
}
#[test]
fn context_labels() {
let _lock = TEST_SEQUENCE.lock().expect("Test Sequence");
AppLabel::set("abc", "456");
ThreadLabel::set("abc", "123");
assert_eq!(
Arc::new("123".into()),
labels!().lookup("abc").expect("ThreadLabel Value")
);
ThreadLabel::unset("abc");
assert_eq!(
Arc::new("456".into()),
labels!().lookup("abc").expect("AppLabel Value")
);
AppLabel::unset("abc");
assert_eq!(true, labels!().lookup("abc").is_none());
}
#[test]
fn labels_macro() {
let _lock = TEST_SEQUENCE.lock().expect("Test Sequence");
let labels = labels! {
"abc" => "789",
"xyz" => "123"
};
assert_eq!(
Arc::new("789".into()),
labels.lookup("abc").expect("Label Value")
);
assert_eq!(
Arc::new("123".into()),
labels.lookup("xyz").expect("Label Value")
);
}
#[test]
fn value_labels() {
let _lock = TEST_SEQUENCE.lock().expect("Test Sequence");
let labels = labels! { "abc" => "789" };
assert_eq!(
Arc::new("789".into()),
labels.lookup("abc").expect("Label Value")
);
AppLabel::set("abc", "456");
assert_eq!(
Arc::new("789".into()),
labels.lookup("abc").expect("Label Value")
);
ThreadLabel::set("abc", "123");
assert_eq!(
Arc::new("789".into()),
labels.lookup("abc").expect("Label Value")
);
}
}

src/lib.rs (Normal file → Executable file)
@@ -1,126 +1,14 @@
/*!
A quick, modular metrics toolkit for Rust applications; similar to popular logging frameworks,
but with counters, markers, gauges and timers.
Dipstick builds on stable Rust with minimal dependencies
and is published as a [crate.](https://crates.io/crates/dipstick)
# Features
- Send metrics to stdout, log, statsd or graphite (one or many)
- Synchronous, asynchronous or mixed operation
- Optional fast random statistical sampling
- Immediate propagation or local aggregation of metrics (count, sum, average, min/max)
- Periodic or programmatic publication of aggregated metrics
- Customizable output statistics and formatting
- Global or scoped (e.g. per request) metrics
- Per-application and per-output metric namespaces
- Predefined or ad-hoc metrics
# Cookbook
Dipstick is easy to add to your code:
```rust
use dipstick::*;
let app_metrics = metrics(to_graphite("host.com:2003"));
app_metrics.counter("my_counter").count(3);
```
Metrics can be sent to multiple outputs at the same time:
```rust
let app_metrics = metrics((to_stdout(), to_statsd("localhost:8125", "app1.host.")));
```
Since instruments are decoupled from the backend, outputs can be swapped easily.
Metrics can be aggregated and scheduled to be published periodically in the background:
```rust
use std::time::Duration;
let (to_aggregate, from_aggregate) = aggregate();
publish_every(Duration::from_secs(10), from_aggregate, to_log("last_ten_secs:"), all_stats);
let app_metrics = metrics(to_aggregate);
```
Aggregation is performed locklessly and is very fast.
Count, sum, min, max and average are tracked where they make sense.
Published statistics can be selected with presets such as `all_stats` (see previous example),
`summary`, `average`.
For more control over published statistics, a custom filter can be provided:
```rust
let (_to_aggregate, from_aggregate) = aggregate();
publish(from_aggregate, to_log("my_custom_stats:"),
|metric_kind, metric_name, metric_score|
match metric_score {
HitCount(hit_count) => Some((Counter, vec![metric_name, ".per_thousand"], hit_count / 1000)),
_ => None
});
```
Metrics can be statistically sampled:
```rust
let app_metrics = metrics(sample(0.001, to_statsd("server:8125", "app.sampled.")));
```
A fast random algorithm is used to pick samples.
Outputs can use sample rate to expand or format published data.
Metrics can be recorded asynchronously:
```rust
let app_metrics = metrics(async(48, to_stdout()));
```
The async queue uses a Rust channel and a standalone thread.
The current behavior is to block when full.
Metric definitions can be cached to make using _ad-hoc metrics_ faster:
```rust
let app_metrics = metrics(cache(512, to_log()));
app_metrics.gauge(format!("my_gauge_{}", 34)).value(44);
```
The preferred way is to _predefine metrics_,
possibly in a [lazy_static!](https://crates.io/crates/lazy_static) block:
```rust
#[macro_use] extern crate lazy_static;
lazy_static! {
pub static ref METRICS: GlobalMetrics<String> = metrics(to_stdout());
pub static ref COUNTER_A: Counter<Aggregate> = METRICS.counter("counter_a");
}
COUNTER_A.count(11);
```
Timers can be used multiple ways:
```rust
let timer = app_metrics.timer("my_timer");
time!(timer, {/* slow code here */} );
timer.time(|| {/* slow code here */} );
let start = timer.start();
/* slow code here */
timer.stop(start);
timer.interval_us(123_456);
```
Related metrics can share a namespace:
```rust
let db_metrics = app_metrics.with_prefix("database.");
let db_timer = db_metrics.timer("db_timer");
let db_counter = db_metrics.counter("db_counter");
```
*/
//! A quick, modular metrics toolkit for Rust applications.
#![cfg_attr(feature = "bench", feature(test))]
#![warn(
missing_copy_implementations,
missing_docs,
trivial_casts,
trivial_numeric_casts,
unused_extern_crates,
unused_import_braces,
unused_qualifications,
// variant_size_differences,
missing_docs,
trivial_casts,
trivial_numeric_casts,
unused_extern_crates,
unused_qualifications
)]
#![recursion_limit = "32"]
#[cfg(feature = "bench")]
extern crate test;
@@ -128,68 +16,124 @@ extern crate test;
#[macro_use]
extern crate log;
extern crate time;
extern crate num;
#[macro_use]
extern crate lazy_static;
#[macro_use]
extern crate derivative;
mod macros;
pub use crate::macros::*;
#[cfg(not(feature = "parking_lot"))]
macro_rules! write_lock {
($WUT:expr) => {
$WUT.write().unwrap()
};
}
#[cfg(feature = "parking_lot")]
macro_rules! write_lock {
($WUT:expr) => {
$WUT.write()
};
}
#[cfg(not(feature = "parking_lot"))]
macro_rules! read_lock {
($WUT:expr) => {
$WUT.read().unwrap()
};
}
#[cfg(feature = "parking_lot")]
macro_rules! read_lock {
($WUT:expr) => {
$WUT.read()
};
}
mod attributes;
mod clock;
mod input;
mod label;
mod metrics;
mod name;
mod pcg32;
mod lru_cache;
mod proxy;
mod scheduler;
pub mod error;
pub use error::*;
pub mod core;
pub use core::*;
pub mod macros;
mod output;
pub use output::*;
mod global_metrics;
pub use global_metrics::*;
mod scope_metrics;
pub use scope_metrics::*;
mod sample;
pub use sample::*;
mod scores;
pub use scores::*;
mod aggregate;
pub use aggregate::*;
mod publish;
pub use publish::*;
mod statsd;
pub use statsd::*;
mod namespace;
pub use namespace::*;
mod graphite;
pub use graphite::*;
mod socket;
pub use socket::*;
mod atomic;
mod stats;
mod cache;
pub use cache::*;
mod lru_cache;
mod multi;
pub use multi::*;
mod queue;
mod async;
pub use async::*;
pub use crate::attributes::{
Attributes, Buffered, Buffering, MetricId, Observe, ObserveWhen, OnFlush, OnFlushCancel,
Prefixed, Sampled, Sampling, WithAttributes,
};
pub use crate::clock::TimeHandle;
pub use crate::input::{
Counter, Gauge, Input, InputDyn, InputKind, InputMetric, InputScope, Level, Marker, Timer,
};
pub use crate::label::{AppLabel, Labels, ThreadLabel};
pub use crate::name::{MetricName, NameParts};
pub use crate::output::void::Void;
pub use crate::scheduler::{Cancel, CancelGuard, CancelHandle, ScheduleFlush};
mod schedule;
pub use schedule::*;
#[cfg(test)]
pub use crate::clock::{mock_clock_advance, mock_clock_reset};
mod self_metrics;
pub use self_metrics::snapshot;
pub use crate::proxy::Proxy;
mod output;
pub use crate::output::format::{
Formatting, LabelOp, LineFormat, LineOp, LineTemplate, SimpleFormat,
};
pub use crate::output::graphite::{Graphite, GraphiteMetric, GraphiteScope};
pub use crate::output::log::{Log, LogScope};
pub use crate::output::map::StatsMapScope;
pub use crate::output::statsd::{Statsd, StatsdMetric, StatsdScope};
pub use crate::output::stream::{Stream, TextScope};
//#[cfg(feature="prometheus")]
pub use crate::output::prometheus::{Prometheus, PrometheusScope};
pub use crate::atomic::AtomicBucket;
pub use crate::cache::CachedInput;
pub use crate::multi::{MultiInput, MultiInputScope};
pub use crate::queue::{InputQueue, InputQueueScope, QueuedInput};
pub use crate::stats::{stats_all, stats_average, stats_summary, ScoreType};
use std::io;
/// Base type for recorded metric values.
pub type MetricValue = isize;
/// Both InputScope and OutputScope share the ability to flush the recorded data.
pub trait Flush {
/// Flush the recorded data.
fn flush(&self) -> io::Result<()>;
}
#[cfg(feature = "bench")]
pub mod bench {
use super::clock::*;
use super::input::*;
use crate::AtomicBucket;
#[bench]
fn get_instant(b: &mut test::Bencher) {
b.iter(|| test::black_box(TimeHandle::now()));
}
#[bench]
fn time_bench_direct_dispatch_event(b: &mut test::Bencher) {
let metrics = AtomicBucket::new();
let marker = metrics.marker("aaa");
b.iter(|| test::black_box(marker.mark()));
}
}

src/lru_cache.rs (Normal file → Executable file)
@@ -1,30 +1,9 @@
// The MIT License (MIT)
//
// Copyright (c) 2016 Christian W. Briones
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
//! A fixed-size cache with LRU expiration criteria.
//! Adapted from https://github.com/cwbriones/lrucache
//!
use std::hash::Hash;
//! Stored values will be held onto as long as there is space.
//! When space runs out, the oldest unused value will get evicted to make room for a new value.
use std::collections::HashMap;
use std::hash::Hash;
struct CacheEntry<K, V> {
key: K,
@@ -38,24 +17,25 @@ pub struct LRUCache<K, V> {
table: HashMap<K, usize>,
entries: Vec<CacheEntry<K, V>>,
first: Option<usize>,
last: Option<usize>,
capacity: usize
last: Option<usize>,
capacity: usize,
}
impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
/// Creates a new cache that can hold the specified number of elements.
pub fn with_capacity(cap: usize) -> Self {
pub fn with_capacity(size: usize) -> Self {
LRUCache {
table: HashMap::with_capacity(cap),
entries: Vec::with_capacity(cap),
table: HashMap::with_capacity(size),
entries: Vec::with_capacity(size),
first: None,
last: None,
capacity: cap,
capacity: size,
}
}
/// Inserts a key-value pair into the cache and returns the previous value, if any.
/// If there is no room in the cache the oldest item will be removed.
#[allow(clippy::map_entry)]
pub fn insert(&mut self, key: K, value: V) -> Option<V> {
if self.table.contains_key(&key) {
self.access(&key);
@@ -67,16 +47,16 @@ impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
self.ensure_room();
// Update old head
let idx = self.entries.len();
self.first.map(|e| {
if let Some(e) = self.first {
let prev = Some(idx);
self.entries[e].prev = prev;
});
}
// This is the new head
self.entries.push(CacheEntry {
key: key.clone(),
value: Some(value),
next: self.first,
prev: None
prev: None,
});
self.first = Some(idx);
self.last = self.last.or(self.first);
@@ -89,9 +69,9 @@ impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
/// without promoting it.
pub fn peek(&mut self, key: &K) -> Option<&V> {
let entries = &self.entries;
self.table.get(key).and_then(move |i| {
entries[*i].value.as_ref()
})
self.table
.get(key)
.and_then(move |i| entries[*i].value.as_ref())
}
/// Retrieves a reference to the item associated with `key` from the cache.
@@ -109,7 +89,7 @@ impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
/// Promotes the specified key to the top of the cache.
fn access(&mut self, key: &K) {
let i = *self.table.get(key).unwrap();
let i = self.table[key];
self.remove_from_list(i);
self.first = Some(i);
}
@@ -121,7 +101,7 @@ impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
/// Removes an item from the linked list.
fn remove_from_list(&mut self, i: usize) {
let (prev, next) = {
let entry = self.entries.get_mut(i).unwrap();
let entry = &mut self.entries[i];
(entry.prev, entry.next)
};
match (prev, next) {
@@ -133,15 +113,15 @@ impl<K: Clone + Hash + Eq, V> LRUCache<K, V> {
}
let second = &mut self.entries[k];
second.prev = prev;
},
}
// Item was at the end of the list
(Some(j), None) => {
let first = &mut self.entries[j];
first.next = None;
self.last = prev;
},
}
// Item was at front
_ => ()
_ => (),
}
}

src/macros.rs (Normal file → Executable file)
@@ -1,8 +1,8 @@
//! Publicly exposed metric macros are defined here.
//! Although `dipstick` does not have a macro-based API,
//! in some situations they can make instrumented code simpler.
// TODO add #[timer("name")] method annotation processors
pub use lazy_static::*;
// TODO add #[timer("name")] custom derive
/// A convenience macro to wrap a block or an expression with a start / stop timer.
/// Elapsed time is sent to the supplied statsd client after the computation has been performed.
@@ -14,5 +14,182 @@ macro_rules! time {
let value = $body;
$timer.stop(start_time);
value
}}
}};
}
/// Create **Labels** from a list of key-value pairs
/// Adapted from the hashmap!() macro in the *maplit* crate.
///
/// ## Example
///
/// ```
/// use dipstick::*;
///
/// # fn main() {
/// let labels = labels!{
/// "a" => "1",
/// "b" => "2",
/// };
/// assert_eq!(labels.lookup("a"), Some(::std::sync::Arc::new("1".into())));
/// assert_eq!(labels.lookup("b"), Some(::std::sync::Arc::new("2".into())));
/// assert_eq!(labels.lookup("c"), None);
/// # }
/// ```
#[macro_export]
macro_rules! labels {
(@single $($x:tt)*) => (());
(@count $($rest:expr),*) => (<[()]>::len(&[$(labels!(@single $rest)),*]));
($($key:expr => $value:expr,)+) => { labels!($($key => $value),+) };
($($key:expr => $value:expr),*) => {
{
let _cap = labels!(@count $($key),*);
let mut _map: ::std::collections::HashMap<String, ::std::sync::Arc<String>> = ::std::collections::HashMap::with_capacity(_cap);
$(
let _ = _map.insert($key.into(), ::std::sync::Arc::new($value.into()));
)*
$crate::Labels::from(_map)
}
};
() => {
$crate::Labels::default()
}
}
/// Declares metrics that can be used from anywhere (when public); an application need not declare all its metrics in a single block.
#[macro_export]
macro_rules! metrics {
// BRANCH NODE - public type decl
($(#[$attr:meta])* pub $IDENT:ident: $TYPE:ty = $e:expr => { $($BRANCH:tt)*} $($REST:tt)*) => {
lazy_static! { $(#[$attr])* pub static ref $IDENT: $TYPE = $e.into(); }
metrics!{ @internal $IDENT; $TYPE; $($BRANCH)* }
metrics!{ $($REST)* }
};
// BRANCH NODE - private typed decl
($(#[$attr:meta])* $IDENT:ident: $TYPE:ty = $e:expr => { $($BRANCH:tt)* } $($REST:tt)*) => {
lazy_static! { $(#[$attr])* static ref $IDENT: $TYPE = $e.into(); }
metrics!{ @internal $IDENT; $TYPE; $($BRANCH)* }
metrics!{ $($REST)* }
};
// BRANCH NODE - public untyped decl
($(#[$attr:meta])* pub $IDENT:ident = $e:expr => { $($BRANCH:tt)* } $($REST:tt)*) => {
lazy_static! { $(#[$attr])* pub static ref $IDENT: Proxy = $e.into(); }
metrics!{ @internal $IDENT; Proxy; $($BRANCH)* }
metrics!{ $($REST)* }
};
// BRANCH NODE - private untyped decl
($(#[$attr:meta])* $IDENT:ident = $e:expr => { $($BRANCH:tt)* } $($REST:tt)*) => {
lazy_static! { $(#[$attr])* static ref $IDENT: Proxy = $e.into(); }
metrics!{ @internal $IDENT; Proxy; $($BRANCH)* }
metrics!{ $($REST)* }
};
// Identified Proxy Root
($e:ident => { $($BRANCH:tt)+ } $($REST:tt)*) => {
metrics!{ @internal $e; Proxy; $($BRANCH)* }
metrics!{ $($REST)* }
};
// Anonymous Proxy Namespace
($e:expr => { $($BRANCH:tt)+ } $($REST:tt)*) => {
lazy_static! { static ref PROXY_METRICS: Proxy = $e.into(); }
metrics!{ @internal PROXY_METRICS; Proxy; $($BRANCH)* }
metrics!{ $($REST)* }
};
// LEAF NODE - public typed decl
($(#[$attr:meta])* pub $IDENT:ident: $TYPE:ty = $e:expr; $($REST:tt)*) => {
metrics!{ @internal Proxy::default(); Proxy; $(#[$attr])* pub $IDENT: $TYPE = $e; }
metrics!{ $($REST)* }
};
// LEAF NODE - private typed decl
($(#[$attr:meta])* $IDENT:ident: $TYPE:ty = $e:expr; $($REST:tt)*) => {
metrics!{ @internal Proxy::default(); Proxy; $(#[$attr])* $IDENT: $TYPE = $e; }
metrics!{ $($REST)* }
};
// END NODE
() => ();
// METRIC NODE - public
(@internal $WITH:expr; $TY:ty; $(#[$attr:meta])* pub $IDENT:ident: $MTY:ty = $METRIC_NAME:expr; $($REST:tt)*) => {
lazy_static! { $(#[$attr])* pub static ref $IDENT: $MTY =
$WITH.new_metric($METRIC_NAME.into(), stringify!($MTY).into()).into();
}
metrics!{ @internal $WITH; $TY; $($REST)* }
};
// METRIC NODE - private
(@internal $WITH:expr; $TY:ty; $(#[$attr:meta])* $IDENT:ident: $MTY:ty = $METRIC_NAME:expr; $($REST:tt)*) => {
lazy_static! { $(#[$attr])* static ref $IDENT: $MTY =
$WITH.new_metric($METRIC_NAME.into(), stringify!($MTY).into()).into();
}
metrics!{ @internal $WITH; $TY; $($REST)* }
};
// SUB BRANCH NODE - public identifier
(@internal $WITH:expr; $TY:ty; $(#[$attr:meta])* pub $IDENT:ident = $e:expr => { $($BRANCH:tt)*} $($REST:tt)*) => {
lazy_static! { $(#[$attr])* pub static ref $IDENT = $WITH.named($e); }
metrics!( @internal $IDENT; $TY; $($BRANCH)*);
metrics!( @internal $WITH; $TY; $($REST)*);
};
// SUB BRANCH NODE - private identifier
(@internal $WITH:expr; $TY:ty; $(#[$attr:meta])* $IDENT:ident = $e:expr => { $($BRANCH:tt)*} $($REST:tt)*) => {
lazy_static! { $(#[$attr])* static ref $IDENT = $WITH.named($e); }
metrics!( @internal $IDENT; $TY; $($BRANCH)*);
metrics!( @internal $WITH; $TY; $($REST)*);
};
// SUB BRANCH NODE (not yet)
(@internal $WITH:expr; $TY:ty; $(#[$attr:meta])* pub $e:expr => { $($BRANCH:tt)*} $($REST:tt)*) => {
metrics!( @internal $WITH.named($e); $TY; $($BRANCH)*);
metrics!( @internal $WITH; $TY; $($REST)*);
};
// SUB BRANCH NODE (not yet)
(@internal $WITH:expr; $TY:ty; $(#[$attr:meta])* $e:expr => { $($BRANCH:tt)*} $($REST:tt)*) => {
metrics!( @internal $WITH.named($e); $TY; $($BRANCH)*);
metrics!( @internal $WITH; $TY; $($REST)*);
};
(@internal $WITH:expr; $TYPE:ty;) => ()
}
#[cfg(test)]
mod test {
use crate::input::*;
use crate::proxy::Proxy;
metrics! {TEST: Proxy = "test_prefix" => {
pub M1: Marker = "failed";
C1: Counter = "failed";
G1: Gauge = "failed";
T1: Timer = "failed";
}}
metrics!("my_app" => {
COUNTER_A: Counter = "counter_a";
});
#[test]
fn gurp() {
COUNTER_A.count(11);
}
#[test]
fn call_new_macro_defined_metrics() {
M1.mark();
C1.count(1);
G1.value(1);
T1.interval_us(1);
}
}

src/metrics.rs (new executable file)
@@ -0,0 +1,34 @@
//! Internal Dipstick runtime metrics.
//! Because of the possibly high volume of data, this is pre-set to use aggregation.
//! This is also kept in a separate module because it is not to be exposed outside of the crate.
use crate::attributes::Prefixed;
use crate::input::{Counter, InputScope, Marker};
use crate::proxy::Proxy;
metrics! {
/// Dipstick's own internal metrics.
pub DIPSTICK_METRICS = "dipstick" => {
"queue" => {
pub SEND_FAILED: Marker = "send_failed";
}
"prometheus" => {
pub PROMETHEUS_SEND_ERR: Marker = "send_failed";
pub PROMETHEUS_OVERFLOW: Marker = "buf_overflow";
pub PROMETHEUS_SENT_BYTES: Counter = "sent_bytes";
}
"graphite" => {
pub GRAPHITE_SEND_ERR: Marker = "send_failed";
pub GRAPHITE_OVERFLOW: Marker = "buf_overflow";
pub GRAPHITE_SENT_BYTES: Counter = "sent_bytes";
}
"statsd" => {
pub STATSD_SEND_ERR: Marker = "send_failed";
pub STATSD_SENT_BYTES: Counter = "sent_bytes";
}
}
}

src/multi.rs (Normal file → Executable file)
@@ -1,85 +1,121 @@
//! Dispatch metrics to multiple sinks.
use core::*;
use crate::attributes::{Attributes, MetricId, OnFlush, Prefixed, WithAttributes};
use crate::input::{Input, InputDyn, InputKind, InputMetric, InputScope};
use crate::name::MetricName;
use crate::Flush;
use std::io;
use std::sync::Arc;
/// Two chains of different types can be combined in a tuple.
/// The chains will act as one, each receiving calls in the order they appear in the tuple.
/// For more than two types, make tuples of tuples, "Yo Dawg" style.
impl<M1, M2> From<(Chain<M1>, Chain<M2>)> for Chain<(M1, M2)>
where
M1: 'static + Clone + Send + Sync,
M2: 'static + Clone + Send + Sync,
{
fn from(combo: (Chain<M1>, Chain<M2>)) -> Chain<(M1, M2)> {
/// Opens multiple scopes at a time from just as many outputs.
#[derive(Clone, Default)]
pub struct MultiInput {
attributes: Attributes,
inputs: Vec<Arc<dyn InputDyn + Send + Sync>>,
}
let combo0 = combo.0.clone();
let combo1 = combo.1.clone();
impl Input for MultiInput {
type SCOPE = MultiInputScope;
Chain::new(
move |kind, name, rate| (
combo.0.define_metric(kind, name, rate),
combo.1.define_metric(kind, &name, rate),
),
fn metrics(&self) -> Self::SCOPE {
#[allow(clippy::redundant_closure)]
let scopes = self.inputs.iter().map(|input| input.input_dyn()).collect();
MultiInputScope {
attributes: self.attributes.clone(),
scopes,
}
}
}
move |auto_flush| {
let scope0 = combo0.open_scope(auto_flush);
let scope1 = combo1.open_scope(auto_flush);
impl MultiInput {
/// Create a new multi-input dispatcher.
#[deprecated(since = "0.7.2", note = "Use new()")]
pub fn input() -> Self {
Self::new()
}
Arc::new(move |cmd| match cmd {
ScopeCmd::Write(metric, value) => {
scope0(ScopeCmd::Write(&metric.0, value));
scope1(ScopeCmd::Write(&metric.1, value));
}
ScopeCmd::Flush => {
scope0(ScopeCmd::Flush);
scope1(ScopeCmd::Flush);
}
})
/// Create a new multi-input dispatcher.
pub fn new() -> Self {
Self::default()
}
/// Returns a clone of the dispatch with the new target added to the list.
pub fn add_target<OUT: Input + Send + Sync + 'static>(&self, out: OUT) -> Self {
let mut cloned = self.clone();
cloned.inputs.push(Arc::new(out));
cloned
}
}
impl WithAttributes for MultiInput {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
/// Dispatch metric values to a list of scopes.
#[derive(Clone, Default)]
pub struct MultiInputScope {
attributes: Attributes,
scopes: Vec<Arc<dyn InputScope + Send + Sync>>,
}
impl MultiInputScope {
/// Create a new multi scope dispatcher with no scopes.
pub fn new() -> Self {
MultiInputScope {
attributes: Attributes::default(),
scopes: vec![],
}
}
/// Add a target to the dispatch list.
/// Returns a clone of the original object.
pub fn add_target<IN: InputScope + Send + Sync + 'static>(&self, scope: IN) -> Self {
let mut cloned = self.clone();
cloned.scopes.push(Arc::new(scope));
cloned
}
}
impl InputScope for MultiInputScope {
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let name = &self.prefix_append(name);
let metrics: Vec<InputMetric> = self
.scopes
.iter()
.map(move |scope| scope.new_metric(name.clone(), kind))
.collect();
InputMetric::new(
MetricId::forge("multi", name.clone()),
move |value, labels| {
for metric in &metrics {
metric.write(value, labels.clone())
}
},
)
}
}
/// Multiple chains of the same type can be combined in a slice.
/// The chains will act as one, each receiving calls in the order they appear in the slice.
impl<'a, M> From<&'a [Chain<M>]> for Chain<Box<[M]>>
where
M: 'static + Clone + Send + Sync,
{
fn from(chains: &'a [Chain<M>]) -> Chain<Box<[M]>> {
let chains = chains.to_vec();
let chains2 = chains.clone();
Chain::new(
move |kind, name, rate| {
let mut metric = Vec::with_capacity(chains.len());
for chain in &chains {
metric.push(chain.define_metric(kind, name, rate));
}
metric.into_boxed_slice()
},
move |auto_flush| {
let mut scopes = Vec::with_capacity(chains2.len());
for chain in &chains2 {
scopes.push(chain.open_scope(auto_flush));
}
Arc::new(move |cmd| match cmd {
ScopeCmd::Write(metric, value) => {
for (i, scope) in scopes.iter().enumerate() {
(scope)(ScopeCmd::Write(&metric[i], value))
}
}
ScopeCmd::Flush => {
for scope in &scopes {
(scope)(ScopeCmd::Flush)
}
}
})
},
)
impl Flush for MultiInputScope {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
for w in &self.scopes {
w.flush()?;
}
Ok(())
}
}
impl WithAttributes for MultiInputScope {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}

src/name.rs Normal file

@ -0,0 +1,153 @@
use std::collections::VecDeque;
use std::ops::{Deref, DerefMut};
/// A double-ended vec of strings constituting a metric name or a future part thereof.
#[derive(Debug, Clone, Hash, Eq, PartialEq, Ord, PartialOrd, Default)]
pub struct NameParts {
/// Nodes are stored in order of their final appearance in the name.
/// If there is no namespace, the deque is empty.
nodes: VecDeque<String>,
}
impl NameParts {
/// Returns true if this instance is equal to or a subset (more specific) of the target instance.
/// e.g. `a.b.c` is within `a.b`
/// e.g. `a.d.c` is not within `a.b`
pub fn is_within(&self, other: &NameParts) -> bool {
// quick check: if this name has fewer parts it cannot be equal or more specific
if self.len() < other.nodes.len() {
return false;
}
for (i, part) in other.nodes.iter().enumerate() {
if part != &self.nodes[i] {
return false;
}
}
true
}
/// Make a name in this namespace
pub fn make_name<S: Into<String>>(&self, leaf: S) -> MetricName {
let mut nodes = self.clone();
nodes.push_back(leaf.into());
MetricName { nodes }
}
/// Extract a copy of the last name part
/// Panics if empty
pub fn short(&self) -> MetricName {
self.back().expect("Short metric name").clone().into()
}
}
/// Turn any string into a StringDeque
impl<S: Into<String>> From<S> for NameParts {
fn from(name_part: S) -> Self {
let name: String = name_part.into();
// can we do better than asserting? empty names should not exist, ever...
debug_assert!(!name.is_empty());
let mut nodes = NameParts::default();
nodes.push_front(name);
nodes
}
}
/// Enable use of VecDeque methods such as len(), push_*, insert()...
impl Deref for NameParts {
type Target = VecDeque<String>;
fn deref(&self) -> &Self::Target {
&self.nodes
}
}
/// Enable use of VecDeque methods such as len(), push_*, insert()...
impl DerefMut for NameParts {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.nodes
}
}
/// The name of a metric, including the concatenated possible namespaces in which it was defined.
#[derive(Debug, Clone, Hash, Eq, PartialEq, Ord, PartialOrd)]
pub struct MetricName {
nodes: NameParts,
}
impl MetricName {
/// Prepend to the existing namespace.
pub fn prepend<S: Into<NameParts>>(mut self, namespace: S) -> Self {
let parts: NameParts = namespace.into();
parts
.iter()
.rev()
.for_each(|node| self.nodes.push_front(node.clone()));
self
}
/// Append to the existing namespace.
pub fn append<S: Into<NameParts>>(mut self, namespace: S) -> Self {
let offset = self.nodes.len() - 1;
let parts: NameParts = namespace.into();
for (i, part) in parts.iter().enumerate() {
self.nodes.insert(i + offset, part.clone())
}
self
}
/// Combine name parts into a string.
pub fn join(&self, separator: &str) -> String {
self.nodes
.iter()
.map(|s| &**s)
.collect::<Vec<&str>>()
.join(separator)
}
}
impl<S: Into<String>> From<S> for MetricName {
fn from(name: S) -> Self {
MetricName {
nodes: NameParts::from(name),
}
}
}
impl Deref for MetricName {
type Target = NameParts;
fn deref(&self) -> &Self::Target {
&self.nodes
}
}
impl DerefMut for MetricName {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.nodes
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn string_deque_within_same() {
let mut sd1: NameParts = "c".into();
sd1.push_front("b".into());
assert_eq!(true, sd1.is_within(&sd1));
}
#[test]
fn string_deque_within_other() {
let mut sd1: NameParts = "b".into();
sd1.push_front("a".into());
let mut sd2: NameParts = "c".into();
sd2.push_front("b".into());
sd2.push_front("a".into());
assert_eq!(true, sd2.is_within(&sd1));
assert_eq!(false, sd1.is_within(&sd2));
}
}
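`NameParts` derefs to `VecDeque<String>`, so prefixes go on the front, leaves on the back, and the dotted string is only assembled on output. A standalone sketch of that pattern (the `join` helper is illustrative, not the crate's code):

```rust
use std::collections::VecDeque;

// Join name segments only when a concrete string is needed,
// as MetricName::join does.
fn join(parts: &VecDeque<String>, sep: &str) -> String {
    parts.iter().map(|s| s.as_str()).collect::<Vec<_>>().join(sep)
}

fn main() {
    let mut name: VecDeque<String> = VecDeque::new();
    name.push_back("requests".into()); // the leaf
    name.push_front("my_app".into()); // prepend a namespace
    assert_eq!(join(&name, "."), "my_app.requests");
}
```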


@ -1,31 +0,0 @@
//! Metrics name manipulation functions.
use core::*;
use std::sync::Arc;
/// Insert prefix in newly defined metrics.
pub fn prefix<M, IC>(prefix: &str, chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
chain.mod_metric(|next| {
let prefix = prefix.to_string();
Arc::new(move |kind, name, rate| {
let name = [&prefix, name].concat();
(next)(kind, name.as_ref(), rate)
})
})
}
/// Join namespace and prepend in newly defined metrics.
pub fn namespace<M, IC>(names: &[&str], chain: IC) -> Chain<M>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
let nspace = names.join(".");
prefix(nspace.as_ref(), chain)
}


@ -1,54 +0,0 @@
//! Standard stateless metric outputs.
use core::*;
use std::sync::Arc;
/// Write metric values to stdout using `println!`.
pub fn to_stdout() -> Chain<String> {
Chain::new(
|_kind, name, _rate| String::from(name),
|_auto_flush| Arc::new(|cmd| if let ScopeCmd::Write(m, v) = cmd { println!("{}: {}", m, v) })
)
}
/// Write metric values to the standard log using `info!`.
pub fn to_log() -> Chain<String> {
Chain::new(
|_kind, name, _rate| String::from(name),
|_auto_flush| Arc::new(|cmd| if let ScopeCmd::Write(m, v) = cmd { info!("{}: {}", m, v) })
)
}
/// Special sink that discards all metric values sent to it.
pub fn to_void() -> Chain<String> {
Chain::new(
move |_kind, name, _rate| String::from(name),
|_auto_flush| Arc::new(|_cmd| {})
)
}
#[cfg(test)]
mod test {
use core::*;
#[test]
fn sink_print() {
let c = super::to_stdout();
let m = c.define_metric(Kind::Marker, "test", 1.0);
c.open_scope(true)(ScopeCmd::Write(&m, 33));
}
#[test]
fn test_to_log() {
let c = super::to_log();
let m = c.define_metric(Kind::Marker, "test", 1.0);
c.open_scope(true)(ScopeCmd::Write(&m, 33));
}
#[test]
fn test_to_void() {
let c = super::to_void();
let m = c.define_metric(Kind::Marker, "test", 1.0);
c.open_scope(true)(ScopeCmd::Write(&m, 33));
}
}

src/output/format.rs Normal file

@ -0,0 +1,175 @@
use self::LineOp::*;
use crate::input::InputKind;
use crate::name::MetricName;
use crate::MetricValue;
use std::io;
use std::io::Write;
use std::sync::Arc;
/// Print commands are steps in the execution of output templates.
pub enum LineOp {
/// Print a string.
Literal(Vec<u8>),
/// Lookup and print label value for key, if it exists.
LabelExists(String, Vec<LabelOp>),
/// Print metric value as text.
ValueAsText,
/// Print metric value, divided by the given scale, as text.
ScaledValueAsText(f64),
/// Print the newline character.
NewLine,
}
/// Print commands are steps in the execution of output templates.
pub enum LabelOp {
/// Print a string.
Literal(Vec<u8>),
/// Print the label key.
LabelKey,
/// Print the label value.
LabelValue,
}
/// A sequence of print commands, embodying an output strategy for a single metric.
pub struct LineTemplate {
ops: Vec<LineOp>,
}
impl From<Vec<LineOp>> for LineTemplate {
fn from(ops: Vec<LineOp>) -> Self {
LineTemplate::new(ops)
}
}
impl LineTemplate {
/// Make a new LineTemplate
pub fn new(ops: Vec<LineOp>) -> Self {
LineTemplate { ops }
}
/// Template execution applies commands in turn, writing to the output.
pub fn print<L>(&self, output: &mut dyn Write, value: MetricValue, lookup: L) -> io::Result<()>
where
L: Fn(&str) -> Option<Arc<String>>,
{
for cmd in &self.ops {
match cmd {
Literal(src) => output.write_all(src.as_ref())?,
ValueAsText => output.write_all(format!("{}", value).as_ref())?,
ScaledValueAsText(scale) => {
let scaled = value as f64 / scale;
output.write_all(format!("{}", scaled).as_ref())?
}
NewLine => writeln!(output)?,
LabelExists(label_key, print_label) => {
if let Some(label_value) = lookup(label_key.as_ref()) {
for label_cmd in print_label {
match label_cmd {
LabelOp::LabelValue => output.write_all(label_value.as_bytes())?,
LabelOp::LabelKey => output.write_all(label_key.as_bytes())?,
LabelOp::Literal(src) => output.write_all(src.as_ref())?,
}
}
}
}
};
}
Ok(())
}
}
/// Format output config support.
pub trait Formatting {
/// Specify formatting of output.
fn formatting(&self, format: impl LineFormat + 'static) -> Self;
}
/// Forges metric-specific printers
pub trait LineFormat: Send + Sync {
/// Prepare a template for output of metric values.
fn template(&self, name: &MetricName, kind: InputKind) -> LineTemplate;
}
/// A simple metric output format of "MetricName {Value}"
#[derive(Default)]
pub struct SimpleFormat {
// TODO make separator configurable
// separator: String,
}
impl LineFormat for SimpleFormat {
fn template(&self, name: &MetricName, _kind: InputKind) -> LineTemplate {
let mut header = name.join(".");
header.push(' ');
LineTemplate {
ops: vec![Literal(header.into_bytes()), ValueAsText, NewLine],
}
}
}
#[cfg(test)]
pub mod test {
use super::*;
use crate::label::Labels;
pub struct TestFormat;
impl LineFormat for TestFormat {
fn template(&self, name: &MetricName, kind: InputKind) -> LineTemplate {
let mut header: String = format!("{:?}", kind);
header.push('/');
header.push_str(&name.join("."));
header.push(' ');
LineTemplate {
ops: vec![
Literal(header.into()),
ValueAsText,
Literal(" ".into()),
ScaledValueAsText(1000.0),
Literal(" ".into()),
LabelExists(
"test_key".into(),
vec![
LabelOp::LabelKey,
LabelOp::Literal("=".into()),
LabelOp::LabelValue,
],
),
NewLine,
],
}
}
}
#[test]
fn print_label_exists() {
let labels: Labels = labels!("test_key" => "456");
let format = TestFormat {};
let mut name = MetricName::from("abc");
name = name.prepend("xyz");
let template = format.template(&name, InputKind::Counter);
let mut out = vec![];
template
.print(&mut out, 123000, |key| labels.lookup(key))
.unwrap();
assert_eq!(
"Counter/xyz.abc 123000 123 test_key=456\n",
String::from_utf8(out).unwrap()
);
}
#[test]
fn print_label_not_exists() {
let format = TestFormat {};
let mut name = MetricName::from("abc");
name = name.prepend("xyz");
let template = format.template(&name, InputKind::Counter);
let mut out = vec![];
template.print(&mut out, 123000, |_key| None).unwrap();
assert_eq!(
"Counter/xyz.abc 123000 123 \n",
String::from_utf8(out).unwrap()
);
}
}
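The `LineTemplate` machinery above replays a fixed list of print ops for every metric value. A simplified, self-contained version of that interpreter loop (the `Op` enum and `render` function are a sketch, not the crate's types):

```rust
// Simplified subset of LineOp: literals, the raw value, and a scaled value.
enum Op {
    Literal(&'static str),
    Value,
    Scaled(f64),
}

// Replay the ops in order, as LineTemplate::print does against a Write sink.
fn render(ops: &[Op], value: i64) -> String {
    let mut out = String::new();
    for op in ops {
        match op {
            Op::Literal(s) => out.push_str(s),
            Op::Value => out.push_str(&value.to_string()),
            Op::Scaled(scale) => out.push_str(&(value as f64 / scale).to_string()),
        }
    }
    out
}

fn main() {
    // e.g. a timer recorded in µs, printed in ms
    let tmpl = [Op::Literal("latency "), Op::Scaled(1000.0), Op::Literal("\n")];
    assert_eq!(render(&tmpl, 123000), "latency 123\n");
}
```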

src/output/graphite.rs Executable file

@ -0,0 +1,225 @@
//! Send metrics to a graphite server.
use crate::attributes::{Attributes, Buffered, MetricId, OnFlush, Prefixed, WithAttributes};
use crate::input::InputKind;
use crate::input::{Input, InputMetric, InputScope};
use crate::metrics;
use crate::name::MetricName;
use crate::output::socket::RetrySocket;
use crate::{CachedInput, QueuedInput};
use crate::{Flush, MetricValue};
use std::net::ToSocketAddrs;
use std::fmt::Debug;
use std::io::Write;
use std::time::{SystemTime, UNIX_EPOCH};
use std::sync::Arc;
#[cfg(not(feature = "parking_lot"))]
use std::sync::{RwLock, RwLockWriteGuard};
#[cfg(feature = "parking_lot")]
use parking_lot::{RwLock, RwLockWriteGuard};
use std::io;
/// Graphite Input holds a socket to a graphite server.
/// The socket is shared between scopes opened from the Input.
#[derive(Clone, Debug)]
pub struct Graphite {
attributes: Attributes,
socket: Arc<RwLock<RetrySocket>>,
}
impl Input for Graphite {
type SCOPE = GraphiteScope;
fn metrics(&self) -> Self::SCOPE {
GraphiteScope {
attributes: self.attributes.clone(),
buffer: Arc::new(RwLock::new(String::new())),
socket: self.socket.clone(),
}
}
}
impl Graphite {
/// Send metrics to a graphite server at the address and port provided.
pub fn send_to<A: ToSocketAddrs + Debug + Clone>(address: A) -> io::Result<Graphite> {
debug!("Connecting to graphite {:?}", address);
let socket = Arc::new(RwLock::new(RetrySocket::new(address)?));
Ok(Graphite {
attributes: Attributes::default(),
socket,
})
}
}
impl WithAttributes for Graphite {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Buffered for Graphite {}
/// Graphite Input
#[derive(Debug, Clone)]
pub struct GraphiteScope {
attributes: Attributes,
buffer: Arc<RwLock<String>>,
socket: Arc<RwLock<RetrySocket>>,
}
impl InputScope for GraphiteScope {
/// Define a metric of the specified type.
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let mut prefix = self.prefix_prepend(name.clone()).join(".");
prefix.push(' ');
let scale = match kind {
// timers are in µs, but we give graphite milliseconds
InputKind::Timer => 1000,
_ => 1,
};
let cloned = self.clone();
let metric = GraphiteMetric { prefix, scale };
let metric_id = MetricId::forge("graphite", name);
InputMetric::new(metric_id, move |value, _labels| {
cloned.print(&metric, value);
})
}
}
impl Flush for GraphiteScope {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
let buf = write_lock!(self.buffer);
self.flush_inner(buf)
}
}
impl GraphiteScope {
fn print(&self, metric: &GraphiteMetric, value: MetricValue) {
let scaled_value = value / metric.scale;
let value_str = scaled_value.to_string();
let start = SystemTime::now();
let mut buffer = write_lock!(self.buffer);
match start.duration_since(UNIX_EPOCH) {
Ok(timestamp) => {
buffer.push_str(&metric.prefix);
buffer.push_str(&value_str);
buffer.push(' ');
buffer.push_str(&timestamp.as_secs().to_string());
buffer.push('\n');
if buffer.len() > BUFFER_FLUSH_THRESHOLD {
metrics::GRAPHITE_OVERFLOW.mark();
warn!("Graphite Buffer Size Exceeded: {}", BUFFER_FLUSH_THRESHOLD);
let _ = self.flush_inner(buffer);
buffer = write_lock!(self.buffer);
}
}
Err(e) => {
warn!("Could not compute epoch timestamp. {}", e);
}
};
if self.is_buffered() {
if let Err(e) = self.flush_inner(buffer) {
debug!("Could not send to graphite {}", e)
}
}
}
fn flush_inner(&self, mut buf: RwLockWriteGuard<String>) -> io::Result<()> {
if buf.is_empty() {
return Ok(());
}
let mut sock = write_lock!(self.socket);
match sock.write_all(buf.as_bytes()) {
Ok(()) => {
metrics::GRAPHITE_SENT_BYTES.count(buf.len());
trace!("Sent {} bytes to graphite", buf.len());
buf.clear();
Ok(())
}
Err(e) => {
metrics::GRAPHITE_SEND_ERR.mark();
debug!("Failed to send buffer to graphite: {}", e);
Err(e)
}
}
}
}
impl WithAttributes for GraphiteScope {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Buffered for GraphiteScope {}
impl QueuedInput for Graphite {}
impl CachedInput for Graphite {}
/// It's hard to see how a single scope could get more metrics than this.
// TODO make configurable?
const BUFFER_FLUSH_THRESHOLD: usize = 65_536;
/// Key of a graphite metric.
#[derive(Debug, Clone)]
pub struct GraphiteMetric {
prefix: String,
scale: isize,
}
/// Any remaining buffered data is flushed on Drop.
impl Drop for GraphiteScope {
fn drop(&mut self) {
if let Err(err) = self.flush() {
warn!("Could not flush graphite metrics upon Drop: {}", err)
}
}
}
#[cfg(feature = "bench")]
mod bench {
use super::*;
use crate::attributes::*;
use crate::input::*;
#[bench]
pub fn immediate_graphite(b: &mut test::Bencher) {
let sd = Graphite::send_to("localhost:2003").unwrap().metrics();
let timer = sd.new_metric("timer".into(), InputKind::Timer);
b.iter(|| test::black_box(timer.write(2000, labels![])));
}
#[bench]
pub fn buffering_graphite(b: &mut test::Bencher) {
let sd = Graphite::send_to("localhost:2003")
.unwrap()
.buffered(Buffering::BufferSize(65465))
.metrics();
let timer = sd.new_metric("timer".into(), InputKind::Timer);
b.iter(|| test::black_box(timer.write(2000, labels![])));
}
}
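Each buffered write above appends one line of graphite's plaintext protocol: the dotted path, the (possibly scaled) value, and a Unix timestamp in seconds. A sketch of that line assembly (the `graphite_line` helper is illustrative, not dipstick API):

```rust
// Graphite plaintext protocol: "<metric.path> <value> <unix_seconds>\n".
// The prefix already ends with a space, as GraphiteScope::new_metric builds it.
fn graphite_line(prefix: &str, scaled_value: i64, ts_secs: u64) -> String {
    format!("{}{} {}\n", prefix, scaled_value, ts_secs)
}

fn main() {
    // a 2000 µs timer, scaled by 1000 down to milliseconds
    let line = graphite_line("my_app.timer ", 2000 / 1000, 1_395_066_363);
    assert_eq!(line, "my_app.timer 2 1395066363\n");
}
```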

src/output/log.rs Executable file

@ -0,0 +1,186 @@
use crate::attributes::{Attributes, Buffered, MetricId, OnFlush, Prefixed, WithAttributes};
use crate::input::{Input, InputKind, InputMetric, InputScope};
use crate::name::MetricName;
use crate::output::format::{Formatting, LineFormat, SimpleFormat};
use crate::Flush;
use crate::{CachedInput, QueuedInput};
use std::sync::Arc;
#[cfg(not(feature = "parking_lot"))]
use std::sync::RwLock;
#[cfg(feature = "parking_lot")]
use parking_lot::RwLock;
use std::io;
use std::io::Write;
/// Buffered metrics log output.
#[derive(Clone)]
pub struct Log {
attributes: Attributes,
format: Arc<dyn LineFormat>,
level: log::Level,
target: Option<String>,
}
impl Input for Log {
type SCOPE = LogScope;
fn metrics(&self) -> Self::SCOPE {
LogScope {
attributes: self.attributes.clone(),
entries: Arc::new(RwLock::new(Vec::new())),
log: self.clone(),
}
}
}
impl WithAttributes for Log {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Buffered for Log {}
impl Formatting for Log {
fn formatting(&self, format: impl LineFormat + 'static) -> Self {
let mut cloned = self.clone();
cloned.format = Arc::new(format);
cloned
}
}
/// A scope for metrics log output.
#[derive(Clone)]
pub struct LogScope {
attributes: Attributes,
entries: Arc<RwLock<Vec<Vec<u8>>>>,
log: Log,
}
impl Log {
/// Write metric values to the standard log using `info!`.
// TODO parameterize log level, logger
pub fn to_log() -> Log {
Log {
attributes: Attributes::default(),
format: Arc::new(SimpleFormat::default()),
level: log::Level::Info,
target: None,
}
}
/// Sets the log `level` to use when logging metrics.
/// See the [log!](https://docs.rs/log/0.4.6/log/macro.log.html) documentation.
pub fn level(&self, level: log::Level) -> Self {
let mut cloned = self.clone();
cloned.level = level;
cloned
}
/// Sets the log `target` to use when logging metrics.
/// See the [log!](https://docs.rs/log/0.4.6/log/macro.log.html) documentation.
pub fn target(&self, target: &str) -> Self {
let mut cloned = self.clone();
cloned.target = Some(target.to_string());
cloned
}
}
impl WithAttributes for LogScope {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Buffered for LogScope {}
impl QueuedInput for Log {}
impl CachedInput for Log {}
impl InputScope for LogScope {
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let name = self.prefix_append(name);
let template = self.log.format.template(&name, kind);
let entries = self.entries.clone();
if self.is_buffered() {
// buffered
InputMetric::new(MetricId::forge("log", name), move |value, labels| {
let mut buffer = Vec::with_capacity(32);
match template.print(&mut buffer, value, |key| labels.lookup(key)) {
Ok(()) => {
let mut entries = write_lock!(entries);
entries.push(buffer)
}
Err(err) => debug!("Could not format buffered log metric: {}", err),
}
})
} else {
// unbuffered
let level = self.log.level;
let target = self.log.target.clone();
InputMetric::new(MetricId::forge("log", name), move |value, labels| {
let mut buffer = Vec::with_capacity(32);
match template.print(&mut buffer, value, |key| labels.lookup(key)) {
Ok(()) => {
if let Some(target) = &target {
log!(target: target, level, "{:?}", &buffer)
} else {
log!(level, "{:?}", &buffer)
}
}
Err(err) => debug!("Could not format log metric: {}", err),
}
})
}
}
}
impl Flush for LogScope {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
let mut entries = write_lock!(self.entries);
if !entries.is_empty() {
let mut buf: Vec<u8> = Vec::with_capacity(32 * entries.len());
for entry in entries.drain(..) {
writeln!(&mut buf, "{:?}", &entry)?;
}
if let Some(target) = &self.log.target {
log!(target: target, self.log.level, "{:?}", &buf)
} else {
log!(self.log.level, "{:?}", &buf)
}
}
Ok(())
}
}
impl Drop for LogScope {
fn drop(&mut self) {
if let Err(e) = self.flush() {
warn!("Could not flush log metrics on Drop. {}", e)
}
}
}
#[cfg(test)]
mod test {
use crate::input::*;
#[test]
fn test_to_log() {
let c = super::Log::to_log().metrics();
let m = c.new_metric("test".into(), InputKind::Marker);
m.write(33, labels![]);
}
}
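In buffered mode, `LogScope` collects each formatted line into `entries` and emits them as one batch on flush. The batching shape, reduced to plain `Vec`s:

```rust
fn main() {
    // Writes append formatted lines; flush drains them all at once,
    // as LogScope::flush does before a single log! call.
    let mut entries: Vec<Vec<u8>> = Vec::new();
    entries.push(b"m1 1\n".to_vec());
    entries.push(b"m2 2\n".to_vec());

    let batch: Vec<u8> = entries.drain(..).flatten().collect();
    assert!(entries.is_empty()); // drained: nothing is flushed twice
    assert_eq!(batch, b"m1 1\nm2 2\n".to_vec());
}
```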

src/output/map.rs Executable file

@ -0,0 +1,86 @@
use crate::attributes::{Attributes, MetricId, OnFlush, Prefixed, WithAttributes};
use crate::input::{Input, InputKind, InputMetric, InputScope};
use crate::name::MetricName;
use crate::{Flush, MetricValue};
use std::collections::BTreeMap;
use std::io;
use std::sync::{Arc, RwLock};
/// A BTreeMap wrapper to receive metrics or stats values.
/// Every received value for a metric replaces the previous one (if any).
#[derive(Clone, Default)]
pub struct StatsMap {
attributes: Attributes,
}
impl WithAttributes for StatsMap {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Input for StatsMap {
type SCOPE = StatsMapScope;
fn metrics(&self) -> Self::SCOPE {
StatsMapScope {
attributes: self.attributes.clone(),
inner: Arc::new(RwLock::new(BTreeMap::new())),
}
}
}
/// A BTreeMap wrapper to receive metrics or stats values.
/// Every received value for a metric replaces the previous one (if any).
#[derive(Clone, Default)]
pub struct StatsMapScope {
attributes: Attributes,
inner: Arc<RwLock<BTreeMap<String, MetricValue>>>,
}
impl WithAttributes for StatsMapScope {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl InputScope for StatsMapScope {
fn new_metric(&self, name: MetricName, _kind: InputKind) -> InputMetric {
let name = self.prefix_append(name);
let write_to = self.inner.clone();
let key: String = name.join(".");
InputMetric::new(MetricId::forge("map", name), move |value, _labels| {
let _previous = write_to.write().expect("Lock").insert(key.clone(), value);
})
}
}
impl Flush for StatsMapScope {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
Ok(())
}
}
impl From<StatsMapScope> for BTreeMap<String, MetricValue> {
fn from(map: StatsMapScope) -> Self {
// FIXME this is possibly a full map copy, for no reason.
// into_inner() is what we'd really want here but would require some `unsafe`? don't know how to do this yet.
map.inner.read().unwrap().clone()
}
}
impl StatsMapScope {
/// Extract the backing BTreeMap.
pub fn into_map(self) -> BTreeMap<String, MetricValue> {
self.into()
}
}
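`StatsMapScope` keeps only the latest value per key: each write is a plain `insert`, replacing whatever was there. The semantics in miniature:

```rust
use std::collections::BTreeMap;

fn main() {
    // Every received value for a metric replaces the previous one,
    // exactly as the insert in StatsMapScope's write closure does.
    let mut map: BTreeMap<String, i64> = BTreeMap::new();
    map.insert("my_app.gauge".into(), 1);
    let previous = map.insert("my_app.gauge".into(), 5);
    assert_eq!(previous, Some(1));
    assert_eq!(map.get("my_app.gauge"), Some(&5));
}
```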

src/output/mod.rs Executable file

@ -0,0 +1,18 @@
pub mod void;
pub mod format;
pub mod map;
pub mod stream;
pub mod log;
pub mod socket;
pub mod graphite;
pub mod statsd;
//#[cfg(feature="prometheus")]
pub mod prometheus;

src/output/prometheus.rs Executable file

@ -0,0 +1,217 @@
//! Send metrics to a Prometheus server.
use crate::attributes::{Attributes, Buffered, MetricId, OnFlush, Prefixed, WithAttributes};
use crate::input::InputKind;
use crate::input::{Input, InputMetric, InputScope};
use crate::label::Labels;
use crate::metrics;
use crate::name::MetricName;
use crate::{CachedInput, QueuedInput};
use crate::{Flush, MetricValue};
use std::sync::Arc;
#[cfg(not(feature = "parking_lot"))]
use std::sync::{RwLock, RwLockWriteGuard};
#[cfg(feature = "parking_lot")]
use parking_lot::{RwLock, RwLockWriteGuard};
use std::io;
/// Prometheus Input holds a socket to a Prometheus server.
/// The socket is shared between scopes opened from the Input.
#[derive(Clone, Debug)]
pub struct Prometheus {
attributes: Attributes,
push_url: String,
}
impl Input for Prometheus {
type SCOPE = PrometheusScope;
fn metrics(&self) -> Self::SCOPE {
PrometheusScope {
attributes: self.attributes.clone(),
buffer: Arc::new(RwLock::new(String::new())),
push_url: self.push_url.clone(),
}
}
}
impl Prometheus {
/// Send metrics to a Prometheus "push gateway" at the URL provided.
/// The URL path must include the group identifier label `job`,
/// as shown in https://github.com/prometheus/pushgateway#command-line
/// For example `http://pushgateway.example.org:9091/metrics/job/some_job`
pub fn push_to(url: &str) -> io::Result<Prometheus> {
debug!("Pushing to Prometheus {:?}", url);
Ok(Prometheus {
attributes: Attributes::default(),
push_url: url.to_string(),
})
}
}
impl WithAttributes for Prometheus {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Buffered for Prometheus {}
/// Prometheus Input
#[derive(Debug, Clone)]
pub struct PrometheusScope {
attributes: Attributes,
buffer: Arc<RwLock<String>>,
push_url: String,
}
impl InputScope for PrometheusScope {
/// Define a metric of the specified type.
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let prefix = self.prefix_prepend(name.clone()).join("_");
let scale = match kind {
// timers are in µs, but we give Prometheus milliseconds
InputKind::Timer => 1000,
_ => 1,
};
let cloned = self.clone();
let metric = PrometheusMetric { prefix, scale };
let metric_id = MetricId::forge("prometheus", name);
InputMetric::new(metric_id, move |value, labels| {
cloned.print(&metric, value, labels);
})
}
}
impl Flush for PrometheusScope {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
let buf = write_lock!(self.buffer);
self.flush_inner(buf)
}
}
impl PrometheusScope {
fn print(&self, metric: &PrometheusMetric, value: MetricValue, labels: Labels) {
let scaled_value = value / metric.scale;
let value_str = scaled_value.to_string();
let mut strbuf = String::new();
// prometheus format looks like `http_requests_total{method="post",code="200"} 1027 1395066363000`
strbuf.push_str(&metric.prefix);
let labels_map = labels.into_map();
if !labels_map.is_empty() {
strbuf.push('{');
let mut i = labels_map.into_iter();
let mut next = i.next();
while let Some((k, v)) = next {
strbuf.push_str(&k);
strbuf.push_str("=\"");
strbuf.push_str(&v);
next = i.next();
if next.is_some() {
strbuf.push_str("\",");
} else {
strbuf.push('"');
}
}
strbuf.push_str("} ");
} else {
strbuf.push(' ');
}
strbuf.push_str(&value_str);
strbuf.push('\n');
let mut buffer = write_lock!(self.buffer);
if strbuf.len() + buffer.len() > BUFFER_FLUSH_THRESHOLD {
metrics::PROMETHEUS_OVERFLOW.mark();
warn!(
"Prometheus Buffer Size Exceeded: {}",
BUFFER_FLUSH_THRESHOLD
);
let _ = self.flush_inner(buffer);
buffer = write_lock!(self.buffer);
}
buffer.push_str(&strbuf);
if !self.is_buffered() {
if let Err(e) = self.flush_inner(buffer) {
debug!("Could not send to Prometheus: {}", e)
}
}
}
fn flush_inner(&self, mut buf: RwLockWriteGuard<String>) -> io::Result<()> {
if buf.is_empty() {
return Ok(());
}
match minreq::post(self.push_url.as_str())
.with_body(buf.as_str())
.send()
{
Ok(http_result) => {
metrics::PROMETHEUS_SENT_BYTES.count(buf.len());
trace!(
"Sent {} bytes to Prometheus (resp status code: {})",
buf.len(),
http_result.status_code
);
buf.clear();
Ok(())
}
Err(e) => {
metrics::PROMETHEUS_SEND_ERR.mark();
debug!("Failed to send buffer to Prometheus: {}", e);
Err(io::Error::new(io::ErrorKind::Other, e))
}
}
}
}
impl WithAttributes for PrometheusScope {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Buffered for PrometheusScope {}
impl QueuedInput for Prometheus {}
impl CachedInput for Prometheus {}
/// It's hard to see how a single scope could get more metrics than this.
// TODO make configurable?
const BUFFER_FLUSH_THRESHOLD: usize = 65_536;
/// Key of a Prometheus metric.
#[derive(Debug, Clone)]
pub struct PrometheusMetric {
prefix: String,
scale: isize,
}
/// Any remaining buffered data is flushed on Drop.
impl Drop for PrometheusScope {
fn drop(&mut self) {
if let Err(err) = self.flush() {
warn!("Could not flush Prometheus metrics upon Drop: {}", err)
}
}
}
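The label loop in `PrometheusScope::print` assembles one exposition-format line per value, `name{k="v",...} value`, omitting the brace block entirely when there are no labels. A self-contained sketch of that assembly (the `prom_line` helper is illustrative, not dipstick API):

```rust
// Build `name{k="v",k2="v2"} value\n`, skipping the label block when empty.
fn prom_line(name: &str, labels: &[(&str, &str)], value: i64) -> String {
    let mut s = String::from(name);
    if !labels.is_empty() {
        let pairs: Vec<String> = labels
            .iter()
            .map(|(k, v)| format!("{}=\"{}\"", k, v))
            .collect();
        s.push('{');
        s.push_str(&pairs.join(","));
        s.push('}');
    }
    s.push(' ');
    s.push_str(&value.to_string());
    s.push('\n');
    s
}

fn main() {
    assert_eq!(
        prom_line("http_requests_total", &[("method", "post"), ("code", "200")], 1027),
        "http_requests_total{method=\"post\",code=\"200\"} 1027\n"
    );
    assert_eq!(prom_line("up", &[], 1), "up 1\n");
}
```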

src/socket.rs → src/output/socket.rs Normal file → Executable file

@ -1,13 +1,14 @@
use std::net::TcpStream;
use std::net::{ToSocketAddrs, SocketAddr};
use std::io;
use std::time::{Duration, Instant};
use std::fmt::{Debug, Formatter};
//! A TCP Socket wrapper that reconnects automatically.
use std::fmt;
use std::io;
use std::io::Write;
use std::net::TcpStream;
use std::net::{SocketAddr, ToSocketAddrs};
use std::time::{Duration, Instant};
const MIN_RECONNECT_DELAY_MS: u64 = 50;
const MAX_RECONNECT_DELAY_MS: u64 = 10000;
const MAX_RECONNECT_DELAY_MS: u64 = 10_000;
/// A socket that retries
pub struct RetrySocket {
@ -17,25 +18,26 @@ pub struct RetrySocket {
socket: Option<TcpStream>,
}
impl Debug for RetrySocket {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
impl fmt::Debug for RetrySocket {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.next_try.fmt(f)?;
self.socket.fmt(f)
}
}
impl RetrySocket {
/// Create a new socket that will retry
pub fn new<A: ToSocketAddrs>(addresses: A) -> io::Result<Self> {
// FIXME instead of collecting addresses early, store ToSocketAddrs as trait object
// FIXME apparently this cannot be done because of associated-type constraints (?!)
let addresses = addresses.to_socket_addrs()?.collect();
const INIT_DELAY: Duration = Duration::from_millis(MIN_RECONNECT_DELAY_MS);
let next_try = Instant::now().checked_add(INIT_DELAY).expect("init delay");
let mut socket = RetrySocket {
retries: 0,
next_try: Instant::now() - Duration::from_millis(MIN_RECONNECT_DELAY_MS),
next_try,
addresses,
socket: None
socket: None,
};
// try early connect
@@ -63,42 +65,45 @@ impl RetrySocket {
fn backoff(&mut self, e: io::Error) -> io::Error {
self.socket = None;
self.retries += 1;
let delay = MAX_RECONNECT_DELAY_MS.min(MIN_RECONNECT_DELAY_MS << self.retries);
warn!("Could not connect to {:?} after {} tries. Backing off reconnection by {}ms. {}",
self.addresses, self.retries, delay, e);
self.next_try = Instant::now() + Duration::from_millis(delay);
// double the delay with each retry
let exp_delay = MIN_RECONNECT_DELAY_MS << self.retries;
let max_delay = MAX_RECONNECT_DELAY_MS.min(exp_delay);
warn!(
"Could not connect to {:?} after {} tries. Backing off reconnection by {}ms. {}",
self.addresses, self.retries, max_delay, e
);
self.next_try = Instant::now() + Duration::from_millis(max_delay);
e
}
fn with_socket<F, T>(&mut self, operation: F) -> io::Result<T>
where F: FnOnce(&mut TcpStream) -> io::Result<T>
where
F: FnOnce(&mut TcpStream) -> io::Result<T>,
{
if let Err(e) = self.try_connect() {
return Err(self.backoff(e))
return Err(self.backoff(e));
}
let opres = if let Some(ref mut socket) = self.socket {
operation(socket)
} else {
// still none, quiescent
return Err(io::Error::from(io::ErrorKind::NotConnected))
return Err(io::Error::from(io::ErrorKind::NotConnected));
};
match opres {
Ok(r) => Ok(r),
Err(e) => Err(self.backoff(e))
Err(e) => Err(self.backoff(e)),
}
}
}
impl Write for RetrySocket {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.with_socket(|sock| sock.write(buf))
}
fn flush(&mut self) -> io::Result<()> {
self.with_socket(|sock| sock.flush())
self.with_socket(TcpStream::flush)
}
}
}
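The capped exponential backoff computed above can be checked in isolation. This sketch reuses the diff's two constants; the function name `backoff_delay_ms` is illustrative, not part of the crate's API, and the shift is clamped with `checked_shl` to avoid overflow on very high retry counts:

```rust
const MIN_RECONNECT_DELAY_MS: u64 = 50;
const MAX_RECONNECT_DELAY_MS: u64 = 10_000;

/// Delay before the next reconnection attempt: the minimum delay is doubled
/// with each retry, then capped at MAX_RECONNECT_DELAY_MS.
fn backoff_delay_ms(retries: u32) -> u64 {
    let exp_delay = MIN_RECONNECT_DELAY_MS
        .checked_shl(retries)
        .unwrap_or(u64::MAX); // shift of 64+ bits would overflow
    MAX_RECONNECT_DELAY_MS.min(exp_delay)
}

fn main() {
    assert_eq!(backoff_delay_ms(1), 100); // 50 << 1
    assert_eq!(backoff_delay_ms(4), 800); // 50 << 4
    assert_eq!(backoff_delay_ms(16), 10_000); // capped at the maximum
    assert_eq!(backoff_delay_ms(70), 10_000); // clamp also covers shift overflow
}
```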

src/output/statsd.rs Executable file

@@ -0,0 +1,290 @@
//! Send metrics to a statsd server.
use crate::attributes::{
Attributes, Buffered, MetricId, OnFlush, Prefixed, Sampled, Sampling, WithAttributes,
};
use crate::input::InputKind;
use crate::input::{Input, InputMetric, InputScope};
use crate::metrics;
use crate::name::MetricName;
use crate::pcg32;
use crate::{CachedInput, QueuedInput};
use crate::{Flush, MetricValue};
use std::fmt::Write;
use std::net::ToSocketAddrs;
use std::net::UdpSocket;
use std::sync::Arc;
#[cfg(not(feature = "parking_lot"))]
use std::sync::{RwLock, RwLockWriteGuard};
#[cfg(feature = "parking_lot")]
use parking_lot::{RwLock, RwLockWriteGuard};
use std::io;
/// Use a safe maximum size for UDP to prevent fragmentation.
// TODO make configurable?
const MAX_UDP_PAYLOAD: usize = 576;
/// Statsd Input holds a datagram (UDP) socket to a statsd server.
/// The socket is shared between scopes opened from the Input.
#[derive(Clone, Debug)]
pub struct Statsd {
attributes: Attributes,
socket: Arc<UdpSocket>,
}
impl Statsd {
/// Send metrics to a statsd server at the address and port provided.
pub fn send_to<ADDR: ToSocketAddrs>(address: ADDR) -> io::Result<Statsd> {
let socket = Arc::new(UdpSocket::bind("0.0.0.0:0")?);
socket.set_nonblocking(true)?;
socket.connect(address)?;
Ok(Statsd {
attributes: Attributes::default(),
socket,
})
}
}
impl Buffered for Statsd {}
impl Sampled for Statsd {}
impl QueuedInput for Statsd {}
impl CachedInput for Statsd {}
impl Input for Statsd {
type SCOPE = StatsdScope;
fn metrics(&self) -> Self::SCOPE {
StatsdScope {
attributes: self.attributes.clone(),
buffer: Arc::new(RwLock::new(String::with_capacity(MAX_UDP_PAYLOAD))),
socket: self.socket.clone(),
}
}
}
impl WithAttributes for Statsd {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
/// Statsd Input
#[derive(Debug, Clone)]
pub struct StatsdScope {
attributes: Attributes,
buffer: Arc<RwLock<String>>,
socket: Arc<UdpSocket>,
}
impl Sampled for StatsdScope {}
impl InputScope for StatsdScope {
/// Define a metric of the specified type.
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let mut prefix = self.prefix_prepend(name.clone()).join(".");
prefix.push(':');
let mut suffix = String::with_capacity(16);
suffix.push('|');
suffix.push_str(match kind {
InputKind::Marker | InputKind::Counter => "c",
InputKind::Gauge | InputKind::Level => "g",
InputKind::Timer => "ms",
});
let scale = match kind {
// timers are in µs, statsd wants ms
InputKind::Timer => 1000,
_ => 1,
};
let cloned = self.clone();
let metric_id = MetricId::forge("statsd", name);
if let Sampling::Random(float_rate) = self.get_sampling() {
let _ = writeln!(suffix, "|@{}", float_rate);
let int_sampling_rate = pcg32::to_int_rate(float_rate);
let metric = StatsdMetric {
prefix,
suffix,
scale,
};
InputMetric::new(metric_id, move |value, _labels| {
if pcg32::accept_sample(int_sampling_rate) {
cloned.print(&metric, value)
}
})
} else {
suffix.push('\n');
let metric = StatsdMetric {
prefix,
suffix,
scale,
};
InputMetric::new(metric_id, move |value, _labels| {
cloned.print(&metric, value)
})
}
}
}
impl Flush for StatsdScope {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
let buf = write_lock!(self.buffer);
self.flush_inner(buf)
}
}
impl StatsdScope {
fn print(&self, metric: &StatsdMetric, value: MetricValue) {
let scaled_value = value / metric.scale;
let value_str = scaled_value.to_string();
let entry_len = metric.prefix.len() + value_str.len() + metric.suffix.len();
let mut buffer = write_lock!(self.buffer);
if entry_len > buffer.capacity() {
// TODO report that the entry is too big to ever fit in the buffer
return;
}
let available = buffer.capacity() - buffer.len();
if entry_len + 1 > available {
// buffer is nearly full, make room
let _ = self.flush_inner(buffer);
buffer = write_lock!(self.buffer);
} else {
if !buffer.is_empty() {
// separate from previous entry
buffer.push('\n')
}
buffer.push_str(&metric.prefix);
buffer.push_str(&value_str);
buffer.push_str(&metric.suffix);
}
if !self.is_buffered() {
if let Err(e) = self.flush_inner(buffer) {
debug!("Could not send to statsd {}", e)
}
}
}
fn flush_inner(&self, mut buffer: RwLockWriteGuard<String>) -> io::Result<()> {
if !buffer.is_empty() {
match self.socket.send(buffer.as_bytes()) {
Ok(size) => {
metrics::STATSD_SENT_BYTES.count(size);
trace!("Sent {} bytes to statsd", buffer.len());
}
Err(e) => {
metrics::STATSD_SEND_ERR.mark();
return Err(e);
}
};
buffer.clear();
}
Ok(())
}
}
impl WithAttributes for StatsdScope {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Buffered for StatsdScope {}
/// Key of a statsd metric.
#[derive(Debug, Clone)]
pub struct StatsdMetric {
prefix: String,
suffix: String,
scale: isize,
}
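The `prefix`/`suffix`/`scale` trio above fully determines one statsd wire line (`name:value|type`, with an optional `|@rate` already folded into the suffix). A minimal sketch of how an entry is assembled; the `Line` struct and `format_entry` are illustrative stand-ins, not the crate's API:

```rust
/// Illustrative mirror of StatsdMetric's fields.
struct Line {
    prefix: String, // e.g. "app.db_query:"
    suffix: String, // e.g. "|ms", possibly followed by "|@0.5"
    scale: isize,   // divisor, e.g. 1000 to turn microseconds into milliseconds
}

/// Assemble one statsd datagram entry from a raw metric value.
fn format_entry(m: &Line, value: isize) -> String {
    format!("{}{}{}", m.prefix, value / m.scale, m.suffix)
}

fn main() {
    let timer = Line {
        prefix: "app.db_query:".into(),
        suffix: "|ms".into(),
        scale: 1000,
    };
    // a 2500 µs timing is sent as 2 ms
    assert_eq!(format_entry(&timer, 2500), "app.db_query:2|ms");

    let counter = Line {
        prefix: "app.hits:".into(),
        suffix: "|c".into(),
        scale: 1,
    };
    assert_eq!(format_entry(&counter, 7), "app.hits:7|c");
}
```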
/// Any remaining buffered data is flushed on Drop.
impl Drop for StatsdScope {
fn drop(&mut self) {
if let Err(err) = self.flush() {
warn!("Could not flush statsd metrics upon Drop: {}", err)
}
}
}
// use crate::output::format::LineOp::{ScaledValueAsText, ValueAsText};
//
// impl LineFormat for StatsdScope {
// fn template(&self, name: &MetricName, kind: InputKind) -> LineTemplate {
// let mut prefix = name.join(".");
// prefix.push(':');
//
// let mut suffix = String::with_capacity(16);
// suffix.push('|');
// suffix.push_str(match kind {
// InputKind::Marker | InputKind::Counter => "c",
// InputKind::Gauge | InputKind::Level => "g",
// InputKind::Timer => "ms",
// });
//
// // specify sampling rate if any
// if let Sampling::Random(float_rate) = self.get_sampling() {
// suffix.push_str(&format! {"|@{}\n", float_rate});
// }
//
// // scale timer values
// let op_value_text = match kind {
// // timers are in µs, statsd wants ms
// InputKind::Timer => ScaledValueAsText(1000.0),
// _ => ValueAsText,
// };
//
// LineTemplate::new(vec![
// LineOp::Literal(prefix.into_bytes()),
// op_value_text,
// LineOp::Literal(suffix.into_bytes()),
// LineOp::NewLine,
// ])
// }
// }
#[cfg(feature = "bench")]
mod bench {
use super::*;
use crate::attributes::*;
use crate::input::*;
#[bench]
pub fn immediate_statsd(b: &mut test::Bencher) {
let sd = Statsd::send_to("localhost:2003").unwrap().metrics();
let timer = sd.new_metric("timer".into(), InputKind::Timer);
b.iter(|| test::black_box(timer.write(2000, labels![])));
}
#[bench]
pub fn buffering_statsd(b: &mut test::Bencher) {
let sd = Statsd::send_to("localhost:2003")
.unwrap()
.buffered(Buffering::BufferSize(65465))
.metrics();
let timer = sd.new_metric("timer".into(), InputKind::Timer);
b.iter(|| test::black_box(timer.write(2000, labels![])));
}
}

src/output/stream.rs Executable file

@@ -0,0 +1,257 @@
//! Standard stateless metric Inputs.
// TODO parameterize templates
use crate::attributes::{Attributes, Buffered, MetricId, OnFlush, Prefixed, WithAttributes};
use crate::input::InputKind;
use crate::name::MetricName;
use crate::Flush;
use crate::{CachedInput, QueuedInput};
use std::fs::{File, OpenOptions};
use std::io::{self, Write};
use std::path::Path;
use std::sync::Arc;
#[cfg(not(feature = "parking_lot"))]
use std::sync::RwLock;
#[cfg(feature = "parking_lot")]
use parking_lot::RwLock;
use crate::{Formatting, Input, InputMetric, InputScope, LineFormat, SimpleFormat};
/// Buffered metrics text Input.
pub struct Stream<W: Write + Send + Sync + 'static> {
attributes: Attributes,
format: Arc<dyn LineFormat + Send + Sync>,
inner: Arc<RwLock<W>>,
}
impl<W: Write + Send + Sync + 'static> QueuedInput for Stream<W> {}
impl<W: Write + Send + Sync + 'static> CachedInput for Stream<W> {}
impl<W: Write + Send + Sync + 'static> Formatting for Stream<W> {
fn formatting(&self, format: impl LineFormat + 'static) -> Self {
let mut cloned = self.clone();
cloned.format = Arc::new(format);
cloned
}
}
impl<W: Write + Send + Sync + 'static> Stream<W> {
/// Write metric values to provided Write target.
pub fn write_to(write: W) -> Stream<W> {
Stream {
attributes: Attributes::default(),
format: Arc::new(SimpleFormat::default()),
inner: Arc::new(RwLock::new(write)),
}
}
}
impl Stream<File> {
/// Write metric values to a file.
#[deprecated(since = "0.8.0", note = "Use write_to_file()")]
#[allow(clippy::wrong_self_convention)]
pub fn to_file<P: AsRef<Path>>(file: P) -> io::Result<Stream<File>> {
Self::write_to_file(file)
}
/// Write metric values to a file.
pub fn write_to_file<P: AsRef<Path>>(file: P) -> io::Result<Stream<File>> {
let file = OpenOptions::new()
.write(true)
.create(true)
.append(true)
.open(file)?;
Ok(Stream::write_to(file))
}
/// Write metrics to a new file.
///
/// Creates a new file to dump data into. If `clobber` is set to true, it allows overwriting
/// existing file, if false, the attempt will result in an error.
#[deprecated(since = "0.8.0", note = "Use write_to_new_file()")]
#[allow(clippy::wrong_self_convention)]
pub fn to_new_file<P: AsRef<Path>>(file: P, clobber: bool) -> io::Result<Stream<File>> {
Self::write_to_new_file(file, clobber)
}
/// Write metrics to a new file.
///
/// Creates a new file to dump data into. If `clobber` is set to true, it allows overwriting
/// existing file, if false, the attempt will result in an error.
pub fn write_to_new_file<P: AsRef<Path>>(file: P, clobber: bool) -> io::Result<Stream<File>> {
let file = OpenOptions::new()
.write(true)
.create(true)
.create_new(!clobber)
.open(file)?;
Ok(Stream::write_to(file))
}
}
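`write_to_new_file` maps its `clobber` flag onto `OpenOptions::create_new(!clobber)`: exclusive creation fails with `AlreadyExists` when the file is present. The std semantics can be checked directly; `open_for_metrics` is an illustrative stand-in, not the crate's API:

```rust
use std::fs::{self, File, OpenOptions};
use std::io::ErrorKind;
use std::path::Path;

/// Open `path` for writing. With clobber = false, create_new(true) makes the
/// open exclusive: it fails if the file already exists.
fn open_for_metrics(path: &Path, clobber: bool) -> std::io::Result<File> {
    OpenOptions::new()
        .write(true)
        .create(true)
        .create_new(!clobber)
        .open(path)
}

fn main() {
    let path = std::env::temp_dir().join("dipstick_clobber_demo.txt");
    let _ = fs::remove_file(&path);

    open_for_metrics(&path, false).unwrap(); // first exclusive create succeeds
    // a second exclusive create must fail
    let err = open_for_metrics(&path, false).unwrap_err();
    assert_eq!(err.kind(), ErrorKind::AlreadyExists);
    // clobber = true opens the existing file without error
    open_for_metrics(&path, true).unwrap();
    let _ = fs::remove_file(&path);
}
```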
impl Stream<io::Stderr> {
/// Write metric values to stderr.
#[deprecated(since = "0.8.0", note = "Use write_to_stderr()")]
pub fn to_stderr() -> Stream<io::Stderr> {
Stream::write_to(io::stderr())
}
/// Write metric values to stderr.
pub fn write_to_stderr() -> Stream<io::Stderr> {
Stream::write_to(io::stderr())
}
}
impl Stream<io::Stdout> {
/// Write metric values to stdout.
#[deprecated(since = "0.8.0", note = "Use write_to_stdout()")]
pub fn to_stdout() -> Stream<io::Stdout> {
Stream::write_to(io::stdout())
}
/// Write metric values to stdout.
pub fn write_to_stdout() -> Stream<io::Stdout> {
Stream::write_to(io::stdout())
}
}
// FIXME manual Clone impl required because derive(Clone) wrongly demands W: Clone (https://github.com/rust-lang/rust/issues/26925)
impl<W: Write + Send + Sync + 'static> Clone for Stream<W> {
fn clone(&self) -> Self {
Stream {
attributes: self.attributes.clone(),
format: self.format.clone(),
inner: self.inner.clone(),
}
}
}
impl<W: Write + Send + Sync + 'static> WithAttributes for Stream<W> {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl<W: Write + Send + Sync + 'static> Buffered for Stream<W> {}
impl<W: Write + Send + Sync + 'static> Input for Stream<W> {
type SCOPE = TextScope<W>;
fn metrics(&self) -> Self::SCOPE {
TextScope {
attributes: self.attributes.clone(),
entries: Arc::new(RwLock::new(Vec::new())),
input: self.clone(),
}
}
}
/// A scope for text metrics.
pub struct TextScope<W: Write + Send + Sync + 'static> {
attributes: Attributes,
entries: Arc<RwLock<Vec<Vec<u8>>>>,
input: Stream<W>,
}
impl<W: Write + Send + Sync + 'static> Clone for TextScope<W> {
fn clone(&self) -> Self {
TextScope {
attributes: self.attributes.clone(),
entries: self.entries.clone(),
input: self.input.clone(),
}
}
}
impl<W: Write + Send + Sync + 'static> WithAttributes for TextScope<W> {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl<W: Write + Send + Sync + 'static> Buffered for TextScope<W> {}
impl<W: Write + Send + Sync + 'static> InputScope for TextScope<W> {
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let name = self.prefix_append(name);
let template = self.input.format.template(&name, kind);
let entries = self.entries.clone();
let metric_id = MetricId::forge("stream", name);
if self.is_buffered() {
InputMetric::new(metric_id, move |value, labels| {
let mut buffer = Vec::with_capacity(32);
match template.print(&mut buffer, value, |key| labels.lookup(key)) {
Ok(()) => {
let mut entries = write_lock!(entries);
entries.push(buffer)
}
Err(err) => debug!("{}", err),
}
})
} else {
// unbuffered
let input = self.input.clone();
InputMetric::new(metric_id, move |value, labels| {
let mut buffer = Vec::with_capacity(32);
match template.print(&mut buffer, value, |key| labels.lookup(key)) {
Ok(()) => {
let mut input = write_lock!(input.inner);
if let Err(e) = input.write_all(&buffer).and_then(|_| input.flush()) {
debug!("Could not write text metrics: {}", e)
}
}
Err(err) => debug!("{}", err),
}
})
}
}
}
impl<W: Write + Send + Sync + 'static> Flush for TextScope<W> {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
let mut entries = write_lock!(self.entries);
if !entries.is_empty() {
let mut input = write_lock!(self.input.inner);
for entry in entries.drain(..) {
input.write_all(&entry)?
}
input.flush()?;
}
Ok(())
}
}
impl<W: Write + Send + Sync + 'static> Drop for TextScope<W> {
fn drop(&mut self) {
if let Err(e) = self.flush() {
warn!("Could not flush text metrics on Drop. {}", e)
}
}
}
#[cfg(test)]
mod test {
use super::*;
use crate::input::InputKind;
use std::io;
#[test]
fn sink_print() {
let c = super::Stream::write_to(io::stdout()).metrics();
let m = c.new_metric("test".into(), InputKind::Marker);
m.write(33, labels![]);
}
}

src/output/void.rs Executable file

@@ -0,0 +1,68 @@
use crate::name::MetricName;
use crate::Flush;
use crate::attributes::MetricId;
use crate::{Input, InputDyn, InputKind, InputMetric, InputScope};
use std::io;
use std::sync::Arc;
lazy_static! {
/// The reference instance identifying an uninitialized metric config.
pub static ref VOID_INPUT: Arc<dyn InputDyn + Send + Sync> = Arc::new(Void::new());
/// The reference instance identifying an uninitialized metric scope.
pub static ref NO_METRIC_SCOPE: Arc<dyn InputScope + Send + Sync> = VOID_INPUT.input_dyn();
}
/// Discard metrics Input.
#[derive(Clone, Default)]
pub struct Void {}
/// Discard metrics Input.
#[derive(Clone)]
pub struct VoidInput {}
impl Void {
/// Void metrics builder.
#[deprecated(since = "0.7.2", note = "Use new()")]
pub fn metrics() -> Self {
Self::new()
}
/// Void metrics builder.
pub fn new() -> Self {
Void {}
}
}
impl Input for Void {
type SCOPE = VoidInput;
fn metrics(&self) -> Self::SCOPE {
VoidInput {}
}
}
impl InputScope for VoidInput {
fn new_metric(&self, name: MetricName, _kind: InputKind) -> InputMetric {
InputMetric::new(MetricId::forge("void", name), |_value, _labels| {})
}
}
impl Flush for VoidInput {
fn flush(&self) -> io::Result<()> {
Ok(())
}
}
#[cfg(test)]
pub mod test {
use super::*;
#[test]
fn test_to_void() {
let c = Void::new().metrics();
let m = c.new_metric("test".into(), InputKind::Marker);
m.write(33, labels![]);
}
}


@@ -1,17 +1,19 @@
//! PCG32 random number generation for fast sampling
//! Kept here for low dependency count.
#![cfg_attr(feature = "tool_lints", allow(clippy::unreadable_literal))]
#![allow(clippy::unreadable_literal)]
// TODO use https://github.com/codahale/pcg instead?
use std::cell::RefCell;
use time;
fn seed() -> u64 {
let seed = 5573589319906701683_u64;
let seed = seed.wrapping_mul(6364136223846793005)
let seed = seed
.wrapping_mul(6364136223846793005)
.wrapping_add(1442695040888963407)
.wrapping_add(time::precise_time_ns());
seed.wrapping_mul(6364136223846793005).wrapping_add(
1442695040888963407,
)
seed.wrapping_mul(6364136223846793005)
.wrapping_add(1442695040888963407)
}
/// quickly return a random int
@@ -21,12 +23,12 @@ fn pcg32_random() -> u32 {
}
PCG32_STATE.with(|state| {
let oldstate: u64 = *state.borrow();
let old_state: u64 = *state.borrow();
// XXX could generate the increment from the thread ID
*state.borrow_mut() = oldstate.wrapping_mul(6364136223846793005).wrapping_add(
1442695040888963407,
);
((((oldstate >> 18) ^ oldstate) >> 27) as u32).rotate_right((oldstate >> 59) as u32)
*state.borrow_mut() = old_state
.wrapping_mul(6364136223846793005)
.wrapping_add(1442695040888963407);
((((old_state >> 18) ^ old_state) >> 27) as u32).rotate_right((old_state >> 59) as u32)
})
}
@@ -37,8 +39,8 @@ fn pcg32_random() -> u32 {
/// all | 1.0 | 0x0 | 100%
/// none | 0.0 | 0xFFFFFFFF | 0%
pub fn to_int_rate(float_rate: f64) -> u32 {
assert!(float_rate <= 1.0 && float_rate >= 0.0);
((1.0 - float_rate) * ::std::u32::MAX as f64) as u32
assert!((0.0..=1.0).contains(&float_rate));
((1.0 - float_rate) * f64::from(u32::MAX)) as u32
}
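`to_int_rate` inverts the float probability so a sample can be accepted by comparing a raw 32-bit random draw against a precomputed threshold. A standalone sketch; the random value is passed in explicitly since the crate's PCG32 state is thread-local, and the strict `>` comparison in `accept_sample` is an assumption about the crate's internals:

```rust
/// Map a float sampling rate (0.0..=1.0) onto an integer threshold,
/// mirroring to_int_rate above: 1.0 -> 0 (sample all), 0.0 -> u32::MAX (sample none).
fn to_int_rate(float_rate: f64) -> u32 {
    assert!((0.0..=1.0).contains(&float_rate));
    ((1.0 - float_rate) * f64::from(u32::MAX)) as u32
}

/// A sample is accepted when the random draw exceeds the threshold
/// (assumed comparison; illustrative only).
fn accept_sample(int_rate: u32, random: u32) -> bool {
    random > int_rate
}

fn main() {
    assert_eq!(to_int_rate(1.0), 0); // lowest threshold: nearly every draw passes
    assert_eq!(to_int_rate(0.0), u32::MAX); // highest threshold: no draw can pass
    assert!(accept_sample(to_int_rate(1.0), 1));
    assert!(!accept_sample(to_int_rate(0.0), u32::MAX));
}
```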
/// randomly select samples based on an int rate

src/proxy.rs Executable file

@@ -0,0 +1,300 @@
//! Decouple metric definition from configuration with trait objects.
use crate::attributes::{Attributes, MetricId, OnFlush, Prefixed, WithAttributes};
use crate::input::{InputKind, InputMetric, InputScope};
use crate::name::{MetricName, NameParts};
use crate::output::void::VOID_INPUT;
use crate::Flush;
use std::collections::{BTreeMap, HashMap};
use std::sync::{Arc, Weak};
use std::{fmt, io};
#[cfg(not(feature = "parking_lot"))]
use std::sync::RwLock;
#[cfg(feature = "parking_lot")]
use parking_lot::RwLock;
use atomic_refcell::*;
lazy_static! {
/// Root of the default metrics proxy, usable by all libraries and apps.
/// Libraries should create their metrics into sub subspaces of this.
/// Applications should configure on startup where the proxied metrics should go.
/// Exceptionally, one can create one's own ProxyInput root, separate from this one.
static ref ROOT_PROXY: Proxy = Proxy::new();
}
/// A dynamically proxied metric.
#[derive(Debug)]
struct ProxyMetric {
// basic info for this metric, needed to recreate new corresponding trait object if target changes
name: NameParts,
kind: InputKind,
// the metric trait object to proxy metric values to
// the second part can be up to namespace.len() + 1 if this metric was individually targeted
// 0 if no target assigned
target: AtomicRefCell<(InputMetric, usize)>,
// a reference to the parent proxy, from which the metric is removed when dropped
proxy: Arc<RwLock<InnerProxy>>,
}
/// The dispatcher's weak ref does not prevent dropping, but the dead entry still needs to be cleaned out.
impl Drop for ProxyMetric {
fn drop(&mut self) {
write_lock!(self.proxy).drop_metric(&self.name)
}
}
/// A dynamic proxy for app and lib metrics.
/// Decouples metrics definition from backend configuration.
/// Allows defining metrics before a concrete type is configured.
/// Allows replacing metrics backend on the fly at runtime.
#[derive(Clone, Debug)]
pub struct Proxy {
attributes: Attributes,
inner: Arc<RwLock<InnerProxy>>,
}
impl Default for Proxy {
/// Return the default root metric proxy.
fn default() -> Self {
ROOT_PROXY.clone()
}
}
struct InnerProxy {
// namespaces can target one, many or no metrics
targets: HashMap<NameParts, Arc<dyn InputScope + Send + Sync>>,
// last part of the namespace is the metric's name
metrics: BTreeMap<NameParts, Weak<ProxyMetric>>,
}
impl fmt::Debug for InnerProxy {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "metrics: {:?}", self.metrics.keys())?;
write!(f, "targets: {:?}", self.targets.keys())
}
}
impl InnerProxy {
fn new() -> Self {
Self {
targets: HashMap::new(),
metrics: BTreeMap::new(),
}
}
fn set_target(
&mut self,
namespace: &NameParts,
target_scope: Arc<dyn InputScope + Send + Sync>,
) {
self.targets.insert(namespace.clone(), target_scope.clone());
for (metric_name, metric) in self.metrics.range_mut(namespace.clone()..) {
if let Some(metric) = metric.upgrade() {
// check for range end
if !metric_name.is_within(namespace) {
break;
}
// check if metric targeted by _lower_ namespace
if metric.target.borrow().1 > namespace.len() {
continue;
}
let target_metric = target_scope.new_metric(metric.name.short(), metric.kind);
*metric.target.borrow_mut() = (target_metric, namespace.len());
}
}
}
fn get_effective_target(
&self,
namespace: &NameParts,
) -> Option<(Arc<dyn InputScope + Send + Sync>, usize)> {
if let Some(target) = self.targets.get(namespace) {
return Some((target.clone(), namespace.len()));
}
// no 1:1 match, scan upper namespaces
let mut name = namespace.clone();
while let Some(_popped) = name.pop_back() {
if let Some(target) = self.targets.get(&name) {
return Some((target.clone(), name.len()));
}
}
None
}
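The upward scan in `get_effective_target` amounts to a longest-prefix match over namespace parts: try the exact namespace first, then pop parts off the back until a registered target is found. A self-contained sketch, with `Vec<String>` standing in for `NameParts` and `&str` for the target scope (names illustrative, not the crate's API):

```rust
use std::collections::HashMap;

/// Find the target registered for the longest prefix of `namespace`,
/// returning it along with the matched prefix length.
fn effective_target<'a>(
    targets: &'a HashMap<Vec<String>, &'a str>,
    namespace: &[String],
) -> Option<(&'a str, usize)> {
    let mut name = namespace.to_vec();
    loop {
        if let Some(target) = targets.get(&name) {
            return Some((*target, name.len()));
        }
        if name.pop().is_none() {
            return None; // scanned all the way past the root: no target
        }
    }
}

fn main() {
    let mut targets = HashMap::new();
    targets.insert(vec!["app".to_string()], "statsd");
    targets.insert(vec!["app".to_string(), "db".to_string()], "prometheus");

    // deepest registered prefix wins
    let q = vec!["app".to_string(), "db".to_string(), "query_time".to_string()];
    assert_eq!(effective_target(&targets, &q), Some(("prometheus", 2)));

    // falls back to the upper namespace when no closer target exists
    let q2 = vec!["app".to_string(), "net".to_string()];
    assert_eq!(effective_target(&targets, &q2), Some(("statsd", 1)));

    assert_eq!(effective_target(&targets, &["other".to_string()]), None);
}
```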
fn unset_target(&mut self, namespace: &NameParts) {
if self.targets.remove(namespace).is_none() {
// nothing to do
return;
}
let (up_target, up_nslen) = self
.get_effective_target(namespace)
.unwrap_or_else(|| (VOID_INPUT.input_dyn(), 0));
// update all affected metrics to next upper targeted namespace
for (name, metric) in self.metrics.range_mut(namespace.clone()..) {
// check for range end
if !name.is_within(namespace) {
break;
}
if let Some(metric) = metric.upgrade() {
// check if metric targeted by _lower_ namespace
if metric.target.borrow().1 > namespace.len() {
continue;
}
let new_metric = up_target.new_metric(name.short(), metric.kind);
*metric.target.borrow_mut() = (new_metric, up_nslen);
}
}
}
fn drop_metric(&mut self, name: &NameParts) {
if self.metrics.remove(name).is_none() {
panic!("Could not remove DelegatingMetric weak ref from delegation point")
}
}
fn flush(&self, namespace: &NameParts) -> io::Result<()> {
if let Some((target, _nslen)) = self.get_effective_target(namespace) {
target.flush()
} else {
Ok(())
}
}
}
impl Proxy {
/// Create a new "private" metric proxy root. This is usually not what you want.
/// Since this proxy will not be part of the standard proxy tree,
/// it will need to be configured independently, and since downstream code may not know of
/// its existence this may never happen, so metrics would not be proxied anywhere.
/// If you want to use the standard proxy tree, use #metric_proxy() instead.
pub fn new() -> Self {
Proxy {
attributes: Attributes::default(),
inner: Arc::new(RwLock::new(InnerProxy::new())),
}
}
/// Replace target for this proxy and its children.
#[deprecated(since = "0.7.2", note = "Use target()")]
pub fn set_target<T: InputScope + Send + Sync + 'static>(&self, target: T) {
self.target(target)
}
/// Replace target for this proxy and its children.
pub fn target<T: InputScope + Send + Sync + 'static>(&self, target: T) {
write_lock!(self.inner).set_target(self.get_prefixes(), Arc::new(target))
}
/// Replace target for this proxy and its children.
pub fn unset_target(&self) {
write_lock!(self.inner).unset_target(self.get_prefixes())
}
/// Install a new default target for all proxies.
#[deprecated(since = "0.7.2", note = "Use default_target()")]
pub fn set_default_target<T: InputScope + Send + Sync + 'static>(target: T) {
Self::default_target(target)
}
/// Install a new default target for all proxies.
pub fn default_target<T: InputScope + Send + Sync + 'static>(target: T) {
ROOT_PROXY.target(target)
}
/// Revert to initial state any installed default target for all proxies.
pub fn unset_default_target(&self) {
ROOT_PROXY.unset_target()
}
}
impl<S: AsRef<str>> From<S> for Proxy {
fn from(name: S) -> Proxy {
Proxy::new().named(name.as_ref())
}
}
impl InputScope for Proxy {
/// Lookup or create a proxy stub for the requested metric.
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let name: MetricName = self.prefix_append(name);
let mut inner = write_lock!(self.inner);
let proxy = inner
.metrics
.get(&name)
// TODO validate that InputKind matches existing
.and_then(Weak::upgrade)
.unwrap_or_else(|| {
let namespace = &*name;
{
// not found, define new
let (target, target_namespace_length) = inner
.get_effective_target(namespace)
.unwrap_or_else(|| (VOID_INPUT.input_dyn(), 0));
let metric_object = target.new_metric(namespace.short(), kind);
let proxy = Arc::new(ProxyMetric {
name: namespace.clone(),
kind,
target: AtomicRefCell::new((metric_object, target_namespace_length)),
proxy: self.inner.clone(),
});
inner
.metrics
.insert(namespace.clone(), Arc::downgrade(&proxy));
proxy
}
});
InputMetric::new(MetricId::forge("proxy", name), move |value, labels| {
proxy.target.borrow().0.write(value, labels)
})
}
}
impl Flush for Proxy {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
write_lock!(self.inner).flush(self.get_prefixes())
}
}
impl WithAttributes for Proxy {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
#[cfg(feature = "bench")]
mod bench {
use super::*;
use crate::AtomicBucket;
#[bench]
fn proxy_marker_to_aggregate(b: &mut test::Bencher) {
ROOT_PROXY.target(AtomicBucket::new());
let metric = ROOT_PROXY.marker("event_a");
b.iter(|| test::black_box(metric.mark()));
}
#[bench]
fn proxy_marker_to_void(b: &mut test::Bencher) {
let metric = ROOT_PROXY.marker("event_a");
b.iter(|| test::black_box(metric.mark()));
}
}


@@ -1,150 +0,0 @@
//! Publishes metrics values from a source to a sink.
//!
//! Publishing can be done on request:
//! ```
//! use dipstick::*;
//!
//! let (sink, source) = aggregate();
//! publish(&source, &log("aggregated"), publish::all_stats);
//! ```
//!
//! Publishing can be scheduled to run recurrently.
//! ```
//! use dipstick::*;
//! use std::time::Duration;
//!
//! let (sink, source) = aggregate();
//! let job = publish_every(Duration::from_millis(100), &source, &log("aggregated"), publish::all_stats);
//! // publish will go on until cancelled
//! job.cancel();
//! ```
use core::*;
use core::Kind::*;
use scores::{ScoreType, ScoreSnapshot};
use scores::ScoreType::*;
use std::fmt::Debug;
/// A trait to publish metrics.
pub trait Publish: Send + Sync + Debug {
/// Publish the provided metrics data downstream.
fn publish(&self, scores: Vec<ScoreSnapshot>);
}
/// Define and write metrics from aggregated scores to the target channel
/// If this is called repeatedly it can be a good idea to use the metric cache
/// to prevent new metrics from being created every time.
#[derive(Derivative, Clone)]
#[derivative(Debug)]
pub struct Publisher<E, M> {
#[derivative(Debug = "ignore")]
statistics: Box<E>,
target_chain: Chain<M>,
}
impl<E, M> Publisher<E, M>
where
E: Fn(Kind, &str, ScoreType) -> Option<(Kind, Vec<&str>, Value)> + Send + Sync + 'static,
M: Clone + Send + Sync + 'static,
{
/// Define a new metrics publishing strategy, from a transformation
/// function and a target metric chain.
pub fn new(stat_fn: E, target_chain: Chain<M>) -> Self {
Publisher {
statistics: Box::new(stat_fn),
target_chain,
}
}
}
impl<E, M> Publish for Publisher<E, M>
where
M: Clone + Send + Sync + Debug,
E: Fn(Kind, &str, ScoreType) -> Option<(Kind, Vec<&str>, Value)> + Send + Sync + 'static,
{
fn publish(&self, snapshot: Vec<ScoreSnapshot>) {
let publish_scope_fn = self.target_chain.open_scope(false);
if snapshot.is_empty() {
// no data was collected for this period
// TODO repeat previous frame min/max ?
// TODO update some canary metric ?
} else {
for metric in snapshot {
for score in metric.2 {
if let Some(ex) = self.statistics.as_ref()(metric.0, metric.1.as_ref(), score) {
let temp_metric = self.target_chain.define_metric(ex.0, &ex.1.concat(), 1.0);
publish_scope_fn(ScopeCmd::Write(&temp_metric, ex.2));
}
}
}
}
// TODO parameterize whether to keep ad-hoc metrics after publish
// source.cleanup();
publish_scope_fn(ScopeCmd::Flush)
}
}
/// A predefined export strategy reporting all aggregated stats for all metric types.
/// Resulting stats are named by appending a short suffix to each metric's name.
pub fn all_stats(kind: Kind, name: &str, score: ScoreType) -> Option<(Kind, Vec<&str>, Value)> {
match score {
Count(hit) => Some((Counter, vec![name, ".count"], hit)),
Sum(sum) => Some((kind, vec![name, ".sum"], sum)),
Mean(mean) => Some((kind, vec![name, ".mean"], mean.round() as Value)),
Max(max) => Some((Gauge, vec![name, ".max"], max)),
Min(min) => Some((Gauge, vec![name, ".min"], min)),
Rate(rate) => Some((Gauge, vec![name, ".rate"], rate.round() as Value))
}
}
/// A predefined export strategy reporting the average value for every non-marker metric.
/// Marker metrics export their hit count instead.
/// Since there is only one stat per metric, there is no risk of collision
/// and so exported stats copy their metric's name.
pub fn average(kind: Kind, name: &str, score: ScoreType) -> Option<(Kind, Vec<&str>, Value)> {
match kind {
Marker => {
match score {
Count(count) => Some((Counter, vec![name], count)),
_ => None,
}
}
_ => {
match score {
Mean(avg) => Some((Gauge, vec![name], avg.round() as Value)),
_ => None,
}
}
}
}
/// A predefined single-stat-per-metric export strategy:
/// - Timers and Counters each export their sums
/// - Markers each export their hit count
/// - Gauges each export their average
/// Since there is only one stat per metric, there is no risk of collision
/// and so exported stats copy their metric's name.
pub fn summary(kind: Kind, name: &str, score: ScoreType) -> Option<(Kind, Vec<&str>, Value)> {
match kind {
Marker => {
match score {
Count(count) => Some((Counter, vec![name], count)),
_ => None,
}
}
Counter | Timer => {
match score {
Sum(sum) => Some((kind, vec![name], sum)),
_ => None,
}
}
Gauge => {
match score {
Mean(mean) => Some((Gauge, vec![name], mean.round() as Value)),
_ => None,
}
}
}
}

src/queue.rs Executable file

@@ -0,0 +1,211 @@
//! Queue metric writes on a separate thread.
//! Metric definitions remain synchronous.
//! If the queue size is exceeded, calling code reverts to blocking.
use crate::attributes::{Attributes, MetricId, OnFlush, Prefixed, WithAttributes};
use crate::input::{Input, InputDyn, InputKind, InputMetric, InputScope};
use crate::label::Labels;
use crate::metrics;
use crate::name::MetricName;
use crate::CachedInput;
use crate::{Flush, MetricValue};
#[cfg(not(feature = "crossbeam-channel"))]
use std::sync::mpsc;
use std::sync::Arc;
use std::{io, thread};
#[cfg(feature = "crossbeam-channel")]
use crossbeam_channel as crossbeam;
/// Wrap this output behind an asynchronous metrics dispatch queue.
/// This is not strictly required for multithreading since the provided scopes
/// are already `Send + Sync`, but it may be desirable to lower write latency.
pub trait QueuedInput: Input + Send + Sync + 'static + Sized {
/// Wrap this output with an asynchronous dispatch queue of specified length.
fn queued(self, max_size: usize) -> InputQueue {
InputQueue::new(self, max_size)
}
}
/// # Panics
///
/// Panics if the OS fails to create a thread.
#[cfg(not(feature = "crossbeam-channel"))]
fn new_async_channel(length: usize) -> Arc<mpsc::SyncSender<InputQueueCmd>> {
let (sender, receiver) = mpsc::sync_channel::<InputQueueCmd>(length);
thread::Builder::new()
.name("dipstick-queue-in".to_string())
.spawn(move || {
let mut done = false;
while !done {
match receiver.recv() {
Ok(InputQueueCmd::Write(metric, value, labels)) => metric.write(value, labels),
Ok(InputQueueCmd::Flush(scope)) => {
if let Err(e) = scope.flush() {
debug!("Could not asynchronously flush metrics: {}", e);
}
}
Err(e) => {
debug!("Async metrics receive loop terminated: {}", e);
// cannot break out of the loop from inside the match; set a flag instead
done = true
}
}
}
})
.unwrap(); // TODO: Panic, change API to return Result?
Arc::new(sender)
}
/// # Panics
///
/// Panics if the OS fails to create a thread.
#[cfg(feature = "crossbeam-channel")]
fn new_async_channel(length: usize) -> Arc<crossbeam::Sender<InputQueueCmd>> {
let (sender, receiver) = crossbeam::bounded::<InputQueueCmd>(length);
thread::Builder::new()
.name("dipstick-queue-in".to_string())
.spawn(move || {
let mut done = false;
while !done {
match receiver.recv() {
Ok(InputQueueCmd::Write(metric, value, labels)) => metric.write(value, labels),
Ok(InputQueueCmd::Flush(scope)) => {
if let Err(e) = scope.flush() {
debug!("Could not asynchronously flush metrics: {}", e);
}
}
Err(e) => {
debug!("Async metrics receive loop terminated: {}", e);
// cannot break out of the loop from inside the match; set a flag instead
done = true
}
}
}
})
.unwrap(); // TODO: Panic, change API to return Result?
Arc::new(sender)
}
/// Wrap new scopes with an asynchronous metric write & flush dispatcher.
#[derive(Clone)]
pub struct InputQueue {
attributes: Attributes,
target: Arc<dyn InputDyn + Send + Sync + 'static>,
#[cfg(not(feature = "crossbeam-channel"))]
sender: Arc<mpsc::SyncSender<InputQueueCmd>>,
#[cfg(feature = "crossbeam-channel")]
sender: Arc<crossbeam::Sender<InputQueueCmd>>,
}
impl InputQueue {
/// Wrap new scopes with an asynchronous metric write & flush dispatcher.
pub fn new<OUT: Input + Send + Sync + 'static>(target: OUT, queue_length: usize) -> Self {
InputQueue {
attributes: Attributes::default(),
target: Arc::new(target),
sender: new_async_channel(queue_length),
}
}
}
impl CachedInput for InputQueue {}
impl WithAttributes for InputQueue {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl Input for InputQueue {
type SCOPE = InputQueueScope;
/// Wrap new scopes with an asynchronous metric write & flush dispatcher.
fn metrics(&self) -> Self::SCOPE {
let target_scope = self.target.input_dyn();
InputQueueScope {
attributes: self.attributes.clone(),
sender: self.sender.clone(),
target: target_scope,
}
}
}
/// This is only `pub` because the `error` module needs to know about it.
/// Async commands should be of no concern to applications.
pub enum InputQueueCmd {
/// Send metric write
Write(InputMetric, MetricValue, Labels),
/// Send metric flush
Flush(Arc<dyn InputScope + Send + Sync + 'static>),
}
/// A metric scope wrapper that sends writes & flushes over a Rust sync channel.
/// Commands are executed by a background thread.
#[derive(Clone)]
pub struct InputQueueScope {
attributes: Attributes,
#[cfg(not(feature = "crossbeam-channel"))]
sender: Arc<mpsc::SyncSender<InputQueueCmd>>,
#[cfg(feature = "crossbeam-channel")]
sender: Arc<crossbeam::Sender<InputQueueCmd>>,
target: Arc<dyn InputScope + Send + Sync + 'static>,
}
impl InputQueueScope {
/// Wrap new scopes with an asynchronous metric write & flush dispatcher.
pub fn wrap<SC: InputScope + Send + Sync + 'static>(
target_scope: SC,
queue_length: usize,
) -> Self {
InputQueueScope {
attributes: Attributes::default(),
sender: new_async_channel(queue_length),
target: Arc::new(target_scope),
}
}
}
impl WithAttributes for InputQueueScope {
fn get_attributes(&self) -> &Attributes {
&self.attributes
}
fn mut_attributes(&mut self) -> &mut Attributes {
&mut self.attributes
}
}
impl InputScope for InputQueueScope {
fn new_metric(&self, name: MetricName, kind: InputKind) -> InputMetric {
let name = self.prefix_append(name);
let target_metric = self.target.new_metric(name.clone(), kind);
let sender = self.sender.clone();
InputMetric::new(MetricId::forge("queue", name), move |value, mut labels| {
labels.save_context();
if let Err(e) = sender.send(InputQueueCmd::Write(target_metric.clone(), value, labels))
{
metrics::SEND_FAILED.mark();
debug!("Failed to send async metrics: {}", e);
}
})
}
}
impl Flush for InputQueueScope {
fn flush(&self) -> io::Result<()> {
self.notify_flush_listeners();
if let Err(e) = self.sender.send(InputQueueCmd::Flush(self.target.clone())) {
metrics::SEND_FAILED.mark();
debug!("Failed to flush async metrics: {}", e);
Err(io::Error::new(io::ErrorKind::Other, e))
} else {
Ok(())
}
}
}


@ -1,48 +0,0 @@
//! Reduce the amount of data to process or transfer by statistically dropping some of it.
use core::*;
use pcg32;
use std::sync::Arc;
/// The metric sampling key also holds the sampling rate to apply to it.
#[derive(Debug, Clone)]
pub struct Sample<M> {
target: M,
int_sampling_rate: u32,
}
/// Perform random sampling of values according to the specified rate.
pub fn sample<M, IC>(sampling_rate: Rate, chain: IC) -> Chain<Sample<M>>
where
M: Clone + Send + Sync + 'static,
IC: Into<Chain<M>>,
{
let chain = chain.into();
chain.mod_both(|metric_fn, scope_fn|
(Arc::new(move |kind, name, rate| {
// TODO override only if FULL_SAMPLING else warn!()
if rate != FULL_SAMPLING_RATE {
info!("Metric {} will be downsampled again {}, {}", name, rate, sampling_rate);
}
let new_rate = sampling_rate * rate;
Sample {
target: metric_fn(kind, name, new_rate),
int_sampling_rate: pcg32::to_int_rate(new_rate),
}
}),
Arc::new(move |auto_flush| {
let next_scope = scope_fn(auto_flush);
Arc::new(move |cmd| {
if let ScopeCmd::Write(metric, value) = cmd {
if pcg32::accept_sample(metric.int_sampling_rate) {
next_scope(ScopeCmd::Write(&metric.target, value))
}
}
next_scope(ScopeCmd::Flush)
})
}))
)
}
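The integer-threshold technique behind `pcg32::to_int_rate` and `accept_sample` can be sketched std-only: convert the sampling rate to a `u32` threshold once, then accept an event when a cheap PRNG draw falls below it. The `XorShift` generator here is a stand-in for the crate's pcg32, and `to_int_rate` is a local illustration, not the crate's function.

```rust
// Convert a 0.0..=1.0 sampling rate to an integer acceptance threshold.
fn to_int_rate(rate: f64) -> u32 {
    (rate * u32::MAX as f64) as u32
}

// Tiny xorshift64 PRNG, a stand-in for pcg32.
struct XorShift(u64);
impl XorShift {
    fn next_u32(&mut self) -> u32 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        (x >> 32) as u32
    }
}

fn main() {
    let threshold = to_int_rate(0.25);
    let mut rng = XorShift(0x9E3779B97F4A7C15);
    // Roughly a quarter of the draws fall below the threshold.
    let accepted = (0..10_000).filter(|_| rng.next_u32() < threshold).count();
    assert!((2_000usize..3_000).contains(&accepted));
}
```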


@ -1,45 +0,0 @@
//! Task scheduling facilities.
use std::time::Duration;
use std::thread;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering::SeqCst;
/// A handle to cancel a scheduled task if required.
#[derive(Debug, Clone)]
pub struct CancelHandle(Arc<AtomicBool>);
impl CancelHandle {
fn new() -> CancelHandle {
CancelHandle(Arc::new(AtomicBool::new(false)))
}
/// Signals the task to stop.
pub fn cancel(&self) {
self.0.store(true, SeqCst);
}
fn is_cancelled(&self) -> bool {
self.0.load(SeqCst)
}
}
/// Schedule a task to run periodically.
/// Starts a new thread for every task.
pub fn schedule<F>(every: Duration, operation: F) -> CancelHandle
where
F: Fn() -> () + Send + 'static,
{
let handle = CancelHandle::new();
let inner_handle = handle.clone();
thread::spawn(move || loop {
thread::sleep(every);
if inner_handle.is_cancelled() {
break;
}
operation();
});
handle
}

src/scheduler.rs Normal file

@ -0,0 +1,336 @@
//! Task scheduling facilities.
use crate::input::InputScope;
use std::cmp::{max, Ordering};
use std::collections::BinaryHeap;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering::SeqCst;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::{Duration, Instant};
/// A guard canceling the inner handle when dropped.
///
/// See [Cancel::into_guard](trait.Cancel.html#method.into_guard) to create it.
pub struct CancelGuard<C: Cancel> {
// This is an Option so that `disarm` can take the inner value;
// Rust won't let us destructure `self` by move because the type has a destructor.
inner: Option<C>,
}
impl<C: Cancel> CancelGuard<C> {
/// Disarms the guard.
///
/// This disposes of the guard without performing the cancelation. This is similar to calling
/// `forget` on it, but doesn't leak resources, while forget potentially could.
pub fn disarm(mut self) -> C {
self.inner
.take()
.expect("The borrow checker shouldn't allow anyone to call disarm twice")
}
}
impl<C: Cancel> Drop for CancelGuard<C> {
fn drop(&mut self) {
if let Some(inner) = self.inner.take() {
inner.cancel();
}
}
}
/// A deferred, repeatable, background action that can be cancelled.
pub trait Cancel {
/// Cancel the action.
fn cancel(&self);
/// Create a guard that cancels when it is dropped.
fn into_guard(self) -> CancelGuard<Self>
where
Self: Sized,
{
CancelGuard { inner: Some(self) }
}
}
/// A handle to cancel a scheduled task if required.
#[derive(Debug, Clone)]
pub struct CancelHandle(Arc<AtomicBool>);
impl CancelHandle {
fn new() -> CancelHandle {
CancelHandle(Arc::new(AtomicBool::new(false)))
}
fn is_cancelled(&self) -> bool {
self.0.load(SeqCst)
}
}
impl Cancel for CancelHandle {
/// Signals the task to stop.
fn cancel(&self) {
if self.0.swap(true, SeqCst) {
warn!("Scheduled task was already cancelled.")
}
}
}
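A minimal std-only model of the handle/guard pair shows why the guard holds an `Option` (so `disarm` can move the handle out despite the `Drop` impl). `Handle` and `Guard` are local stand-ins for `CancelHandle` and `CancelGuard`.

```rust
use std::sync::atomic::{AtomicBool, Ordering::SeqCst};
use std::sync::Arc;

#[derive(Clone)]
struct Handle(Arc<AtomicBool>);

impl Handle {
    fn cancel(&self) { self.0.store(true, SeqCst); }
    fn is_cancelled(&self) -> bool { self.0.load(SeqCst) }
}

// The Option lets disarm() move the handle out even though Drop exists.
struct Guard(Option<Handle>);

impl Guard {
    fn disarm(mut self) -> Handle {
        self.0.take().expect("disarm consumes the guard, so this is Some")
    }
}

impl Drop for Guard {
    fn drop(&mut self) {
        if let Some(h) = self.0.take() {
            h.cancel(); // only fires if the guard was not disarmed
        }
    }
}

fn main() {
    let h = Handle(Arc::new(AtomicBool::new(false)));
    {
        let _guard = Guard(Some(h.clone()));
    } // guard dropped here: cancels
    assert!(h.is_cancelled());

    let h2 = Handle(Arc::new(AtomicBool::new(false)));
    let guard = Guard(Some(h2.clone()));
    let _inner = guard.disarm(); // consumed without cancelling
    assert!(!h2.is_cancelled());
}
```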
/// Enable background periodical publication of metrics
pub trait ScheduleFlush {
/// Flush this scope at regular intervals.
fn flush_every(&self, period: Duration) -> CancelHandle;
}
impl<T: InputScope + Send + Sync + Clone + 'static> ScheduleFlush for T {
/// Flush this scope at regular intervals.
fn flush_every(&self, period: Duration) -> CancelHandle {
let scope = self.clone();
SCHEDULER.schedule(period, move |_| {
if let Err(err) = scope.flush() {
error!("Could not flush metrics: {}", err);
}
})
}
}
lazy_static! {
pub static ref SCHEDULER: Scheduler = Scheduler::new();
}
struct ScheduledTask {
next_time: Instant,
period: Duration,
handle: CancelHandle,
operation: Arc<dyn Fn(Instant) + Send + Sync + 'static>,
}
impl Ord for ScheduledTask {
fn cmp(&self, other: &ScheduledTask) -> Ordering {
other.next_time.cmp(&self.next_time)
}
}
impl PartialOrd for ScheduledTask {
fn partial_cmp(&self, other: &ScheduledTask) -> Option<Ordering> {
other.next_time.partial_cmp(&self.next_time)
}
}
impl PartialEq for ScheduledTask {
fn eq(&self, other: &ScheduledTask) -> bool {
self.next_time.eq(&other.next_time)
}
}
impl Eq for ScheduledTask {}
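The reversed `cmp` above turns `BinaryHeap` (a max-heap) into a min-heap on `next_time`, so the soonest task is always at the top. A std-only sketch of the same trick, with a hypothetical `Task` type:

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// A task ordered by deadline; cmp is deliberately reversed so the
// *earliest* deadline surfaces first in Rust's max-heap BinaryHeap.
#[derive(Eq, PartialEq)]
struct Task { deadline: u64 }

impl Ord for Task {
    fn cmp(&self, other: &Self) -> Ordering {
        other.deadline.cmp(&self.deadline) // reversed on purpose
    }
}
impl PartialOrd for Task {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let mut heap = BinaryHeap::new();
    for d in [30u64, 10, 20] {
        heap.push(Task { deadline: d });
    }
    let order: Vec<u64> = std::iter::from_fn(|| heap.pop().map(|t| t.deadline)).collect();
    assert_eq!(order, vec![10, 20, 30]); // earliest first
}
```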
pub struct Scheduler {
next_tasks: Arc<(Mutex<BinaryHeap<ScheduledTask>>, Condvar)>,
}
pub static MIN_DELAY: Duration = Duration::from_millis(50);
impl Scheduler {
/// Launch a new scheduler thread.
fn new() -> Self {
let sched: Arc<(Mutex<BinaryHeap<ScheduledTask>>, Condvar)> =
Arc::new((Mutex::new(BinaryHeap::new()), Condvar::new()));
let sched1 = Arc::downgrade(&sched);
thread::Builder::new()
.name("dipstick_scheduler".to_string())
.spawn(move || {
let mut wait_for = MIN_DELAY;
while let Some(sss) = sched1.upgrade() {
let (heap_mutex, condvar) = &*sss;
let heap = heap_mutex.lock().unwrap();
let (mut tasks, _timed_out) = condvar.wait_timeout(heap, wait_for).unwrap();
'work: loop {
let now = Instant::now();
match tasks.peek() {
Some(task) if task.next_time > now => {
// next task is not ready yet, update schedule
wait_for = max(MIN_DELAY, task.next_time - now);
break 'work;
}
None => {
// TODO no tasks left. exit thread?
break 'work;
}
_ => {}
}
if let Some(mut task) = tasks.pop() {
if task.handle.is_cancelled() {
// do not execute, do not reinsert
continue;
}
(task.operation)(now);
task.next_time = now + task.period;
tasks.push(task);
}
}
}
})
.unwrap();
Scheduler { next_tasks: sched }
}
#[cfg(test)]
pub fn task_count(&self) -> usize {
self.next_tasks.0.lock().unwrap().len()
}
/// Schedule a task to run periodically.
pub fn schedule<F>(&self, period: Duration, operation: F) -> CancelHandle
where
F: Fn(Instant) + Send + Sync + 'static,
{
let handle = CancelHandle::new();
let new_task = ScheduledTask {
next_time: Instant::now() + period,
period,
handle: handle.clone(),
operation: Arc::new(operation),
};
self.next_tasks.0.lock().unwrap().push(new_task);
self.next_tasks.1.notify_one();
handle
}
}
#[cfg(test)]
pub mod test {
use super::*;
use std::sync::atomic::AtomicUsize;
#[test]
fn schedule_one_and_cancel() {
let trig1a = Arc::new(AtomicUsize::new(0));
let trig1b = trig1a.clone();
let sched = Scheduler::new();
let handle1 = sched.schedule(Duration::from_millis(50), move |_| {
trig1b.fetch_add(1, SeqCst);
});
assert_eq!(sched.task_count(), 1);
thread::sleep(Duration::from_millis(170));
assert_eq!(3, trig1a.load(SeqCst));
handle1.cancel();
thread::sleep(Duration::from_millis(70));
assert_eq!(sched.task_count(), 0);
assert_eq!(3, trig1a.load(SeqCst));
}
#[test]
fn schedule_and_cancel_by_guard() {
let trig1a = Arc::new(AtomicUsize::new(0));
let trig1b = trig1a.clone();
let sched = Scheduler::new();
let handle1 = sched.schedule(Duration::from_millis(50), move |_| {
trig1b.fetch_add(1, SeqCst);
});
{
let _guard = handle1.into_guard();
assert_eq!(sched.task_count(), 1);
thread::sleep(Duration::from_millis(170));
assert_eq!(3, trig1a.load(SeqCst));
} // Here, the guard is dropped, cancelling
thread::sleep(Duration::from_millis(70));
assert_eq!(sched.task_count(), 0);
assert_eq!(3, trig1a.load(SeqCst));
}
#[test]
fn schedule_and_disarm_guard() {
let trig1a = Arc::new(AtomicUsize::new(0));
let trig1b = trig1a.clone();
let sched = Scheduler::new();
let handle1 = sched.schedule(Duration::from_millis(50), move |_| {
trig1b.fetch_add(1, SeqCst);
});
{
let guard = handle1.into_guard();
assert_eq!(sched.task_count(), 1);
thread::sleep(Duration::from_millis(170));
assert_eq!(3, trig1a.load(SeqCst));
guard.disarm();
}
thread::sleep(Duration::from_millis(70));
assert_eq!(sched.task_count(), 1); // Not canceled
}
#[test]
fn schedule_two_and_cancel() {
let trig1a = Arc::new(AtomicUsize::new(0));
let trig1b = trig1a.clone();
let trig2a = Arc::new(AtomicUsize::new(0));
let trig2b = trig2a.clone();
let sched = Scheduler::new();
let handle1 = sched.schedule(Duration::from_millis(50), move |_| {
trig1b.fetch_add(1, SeqCst);
println!("ran 1");
});
let handle2 = sched.schedule(Duration::from_millis(100), move |_| {
trig2b.fetch_add(1, SeqCst);
println!("ran 2");
});
thread::sleep(Duration::from_millis(110));
assert_eq!(2, trig1a.load(SeqCst));
assert_eq!(1, trig2a.load(SeqCst));
handle1.cancel();
thread::sleep(Duration::from_millis(110));
assert_eq!(2, trig1a.load(SeqCst));
assert_eq!(2, trig2a.load(SeqCst));
handle2.cancel();
thread::sleep(Duration::from_millis(160));
assert_eq!(2, trig1a.load(SeqCst));
assert_eq!(2, trig2a.load(SeqCst));
}
#[test]
fn schedule_one_and_more() {
let trig1a = Arc::new(AtomicUsize::new(0));
let trig1b = trig1a.clone();
let sched = Scheduler::new();
let handle1 = sched.schedule(Duration::from_millis(100), move |_| {
trig1b.fetch_add(1, SeqCst);
});
thread::sleep(Duration::from_millis(110));
assert_eq!(1, trig1a.load(SeqCst));
let trig2a = Arc::new(AtomicUsize::new(0));
let trig2b = trig2a.clone();
let handle2 = sched.schedule(Duration::from_millis(50), move |_| {
trig2b.fetch_add(1, SeqCst);
});
thread::sleep(Duration::from_millis(110));
assert_eq!(2, trig1a.load(SeqCst));
assert_eq!(2, trig2a.load(SeqCst));
handle1.cancel();
handle2.cancel();
}
}


@ -1,291 +0,0 @@
//! Scope metrics allow an application to emit per-operation statistics,
//! like generating a per-request performance log.
//!
//! Although the scope metrics can be predefined like in [GlobalMetrics], the application needs to
//! create a scope that will be passed back when reporting scoped metric values.
/*!
Per-operation metrics can be recorded and published using `scope_metrics`:
```rust
let scope_metrics = scoped_metrics(to_log());
let request_counter = scope_metrics.counter("scope_counter");
{
let mut request_scope = scope_metrics.open_scope();
request_counter.count(&mut request_scope, 42);
request_counter.count(&mut request_scope, 42);
}
```
*/
use core::*;
use core::ScopeCmd::*;
use std::sync::{Arc, RwLock};
// TODO define an 'AsValue' trait + impl for supported number types, then drop 'num' crate
pub use num::ToPrimitive;
/// Wrap the metrics backend to provide an application-friendly interface.
/// When reporting a value, scoped metrics also need to be passed a [Scope].
pub fn scoped_metrics<M>(chain: Chain<M>) -> ScopedMetrics<M>
where
M: 'static + Clone + Send + Sync,
{
ScopedMetrics {
prefix: "".to_string(),
chain: Arc::new(chain),
}
}
/// A monotonic counter metric.
/// Since value is only ever increased by one, no value parameter is provided,
/// preventing programming errors.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeMarker<M> {
metric: M,
}
impl<M> ScopeMarker<M> {
/// Record a single event occurrence.
pub fn mark(&self, scope: &mut ControlScopeFn<M>) {
(scope)(Write(&self.metric, 1));
}
}
/// A counter that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeCounter<M> {
metric: M,
}
impl<M> ScopeCounter<M> {
/// Record a value count.
pub fn count<V>(&self, scope: &mut ControlScopeFn<M>, count: V)
where
V: ToPrimitive,
{
(scope)(Write(&self.metric, count.to_u64().unwrap()));
}
}
/// A gauge that sends values to the metrics backend
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeGauge<M> {
metric: M,
}
impl<M: Clone> ScopeGauge<M> {
/// Record a value point for this gauge.
pub fn value<V>(&self, scope: &mut ControlScopeFn<M>, value: V)
where
V: ToPrimitive,
{
(scope)(Write(&self.metric, value.to_u64().unwrap()));
}
}
/// A timer that sends values to the metrics backend.
/// Timers can record time intervals in multiple ways:
/// - with the time! macro, which wraps an expression or block with start() and stop() calls
/// - with the time(Fn) method, which wraps a closure with start() and stop() calls
/// - with start() and stop() methods, wrapping around the operation to time
/// - with the interval_us() method, providing an externally determined microsecond interval
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopeTimer<M> {
metric: M,
}
impl<M: Clone> ScopeTimer<M> {
/// Record a microsecond interval for this timer
/// Can be used in place of start()/stop() if an external time interval source is used
pub fn interval_us<V>(&self, scope: &mut ControlScopeFn<M>, interval_us: V) -> V
where
V: ToPrimitive,
{
(scope)(Write(&self.metric, interval_us.to_u64().unwrap()));
interval_us
}
/// Obtain an opaque handle to the current time.
/// The handle is passed back to the stop() method to record a time interval.
/// This is actually a convenience method for TimeHandle::now()
/// Beware, handles obtained here are not bound to this specific timer instance
/// _for now_ but might be in the future for safety.
/// If you require safe multi-timer handles, get them through TimeType::now()
pub fn start(&self) -> TimeHandle {
TimeHandle::now()
}
/// Record the time elapsed since the start_time handle was obtained.
/// This call can be performed multiple times using the same handle,
/// reporting distinct time intervals each time.
/// Returns the microsecond interval value that was recorded.
pub fn stop(&self, scope: &mut ControlScopeFn<M>, start_time: TimeHandle) -> u64 {
let elapsed_us = start_time.elapsed_us();
self.interval_us(scope, elapsed_us)
}
/// Record the time taken to execute the provided closure
pub fn time<F, R>(&self, scope: &mut ControlScopeFn<M>, operations: F) -> R
where
F: FnOnce() -> R,
{
let start_time = self.start();
let value: R = operations();
self.stop(scope, start_time);
value
}
}
/// Variations of this should also provide control of the metric recording scope.
#[derive(Derivative)]
#[derivative(Debug)]
pub struct ScopedMetrics<M> {
prefix: String,
chain: Arc<Chain<M>>,
}
impl<M> ScopedMetrics<M>
where
M: 'static + Clone + Send + Sync,
{
fn qualified_name<AS>(&self, name: AS) -> String
where
AS: Into<String> + AsRef<str>,
{
// FIXME is there a way to return <S> in both cases?
if self.prefix.is_empty() {
return name.into();
}
let mut buf: String = self.prefix.clone();
buf.push_str(name.as_ref());
buf.to_string()
}
/// Get an event counter of the provided name.
pub fn marker<AS>(&self, name: AS) -> ScopeMarker<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Marker,
&self.qualified_name(name),
1.0,
);
ScopeMarker { metric }
}
/// Get a counter of the provided name.
pub fn counter<AS>(&self, name: AS) -> ScopeCounter<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Counter,
&self.qualified_name(name),
1.0,
);
ScopeCounter { metric }
}
/// Get a timer of the provided name.
pub fn timer<AS>(&self, name: AS) -> ScopeTimer<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Timer,
&self.qualified_name(name),
1.0,
);
ScopeTimer { metric }
}
/// Get a gauge of the provided name.
pub fn gauge<AS>(&self, name: AS) -> ScopeGauge<M>
where
AS: Into<String> + AsRef<str>,
{
let metric = self.chain.define_metric(
Kind::Gauge,
&self.qualified_name(name),
1.0,
);
ScopeGauge { metric }
}
/// Prepend the metrics name with a prefix.
/// Does not affect metrics that were already obtained.
pub fn with_prefix<IS>(&self, prefix: IS) -> Self
where
IS: Into<String>,
{
ScopedMetrics {
prefix: prefix.into(),
chain: self.chain.clone(),
}
}
/// Create a new scope to report metric values.
pub fn open_scope(&self) -> ControlScopeFn<M> {
let scope_buffer = RwLock::new(ScopeBuffer {
buffer: Vec::new(),
scope: self.chain.open_scope(false),
});
Arc::new(move |cmd: ScopeCmd<M>| {
let mut buf = scope_buffer.write().expect("Lock metric scope.");
match cmd {
Write(metric, value) => {
buf.buffer.push(ScopeCommand {
metric: (*metric).clone(),
value,
})
}
Flush => buf.flush(),
}
})
}
}
/// Save the metrics for delivery upon scope close.
struct ScopeCommand<M> {
metric: M,
value: Value,
}
struct ScopeBuffer<M: Clone> {
buffer: Vec<ScopeCommand<M>>,
scope: ControlScopeFn<M>,
}
impl<M: Clone> ScopeBuffer<M> {
fn flush(&mut self) {
for cmd in self.buffer.drain(..) {
(self.scope)(Write(&cmd.metric, cmd.value))
}
(self.scope)(Flush)
}
}
impl<M: Clone> Drop for ScopeBuffer<M> {
fn drop(&mut self) {
self.flush()
}
}
#[cfg(feature = "bench")]
mod bench {
use ::*;
use test;
#[bench]
fn time_bench_direct_dispatch_event(b: &mut test::Bencher) {
let sink = aggregate(5, summary, to_stdout());
let metrics = global_metrics(sink);
let marker = metrics.marker("aaa");
b.iter(|| test::black_box(marker.mark()));
}
}


@ -1,178 +0,0 @@
use time;
use std::mem;
use core::*;
use core::Kind::*;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering::*;
use std::usize;
use self::ScoreType::*;
#[derive(Debug, Clone, Copy)]
/// Possibly aggregated scores.
pub enum ScoreType {
/// Number of times the metric was used.
Count(u64),
/// Sum of metric values reported.
Sum(u64),
/// Biggest value reported.
Max(u64),
/// Smallest value reported.
Min(u64),
/// Approximate average value (sum / hit count, non-atomic)
Mean(f64),
/// Approximate mean rate (hit count / period length in seconds, non-atomic)
Rate(f64),
}
/// A snapshot of multiple scores for a single metric.
pub type ScoreSnapshot = (Kind, String, Vec<ScoreType>);
/// A metric that holds aggregated values.
/// Some fields are kept public to ease publishing.
#[derive(Debug)]
pub struct Scoreboard {
/// The kind of metric.
kind: Kind,
/// The metric's name.
name: String,
scores: [AtomicUsize; 5]
}
impl Scoreboard {
/// Create a new Scoreboard to track summary values of a metric
pub fn new(kind: Kind, name: String) -> Self {
let now = time::precise_time_ns() as usize;
Scoreboard {
kind,
name,
scores: unsafe { mem::transmute(Scoreboard::blank(now)) }
}
}
#[inline]
fn blank(now: usize) -> [usize; 5] {
[now, 0, 0, usize::MIN, usize::MAX]
}
/// Update scores with new value
pub fn update(&self, value: Value) -> () {
// TODO report any concurrent updates / resets for measurement of contention
let value = value as usize;
self.scores[1].fetch_add(1, Acquire);
match self.kind {
Marker => {},
_ => {
// optimization - these fields are unused for Marker stats
self.scores[2].fetch_add(value, Acquire);
swap_if_more(&self.scores[3], value);
swap_if_less(&self.scores[4], value);
}
}
}
/// Reset scores to zero, return previous values
fn snapshot(&self, now: usize, scores: &mut [usize; 5]) -> bool {
// SNAPSHOT OF ATOMICS IN PROGRESS, HANG TIGHT
scores[0] = self.scores[0].swap(now, Release);
scores[1] = self.scores[1].swap(0, Release);
scores[2] = self.scores[2].swap(0, Release);
scores[3] = self.scores[3].swap(usize::MIN, Release);
scores[4] = self.scores[4].swap(usize::MAX, Release);
// SNAPSHOT COMPLETE, YOU CAN RELAX NOW
// if hit count is zero, then no values were recorded.
scores[1] != 0
}
/// Map raw scores (if any) to applicable statistics
pub fn reset(&self) -> Option<ScoreSnapshot> {
let now = time::precise_time_ns() as usize;
let mut scores = Scoreboard::blank(now);
if self.snapshot(now, &mut scores) {
let duration_seconds = (now - scores[0]) as f64 / 1_000_000_000.0;
let mut snapshot = Vec::new();
match self.kind {
Marker => {
snapshot.push(Count(scores[1] as u64));
snapshot.push(Rate(scores[2] as f64 / duration_seconds))
},
Gauge => {
snapshot.push(Max(scores[3] as u64));
snapshot.push(Min(scores[4] as u64));
snapshot.push(Mean(scores[2] as f64 / scores[1] as f64));
},
Timer | Counter => {
snapshot.push(Count(scores[1] as u64));
snapshot.push(Sum(scores[2] as u64));
snapshot.push(Max(scores[3] as u64));
snapshot.push(Min(scores[4] as u64));
snapshot.push(Mean(scores[2] as f64 / scores[1] as f64));
snapshot.push(Rate(scores[2] as f64 / duration_seconds))
},
}
Some((self.kind, self.name.clone(), snapshot))
} else {
None
}
}
}
/// Spinlock until success or clear loss to concurrent update.
#[inline]
fn swap_if_more(counter: &AtomicUsize, new_value: usize) {
let mut current = counter.load(Acquire);
while current < new_value {
if counter.compare_and_swap(current, new_value, Release) == new_value { break }
current = counter.load(Acquire);
}
}
/// Spinlock until success or clear loss to concurrent update.
#[inline]
fn swap_if_less(counter: &AtomicUsize, new_value: usize) {
let mut current = counter.load(Acquire);
while current > new_value {
if counter.compare_and_swap(current, new_value, Release) == new_value { break }
current = counter.load(Acquire);
}
}
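`compare_and_swap` has since been deprecated in the standard library; the same atomic-maximum loop can be written with `compare_exchange`, which hands back the freshly observed value on failure and so avoids the extra reload. A std-only sketch under that assumption:

```rust
use std::sync::atomic::{AtomicUsize, Ordering::{Acquire, Release}};

// Atomically raise `counter` to `new_value` if it is larger:
// the compare_exchange form of the swap_if_more loop.
fn store_if_more(counter: &AtomicUsize, new_value: usize) {
    let mut current = counter.load(Acquire);
    while current < new_value {
        match counter.compare_exchange(current, new_value, Release, Acquire) {
            Ok(_) => break,                      // we won the race
            Err(observed) => current = observed, // retry against the fresh value
        }
    }
}

fn main() {
    let max = AtomicUsize::new(0);
    for v in [3usize, 9, 1, 7] {
        store_if_more(&max, v);
    }
    assert_eq!(max.load(Acquire), 9);
}
```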
#[cfg(feature = "bench")]
mod bench {
use super::*;
use test;
#[bench]
fn bench_score_update_marker(b: &mut test::Bencher) {
let metric = Scoreboard::new(Marker, "event_a".to_string());
b.iter(|| test::black_box(metric.update(1)));
}
#[bench]
fn bench_score_update_count(b: &mut test::Bencher) {
let metric = Scoreboard::new(Counter, "event_a".to_string());
b.iter(|| test::black_box(metric.update(4)));
}
#[bench]
fn bench_score_snapshot(b: &mut test::Bencher) {
let metric = Scoreboard::new(Counter, "event_a".to_string());
let now = time::precise_time_ns() as usize;
let mut scores = Scoreboard::blank(now);
b.iter(|| test::black_box(metric.snapshot(now, &mut scores)));
}
}


@ -1,34 +0,0 @@
//! Internal Dipstick metrics.
//! Collect statistics about various metrics modules at runtime.
//! Stats can be obtained for publication from `selfstats::SOURCE`.
pub use global_metrics::*;
pub use aggregate::*;
pub use publish::*;
pub use scores::*;
pub use core::*;
use output::to_void;
fn build_aggregator() -> Chain<Aggregate> {
aggregate(32, summary, to_void())
}
/// Capture a snapshot of Dipstick's internal metrics since the last snapshot.
pub fn snapshot() -> Vec<ScoreSnapshot> {
vec![]
}
fn build_self_metrics() -> GlobalMetrics<Aggregate> {
// TODO send to_map() when snapshot() is called
// let agg = aggregate(summary, to_void());
global_metrics(AGGREGATOR.clone()).with_prefix("dipstick.")
}
lazy_static! {
static ref AGGREGATOR: Chain<Aggregate> = build_aggregator();
/// Application metrics are collected to the aggregator
pub static ref SELF_METRICS: GlobalMetrics<Aggregate> = build_self_metrics();
}

src/stats.rs Executable file

@ -0,0 +1,94 @@
//! Definitions of standard aggregated statistic types and functions
use crate::input::InputKind;
use crate::name::MetricName;
use crate::MetricValue;
/// Possibly aggregated scores.
#[derive(Debug, Clone, Copy)]
pub enum ScoreType {
/// Number of times the metric was used.
Count(isize),
/// Sum of metric values reported.
Sum(isize),
/// Biggest value observed.
Max(isize),
/// Smallest value observed.
Min(isize),
/// Average value (sum / hit count, non-atomic)
Mean(f64),
/// Mean rate (hit count / period length in seconds, non-atomic)
Rate(f64),
}
/// A predefined export strategy reporting all aggregated stats for all metric types.
/// Resulting stats are named by appending a short suffix to each metric's name.
#[allow(dead_code)]
pub fn stats_all(
kind: InputKind,
name: MetricName,
score: ScoreType,
) -> Option<(InputKind, MetricName, MetricValue)> {
match score {
ScoreType::Count(hit) => Some((InputKind::Counter, name.make_name("count"), hit)),
ScoreType::Sum(sum) => Some((kind, name.make_name("sum"), sum)),
ScoreType::Mean(mean) => Some((kind, name.make_name("mean"), mean.round() as MetricValue)),
ScoreType::Max(max) => Some((InputKind::Gauge, name.make_name("max"), max)),
ScoreType::Min(min) => Some((InputKind::Gauge, name.make_name("min"), min)),
ScoreType::Rate(rate) => Some((
InputKind::Gauge,
name.make_name("rate"),
rate.round() as MetricValue,
)),
}
}
/// A predefined export strategy reporting the average value for every non-marker metric.
/// Marker metrics export their hit count instead.
/// Since there is only one stat per metric, there is no risk of collision
/// and so exported stats copy their metric's name.
#[allow(dead_code)]
pub fn stats_average(
kind: InputKind,
name: MetricName,
score: ScoreType,
) -> Option<(InputKind, MetricName, MetricValue)> {
match kind {
InputKind::Marker => match score {
ScoreType::Count(count) => Some((InputKind::Counter, name, count)),
_ => None,
},
_ => match score {
ScoreType::Mean(avg) => Some((InputKind::Gauge, name, avg.round() as MetricValue)),
_ => None,
},
}
}
/// A predefined single-stat-per-metric export strategy:
/// - Timers and Counters each export their sums
/// - Markers each export their hit count
/// - Gauges each export their average
/// Since there is only one stat per metric, there is no risk of collision
/// and so exported stats copy their metric's name.
#[allow(dead_code)]
pub fn stats_summary(
kind: InputKind,
name: MetricName,
score: ScoreType,
) -> Option<(InputKind, MetricName, MetricValue)> {
match kind {
InputKind::Marker => match score {
ScoreType::Count(count) => Some((InputKind::Counter, name, count)),
_ => None,
},
InputKind::Counter | InputKind::Timer => match score {
ScoreType::Sum(sum) => Some((kind, name, sum)),
_ => None,
},
InputKind::Gauge | InputKind::Level => match score {
ScoreType::Mean(mean) => Some((InputKind::Gauge, name, mean.round() as MetricValue)),
_ => None,
},
}
}
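To see the single-stat selection rule in action, here is a std-only miniature with simplified local stand-ins for `InputKind` and `ScoreType` (not the crate's types): for a Timer reporting several scores, only the Sum survives.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Kind { Marker, Counter, Timer, Gauge }

#[derive(Clone, Copy, Debug)]
enum Score { Count(i64), Sum(i64), Mean(f64) }

// Mirror of the summary selection rule: one stat per metric kind.
fn summary(kind: Kind, score: Score) -> Option<(Kind, i64)> {
    match (kind, score) {
        (Kind::Marker, Score::Count(n)) => Some((Kind::Counter, n)),
        (Kind::Counter, Score::Sum(s)) | (Kind::Timer, Score::Sum(s)) => Some((kind, s)),
        (Kind::Gauge, Score::Mean(m)) => Some((Kind::Gauge, m.round() as i64)),
        _ => None,
    }
}

fn main() {
    // A Timer reports Count, Sum and Mean; only the Sum is exported.
    let scores = [Score::Count(4), Score::Sum(200), Score::Mean(50.0)];
    let exported: Vec<_> = scores.iter().filter_map(|&s| summary(Kind::Timer, s)).collect();
    assert_eq!(exported, vec![(Kind::Timer, 200)]);
}
```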


@ -1,164 +0,0 @@
//! Send metrics to a statsd server.
use core::*;
use error;
use self_metrics::*;
use std::net::UdpSocket;
use std::sync::{Arc, RwLock};
pub use std::net::ToSocketAddrs;
/// Send metrics to a statsd server at the address and port provided.
pub fn to_statsd<ADDR>(address: ADDR) -> error::Result<Chain<Statsd>>
where
ADDR: ToSocketAddrs,
{
let socket = Arc::new(UdpSocket::bind("0.0.0.0:0")?);
socket.set_nonblocking(true)?;
socket.connect(address)?;
Ok(Chain::new(
move |kind, name, rate| {
let mut prefix = String::with_capacity(32);
prefix.push_str(name.as_ref());
prefix.push(':');
let mut suffix = String::with_capacity(16);
suffix.push('|');
suffix.push_str(match kind {
Kind::Marker | Kind::Counter => "c",
Kind::Gauge => "g",
Kind::Timer => "ms",
});
if rate < FULL_SAMPLING_RATE {
suffix.push_str("|@");
suffix.push_str(&rate.to_string());
}
let scale = match kind {
// timers are in µs, statsd wants ms
Kind::Timer => 1000,
_ => 1,
};
Statsd {
prefix,
suffix,
scale,
}
},
move |auto_flush| {
let buf = RwLock::new(ScopeBuffer {
buffer: String::with_capacity(MAX_UDP_PAYLOAD),
socket: socket.clone(),
auto_flush,
});
Arc::new(move |cmd| {
if let Ok(mut buf) = buf.write() {
match cmd {
ScopeCmd::Write(metric, value) => buf.write(metric, value),
ScopeCmd::Flush => buf.flush(),
}
}
})
},
))
}
lazy_static! {
static ref STATSD_METRICS: GlobalMetrics<Aggregate> = SELF_METRICS.with_prefix("statsd.");
static ref SEND_ERR: Marker<Aggregate> = STATSD_METRICS.marker("send_failed");
static ref SENT_BYTES: Counter<Aggregate> = STATSD_METRICS.counter("sent_bytes");
}
/// Key of a statsd metric.
#[derive(Debug, Clone)]
pub struct Statsd {
prefix: String,
suffix: String,
scale: u64,
}
/// Use a safe maximum size for UDP to prevent fragmentation.
// TODO make configurable?
const MAX_UDP_PAYLOAD: usize = 576;
/// Wrapped string buffer & socket as one.
#[derive(Debug)]
struct ScopeBuffer {
buffer: String,
socket: Arc<UdpSocket>,
auto_flush: bool,
}
/// Any remaining buffered data is flushed on Drop.
impl Drop for ScopeBuffer {
fn drop(&mut self) {
self.flush()
}
}
impl ScopeBuffer {
fn write(&mut self, metric: &Statsd, value: Value) {
let scaled_value = value / metric.scale;
let value_str = scaled_value.to_string();
let entry_len = metric.prefix.len() + value_str.len() + metric.suffix.len();
if entry_len > self.buffer.capacity() {
// TODO report entry too big to fit in buffer (!?)
return;
}
let remaining = self.buffer.capacity() - self.buffer.len();
if entry_len + 1 > remaining {
// buffer is full, flush before appending (the original `else` dropped the entry here)
self.flush();
}
if !self.buffer.is_empty() {
// separate from previous entry
self.buffer.push('\n');
}
self.buffer.push_str(&metric.prefix);
self.buffer.push_str(&value_str);
self.buffer.push_str(&metric.suffix);
if self.auto_flush {
self.flush();
}
}
fn flush(&mut self) {
if !self.buffer.is_empty() {
match self.socket.send(self.buffer.as_bytes()) {
Ok(size) => {
SENT_BYTES.count(size);
trace!("Sent {} bytes to statsd", self.buffer.len());
}
Err(e) => {
SEND_ERR.mark();
debug!("Failed to send packet to statsd: {}", e);
}
};
self.buffer.clear();
}
}
}
#[cfg(feature = "bench")]
mod bench {
use super::*;
use test;
#[bench]
pub fn timer_statsd(b: &mut test::Bencher) {
let sd = to_statsd("localhost:8125").unwrap();
let timer = sd.define_metric(Kind::Timer, "timer", 1000000.0);
let scope = sd.open_scope(false);
b.iter(|| test::black_box((scope)(ScopeCmd::Write(&timer, 2000))));
}
}
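The wire format assembled by the prefix/suffix mapper above (`name:value|type`, with an optional `|@rate` sampling suffix, and µs timers scaled down to the ms that statsd expects) can be sketched without the library's types. This is a standalone illustration; `statsd_line` and its parameters are hypothetical, not part of the crate's API.

```rust
// Sketch of statsd line assembly: prefix ("name:"), scaled value,
// suffix ("|type" plus "|@rate" when sampled below full rate).
fn statsd_line(name: &str, kind_suffix: &str, value: u64, scale: u64, rate: f64) -> String {
    let mut line = String::with_capacity(32);
    line.push_str(name);
    line.push(':');
    line.push_str(&(value / scale).to_string());
    line.push('|');
    line.push_str(kind_suffix);
    if rate < 1.0 {
        // sampled metrics carry their sampling rate so the server can rescale
        line.push_str("|@");
        line.push_str(&rate.to_string());
    }
    line
}

fn main() {
    // a 2000 µs timer sample, sent with 10% sampling
    println!("{}", statsd_line("req_time", "ms", 2000, 1000, 0.1));
    // an unsampled counter increment
    println!("{}", statsd_line("hits", "c", 1, 1, 1.0));
}
```

Several such lines, separated by `\n`, are packed into one UDP datagram up to `MAX_UDP_PAYLOAD` bytes, which is why `ScopeBuffer` batches entries before sending.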

src/todo/http_serve.rs Normal file

@@ -0,0 +1 @@

src/todo/kafka.rs Normal file

@@ -0,0 +1 @@

tests/skeptic.rs Normal file

@@ -0,0 +1,2 @@
#[cfg(feature = "skeptic")]
include!(concat!(env!("OUT_DIR"), "/skeptic-tests.rs"));