Commit Graph

59 Commits

Author SHA1 Message Date
R. Tyler Croy 39db2912ad
Update the build script to be more explicit about cleaning and running default tasks
Giving this another try with an auto-deploy
2015-07-04 14:27:46 -07:00
R. Tyler Croy 08e4076f95
Update the readme with release notes 2015-07-04 14:12:27 -07:00
R. Tyler Croy cb3465f715 Merge pull request #27 from lookout/auto-deployment
Add support for automatically publishing to Bintray
2015-07-04 14:09:13 -07:00
R. Tyler Croy 27d51c84e4
Use Gradle 2.4 for future builds 2015-07-04 14:06:24 -07:00
R. Tyler Croy 1c588f6bff
Change the automatic deployment to only operate when a user has published a tag that successfully builds 2015-07-04 14:02:58 -07:00
R. Tyler Croy 5cc83eef08
Dry run for now to avoid publishing builds 2015-07-04 13:50:39 -07:00
R. Tyler Croy 685f0353f2
Add support for automatically publishing to Bintray when a build succeeds 2015-07-04 13:40:25 -07:00
R. Tyler Croy 81d15d53ec Let's call this 0.2.0 2015-03-22 11:11:03 -07:00
R. Tyler Croy f053d9b68c Catch and log exceptions coming from the dumpMetadata() call
Fixes #23
2015-03-22 11:08:40 -07:00
R. Tyler Croy 3a9caa2535 Avoid computing negative values for offsets, making zero the lowest possible value
Fixes #25
2015-03-22 10:56:26 -07:00
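
A minimal Groovy sketch of the zero floor the commit above describes; the variable names are illustrative stand-ins, not the actual Verspaetung fields:

    Long latestOffset = 100
    Long consumerOffset = 103   // consumer briefly "ahead" of the cached latest offset
    Long delta = Math.max(0L, latestOffset - consumerOffset)
    assert delta == 0
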
R. Tyler Croy dc33298435 Groovy's getStrings() method with string interpolation doesn't do what you think it does
    groovy:000> user = "Ron"
    ===> Ron
    groovy:000> "hello ${user}"
    ===> hello Ron
    groovy:000> "hello ${user}".getStrings()
    ===> [hello , ]
    groovy:000> "hello ${user}".toString()
    ===> hello Ron

Fixes #24
2015-03-20 15:13:40 -07:00
R. Tyler Croy 789e7e863c Merge pull request #20 from lookout/dropwizard-metrics
Introduce dropwizard-metrics
2015-03-20 11:37:14 -07:00
R. Tyler Croy 38e8c62e00 Properly report metrics to datadog with the appropriate tags
This requires a much more recent version of our metrics-datadog library but
does result in the right values being reported into datadog.
2015-03-20 10:58:45 -07:00
R. Tyler Croy 71347594ca Major refactor to support dropwizard-metrics as the means of outputting metrics
This commit changes the structure of Verspaetung pretty dramatically to allow
for the registering of Gauges for the various offsets

With this change the KafkaPoller is pushing the latest offsets into a map and the
ZK consumer tree watchers are pushing consumer offsets into a separate map.

References #17
2015-02-06 08:34:19 -08:00
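
A hedged Groovy sketch of the shape this refactor describes: two maps fed by different threads, with a dropwizard-metrics Gauge computing lag from both. The registry name, key format, and values are illustrative, not the actual Verspaetung wiring.

    import com.codahale.metrics.Gauge
    import com.codahale.metrics.MetricRegistry
    import java.util.concurrent.ConcurrentHashMap

    def registry = new MetricRegistry()
    def latestOffsets   = new ConcurrentHashMap<String, Long>()  // written by the KafkaPoller
    def consumerOffsets = new ConcurrentHashMap<String, Long>()  // written by the ZK tree watchers

    latestOffsets['mytopic:0']   = 150L
    consumerOffsets['mytopic:0'] = 120L

    // Register a Gauge whose value is recomputed from both maps on every report
    registry.register('verspaetung.mytopic.0.lag', {
        Math.max(0L, latestOffsets['mytopic:0'] - consumerOffsets['mytopic:0'])
    } as Gauge)

    assert registry.gauges['verspaetung.mytopic.0.lag'].value == 30L
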
R. Tyler Croy f4042894fe Add a compile and runtime dependency on the metrics core and graphite code
References #17
2015-01-30 07:30:15 -08:00
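
A hedged sketch of the build.gradle block this commit describes; the 3.1.0 version number is an assumption, not read from the repository:

    dependencies {
        compile 'io.dropwizard.metrics:metrics-core:3.1.0'
        compile 'io.dropwizard.metrics:metrics-graphite:3.1.0'
    }
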
R. Tyler Croy 7b13050ee2 Restructure KafkaPoller to make catching and logging exceptions easier
I've not cleaned this up enough to make things easier to test, partially
because mocking the Kafka client APIs is a giant pain in the ass

Fixes #16
2015-01-30 02:45:16 -08:00
R. Tyler Croy 0fa983d0e8 Add support for prefixing the metrics with a CLI supplied option
Fixes #15
2015-01-30 02:45:16 -08:00
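
A hedged commons-cli sketch of such an option; the --prefix name, the parser class, and the default value are illustrative assumptions, not lifted from Verspaetung's main class:

    import org.apache.commons.cli.GnuParser
    import org.apache.commons.cli.Options

    def options = new Options()
    options.addOption('p', 'prefix', true, 'Prefix to prepend to all reported metric names')

    def cmd = new GnuParser().parse(options, ['--prefix', 'staging'] as String[])
    String prefix = cmd.getOptionValue('prefix', 'verspaetung')
    assert prefix == 'staging'
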
R. Tyler Croy 95c244867a Leave a skipped test in place for later
Going to get back to #14
2015-01-30 02:45:16 -08:00
R. Tyler Croy 0c59dcee70 Add some notes about using Verspaetung 2015-01-28 03:33:52 -08:00
R. Tyler Croy 100b6ab28a Add latest version badge from Bintray 2015-01-28 03:20:10 -08:00
R. Tyler Croy 5b086f670e Bump versions 2015-01-28 03:17:46 -08:00
R. Tyler Croy 8995351bba Parse the consumer group from the KafkaSpout ZK path instead of using the JSON
The JSON in the Znode is actually the name of the topology, not necessarily the
name of the consumer group used by a KafkaSpout

Fixes #9
2015-01-28 03:05:30 -08:00
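
A hedged sketch of parsing the group out of the path instead; the /kafkaspout/<group>/partition_N layout shown here is illustrative, not the exact path a KafkaSpout writes:

    String spoutPath = '/kafkaspout/my-topology-group/partition_0'
    String consumerGroup = spoutPath.tokenize('/')[1]
    assert consumerGroup == 'my-topology-group'
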
R. Tyler Croy d0c99b9a34 Implement the KafkaSpoutTreeWatcher for processing Storm KafkaSpout offset data
Fixes #9
2015-01-28 02:45:49 -08:00
R. Tyler Croy b4b9fe9860 Introduce the AbstractConsumerTreeWatcher to handle watchers on Kafka consumer trees
This gives us two kinds of AbstractTreeWatcher instances: those that watch
special-case subtrees (e.g. the BrokerTreeWatcher) and those which need to
watch and report Kafka consumer offset information (e.g. the StandardTreeWatcher)

References #9
2015-01-28 02:02:59 -08:00
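
A rough Groovy sketch of the hierarchy this commit describes; the method names and event types are simplified stand-ins for the Curator-backed originals:

    abstract class AbstractTreeWatcher {
        abstract void childEvent(Object event)
    }

    // Special-case subtree watchers, e.g. the broker list
    class BrokerTreeWatcher extends AbstractTreeWatcher {
        void childEvent(Object event) { /* rebuild the cached broker list */ }
    }

    // Watchers that must turn tree events into consumer offset reports
    abstract class AbstractConsumerTreeWatcher extends AbstractTreeWatcher {
        abstract void processOffsetData(Object data)
        void childEvent(Object event) { processOffsetData(event) }
    }

    class StandardTreeWatcher extends AbstractConsumerTreeWatcher {
        void processOffsetData(Object data) { /* extract (group, topic, partition, offset) */ }
    }
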
R. Tyler Croy af19abfacb Refactor the handling of TreeCache into the AbstractTreeWatcher itself
This should lay the groundwork for refactoring much of the BrokerTreeWatcher up
into the AbstractTreeWatcher
2015-01-28 01:44:53 -08:00
R. Tyler Croy 0706895af1 Refactor some of the KafkaPoller setup out and add support for --dry-run
Fixes #10
2015-01-28 01:13:13 -08:00
R. Tyler Croy a444f49b46 Invoke the onDelta callbacks for zero deltas too
It turns out that datadog doesn't indicate that the delta is shrinking unless
you report a zero value.
2015-01-26 09:40:16 -08:00
R. Tyler Croy 55008192f8 Update to a newer java-statsd-client that doesn't use bleeding edge technology like String.join
JDK7, get your shit together.
2015-01-26 09:39:34 -08:00
R. Tyler Croy 18e747788e Add a simple heartbeat to the configured statsd host every second
Fixes #4
2015-01-26 07:00:07 -08:00
R. Tyler Croy 5e1a4e11ba Use a copy-on-write list for the consumers map to avoid thrash between threads
Once we've finished caching our consumers map, the majority of the operations on
this consumersMap list will be traversals inside of the KafkaPoller, which makes
the copy-on-write performance hit worth it.

What we don't want is to iterate through the consumersMap's list of consumers
while receiving new ZK childEvents

Fixes #6
2015-01-26 06:52:54 -08:00
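
A minimal sketch of the trade-off: the poller's traversal sees a stable snapshot while a watcher thread appends, at the cost of copying on each write. The key format and String elements stand in for the real ConsumerOffset entries.

    import java.util.concurrent.ConcurrentHashMap
    import java.util.concurrent.CopyOnWriteArrayList

    def consumersMap = new ConcurrentHashMap<String, CopyOnWriteArrayList<String>>()
    consumersMap['mytopic:0'] = new CopyOnWriteArrayList<String>(['group-a'])

    // Safe even if a ZK childEvent adds a consumer mid-iteration: the iterator
    // works over a snapshot, so no ConcurrentModificationException
    consumersMap['mytopic:0'].each { consumer ->
        consumersMap['mytopic:0'] << 'group-b'
    }
    assert consumersMap['mytopic:0'].size() == 2
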
R. Tyler Croy 4f16c3dfd1 Avoid NPEs when starting up and scanning brokers
The problem was that we were trying to parse the JSON of /brokers/ids when the
TreeCache found that node.

Fixes #8
2015-01-26 06:20:46 -08:00
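
A hedged sketch of the guard: skip nodes (such as the bare /brokers/ids parent) whose payload is empty instead of handing it to the JSON parser. The broker JSON shown is abbreviated and the helper name is illustrative.

    import groovy.json.JsonSlurper

    def parseBroker = { byte[] payload ->
        if ((payload == null) || (payload.length == 0)) {
            return null
        }
        new JsonSlurper().parse(payload)
    }

    assert parseBroker(null) == null
    assert parseBroker('{"host":"kafka1","port":9092}'.bytes).host == 'kafka1'
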
R. Tyler Croy a022252157 Add some simple command line overrides for ZK and statsd hosts
Fixes #5
2015-01-26 06:15:35 -08:00
R. Tyler Croy dd1b3322a4 Bump to java-statsd-client 3.1.1 which is built for JDK7 and 8 2015-01-26 05:15:43 -08:00
R. Tyler Croy d76a7d8e8f Silly mistakes happen when you rush 2015-01-26 04:29:31 -08:00
R. Tyler Croy c183bdc8a9 Temporarily add a personal fork of java-statsd-client with dogstatsd tagging support 2015-01-26 04:27:03 -08:00
R. Tyler Croy c15c25534c Remove unnecessary synchronized statement 2015-01-26 04:17:28 -08:00
R. Tyler Croy 33df1aa7dc Default the zookeeper hosts value to localhost 2015-01-26 04:11:27 -08:00
R. Tyler Croy 49c2d7fc1d Add logback as a dependency and gut the printlns from the codebase 2015-01-26 04:11:27 -08:00
R. Tyler Croy 6d82735b3c Ignore "owners" and other subtrees from the Kafka High Level Consumer ZK space
References #2
2015-01-26 04:06:27 -08:00
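
A small sketch of that filter against the standard high level consumer layout (/consumers/<group>/{offsets,owners,ids}/...); the helper name is illustrative:

    def isOffsetPath = { String path -> path.tokenize('/')[2] == 'offsets' }

    assert  isOffsetPath('/consumers/spock/offsets/mytopic/0')
    assert !isOffsetPath('/consumers/spock/owners/mytopic/0')
    assert !isOffsetPath('/consumers/spock/ids/spock_host-1')
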
R. Tyler Croy 699e141c22 Upgrade to Groovy 2.4.0 2015-01-26 02:40:53 -08:00
R. Tyler Croy 671d62075c Add commons-cli for the soon-to-be-added CLI options! 2015-01-20 14:18:09 -08:00
R. Tyler Croy ba2fa562ac Stupidly and blindly fire a bunch of crap into statsd on localhost
This is not the final product by a long shot, just needed to figure out how
things would look inside datadog
2015-01-20 13:23:01 -08:00
R. Tyler Croy 5fe4d8efaa Hack together the actual delta reporting, UNCLEAN CODE ALERT
Still very much in the experimental phase, need to refactor KafkaPoller a lot
at this point and decide which code should live where and give it plenty of
tests
2015-01-19 16:22:22 -08:00
R. Tyler Croy 193b147064 Exploratory testing, a veritable boatload of garbage and hacks
This will at least create some stupid consumers and try to fetch the latest
offsets for a bunch of stupid topics.

The Kafka/Scala internals are so immensely frustrating
2015-01-19 15:55:47 -08:00
R. Tyler Croy 293ebb6fa9 Prefer callback lists instead of a single callback per object
This doesn't really matter but I prefer this approach from a style vantage
point
2015-01-19 13:15:00 -08:00
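
A tiny sketch of the style difference: a list of onDelta callbacks rather than a single nullable callback field. The class and field names are illustrative.

    class DeltaEmitter {
        List<Closure> onDelta = []

        void emit(Long delta) {
            onDelta.each { callback -> callback.call(delta) }
        }
    }

    def emitter = new DeltaEmitter()
    emitter.onDelta << { Long d -> println "statsd gauge: ${d}" }
    emitter.onDelta << { Long d -> println "log line: ${d}" }
    emitter.emit(0L)    // every registered callback fires, even for zero deltas
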
R. Tyler Croy 8b792d72fd Properly use the passed in AbstractMap for the StandardTreeWatcher 2015-01-19 13:08:01 -08:00
R. Tyler Croy eae5e7436e Add build badge 2015-01-19 11:54:28 -08:00
R. Tyler Croy 34d8923e7c Add support for TravisCI 2015-01-19 11:46:57 -08:00
R. Tyler Croy 7e593f1235 Introducing the BrokerTreeWatcher to track changes to the broker list in Zookeeper
This commit includes a lot of work-in-progress code. Still experimenting with
how to bind the events from the Zookeeper event-driven system into the
KafkaPoller busy-wait-loop system.
2015-01-19 11:45:22 -08:00
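
A hedged Curator sketch of the event-driven half; the connection string, retry policy, and listener body are illustrative, not the actual BrokerTreeWatcher internals:

    import org.apache.curator.framework.CuratorFrameworkFactory
    import org.apache.curator.framework.recipes.cache.TreeCache
    import org.apache.curator.framework.recipes.cache.TreeCacheListener
    import org.apache.curator.retry.ExponentialBackoffRetry

    def client = CuratorFrameworkFactory.newClient('localhost:2181',
                                                   new ExponentialBackoffRetry(1000, 3))
    client.start()

    def cache = new TreeCache(client, '/brokers/ids')
    cache.listenable.addListener({ curator, event ->
        // Event-driven side: refresh the broker list that the KafkaPoller's
        // busy-wait loop reads on its next pass
        println "broker tree event: ${event.type} ${event.data?.path}"
    } as TreeCacheListener)
    cache.start()
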
R. Tyler Croy c53413d5b9 Introduce TopicPartition for acting as a Hash key for (topic, partition_id) tuples
This means the consumersMap that we're going to keep track of will have the key
of a TopicPartition, e.g.: ConcurrentHashMap<TopicPartition, List<ConsumerOffset>>

When we receive the data from the Kafka meta-data calls (to be added) all we'll
need to do is create the right TopicPartition and walk the list of
ConsumerOffset instances to start reporting metrics
2015-01-19 10:12:41 -08:00
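
A compact sketch of why the hashable key matters; Groovy's @EqualsAndHashCode supplies the equals()/hashCode() pair a map key needs, and the List<String> values here stand in for List<ConsumerOffset>:

    import groovy.transform.EqualsAndHashCode
    import java.util.concurrent.ConcurrentHashMap

    @EqualsAndHashCode
    class TopicPartition {
        String topic
        Integer partition
    }

    def consumersMap = new ConcurrentHashMap<TopicPartition, List<String>>()
    consumersMap[new TopicPartition(topic: 'mytopic', partition: 0)] = ['group-a']

    // A freshly constructed, equal key finds the same entry when the Kafka
    // metadata calls hand us a (topic, partition) pair to look up
    assert consumersMap[new TopicPartition(topic: 'mytopic', partition: 0)] == ['group-a']
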