Beetle


Beetle is a somewhat higher-level Java API on top of the client libraries distributed with Apache Kafka. The goal of this library is not to replace those libraries, but to wrap them in an easier-to-use package.

System Requirements

  • JDK7 or later

Hacking

This project uses Gradle, so building and testing should be as easy as executing:

% ./gradlew

Design/Notes/Thoughts

Note
Right now this section is very much just a brain-dump/work-in-progress

What is fundamentally missing from the upstream Kafka clients is an evented view of the world. Although Zookeeper's and Kafka's models are effectively event-driven, a consumer built on the lower-level SimpleConsumer relies on busy-loops and rather disjointed logic for reconnects and error handling. The higher-level consumer API is also awkward to use, both for receiving messages (via an iterator) and for handling parallel operations (stuffing a thread pool somewhere for receiving).
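
For contrast, consuming a single topic with the 0.8-era high-level consumer looks roughly like the sketch below; the classes come from kafka.consumer and kafka.javaapi.consumer, and the configuration values are only illustrative.

HighLevelConsumerExample.java
/*
 * Rough sketch of the iterator-based, thread-pool-oriented style the
 * high-level consumer forces on callers. Configuration values are
 * illustrative only.
 */
Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181");
props.put("group.id", "beetle-example");

ConsumerConnector connector =
    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

/* Ask for a single stream for "some-topic" */
Map<String, List<KafkaStream<byte[], byte[]>>> streams =
    connector.createMessageStreams(Collections.singletonMap("some-topic", 1));

/* Receiving blocks on an iterator; parallelism means handing streams to a thread pool */
ConsumerIterator<byte[], byte[]> it = streams.get("some-topic").get(0).iterator();
while (it.hasNext()) {
    doSomethingWithMessage(it.next());
}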

A high-level Kafka consumer API maps rather nicely to the RxJava usage model of Observers and Subscribers, e.g.:

Consumer.java
/*
 * Prototype code showing how a typical end-user might use Beetle
 */

Brokers.fromZookeeper("localhost:2181")
    /* assuming a custom subscribe() operator exists in Beetle */
    .subscribe("some-topic")
    /* assuming a custom consume() operator exists in Beetle */
    .consume(message -> doSomethingWithMessage(message))
    .map(message -> message.commitOffset());
ZookeeperlessConsumer.java
/*
 * Assuming we already know which broker we want to talk to and
 * we don't care at all about leader changes or committing offsets
 */
long startOffset = 0;

Brokers.just("localhost:6667")
    .subscribe("some-topic", startOffset)
    .consume(m -> doSomethingWithMessage(m));
LowLevelBeetle.java
/*
 * The following is a prototype of some ideas for how the above examples
 * might be implemented internally
 */
CuratorFramework cf = CuratorFrameworkFactory.newClient("localhost:2181",
        new ExponentialBackoffRetry(1000, 3));
cf.start();

BrokersObserver.observe(cf)
    .subscribe(brokers -> TopicsObserver.observe(cf, brokers))
    .subscribe(partitions -> ConsumerObserver.observe(cf, partitions))
    .map(message -> doCustomBehaviorWith(message))
    .map(message -> message.commitOffsetTo(cf));
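
One way a helper like BrokersObserver.observe() might be put together (purely a sketch under the assumptions above, not Beetle's actual implementation) is to watch /brokers/ids with Curator's PathChildrenCache and push each ChildData payload into an RxJava PublishSubject:

BrokersObserverSketch.java
/*
 * Illustrative sketch: turn Zookeeper broker registrations into an
 * Observable<ChildData> using Curator's PathChildrenCache. The class and
 * method names mirror the prototype above but are assumptions, not real API.
 */
public static Observable<ChildData> observe(CuratorFramework client) throws Exception {
    final PublishSubject<ChildData> brokers = PublishSubject.create();

    /* Cache the children of /brokers/ids and keep each child's data in memory */
    PathChildrenCache cache = new PathChildrenCache(client, "/brokers/ids", true);
    cache.getListenable().addListener((curator, event) -> {
        /* Every newly registered broker becomes an item in the stream */
        if (event.getType() == PathChildrenCacheEvent.Type.CHILD_ADDED) {
            brokers.onNext(event.getData());
        }
    });
    cache.start();

    return brokers;
}

Downstream pieces such as TopicsObserver and ConsumerObserver could then compose on that Observable just as the prototype above does.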

Similar Projects

  1. kafka-rx: Scala-based client which provides a push alternative to Kafka's pull-based stream
