For now this at least creates some dumb consumers and tries to fetch the latest
offsets for a handful of topics.
The Kafka/Scala internals are so immensely frustrating
This means the consumersMap we're going to maintain will be keyed by
TopicPartition, e.g.: ConcurrentHashMap<TopicPartition, List<ConsumerOffset>>
When we receive the data from the Kafka metadata calls (to be added), all we'll
need to do is build the matching TopicPartition and walk its list of
ConsumerOffset instances to start reporting metrics.
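A minimal sketch of that lookup-and-walk flow. Note the `TopicPartition` and `ConsumerOffset` records below are local stand-ins, not the real Kafka client classes, and the group/topic names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-ins for the real types; org.apache.kafka.common.TopicPartition and
// this codebase's ConsumerOffset would be used in practice.
record TopicPartition(String topic, int partition) {}
record ConsumerOffset(String groupId, long offset) {}

public class ConsumersMapSketch {
    // consumersMap keyed by TopicPartition, as described above.
    static final Map<TopicPartition, List<ConsumerOffset>> consumersMap =
            new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // Pretend a consumer for a hypothetical "billing-group" was discovered.
        TopicPartition tp = new TopicPartition("orders", 0);
        consumersMap.computeIfAbsent(tp, k -> new ArrayList<>())
                    .add(new ConsumerOffset("billing-group", 42L));

        // When metadata arrives: build the TopicPartition key, then walk the
        // list of ConsumerOffset instances and report each one.
        for (ConsumerOffset co : consumersMap.getOrDefault(tp, List.of())) {
            System.out.println(co.groupId() + " @ " + co.offset());
        }
    }
}
```

`computeIfAbsent` keeps the per-partition list creation atomic on the ConcurrentHashMap, so concurrent metadata callbacks can append safely without extra locking around the map itself.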