Big Data, Small Cloud

I have decided to learn about data, Big Data, really Big Data in fact. It's going to be an adventure, and this is your invitation.

The pretext

I was at the GCP Next '16 conference in London towards the end of last year, watching Reza Rokni, a GCP Architect, import and process phenomenal amounts of data in next to no time, with hardly any effort. I listened to keynote talks from people like James Tromans, Global Head of Risk and Data at Citi FX Technology, about how much they process on GCP and how they can scale and react to change in an instant, not to mention the associated cost savings of not having to build and host that infrastructure themselves.

"Wunderbar" - I said to myself, as my brain started whirring away with potential applications.

Making it simple

I do love Google's model of abstracting complexity away behind commonly understood interfaces to make it more accessible. The general principle is that it isn't scalable, or even feasible, to have experts in many of these advanced technologies on all of your teams, let alone have them build and host the infrastructure. However, chances are that each of those teams does have uses for the technology, so make it available in a way they can easily consume.

Look at BigQuery and its SQL interface, Bigtable and its HBase-compatible API, and more recently their machine learning platform and associated APIs like Vision and Speech. Suddenly the graduate developer on your team can import terabytes of social media data, take a picture on their smartphone and analyse it with the Vision API, then correlate what's in that image with the social media data they're aggregating to find associated tweets. Try building that in Ruby...
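
As a rough illustration of how approachable that image-analysis step is, here's a minimal sketch using the Python client for the Cloud Vision API. The file name is made up, and it assumes you've installed google-cloud-vision and set up application credentials; it's a sketch, not the exact code from the talk.

    # A minimal sketch of label detection with the Cloud Vision API.
    # Assumes: pip install google-cloud-vision, and GOOGLE_APPLICATION_CREDENTIALS
    # pointing at a service account key. The image path is just an example.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("photo-from-my-phone.jpg", "rb") as image_file:
        image = vision.Image(content=image_file.read())

    # Ask Vision what it can see in the picture.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, label.score)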

The abstraction works well.

And this is where I'll hold my hands up: I'm a consumer of these services, not a builder. The effort of building a Solr cluster on top of an HBase cluster on top of an HDFS cluster, then learning all of their quirks, has definitely stopped me from doing it myself. And don't even get me started on rectified linear units, softmax classifiers and backprop neural networks to try and do some image recognition. I struggled to write that sentence, let alone build any of it.

So I do ask myself, why bother?

Well, as the use of these technologies becomes more commonplace, the capabilities they offer are going to be more in demand, even expected by clients. Therefore it feels inevitable that the day will come when I'm working with a client who demands a social media image correlation service, and who absolutely cannot use cloud services to store and aggregate their publicly collected data.

Anyway, if you've got this far you're no doubt starting to question the value of reading any further, and wondering where this so-called adventure I spoke of earlier actually starts.

Fear not, your commitment is about to pay off! The outcome of all this reflection is that I've decided I want to learn how the voodoo actually works. So I've set myself the task of building a distributed Big Data solution on my really small cloud (my MacBook), in Docker, naturally.

HDFS, HBase, Hue and NiFi etc

GitHub: Stono/bigdata-fun

In summary, the above repository contains the following Docker containers (so far):

Getting data in:

  • Apache NiFi
  • Apache Flume

Storing data:

  • HBase RegionServer
  • HBase Master
  • HBase ZooKeeper
  • HDFS NameNode
  • HDFS DataNode x2

Getting data out:

  • HBase REST
  • HBase Thrift
  • Hue

I plan to add Solr and Spark in the coming days too.
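
To give a flavour of the "getting data out" (and in) side once the containers are up, here's a rough sketch of talking to the HBase Thrift server from Python using the happybase library. The port, table name and column family are my assumptions for illustration, not anything prescribed by the repo.

    # A minimal sketch of writing to and reading from HBase via its Thrift
    # server, using happybase (pip install happybase). The port (9090),
    # table name and column family are assumptions for illustration.
    import happybase

    connection = happybase.Connection("localhost", port=9090)

    # Create a demo table with a single column family if it isn't there yet.
    if b"tweets" not in connection.tables():
        connection.create_table("tweets", {"data": dict()})

    table = connection.table("tweets")

    # Write a row, then read it straight back out.
    table.put(b"row-1", {b"data:text": b"hello from my really small cloud"})
    print(table.row(b"row-1"))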

Have a read of the README; in short, you can have your own Big Data solution on your own very little cloud simply by typing docker-compose up. Go ahead, join my adventure.