
Enterprise Data Workflows with Cascading, by Paco Nathan, O'Reilly Media

For people interested in developing Hadoop analytic applications, there is a plethora of options, ranging from writing low-level, hand-tuned Java MapReduce code to manipulating the data with a higher-level language such as Pig or Hive. Each option has pros and cons: with the former, the code becomes complex for anything other than the canonical word-count example; with the latter, to do anything meaningful you almost always end up augmenting the higher-level language with user-defined functions written in a different language to regain power and flexibility, which causes maintenance nightmares. A happy medium in between is to use one of the data-flow libraries for Hadoop, of which Cascading is one.

Since Cascading has been around for some time, the online documentation is relatively mature: it includes a gentle introduction to the library, with example source code, and a well-written user's guide. However, this does not obviate the need for a book that describes the library and walks the reader gently through its usage and subtleties. "Enterprise Data Workflows with Cascading" is such a book.

The book starts with a simple example of copying a file on Hadoop, introducing the concepts of taps for data sources and sinks, and of the pipes that connect them. It then graduates to the canonical word-count example, using it as a vehicle to explain flows and the operations that can be performed on them through functions and aggregators.
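
To give a flavor of the API, here is a minimal word-count sketch in the spirit of those early chapters (the field names, paths, and token-splitting regex are my own stand-ins, not the book's exact listing):

    import java.util.Properties;

    import cascading.flow.Flow;
    import cascading.flow.FlowDef;
    import cascading.flow.hadoop.HadoopFlowConnector;
    import cascading.operation.aggregator.Count;
    import cascading.operation.regex.RegexSplitGenerator;
    import cascading.pipe.Each;
    import cascading.pipe.Every;
    import cascading.pipe.GroupBy;
    import cascading.pipe.Pipe;
    import cascading.scheme.hadoop.TextDelimited;
    import cascading.tap.Tap;
    import cascading.tap.hadoop.Hfs;
    import cascading.tuple.Fields;

    public class WordCount {
      public static void main(String[] args) {
        // taps bind the flow to concrete data: a source of documents, a sink of counts
        Tap docTap = new Hfs(new TextDelimited(true, "\t"), args[0]);
        Tap wcTap  = new Hfs(new TextDelimited(true, "\t"), args[1]);

        // a function splits each "text" field into a stream of "token" tuples
        Pipe docPipe = new Each("wc", new Fields("text"),
            new RegexSplitGenerator(new Fields("token"), "[ \\[\\]\\(\\),.]"),
            Fields.RESULTS);

        // group by token, then run the Count aggregator over each group
        Pipe wcPipe = new GroupBy(docPipe, new Fields("token"));
        wcPipe = new Every(wcPipe, Fields.ALL, new Count(), Fields.ALL);

        // a flow wires sources, pipes, and sinks together, and plans the MapReduce jobs
        FlowDef flowDef = FlowDef.flowDef()
            .setName("wc")
            .addSource(docPipe, docTap)
            .addTailSink(wcPipe, wcTap);

        Flow wcFlow = new HadoopFlowConnector(new Properties()).connect(flowDef);
        wcFlow.complete();
      }
    }

Note how sources, sinks, and processing are wired together declaratively; the planner turns the assembly into one or more MapReduce jobs.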

Next come more complex tasks that require joins. The book starts with HashJoins, then progresses to LeftJoins and distributed joins. It then works through a meaty example, a text-analytics pipeline that computes term frequency/inverse document frequency (TF-IDF) over a text corpus, using it as a vehicle to walk through splits, merges, and more complex joins.
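
For a taste of the difference between the join flavors, a hypothetical sketch (the pipe and field names are mine): HashJoin replicates the small side into memory for a map-side join, while CoGroup shuffles both sides for a distributed, reduce-side join.

    import cascading.pipe.CoGroup;
    import cascading.pipe.HashJoin;
    import cascading.pipe.Pipe;
    import cascading.pipe.joiner.InnerJoin;
    import cascading.pipe.joiner.LeftJoin;
    import cascading.tuple.Fields;

    public class JoinSketch {
      // visits: (user_id, url), large; users: (user_id, name), small enough for memory
      public static Pipe mapSideJoin(Pipe visits, Pipe users) {
        // HashJoin keeps the right-hand side in memory; no reduce phase is planned
        return new HashJoin(
            visits, new Fields("user_id"),
            users,  new Fields("user_id"),
            new Fields("user_id", "url", "uid", "name"), // output fields must be unique
            new InnerJoin());
      }

      public static Pipe reduceSideJoin(Pipe visits, Pipe users) {
        // CoGroup shuffles both sides; LeftJoin keeps visits that match no user
        return new CoGroup(
            visits, new Fields("user_id"),
            users,  new Fields("user_id"),
            new Fields("user_id", "url", "uid", "name"),
            new LeftJoin());
      }
    }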

By then the reader has become familiar and comfortable with Cascading, and the author walks them through the benefits of developing applications in a data-flow language over the other options available to Hadoop developers. One benefit is the ability to test the code before deployment, and the author walks through an example of a TDD pipeline. Others include using a consistent pattern language to describe workflows, and having a single deployable JAR that can be used across dev, test, and production environments.
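
As an illustration of what built-in testability can look like, here is a hedged sketch of Cascading's stream assertions and failure traps (the pattern, trap path, and names are my assumptions, not the book's code):

    import cascading.flow.FlowDef;
    import cascading.operation.AssertionLevel;
    import cascading.operation.assertion.AssertMatches;
    import cascading.pipe.Each;
    import cascading.pipe.Pipe;
    import cascading.scheme.hadoop.TextDelimited;
    import cascading.tap.Tap;
    import cascading.tap.hadoop.Hfs;

    public class ChecksSketch {
      public static FlowDef withChecks(Pipe tokens, Tap source, Tap sink) {
        // strict stream assertion: planned into the flow, fails it on the first
        // tuple that does not match the pattern
        Pipe checked = new Each(tokens, AssertionLevel.STRICT,
            new AssertMatches("^\\w+$"));

        // failure trap: tuples that cause an operation to throw are written here
        // instead of failing the whole flow
        Tap trap = new Hfs(new TextDelimited(true, "\t"), "output/trap");

        return FlowDef.flowDef()
            .addSource(tokens, source)
            .addTailSink(checked, sink)
            .addTrap(checked, trap);
      }
    }

If I recall correctly, assertions can also be planned out at a chosen assertion level for production runs, so checks used during development need not cost anything once deployed.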

Toward the end, the author lists other language bindings for Cascading, such as Scalding (Scala) and Cascalog (Clojure). The later chapters contain good references for further reading on TDD, Scala, and Clojure. The book closes with an open-data use case.

Throughout the book, the author provides ample links to the source code and code gists on GitHub, as well as to alternate implementations in different languages.

I liked the style of the book: it is a gentle introduction to Cascading, interspersed with good advice on doing TDD for enterprise applications, the use of a pattern language for describing data flows, and an introduction to other language bindings for Cascading.
