
A paper a day keeps the doctor away: The 8 Requirements of Real-Time Stream Processing

In recent years there has been an explosion of data all around us. The data comes from a variety of sources, such as real-time financial systems, cell phone networks, sensor networks (RFID and IoT), and GPS. Commensurate with this dramatic increase in data is an unquenchable thirst for analysis and insight. The natural question arises: how do we build systems that process and make sense of this vast amount of data, in as close to real time as possible? What patterns of software and systems should we look at?

Michael Stonebraker, of database fame, and his co-authors offer some advice on what to consider in their paper "The 8 Requirements of Real-Time Stream Processing", published a decade ago. In it, the authors list eight guiding principles that high-volume, low-latency systems should follow to process vast amounts of data in near real time.

First, the system has to keep the data moving and do straight-through processing, with minimal to no writes to disk, to achieve the desired low latency. The authors compare passive (polling) systems with active (event-driven) systems and recommend the latter.
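
To make the contrast concrete, here is a minimal Python sketch, not from the paper, of the two styles; the store.fetch_next() API and the process() callback are hypothetical placeholders:

```python
import queue
import time

def process(event):
    """Placeholder for the application's per-event logic."""
    print(f"handled {event}")

# Passive (polling) style: the application repeatedly asks a store
# whether new data has arrived, adding latency on every idle poll.
def polling_loop(store, poll_interval_s=0.1):
    while True:
        event = store.fetch_next()        # hypothetical storage API
        if event is None:
            time.sleep(poll_interval_s)   # idle wait adds latency
        else:
            process(event)

# Active (event-driven) style: events are pushed to the application
# as they arrive and processed without being written to disk first.
def event_driven_loop(events: "queue.Queue"):
    while True:
        process(events.get())             # blocks until data arrives
```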

Second, the authors recommend supporting a high-level language, dubbed StreamSQL, with built-in extensible stream-oriented primitives and operators for processing the data, instead of writing custom code in languages such as C++ or Java.
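
As a rough illustration of what such a language buys you, the comment below shows an invented StreamSQL-like windowed aggregate (the syntax is my approximation, not quoted from the paper), followed by the equivalent logic hand-coded in Python:

```python
from collections import deque

# Approximate intent of a StreamSQL-style query such as:
#   SELECT symbol, AVG(price)
#   FROM Trades [SLIDING WINDOW 60 SECONDS]
#   GROUP BY symbol
# (syntax invented for illustration), written out by hand below.

def sliding_avg(trades, window_s=60):
    """Yield (symbol, running average price) over a sliding time window."""
    window = deque()                 # (ts, symbol, price) in arrival order
    sums, counts = {}, {}
    for ts, symbol, price in trades:
        window.append((ts, symbol, price))
        sums[symbol] = sums.get(symbol, 0.0) + price
        counts[symbol] = counts.get(symbol, 0) + 1
        # Evict tuples that have slid out of the window.
        while window and window[0][0] <= ts - window_s:
            old_ts, old_sym, old_price = window.popleft()
            sums[old_sym] -= old_price
            counts[old_sym] -= 1
        yield symbol, sums[symbol] / counts[symbol]
```

The point of the requirement is that the declarative three-line query replaces all of this bookkeeping.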

Third, the system has to handle stream imperfections such as delayed, missing, or out-of-order data, and time out potentially blocking operations to ensure system liveness.
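
One common way to cope with out-of-order arrivals, my illustration rather than the paper's prescription, is to buffer tuples briefly and release them once a watermark (the newest timestamp seen, minus an allowed delay) has passed them:

```python
import heapq
import itertools

def reorder(events, max_delay_s=5.0):
    """Buffer out-of-order tuples briefly, then emit them in timestamp order.

    A production operator would also flush on a wall-clock timeout so a
    stalled upstream source cannot block downstream work forever.
    """
    buffer = []                          # min-heap keyed on event timestamp
    tiebreak = itertools.count()         # avoids comparing payloads on ties
    watermark = float("-inf")
    for ts, payload in events:
        heapq.heappush(buffer, (ts, next(tiebreak), payload))
        watermark = max(watermark, ts - max_delay_s)
        while buffer and buffer[0][0] <= watermark:
            out_ts, _, out = heapq.heappop(buffer)
            yield out_ts, out
    while buffer:                        # end of stream: flush the remainder
        out_ts, _, out = heapq.heappop(buffer)
        yield out_ts, out
```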

Fourth, the system has to integrate stored and streaming data, so that it can reprocess historical data when necessary.
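
A minimal sketch of that integration, assuming the archived tuples and the live feed share one schema: replay the archive into the same operators that consume the live stream, so one query serves both the past and the present.

```python
import itertools

def replay_then_tail(history, live):
    """Chain stored history ahead of the live feed so the same query
    reprocesses the past and then keeps up with the present."""
    return itertools.chain(history, live)

# Hypothetical usage with the sliding_avg operator sketched above:
# for symbol, avg in sliding_avg(replay_then_tail(archive, feed)):
#     ...
```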

Fifth, the system has to generate predictable outcomes and repeatable results, for example when it reprocesses data during recovery or handles duplicate data.
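
One small ingredient of repeatability, sketched here under the assumption that each source tags its tuples with a monotonically increasing sequence number, is discarding replayed duplicates so a rerun over the same input yields the same output:

```python
def dedupe(events):
    """Drop duplicate tuples by per-source sequence number."""
    seen = {}                           # source id -> highest seq seen
    for source, seq, payload in events:
        if seq <= seen.get(source, -1):
            continue                    # duplicate from a replay; skip it
        seen[source] = seq
        yield source, seq, payload
```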

Sixth, the system has to guarantee data safety and availability, with uninterrupted fail-over from a primary to a backup system, à la "Tandem-style" computing.
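
A toy sketch of the hot-standby idea follows; the class and method names are mine, not the paper's. The backup mirrors the primary's checkpointed state and takes over the moment heartbeats stop, so processing continues without a gap:

```python
import time

class HotStandby:
    """Backup process that shadows a primary and takes over on failure."""

    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.state = {}                       # mirrored operator state
        self.last_heartbeat = time.monotonic()

    def on_checkpoint(self, key, value):
        """Apply a checkpoint from the primary; doubles as a heartbeat."""
        self.state[key] = value
        self.last_heartbeat = time.monotonic()

    def primary_alive(self):
        return time.monotonic() - self.last_heartbeat < self.timeout_s

    def maybe_take_over(self, resume):
        if not self.primary_alive():
            resume(self.state)                # continue from mirrored state
```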

Seventh, the system has to partition and scale applications automatically, across cores and across machines, to seamlessly handle any increase in load.
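
The usual building block for this is key-based partitioning, sketched below; the stable hash keeps all tuples for a key on one worker, so per-key state never has to move mid-stream (real systems also rebalance state when workers are added):

```python
import hashlib

def partition(key, n_workers):
    """Stable hash partitioning: a key always maps to the same worker."""
    digest = hashlib.md5(str(key).encode()).digest()   # stable across runs
    return int.from_bytes(digest[:4], "big") % n_workers

# Hypothetical routing of a tuple to one of several worker queues:
# queues[partition(symbol, len(queues))].put((ts, symbol, price))
```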

Finally, the system has to be fast: it must process and respond to streaming data instantaneously, which requires careful planning and coding to minimize boundary crossings and maximize the ratio of useful work to computational overhead.
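
One standard trick for improving that ratio, offered here as my own example rather than the paper's prescription, is batching, so fixed per-hop costs (queue operations, locks, system calls) are paid once per batch instead of once per tuple:

```python
def process_in_batches(events, handle_batch, batch_size=256):
    """Amortize per-tuple overhead by handing tuples downstream in batches."""
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) >= batch_size:
            handle_batch(batch)
            batch = []
    if batch:
        handle_batch(batch)             # flush the final partial batch
```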

The authors then examine common architectures that fulfill parts of the requirements listed above, including databases (DBMSs), rule engines built on condition/action pairs, and stream processing engines. They present, in tabular form, where each class of system excels and where it falls short. The table leans toward stream processing engines over DBMSs, which are not optimized for the task.

Despite being a decade old, the paper is still relevant and referenced in the modern literature. Moreover, it is well written and a pleasure to read.

