
A paper a day keeps the doctor away: The 8 Requirements of Real-Time Stream Processing

In recent years there has been an explosion of data all around us. The data comes in from a variety of sources, such as real-time financial systems, cell phone networks, sensor networks--RFID and IoT, and GPS. Commensurate with this dramatic increase in data is an unquenchable thirst for analysis and insight. The natural question arises: how do we build systems that process and make sense of this vast amount of data, in as close to real time as possible? What patterns of software and systems should we look at?

Michael Stonebraker of database fame and his coauthors offer some advice on what to consider in their paper "The 8 Requirements of Real-Time Stream Processing", published a decade ago. In the paper, the authors list eight guiding principles that high-volume, low-latency systems should follow to process vast amounts of data in near real time.

First, the system has to keep the data moving, doing straight-through processing with minimal to no writes to disk to achieve the desired low latency. The authors compare passive (polling) systems with active (event-driven) systems and recommend the latter.
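The contrast between the two styles can be sketched as follows; this is an illustrative example of an active, push-based pipeline, not code from the paper, and all names are made up:

```python
# Minimal sketch of an active (event-driven) pipeline: each record is pushed
# to registered handlers the moment it arrives, with no polling loop and no
# intermediate writes to disk. Illustrative only.

class EventDrivenPipeline:
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def on_record(self, record):
        # Straight-through processing: every handler runs immediately,
        # in memory, as the record arrives.
        for handler in self.handlers:
            handler(record)

alerts = []
pipeline = EventDrivenPipeline()
# Example handler: flag trades whose price crosses a threshold.
pipeline.subscribe(lambda r: alerts.append(r) if r["price"] > 100 else None)

pipeline.on_record({"symbol": "ABC", "price": 101})
pipeline.on_record({"symbol": "XYZ", "price": 99})
# alerts now holds only the first record
```

A passive system would instead sleep, wake up, and poll a store for new rows, paying both the polling latency and the storage round trip on every record.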

Second, the authors recommend supporting a high-level language--dubbed StreamSQL--with built-in, extensible, stream-oriented primitives and operators for processing the data, instead of writing custom code in languages such as C++ or Java.

Third, the system has to handle stream imperfections such as delayed, missing, or out-of-order data, and use timeouts on potentially blocking operations to ensure system liveness.
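One common way to realize this requirement is a reorder buffer with a timeout: late records are sorted back into timestamp order, and once the timeout passes the buffer flushes rather than blocking forever on a record that may never arrive. The sketch below is my own illustration, not a mechanism described in the paper:

```python
# Sketch of a timeout-based reorder buffer: out-of-order records are held
# briefly and emitted in timestamp order; anything still missing after the
# timeout is simply skipped, keeping the system live. Illustrative only.
import heapq

class ReorderBuffer:
    def __init__(self, max_delay):
        self.max_delay = max_delay  # how long to wait for stragglers
        self.heap = []              # min-heap ordered by event timestamp

    def insert(self, timestamp, record):
        heapq.heappush(self.heap, (timestamp, record))

    def flush(self, now):
        # Emit, in timestamp order, everything older than the timeout.
        out = []
        while self.heap and self.heap[0][0] <= now - self.max_delay:
            out.append(heapq.heappop(self.heap)[1])
        return out

buf = ReorderBuffer(max_delay=5)
buf.insert(3, "late")
buf.insert(1, "later still")
buf.insert(9, "recent")
flushed = buf.flush(now=10)  # only records at or before t=5 are released, oldest first
```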

Fourth, the system has to integrate stored and streaming data, to be able to reprocess data when necessary.
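In practice this integration means one code path that works over both a replayed historical log and a live stream, so reprocessing is just replay. A minimal sketch, with illustrative names of my own choosing:

```python
# Sketch: the same processing function is applied uniformly to stored
# (replayed) and live data, so historical reprocessing and real-time
# processing share one code path. Illustrative only.
def process(record, state):
    state[record["key"]] = state.get(record["key"], 0) + record["value"]
    return state

stored_log = [{"key": "a", "value": 1}, {"key": "a", "value": 2}]  # read from disk
live_stream = [{"key": "a", "value": 3}]                           # arriving now

state = {}
for record in stored_log:   # replay history first...
    state = process(record, state)
for record in live_stream:  # ...then continue seamlessly with live data
    state = process(record, state)
# state: {"a": 6}
```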

Fifth, the system has to generate predictable outcomes and repeatable results--for example, when it reprocesses data during recovery or has to handle duplicate data.
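One way to get repeatable results is to process records in a deterministic order (by event time, with a tie-breaker) and deduplicate by record id, so replaying the same input in any arrival order yields the same output. This sketch is my own illustration of the principle:

```python
# Sketch of deterministic, duplicate-safe processing: sort by event time
# (id as tie-breaker) so arrival order cannot change the result, and skip
# records already seen. Illustrative only.
def deterministic_process(records):
    seen = set()
    output = []
    for rec in sorted(records, key=lambda r: (r["ts"], r["id"])):
        if rec["id"] in seen:
            continue  # duplicate delivery: drop it
        seen.add(rec["id"])
        output.append(rec["id"])
    return output

batch = [{"id": 2, "ts": 5}, {"id": 1, "ts": 3}, {"id": 2, "ts": 5}]
run1 = deterministic_process(batch)
run2 = deterministic_process(list(reversed(batch)))  # replayed in another order
# run1 == run2 == [1, 2]: same result despite reordering and a duplicate
```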

Sixth, the system has to guarantee data safety and availability, with uninterrupted fail-over between primary and backup systems, à la "Tandem-style" computing.

Seventh, the system has to partition and scale applications automatically, across cores and across machines, to handle any increase in load seamlessly.
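The usual building block for this is key-based partitioning: route each record to a worker by a stable hash of its key, so adding workers redistributes load without changing application code. A sketch under those assumptions (details are mine, not the paper's):

```python
# Sketch of automatic key-based partitioning: records are routed to workers
# by a stable hash of their key, so all records for a key land on the same
# worker and the worker count can change independently of application code.
import zlib

def partition_for(key, num_workers):
    # crc32 gives a hash that is stable across processes and runs
    # (unlike Python's built-in hash(), which is randomized).
    return zlib.crc32(key.encode()) % num_workers

def route(records, num_workers):
    partitions = [[] for _ in range(num_workers)]
    for rec in records:
        partitions[partition_for(rec["key"], num_workers)].append(rec)
    return partitions

records = [{"key": k} for k in ["a", "b", "a", "c"]]
parts = route(records, num_workers=2)
# Both "a" records land in the same partition.
```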

Finally, the system has to be fast: it must process and respond to streaming data instantaneously, which requires careful planning and coding to minimize boundary crossings and maximize the ratio of useful work to computational overhead.

The authors then examine common architectures that fulfill parts of the requirements listed above, including database management systems (DBMSs), rule engines built on condition/action pairs, and stream processing engines. They present in tabular form where each class of system excels and where it falls short. The table leans toward using stream processing engines over DBMSs, which are not optimized for the task.

Despite being a decade old, the paper is still relevant, and referenced in the modern literature. Moreover, it is well written and a pleasure to read.

