PiQL tech talk

Big Data has been gaining a lot of press lately, and NoSQL even more. A lot of new software development is moving to NoSQL databases to alleviate the scaling pains that traditional RDBMS systems hit when the data grows very large. NoSQL databases usually expose a simple key-value API that lets you set, retrieve, and scan values by key. That API is sufficient for most applications, but sometimes you want more than simple retrieval; you want SQL, where you can join tables and apply more complex filtering rules.
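
To make that concrete, here is a minimal sketch of the kind of API such stores expose. The names and the in-memory backing are my own illustration, not any particular store's client library:

```scala
import scala.collection.immutable.TreeMap

// A minimal sketch of the kind of API a NoSQL key-value store exposes.
// The method names are illustrative, not a real store's client API.
trait KeyValueStore {
  def get(key: String): Option[String]
  def put(key: String, value: String): Unit
  // Range scan in key order, returning at most `limit` entries.
  def scan(startKey: String, endKey: String, limit: Int): Seq[(String, String)]
}

// In-memory stand-in backed by an ordered map, so range scans are cheap.
class InMemoryStore extends KeyValueStore {
  private var data = TreeMap.empty[String, String]
  def get(key: String): Option[String] = data.get(key)
  def put(key: String, value: String): Unit = { data += (key -> value) }
  def scan(startKey: String, endKey: String, limit: Int): Seq[(String, String)] =
    data.range(startKey, endKey).take(limit).toSeq
}
```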

This is where PiQL comes in. Last week I attended a talk about PiQL by Michael Armbrust. In the talk, Michael introduced PiQL: a query system that runs on top of NoSQL key-value stores and allows developers to write SQL queries that execute efficiently against the simple key-value retrieval API.

The system operates in two stages: a static analysis stage and an execution stage. In the analysis stage, the optimizer analyzes all of the application's SQL queries offline, and the analysis produces one of two results. If a query can't be executed efficiently, the system warns the developer and requires their explicit approval before it will serve that query's results.

For SQL queries that do return bounded results, such as select ... limit 100 style queries, the system creates views and indexes in the key-value store and translates each SQL query into an efficient execution plan over those views and indexes.
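
As a hedged illustration of what such a plan might look like, consider a hypothetical comments table and the bounded query select * from comments where thread = ? order by ts limit 100. An index keyed on (thread, ts) turns the query into a single bounded range scan, reusing the KeyValueStore sketch from above:

```scala
// Hypothetical index layout: one entry per comment, keyed so that a
// thread's entries sort together by timestamp, e.g. "idx/<thread>/<ts>".
def latestComments(store: KeyValueStore, thread: String): Seq[(String, String)] = {
  val prefix = s"idx/$thread/"
  // One bounded range scan over the thread's key prefix serves the query;
  // the limit guarantees the plan touches at most 100 entries.
  store.scan(prefix, prefix + "\uffff", limit = 100)
}
```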

At run time, the system uses the previously created views and indexes, along with the execution plan, to translate SQL queries into simple key-value retrieval calls and serve the results. Michael touched on the transformation patterns for different SQL query types, including joins, and how PiQL creates views and indexes to serve them.
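
I don't know the exact plans PiQL produces, but a join over a bounded intermediate result can lower to key-value calls along these lines: scan the outer index, then issue one get per matching row against the inner relation, i.e., an index nested-loop join. This builds on the sketches above, and everything here is illustrative:

```scala
// Illustrative index nested-loop join: scan the (bounded) outer index,
// then one get() per row against the inner relation's primary key.
// For simplicity, pretend each index entry's value is the author's user id.
def commentsWithAuthors(store: KeyValueStore,
                        thread: String): Seq[(String, Option[String])] =
  latestComments(store, thread).map { case (indexKey, userId) =>
    // At most 100 outer rows means at most 100 point lookups, so the
    // plan's cost is bounded before the query ever runs.
    (indexKey, store.get(s"users/$userId"))
  }
```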

PiQL works well for read-intensive systems. The issue with updates is that, in addition to updating the data, the system also needs to update the views and indexes, and that can take a while. When I asked Michael at the end of the talk, he mentioned that updates typically take 150ms, but for some queries where the created views and indexes form a cross product of two keys, the updates take longer. If a read comes in during the update window, it might miss the updated data. The updates are still bounded, though, because the key spaces are bounded, which is a prerequisite for efficient execution in PiQL.
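
The write amplification is easy to see in a sketch: a single logical insert has to touch the base record plus every derived index entry, and a reader that arrives between those puts can see the base record without the index entry, or vice versa. Again, the key layout here is my own illustration, not PiQL's actual scheme:

```scala
// One logical insert fans out into several puts: the base record plus one
// put per derived index entry.
def insertComment(store: KeyValueStore, thread: String, ts: String,
                  commentId: String, userId: String): Unit = {
  store.put(s"comments/$commentId", userId)  // base record
  store.put(s"idx/$thread/$ts", userId)      // derived index entry
  // Each extra view or index adds another put here, which is why writes
  // take longer than reads, and why a read that lands between these puts
  // can miss the newly written data.
}
```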

You can read more about PiQL and Michael's research at: http://www.cs.berkeley.edu/~marmbrus/
