
A paper a day keeps the doctor away: FIT: A Distributed Database Performance Tradeoff

In distributed systems, the CAP theorem provides a framework for thinking about the consistency, availability, and partition tolerance guarantees a system can provide. In their paper "FIT, a distributed database performance tradeoff", Faleiro and Abadi present a similar framework for thinking about distributed database performance.

The authors start with some intuition about distributed transactions: transactions that operate on data spread across multiple nodes of a distributed system. For a distributed transaction to guarantee atomicity, the participating nodes must coordinate, and that coordination presents system designers with a tradeoff between throughput and strong isolation: guaranteeing strong isolation hurts throughput, while increasing throughput requires allowing transactions to execute concurrently even in the presence of conflicts.
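
To see where the coordination cost comes from, here is a bare-bones two-phase-commit-style sketch in Python with hypothetical names (the paper does not prescribe a specific protocol); the point is only that a transaction cannot commit until it has heard from every participating node:

    def commit_distributed(txn_id, participants):
        """Illustrative prepare/commit coordination sketch (not the paper's protocol)."""
        # Phase 1: every participating node must promise it can commit.
        if not all(p.prepare(txn_id) for p in participants):
            for p in participants:
                p.abort(txn_id)
            return False
        # Phase 2: only after hearing from every node does the transaction commit.
        # Under strong isolation, conflicting transactions are stalled for the
        # duration of these round trips, which is what depresses throughput.
        for p in participants:
            p.commit(txn_id)
        return True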

The authors introduce a third variable, fairness, that interacts with the tradeoff between strong isolation and throughput. The idea is that when the system is given license to selectively prioritize or delay transactions, it can improve throughput while still guaranteeing strong isolation. Instead of a two-way tradeoff between strong isolation and throughput, the authors present a three-way tradeoff between fairness, isolation, and throughput "FIT", and postulate that a system that forgoes one of the three can guarantee the other two.

The authors provide some examples of fairness at play, such as "group commit" for in-memory databases, where the cost of executing a transaction is small, but the cost of writing log records to durable storage is high and limits throughput. In "group commit", the database accumulates log records from multiple transactions and writes them to disk in one batch, working around the disk-write bottleneck and increasing system throughput at the cost of decreased fairness, since transactions cannot commit until their buffered log records are flushed to disk.
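
A minimal sketch of the group-commit idea, in Python with hypothetical names (not the mechanism of any particular database): transactions hand their log records to a shared buffer and block until the batch they belong to is made durable with a single write.

    import os
    import threading

    class GroupCommitLog:
        """Buffers log records from many transactions; flushes them in one batch."""

        def __init__(self, path, batch_size=64):
            self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
            self.batch_size = batch_size
            self.lock = threading.Lock()
            self.pending = []  # (record, event) pairs awaiting the next flush

        def commit(self, record: bytes):
            """Called by a transaction; blocks until its record is durable."""
            done = threading.Event()
            with self.lock:
                self.pending.append((record, done))
                if len(self.pending) >= self.batch_size:
                    self._flush_locked()
            done.wait()  # unfair: the transaction waits for the whole batch

        def _flush_locked(self):
            batch, self.pending = self.pending, []
            os.write(self.fd, b"".join(rec for rec, _ in batch))
            os.fsync(self.fd)  # one durable write amortized over the batch
            for _, done in batch:
                done.set()  # every transaction in the batch may now commit

A real implementation would also flush on a timer so a lone transaction is not stalled indefinitely; the sketch only shows how batching trades fairness for throughput.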

Another example the authors provide is "lazy evaluation", where transactions are deferred so that data-dependent transactions execute together, amortizing the cost of bringing the affected data into the processor cache and main memory across those transactions, which improves throughput but decreases fairness.
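
Here is a small sketch of the lazy-evaluation idea, again in Python with hypothetical names (the paper describes the technique abstractly): submitted transactions are parked by the data they touch, and each group is later executed back to back so the data is brought in once.

    from collections import defaultdict

    class LazyScheduler:
        """Defers transactions and runs those touching the same data together."""

        def __init__(self):
            self.deferred = defaultdict(list)  # key -> deferred transactions

        def submit(self, key, txn):
            # Executing immediately would be fair but cache-unfriendly; instead,
            # park the transaction with the others that touch the same key.
            self.deferred[key].append(txn)

        def run(self, store):
            for key, txns in self.deferred.items():
                value = store[key]       # the data is fetched once per group...
                for txn in txns:
                    value = txn(value)   # ...and its cost amortized over the group
                store[key] = value
            self.deferred.clear()

Early transactions in a group wait until the scheduler runs, which is exactly the loss of fairness the authors describe.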

The authors categorize systems according to the interplay between fairness, isolation, and throughput, and present three classes of systems, with practical examples of each class:
  • Ones that guarantee strong isolation and fairness at the expense of throughput
    • Spanner--Google's geo-scale distributed database

  • Ones that guarantee strong isolation and good throughput at the expense of fairness
    • G-Store--a key value store with support for multi-key transactions
    • Calvin--a database system designed to reduce the impact of coordination in distributed transactions by imposing a total order on transactions

  • Ones that guarantee good throughput and fairness at the expense of strong isolation
    • Eventually consistent systems--Cassandra for example
    • RAMP systems--read atomic multi-partition transactions

The authors close by pointing out that the FIT tradeoff also applies to multi-core database systems such as Silo--a main memory database system designed to reduce contention on shared memory, and Doppel--a main memory database system that exploits commutativity to increase concurrency.

