Tuesday, September 27, 2016

A paper a day keeps the doctor away: Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services

Sixteen years ago, Eric Brewer introduced what is now known as the CAP theorem, which states that a web service cannot simultaneously guarantee consistency, availability, and partition tolerance. The conjecture was based on Brewer's experiences at Inktomi, a search engine company he cofounded, and was published without proof. Gilbert and Lynch presented one in their paper "Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services."

The paper is a good theoretical read, and the proofs the authors present are very tractable. They begin by formalizing the concepts of consistency (the authors use the term "atomic" in the paper), availability, and partition tolerance. For a consistent service, there is a total order on all operations such that each operation looks as if it were completed at a single instant. For availability, every request received by a non-failing node in the system must result in a response. Finally, for partition tolerance, the network is allowed to lose arbitrarily many messages sent from one node to another.

The authors use these definitions to present their first impossibility result:

"It is impossible in the asynchronous network model to implement a read/write data object that guarantees the following properties:
  • Availability 
  • Atomic consistency
in all fair executions (including those in which messages are lost)."

They prove the assertion by contradiction. The proof uses two nodes/partitions in the system, $A$ and $B$, with all messages between $A$ and $B$ lost, and two operations $\alpha$ and $\beta$ that execute separately on $A$ and $B$, ordered such that $\beta$ begins after $\alpha$ has ended. $\alpha$ executes a write on partition $A$, and $\beta$ executes a read from partition $B$. Since no messages get through, $B$ cannot learn of the write on $A$, yet availability forces it to respond. Each operation on its own returns consistent results, but combined into a single execution $\alpha+\beta$, the read returns stale data, contradicting atomic consistency and proving the theorem.
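To make the argument concrete, here is a minimal sketch in Python (my own illustration, not code from the paper) of the execution it describes: a write at $A$, a message lost to the partition, and a stale but available read at $B$. The `Node` class and the message model are assumptions of the toy simulation.

```python
class Node:
    """Toy replica holding a local copy of a shared register."""
    def __init__(self, name, initial=0):
        self.name = name
        self.value = initial

    def receive(self, value):
        self.value = value            # would sync state, but is never reached

def send(src, dst, value, network_up):
    """Deliver src's update to dst unless the partition drops it."""
    if network_up:
        dst.receive(value)

a, b = Node("A"), Node("B")

# alpha: a client writes 1 at node A; the update to B is lost.
a.value = 1
send(a, b, a.value, network_up=False)    # message lost in the partition

# beta: after alpha completes, a client reads at node B.
stale_read = b.value                      # B must answer (availability)

# Both nodes responded, but the read missed the write:
assert a.value == 1 and stale_read == 0   # atomic consistency is violated
```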

The authors extend the result through a similar argument to all executions, since nodes $A$ and $B$ cannot tell whether the messages between them are lost in an asynchronous network (which has no concept of clocks or time). The authors also provide example systems for asynchronous networks that provide two of the three guarantees (C, A, and P).

For partially synchronous systems, where every node has a clock and all clocks increase at the same rate but are not synchronized with each other, the authors present another impossibility result:

"It is impossible in the partially synchronous network model to implement a read/write data object that guarantees the following properties:
  • Availability
  • Atomic consistency
in all executions (even those in which messages are lost)."

The proof is similar to the original impossibility result, with execution $\beta$ delayed long enough that the lost messages from $A$ could no longer arrive at partition $B$, which must nevertheless answer the read.
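Again as my own toy illustration (the timeout value and helper are assumptions, not the paper's construction), the sketch below shows why delaying $\beta$ does not help: once $B$'s local clock runs out, availability forces it to answer, and with the update lost, the answer is stale no matter how long it waited.

```python
import time

class TimedNode:
    """Toy replica with a local clock, as in the partially synchronous model."""
    def __init__(self, initial=0):
        self.value = initial
        self.inbox = []                    # updates would be delivered here

    def read_after_wait(self, max_wait_s):
        """Wait for an update, then answer regardless (availability)."""
        deadline = time.monotonic() + max_wait_s
        while time.monotonic() < deadline:
            if self.inbox:                 # never true: the message was lost
                self.value = self.inbox.pop()
            time.sleep(0.01)
        return self.value                  # must respond without the update

b = TimedNode(initial=0)
# A's write of 1 happened earlier, but its message to B was dropped, so
# nothing ever arrives in b.inbox no matter how long beta is delayed.
assert b.read_after_wait(max_wait_s=0.1) == 0    # still the stale value
```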

The authors close by providing a weaker consistency condition that allows stale data to be returned during partitions through the use of a centralized node, and by formalizing the requirements this condition places on the quality of the stale data returned.



Monday, September 19, 2016

A paper a day keeps the doctor away: Medians and Beyond: New Aggregation Techniques for Sensor Networks

The Internet of Things has spurred renewed interest in sensors and sensor telemetry. For each IoT application, modern sensors, which combine sensing, computation, and communication, collect data about their environment and relay it back to a "base station". The base station aggregates the data from all the sensors to make sense of the telemetry.

Since most sensors are deployed in settings with severe constraints on power consumption and battery life, it is usually too expensive to send all the raw telemetry data to the base station without aggregation or sampling at each sensor. For certain classes of metrics, such as counts, sums, and averages, the base station can combine the aggregates from each sensor and compute meaningful overall statistics. However, for aggregates such as medians, modes, and percentiles, the base station's job becomes harder. The authors of the paper "Medians and Beyond: New Aggregation Techniques for Sensor Networks" present an algorithm called "Q-digest" that helps the base station calculate statistics such as medians, modes, histograms, quantiles, and range queries from aggregated sensor data.

The authors start by highlighting the need for sensors to send aggregated data, citing that transmitting one bit over the radio is more expensive than executing one instruction on the sensor. They also note that, from a user's perspective, individual sensor values are in most cases unimportant, and that users are content with aggregate functions such as min/max, count, sum, average, and percentiles. They review projects such as TinyDB and Cougar, and extend that work to enable computing percentiles through the Q-digest algorithm.

The authors first describe the Q-digest using one sensor, and then explain how to combine Q-digests from multiple sensors to answer percentile and other queries. For one sensor, the authors assume that the sensor captures integers in the range $[1, \sigma]$.

The authors define a binary partition tree over the range, with $\sigma$ leaf nodes representing the values in the range and their frequencies in the sensor data stream. The binary tree has height $\log \sigma$, and each node in the tree represents a bucket whose range defines the position and width of the bucket. For example, the root node of the tree covers the full range $[1,\sigma]$, its two children cover the ranges $[1,\sigma/2]$ and $[\sigma/2+1,\sigma]$ respectively, and the leaf nodes hold the individual values in the stream. The Q-digest is a special subset of that conceptual tree that encodes the distribution of sensor values in a compressed format, and satisfies the following properties:

Given a compression parameter $k$ and a data stream of size $n$, a tree node $v$ is in the digest if:
$count(v) \le \lfloor n/k \rfloor$ and $count(v) + count(v_p) + count(v_s) > \lfloor n/k \rfloor$, where $v_p$ is the parent of $v$ and $v_s$ is its sibling. Leaf nodes are exempt from the first property (a single frequent value may legitimately have a high count), and the root is exempt from the second (it has no parent or sibling).

Intuitively, the first property states that no node except a leaf should have a high count, and the second states that a node, its parent, and its sibling should not all have low counts together. The two properties are what compress the tree into the Q-digest, and they yield the error bounds.
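To make the bucket layout concrete, here is a small Python sketch under an assumed heap-style numbering of the tree nodes (the paper does not prescribe a particular encoding, so `bucket_range` and the numbering are my own):

```python
def bucket_range(node_id, sigma):
    """Return the [low, high] value range covered by tree node node_id.

    Node 1 is the root covering [1, sigma]; the leaves are ids
    sigma .. 2*sigma - 1, each covering a single value.
    """
    low, high = 1, sigma
    # Walk from the root down to node_id using its binary representation.
    for bit in bin(node_id)[3:]:          # bits after the leading '1'
        mid = (low + high) // 2
        if bit == '0':
            high = mid                    # left child: lower half
        else:
            low = mid + 1                 # right child: upper half
    return low, high

sigma = 8
assert bucket_range(1, sigma) == (1, 8)   # root covers the full range
assert bucket_range(2, sigma) == (1, 4)   # its left child
assert bucket_range(3, sigma) == (5, 8)   # its right child
assert bucket_range(8, sigma) == (1, 1)   # first leaf holds the value 1
```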

The authors walk through an example of building and compressing the Q-digest, and provide proofs of bounds on the size of the digest and maximum count error per node.
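A compact sketch of that construction, reusing the heap numbering above, might look as follows; the dict representation, the function name, and the bottom-up sweep are my own rendering of the paper's higher-level description:

```python
from collections import Counter

def build_qdigest(values, sigma, k):
    """Compress a stream of integers in [1, sigma] into a q-digest."""
    n = len(values)
    budget = n // k                        # the floor(n/k) threshold
    # Leaf counts: the leaf for value v has heap id sigma + v - 1.
    counts = Counter(sigma + v - 1 for v in values)
    # Sweep sibling pairs bottom-up, folding low-count families upward.
    for node in range(2 * sigma - 1, 1, -2):
        sibling, parent = node - 1, node // 2
        family = counts[node] + counts[sibling] + counts[parent]
        if family <= budget:               # the family violates property 2
            counts[parent] = family        # merge both children into parent
            counts.pop(node, None)
            counts.pop(sibling, None)
    return {v: c for v, c in counts.items() if c > 0}, n

digest, n = build_qdigest([1, 1, 1, 1, 2, 5, 6, 7, 8, 8], sigma=8, k=5)
```

Merging digests from multiple sensors then amounts to summing the counts of matching nodes and re-running the same compression on the union, which is what lets intermediate sensors aggregate their children's digests on the way to the base station.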

The authors then show how the Q-digest answers different types of queries. For example, to find the $q$th quantile, we sort the nodes of the Q-digest by increasing right endpoint of their buckets, breaking ties by putting smaller ranges first. The sorted list is a post-order traversal of the nodes of the Q-digest; scanning it from the left and accumulating the frequency counts seen so far, the running total at some point exceeds $qn$. The node at which this happens is a good estimate of the $q$th quantile. The authors show that the Q-digest can also be used to answer inverse quantile queries, range queries, and consensus queries.
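Continuing the sketch, a quantile query over such a digest could look like the following, reusing `bucket_range` and the `digest` built above; the sort key reproduces the post-order scan described in the paper, but the function itself is my own illustration:

```python
def quantile(digest, n, sigma, q):
    """Estimate the value at rank q*n from a q-digest (dict: node -> count)."""
    def post_order_key(node):
        low, high = bucket_range(node, sigma)
        return (high, -low)               # right endpoints ascending,
                                          # wider ranges (ancestors) later
    seen = 0
    for node in sorted(digest, key=post_order_key):
        seen += digest[node]
        if seen >= q * n:                 # accumulated frequency passes q*n
            return bucket_range(node, sigma)[1]
    return sigma

# e.g. quantile(digest, n, sigma=8, q=0.5) estimates the stream's median.
```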

The authors close by sharing an experimental evaluation of the accuracy, message size, and power consumption of the algorithm, and discuss how to evolve the work to deal with dropped values in the sensor network.