
Down memory lane: Approximate counting

At work we do a lot of counting. For some counts we need an accurate result, while for others we can get by with an approximate count. For these approximate counts, we are fine as long as the result is within the same order of magnitude as the true count.

There is a lot of literature on approximate counting techniques, and the blog post "Probabilistic Data Structures for Web Analytics and Data Mining" does a great job of explaining some of them in detail, with references to the original papers.

One of the original approximate counting papers was Robert Morris's 1978 paper "Counting Large Numbers of Events in Small Registers". In the paper, Morris explains the context for the problem, which might seem foreign today. Back in 1978 he was faced with the problem of counting a large number of different events, and because of memory restrictions he could only use an 8-bit register for each counter. Since he was not interested in exact counts, only in counts that were accurate enough, he came up with the technique of probabilistic counting.

The first solution he explored in the paper was to count, on average, every other event based on the flip of a fair coin. With a coin-flip probability of $p = 1/2$ and a true count $n$, the expected value of the counter is $n/2$, with a standard deviation of $\sqrt{n/4}$. The relative error, $\sqrt{n/4} / (n/2) = 1/\sqrt{n}$, is high for small values of $n$.
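
As a quick illustration, here is a minimal Python sketch of that first scheme (the `CoinFlipCounter` name and structure are mine, not from the paper):

```python
import random

class CoinFlipCounter:
    """Counts events by incrementing only on a fair coin flip."""

    def __init__(self):
        self.count = 0

    def add(self):
        # Increment with probability p = 1/2.
        if random.random() < 0.5:
            self.count += 1

    def estimate(self):
        # The counter holds roughly n/2, so double it to estimate n.
        return 2 * self.count
```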

Morris then presents his second solution: a counter $v$ that stores the logarithm of the number of events that have occurred, rather than the raw count. He increments the counter value $v$ probabilistically based on the flip of a coin. The stored value in the counter is $v \approx \log(1+n)$, and the probability of incrementing it is $1/\Delta$, where $\Delta = \exp(v+1) - \exp(v)$.
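
Putting those formulas together gives a short Python sketch of the logarithmic counter (again, the class and method names are my own, and I assume natural logarithms as in the formulas above):

```python
import math
import random

class MorrisCounter:
    """Probabilistic counter that stores v, approximately log(1 + n)."""

    def __init__(self):
        self.v = 0  # small enough to fit in an 8-bit register

    def add(self):
        # Increment with probability 1/Delta,
        # where Delta = exp(v + 1) - exp(v).
        delta = math.exp(self.v + 1) - math.exp(self.v)
        if random.random() < 1.0 / delta:
            self.v += 1

    def estimate(self):
        # Invert the stored logarithm: n is approximately exp(v) - 1.
        return math.exp(self.v) - 1
```

Because only the logarithm is stored, an 8-bit register can represent enormous counts; the paper also discusses changing the base of the logarithm to trade range against accuracy.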

The paper is an enjoyable short read, and it is amazing that techniques introduced in the 1970s are still applicable to technology problems today.
