
Randomized Algorithms: Polynomial equivalence

We are all accustomed to deterministic algorithms; we work with them every day, and feel comfortable knowing that the results of running them are predictable, barring a coding error of course. The idea of randomized algorithms feels remote and uncomfortable, despite their usefulness and elegance. There are a couple of great examples in the introductory chapters of the book "Probability and Computing" that are real eye-openers.

One is verifying polynomial identities: how can you tell that two different representations describe the same polynomial? For example, suppose we have two polynomials $P(x)$ and $Q(x)$, both of degree $d$, described by the following formulas:
\[
P(x) = \sum_{i=0}^{d} a_i x^i \\
Q(x) = \prod_{i=1}^{d} (x-b_i)
\]

how can we determine that they are the same polynomial?

Intuitively, we first check that the degrees are the same. Then we could try to transform one form into the other: either multiply out the terms of $Q(x)$, collect like terms, and reduce it to $P(x)$; or find the $d$ roots $r_i$ of $P(x)$, express it as a product of $d$ terms of the form $(x-r_i)$, and compare them to $Q(x)$. The first approach is easier (multiplying out the $d$ factors takes $O(d^2)$ operations), but can we do better?
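As a rough illustration of the first approach (the function names and the integer example are my own choices, not from the book), here is a small Python sketch that expands $Q(x)$ from its roots into coefficient form and compares coefficient lists directly, at a cost of $O(d^2)$ operations:

```python
def expand_from_roots(roots):
    """Multiply out prod(x - b_i) into a coefficient list [a_0, ..., a_d]."""
    coeffs = [1]  # start with the constant polynomial 1
    for b in roots:
        # multiply the current polynomial by (x - b)
        shifted = [0] + coeffs                    # x * current
        scaled = [-b * c for c in coeffs] + [0]   # -b * current, padded to match
        coeffs = [s + t for s, t in zip(shifted, scaled)]
    return coeffs

def same_polynomial_deterministic(p_coeffs, q_roots):
    """Compare P (coefficients a_0..a_d) to Q (roots b_1..b_d) exactly."""
    return p_coeffs == expand_from_roots(q_roots)

# Example: P(x) = x^2 - 3x + 2 and Q(x) = (x - 1)(x - 2)
print(same_polynomial_deterministic([2, -3, 1], [1, 2]))  # True
```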

The book presents a randomized algorithm instead. What if we pick a random integer $r$ from the range $[0, Nd]$, where $N$ is a positive integer? For example, if $N$ is $100$, we pick a random integer $r$ from the range $[0, 100d]$, and evaluate $P(r)$ and $Q(r)$, each of which can be done in $O(d)$ time. What would the result tell us?

If $P(r) \ne Q(r)$, then the polynomials are definitely not the same. If $P(r) = Q(r)$, then either the polynomials are equivalent, or they are not and we happened to pick an $r$ that is a root of the equation $P(x)-Q(x)=0$. In that case $P(x)-Q(x)$ is a nonzero polynomial of degree at most $d$, so it has at most $d$ roots, and the chance that a random integer drawn from $[0, Nd]$ is one of them is no more than $d/(Nd) = 1/N$ ($1/100$ in the concrete example).
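Here is a minimal sketch of that one-shot test in Python, assuming integer coefficients and roots so the evaluations are exact (the names are mine; the book does not prescribe an implementation):

```python
import random

def eval_coeff_form(coeffs, x):
    """Evaluate P(x) = sum a_i x^i with Horner's rule in O(d) time."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

def eval_root_form(roots, x):
    """Evaluate Q(x) = prod (x - b_i) in O(d) time."""
    result = 1
    for b in roots:
        result *= (x - b)
    return result

def probably_equal(p_coeffs, q_roots, N=100):
    """One random trial: False means the polynomials definitely differ;
    True is wrong with probability at most 1/N."""
    d = len(q_roots)
    r = random.randint(0, N * d)  # random integer in [0, N*d]
    return eval_coeff_form(p_coeffs, r) == eval_root_form(q_roots, r)

# P(x) = x^2 - 3x + 2 versus Q(x) = (x - 1)(x - 2)
print(probably_equal([2, -3, 1], [1, 2]))  # True
```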

How do we reduce that chance? By repeating the evaluation, drawing a fresh random value $r$ from the interval $[0,Nd]$ each time. The book analyzes the probability of producing a false result when the evaluations are repeated both with and without replacement: with replacement the trials are independent and the probability is at most $(1/N)^k$, where $k$ is the number of evaluations, and without replacement it is smaller still.
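Continuing the sketch above (and reusing its eval_coeff_form and eval_root_form helpers), the repeated test with replacement is just a loop of independent trials:

```python
def probably_equal_repeated(p_coeffs, q_roots, N=100, k=10):
    """k independent trials with replacement: a single mismatch proves the
    polynomials differ; if all agree, the error probability is at most (1/N)**k."""
    d = len(q_roots)
    for _ in range(k):
        r = random.randint(0, N * d)
        if eval_coeff_form(p_coeffs, r) != eval_root_form(q_roots, r):
            return False  # definitely different
    return True  # probably the same

# With N = 100 and k = 10, a false "True" has probability at most 10**-20.
print(probably_equal_repeated([2, -3, 1], [1, 2]))
```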

Isn't it elegant that we can tell whether two polynomials are the same simply through repeated evaluations, rather than through algebraic manipulations that transform either or both into a canonical form?
