Monday, September 11, 2017

Po'boy

Lunches at work are usually nothing to write about, but every now and then an external restaurant serves a memorable dish. A couple of weeks ago, that was the Cajun shrimp po’boy. The sandwich was relatively simple: a toasted baguette, a bit of mayonnaise, some shredded lettuce, and seasoned Cajun shrimp, yet the taste was amazing. I had an inkling about the origin of the name po’boy, but the price of the sandwich belied that thinking. A bit of research on the web revealed a couple of origin stories. The most plausible and heartwarming was on Wikipedia: during a 1929 streetcar workers’ strike, restaurant owners served the sandwich to their striking colleagues for free, jokingly referring to the strikers as “poor boys.” The sandwiches took on the name, which the Louisiana dialect shortened to po’boy.

Thursday, September 7, 2017

Amazon Bookstores


In an era where most retailers are trying to decrease their physical presence, Amazon is doing the opposite by opening physical bookstores across the US. I visited one last weekend, and was very impressed with the genius of using the store to promote Amazon services and devices.

First, unlike most bookstores, which carry books on nearly every subject in the Dewey Decimal System, the Amazon store carries a much smaller collection of books that are bestsellers among customers in the geographical area. There were bestsellers in fiction, art, cooking, business, self-improvement, health, children’s books, and popular science, and that’s it. The pricing model is genius: if you have a Prime membership you get the Amazon discounted price, and if you don’t you pay the book’s list price. My guess is that this will drive Prime memberships, as store patrons will opt to become members to nab the books they like at a discount.

Second, the store has prominent displays of the Kindle, Fire, and Echo devices, with helpful staff to answer any questions you have. Having the devices on display, where customers can hold them and learn how they feel in their hands, is very powerful. I was tempted to get a Kindle device despite my love of the Kindle app on my phone and tablet, and a Fire HD for the kids in addition to their iPads.

Third, despite being much smaller than typical bookstores, the store felt warm and inviting. It even had some comfortable chairs and couches for people to sit down and enjoy a book or two.

It will be interesting to see how these bookstores fare where other stores have failed. Apple, Microsoft, and now Amazon are proving that a properly designed store can work, allowing people to touch and feel the products they are considering before making a decision.

Friday, April 14, 2017

A paper a day keeps the doctor away: FaRM -- Fast Remote Memory

Distributed systems have allowed applications to use more computation power, memory, and physical storage than is available on a single machine, enabling applications to tackle more complex problems. The capacity increase however comes at a cost: accessing remote resources is slower than accessing ones that are local to the machine. The paper “FaRM: Fast Remote Memory” addresses the cost of accessing remote memory, and explores ways to make it faster.
The authors start by acknowledging that the major cost of accessing remote memory is the networking cost of going between machines through the TCP/IP stack, and that faster networks can only do so much. They cite the case of MemC3—a state-of-the-art key-value store—which performed 7x worse in a client-server setup than in a single-machine setup, despite request batching. The authors then ask: if the TCP/IP stack overhead is that high, what happens if you bypass the complex protocol stack and use RDMA—remote direct memory access—to access memory on another machine? How does the performance look? The rest of the paper explores that question, and introduces FaRM: Fast Remote Memory.
The authors start with some background on RDMA. In RDMA, requests are sent between nodes over queue-pairs, and network failures are exposed as terminated connections. The requests go directly to the NIC, without involving the kernel, and are serviced by the remote NIC without the involvement of the CPU. Similar to DMA—direct memory access—a memory region is registered with the NIC before use; the NIC driver pins the memory region in physical memory, and stores virtual-to-physical page mappings in a page table on the NIC. When an RDMA request is received, the NIC gets the page table for the target, and uses DMA to access the memory. Since NICs have limited memory for page tables, the tables are stored in system memory, and the NIC memory acts as a cache. RDMA typically runs over InfiniBand, but recently RoCE—RDMA over Converged Ethernet—has become more attractive, with flow control, congestion notification, and a much lower price: $10/Gbps for 40 Gbps RoCE compared to $60/Gbps for 10 Gbps Ethernet.
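To make the registration step concrete, here is a minimal sketch using the standard libibverbs API. This is my illustration, not FaRM’s code: error handling is abbreviated, and exchanging the region’s address and rkey with a peer (so it can issue one-sided reads and writes) is out of scope.

    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        // Enumerate RDMA-capable devices and open the first one.
        struct ibv_device **devices = ibv_get_device_list(NULL);
        if (!devices || !devices[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }
        struct ibv_context *ctx = ibv_open_device(devices[0]);

        // A protection domain scopes which queue-pairs may use which regions.
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        // Register a buffer: the driver pins it in physical memory and
        // installs virtual-to-physical mappings in the NIC's page table.
        size_t len = 1 << 20;  // 1 MB, arbitrary for this sketch
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        // A peer needs only the address and rkey to issue one-sided RDMA
        // reads/writes against this region, with no CPU work on this side.
        printf("addr=%p rkey=%u\n", buf, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        free(buf);
        return 0;
    }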
FaRM uses a circular buffer to implement a uni-directional channel. The buffer is stored on the receiver, and there is one buffer for each sender/receiver pair. The sender uses RDMA to write messages to the buffer tail, and advances the tail pointer on every send. It also maintains a local copy of the head pointer, to avoid writing messages past the head. The receiver updates the head in the sender’s copy, also using RDMA, to create space in the circular buffer. The receiver polls for new items at the head of the buffer and processes them, creating space as needed. The authors indicate that the polling overhead is negligible even with 78 machines. They found that at that scale, RDMA writes and polling significantly outperform the complex InfiniBand send and receive verbs. The authors ran a micro-benchmark comparing the performance of FaRM communication with TCP/IP on a cluster of 20 machines connected by a 40 Gbps RoCE network. The results show that FaRM’s RDMA-based messaging achieves a 9x-11x higher request rate than TCP/IP for request sizes between 16 and 512 bytes. Another latency micro-benchmark showed that TCP/IP latency at the peak request rate is 145x higher than that of RDMA-based messaging for all request sizes.
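The mechanics of that channel are easy to mock up. Below is a single-process sketch of the ring-buffer protocol, with plain memory writes standing in for the RDMA writes; the slot layout, sizes, and names are my own for illustration, not FaRM’s.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SLOTS 8
    #define PAYLOAD 56

    // One slot: the receiver polls `len` to detect a new message. The
    // sender writes the payload first and sets `len` last, so a partially
    // written message is never observed.
    typedef struct { uint32_t len; char data[PAYLOAD]; } slot_t;

    static slot_t ring[SLOTS];          // lives in the receiver's memory
    static uint32_t tail;               // sender side: next slot to write
    static uint32_t head;               // receiver side: next slot to read
    static uint32_t sender_head_copy;   // sender's local copy of the head

    // Sender: stands in for an RDMA write to the buffer tail.
    static int channel_send(const char *msg) {
        if ((tail + 1) % SLOTS == sender_head_copy % SLOTS)
            return -1;                              // ring is full
        slot_t *s = &ring[tail % SLOTS];
        strncpy(s->data, msg, PAYLOAD - 1);
        s->data[PAYLOAD - 1] = '\0';
        s->len = (uint32_t)strlen(s->data);         // publish last
        tail++;
        return 0;
    }

    // Receiver: poll the head slot; consume it and free the space. In
    // FaRM the receiver updates the sender's head copy with an RDMA
    // write; here we simply assign to it.
    static int channel_poll(char *out) {
        slot_t *s = &ring[head % SLOTS];
        if (s->len == 0) return 0;                  // nothing new
        memcpy(out, s->data, s->len + 1);
        s->len = 0;                                 // clear for reuse
        head++;
        sender_head_copy = head;                    // "RDMA" head update
        return 1;
    }

    int main(void) {
        char buf[PAYLOAD];
        channel_send("hello");
        channel_send("world");
        while (channel_poll(buf)) printf("got: %s\n", buf);
        return 0;
    }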
To achieve that high performance, the authors had to do some optimizations. The first was using larger pages to reduce the number of entries in the NIC page tables: they implemented a kernel driver for Windows and Linux that allocates a large number of physically contiguous, aligned 2GB memory regions at boot time.
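FaRM does this with a boot-time kernel driver; a rough user-space analogue on Linux is to back a mapping with huge pages, which likewise cuts the number of page-table entries a pinned region needs. A sketch of that analogue (using the common 2 MB huge-page size rather than FaRM’s 2GB regions):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        // Ask the kernel for an anonymous mapping backed by huge pages.
        // One 2 MB huge page replaces 512 ordinary 4 KB page-table
        // entries, so a pinned region needs far fewer NIC mappings.
        // Requires huge pages reserved beforehand (vm.nr_hugepages).
        size_t len = 2UL * 1024 * 1024;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) { perror("mmap(MAP_HUGETLB)"); return 1; }
        printf("huge-page region at %p\n", p);
        munmap(p, len);
        return 0;
    }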
The authors also reduced the amount of queue-pair state by using a single connection between a thread and each remote machine, and by sharing queue-pairs among many threads on a machine.
The authors then introduce the FaRM API, which provides an event-based programming model: operations that require polling to complete take a continuation argument—a continuation function and a context pointer. The continuation function is called when the operation is done, and is passed the result of the operation and the context pointer. The FaRM API also provides convenience functions to allocate and free objects, and supports lock-free operations.
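The shape of that model can be sketched as follows. The names farm_read_async and farm_poll are my inventions for illustration, not the paper’s actual signatures, and the completion is simulated rather than driven by NIC completion queues.

    #include <stdio.h>
    #include <string.h>

    // A continuation: invoked when the operation completes, with the
    // operation's result and the caller-supplied context pointer.
    typedef void (*continuation_fn)(const void *result, void *context);

    // Pending-operation record; a real system would tie this to the
    // NIC's completion queue rather than completing on the next poll.
    typedef struct {
        int in_flight;
        char result[64];
        continuation_fn cont;
        void *context;
    } pending_op;

    static pending_op op;

    // Hypothetical async read: issue the operation, remember the
    // continuation, and return immediately without blocking.
    static void farm_read_async(const char *addr, continuation_fn fn, void *ctx) {
        snprintf(op.result, sizeof op.result, "value@%s", addr);  // fake read
        op.cont = fn;
        op.context = ctx;
        op.in_flight = 1;
    }

    // Event loop: poll for completed operations and run continuations.
    static void farm_poll(void) {
        if (op.in_flight) {
            op.in_flight = 0;
            op.cont(op.result, op.context);
        }
    }

    static void on_read_done(const void *result, void *context) {
        printf("[%s] read completed: %s\n",
               (const char *)context, (const char *)result);
    }

    int main(void) {
        farm_read_async("0xdeadbeef", on_read_done, "caller-1");
        farm_poll();  // drives completions; the continuation fires here
        return 0;
    }

The appeal of this style is that threads never block on remote memory: they issue operations, keep doing useful work, and handle results whenever polling observes a completion.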
The authors use the APIs to implement a distributed hashtable, and a graph store similar to Facebook’s Tao, and evaluate the performance of both.
The experiments used an isolated cluster of 20 machines, each with a 40Gbps RoCE NIC. The machines ran Windows Server 2012 RC on 2.4GHz Xeon CPUs with 8 cores and two hyper-threads per core, with 128GB of DRAM and 240GB SSDs.
For the key-value store experiments, the authors used 120 million key-value pairs per machine, configured the hash stores for 90% occupancy, and measured performance for one minute after a 20-second warm-up. The results show that FaRM achieves 146 million lookups per second with a latency of 35 microseconds. The throughput is an order of magnitude higher than MemC3’s, and the latency two orders of magnitude lower.
For the graph store, the authors implemented a store similar to Tao, and used Facebook’s LinkBench with its default parameters for degree and data-size distributions, resulting in a graph with 1 billion nodes and 4.35 billion edges. On the 20-machine cluster, the authors measured 126 million operations per second, with a per-machine throughput of 6.3 million operations per second, 10x that reported for Tao. The average latency at peak was 41 microseconds, 40-50x lower than the reported Tao latencies.
The authors end by describing some of the systems and libraries that use RDMA to improve performance.

Friday, March 24, 2017

Virtual machine could not be started because the hypervisor is not running


I wanted to experiment with TensorFlow, and decided to do that in a Linux VM, despite the fact that the Windows Subsystem for Linux exists. In the past I used Sun’s, and then Oracle’s, VirtualBox to manage virtual machines, but since my Windows install had Hyper-V, I decided to use that instead. The virtual machine configuration was easy, with the disk, networking, and memory setup uneventful. However, when I tried to start the virtual machine to set up Ubuntu from an ISO, I was greeted with the following error:

“Virtual machine could not be started because the hypervisor is not running”

A quick Internet search revealed that a lot of people have faced this problem, and most of the community-board solutions did not make any sense. The hidden gem is this TechNet article, which included detailed steps to determine whether the Windows hypervisor was running, and to find the error message if it failed to launch. In my case, the error was:

“Hyper-V launch failed; Either VMX not present or not enabled in BIOS.”

The fix here is easy, and buried in another TechNet article: simply reboot the machine into BIOS setup mode, and disable the VT-d and Trusted Execution settings. After a quick reboot, the hypervisor was happily humming along, and the setup of my Ubuntu VM completed without a hitch.

Monday, January 30, 2017

On brewing tea

I watched a video interview with the 10th-generation heir of the Twinings tea company, which has been selling tea for over 300 years. In the interview, in addition to talking about the family history and the story behind their bestselling tea flavor—Earl Grey—he talked about the best way to brew tea, whether using loose leaves or a tea bag.

To extract the most flavor out of tea, he recommended bringing cold water to a boil, and removing the kettle from the stove as soon as the water starts boiling. His theory is that the flavor is extracted through the air in the water, and continuing to boil the water reduces the amount of air in it.

For green teas, he recommends letting the kettle sit for 5 minutes before pouring the hot water over the tea; for black teas, he recommends pouring the hot water over the tea immediately. The heir advised against removing the bag or repeatedly dunking it in the water while brewing, because that only changes the color of the water and makes the tea bitter without extracting flavor. Instead, he recommends leaving the tea bag still for 3 minutes and then discarding it, enjoying the flavorful tea with milk or lemon, but never sugar, which masks the tea’s flavor.


I followed his advice verbatim, and while I am not sure whether the effect is psychological or real, I drank the best cup of tea I have had in years. No bitterness, no sweetness, just great tea flavor.

Thursday, January 26, 2017

Random acts of kindness

When I have the chance, I like to walk to my meetings instead of using the shuttle service available on campus. When it is not raining, the walk is very refreshing: I get to clear my thoughts, and get in some steps toward my daily activity. After one of my meetings ended, I started to head back to my building, only to see that it had started to pour. To my luck, there was a shuttle parked out front. I asked the driver if she could take me back to my building, and she said she was on her lunch break. As I said no worries, I would just walk back, she insisted that she could drive me. I hopped in the shuttle, thanking her profusely for taking time out of her lunch break to drive me back; she insisted it was not a big deal. Such an act of kindness made my day, and it is a great reminder to keep doing good things for others, simply for the joy it brings them.

Friday, December 30, 2016

A hole in the wall


I am a big fan of good, delicious food, irrespective of where it is sold. That includes street vendors and “holes in the wall,” which I have always associated with small, nondescript places: no sign on the venue, no place to sit, and a staff that exudes a slightly higher risk of contracting dysentery, typhoid, or other gastrointestinal diseases. That description might be a bit extreme, but I have had some of the best meals in just such places, including the famous Hyderabadi Dum-Biryani in a place not far from that description.

So where did the phrase a “hole in the wall” come from? On another historical tour of Florence, our tour guide, a language enthusiast, pointed out some of the palaces where Italian nobility such as the Medici family lived long ago. Invariably, at the entrance there was a slit or a hole in the wall, and the tour guide told us that after the nobility hosted lavish dinner parties, instead of throwing the remaining food away, they would pass it through that small hole in the wall of the building to the less fortunate lining up in front of the palace. Since the food was delicious, eating at the hole in the wall was sought after in those times, and the tour guide surmised that this is the origin of the phrase. I could not verify that claim; however, one site online lists a similar story:

              “the hole made in the wall of the debtors’ or other prisons, through which the poor prisoners received the money, broken meat, or other donations of the charitably inclined”

Regardless of the origin of the phrase, the story and the imagery were vivid, and they stuck with me.