
Upgrading from WSL1 to WSL2 on Windows 10

I have been using WSL1 for a long time now and have been extremely pleased with it. After setting up a Cygwin/X server, I no longer need to run Linux in a VM. With the new Windows update, WSL is gaining support for GPUs and GUI applications, so I decided to upgrade my install to take advantage of these improvements once they ship.

The upgrade path is easy: you can either configure a new WSL2 environment from scratch, or migrate your existing one. I chose to migrate, since I had a lot of packages installed and did not want to reinstall and reconfigure them in a new environment.

First, I started by getting everything I have in my Ubuntu environment up to date:
  sudo apt update
  sudo apt upgrade

Then, in a Windows command prompt, I set the default WSL version to 2:
  wsl --set-default-version 2

WSL2 requires Virtual Machine Platform support. If that is not enabled, you can enable it easily: search for "Turn Windows Features On or Off" in the Control Panel or the Start menu, and check the Virtual Machine Platform feature.
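If you prefer the command line, the same feature can be enabled from an elevated Windows prompt with DISM (this is the command Microsoft's WSL documentation uses; a reboot is still required afterwards):

```shell
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
```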

After the installation and a reboot, I converted my Ubuntu environment (you can list your environments with wsl --list --verbose). In my case, the environment name is "Ubuntu":
  wsl --set-version Ubuntu 2
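For reference, the list command prints a small table like the one below; the names, states, and version numbers will of course depend on your machine (after the conversion, Ubuntu shows version 2):

```
C:\> wsl --list --verbose
  NAME      STATE           VERSION
* Ubuntu    Running         2
```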

This starts the conversion. After that completes, in a bash window, I did an Ubuntu release upgrade:
  sudo do-release-upgrade 

The release upgrade asks a couple of questions about packages and configs; I chose the maintainer’s version, and after everything was completed, I was ready to go. 
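As a quick sanity check after the release upgrade, you can confirm which Ubuntu version you ended up on by reading /etc/os-release (a minimal example; lsb_release -a works too if it is installed):

```shell
# Print the human-readable release name from os-release,
# which is present on any modern Linux distribution
grep '^PRETTY_NAME=' /etc/os-release
```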

I needed to make a couple of changes to get X11 working, different from what I had done for WSL1.

In my ~/.bashrc I added the following lines:
  export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0
  export LIBGL_ALWAYS_INDIRECT=1
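Why the DISPLAY trick works: in WSL2 the Linux environment runs in a lightweight VM, and /etc/resolv.conf points at the Windows host as its DNS server, so the first nameserver entry is the address where the X server listens. A quick sanity check of the awk extraction, run against a sample file (172.17.0.1 is just a made-up example address):

```shell
# Sanity-check the nameserver extraction against a sample resolv.conf
# (in WSL2 the real /etc/resolv.conf holds the Windows host's IP)
sample=$(mktemp)
printf 'nameserver 172.17.0.1\n' > "$sample"
display_guess=$(awk '/nameserver / {print $2; exit}' "$sample"):0
echo "$display_guess"   # prints 172.17.0.1:0
rm -f "$sample"
```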

And in my Cygwin/X Xlaunch wizard, I enabled access from everywhere, and added “-listen tcp” to the extra arguments. 

And everything worked like a charm!
