QubesOS provides a desktop operating system made up of multiple virtual machines, running under Xen. To protect against buggy network drivers, the physical network hardware is accessed only by a dedicated (and untrusted) “NetVM”, which is connected to the rest of the system via a separate (trusted) “FirewallVM”. This firewall VM runs Linux, processing network traffic with code written in C.
In this blog post, I replace the Linux firewall VM with a MirageOS unikernel. The resulting VM uses safe (bounds-checked, type-checked) OCaml code to process network traffic, uses less than a tenth of the memory of the default FirewallVM, boots several times faster, and should be much simpler to audit or extend.
- Qubes
- Qubes networking
- The Xen virtual network layer
- The Ethernet layer
- The IP layer
- Exercises
- Summary
(This post also appeared on Reddit and Hacker News.)
Qubes
Another Fedora VM (“dom0”) runs the window manager and drives most of the physical hardware (mouse, keyboard, screen, disks, etc).
Networking is a particularly dangerous activity, since attacks can come from anywhere in the world and handling network hardware and traffic is complex. Qubes therefore uses two extra VMs for networking:
NetVM drives the physical network device directly. It runs network-manager and provides the system tray applet for configuring the network.
FirewallVM sits between the application VMs and NetVM. It implements a firewall and router.
The full system looks something like this:
The lines between VMs in the diagram above represent network connections. If NetVM is compromised (e.g. by exploiting a bug in the kernel module driving the wifi card) then the system as a whole can still be considered secure – the attacker is still outside the firewall.
Besides traditional networking, all VMs can communicate with dom0 via some Qubes-specific protocols. These are used to display window contents, tell VMs about their configuration, and provide direct channels between VMs where appropriate.
Qubes networking
There are three IP networks in the default configuration:
- 192.168.1.* is the external network (to my house router).
- 10.137.1.* is a virtual network connecting NetVM to the firewalls (you can have multiple firewall VMs).
- 10.137.2.* connects the app VMs to the default FirewallVM.
Both NetVM and FirewallVM perform NAT, so packets from “comms” appear to NetVM to have been sent by the firewall, and packets from the firewall appear to my house router to have come from NetVM.
Each of the AppVMs is configured to use the firewall (10.137.2.1) as its DNS resolver. FirewallVM uses an iptables rule to forward DNS traffic to its own resolver, which is NetVM.
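To make the NAT step above concrete, here is a minimal OCaml sketch of the idea: outgoing packets get a fresh external source port, and the mapping is remembered so that replies can be rewritten back. All names and types here are illustrative; this is not code from the real firewall or from any networking library.

```ocaml
(* Forward table: (client address, client port) -> external port.
   Reverse table: external port -> (client address, client port). *)
let table : (string * int, int) Hashtbl.t = Hashtbl.create 16
let reverse : (int, string * int) Hashtbl.t = Hashtbl.create 16
let next_port = ref 40000

(* Translate an outgoing (source address, source port) pair to the
   external port used on the uplink, allocating a new one if needed. *)
let outgoing ~src ~src_port =
  match Hashtbl.find_opt table (src, src_port) with
  | Some p -> p
  | None ->
    let p = !next_port in
    incr next_port;
    Hashtbl.add table (src, src_port) p;
    Hashtbl.add reverse p (src, src_port);
    p

(* Translate an incoming packet's destination port back to the client,
   or None if we have no mapping (i.e. an unsolicited packet). *)
let incoming ~dst_port = Hashtbl.find_opt reverse dst_port
```

A real NAT also tracks the protocol and remote endpoint and expires old mappings, but the rewrite-and-remember structure is the same.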
Problems with FirewallVM
After using Qubes for a while, there are a number of things about the default FirewallVM that I’m unhappy about:
- It runs a full Linux system, which uses at least 300 MB of RAM. This seems excessive.
- It takes several seconds to boot, and there is a race somewhere in setting up the DNS redirection (adding some debug to track down the bug made it disappear).
- The iptables configuration is huge and hard to understand.
There is another, more serious, problem. Xen virtual network devices are implemented as a client (“netfront”) and a server (“netback”); in sys-firewall, these are Linux kernel modules. In a traditional Xen system, the netback driver runs in dom0 and is fully trusted. It is coded to protect itself against misbehaving client VMs. Netfront, by contrast, assumes that netback is trustworthy. The Xen developers only consider bugs in netback to be security critical.
In Qubes, NetVM acts as netback to FirewallVM, which acts as a netback in turn to its clients. But in Qubes, NetVM is supposed to be untrusted! So, we have code running in kernel mode in the (trusted) FirewallVM that is talking to and trusting the (untrusted) NetVM!
For example, as the Qubes developers point out in Qubes Security Bulletin #, the netfront code that processes responses from netback uses the request ID quoted by netback as an index into an array without even checking whether it’s in range (they have fixed this in their fork).

What can an attacker do once they’ve exploited FirewallVM’s trusting netfront driver? Presumably they now have complete control of FirewallVM. At this point, they can simply reuse the same exploit to take control of the client VMs, which are running the same trusting netfront code!
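To illustrate the bug class (this is not the actual netfront code): the backend quotes a request ID in each response, and using that untrusted ID as an array index without a range check is what made the C driver exploitable. In OCaml every array access is bounds-checked, so even careless code could at worst raise an exception, but it is better to validate explicitly and treat a bad ID as a protocol error:

```ocaml
(* Look up the request a response refers to, treating the ID quoted by
   the (untrusted) backend as potentially hostile input. *)
let lookup_request (requests : 'a array) (id : int) =
  if id < 0 || id >= Array.length requests then
    Error (Printf.sprintf "backend quoted out-of-range request ID %d" id)
  else
    Ok requests.(id)
```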
I decided to see whether I could replace the default firewall (“sys-firewall”) with a MirageOS unikernel. A Mirage unikernel is an OCaml program compiled to run as an operating system kernel. It pulls in just the code it needs, as libraries. For example, my firewall doesn’t require or use a hard disk, so it doesn’t contain any code for dealing with block devices.
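The “pulls in just the code it needs, as libraries” part is driven by a `config.ml` file. The sketch below shows roughly what a Mirage configuration of that era looked like; the real `config.ml` in the qubes-mirage-firewall repository differs, and the module and device names here are only illustrative. No test is given since this file is consumed by the `mirage` tool rather than run directly.

```ocaml
(* config.ml sketch: declare the unikernel and the devices it needs.
   Only the libraries implied by these declarations are linked in. *)
open Mirage

(* The unikernel's entry point, parameterised on a console device. *)
let main = foreign "Unikernel.Main" (console @-> job)

let () =
  register "firewall" [ main $ default_console ]
```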
If you want to follow along, my code is on GitHub in my qubes-mirage-firewall repository. The README explains how to build it from source. For testing, you can also just download the mirage-firewall-bin-0.1.tar.bz2 binary kernel tarball. dom0 doesn’t have network access, but you can proxy the download through another VM:

```
[tal@dom0 ~]$ cd /tmp
[tal@dom0 tmp]$ qvm-run -p sys-net 'wget -O - https://github.com/talex5/qubes-mirage-firewall/releases/download/0.1/mirage-firewall-bin-0.1.tar.bz2' > mirage-firewall-bin-0.1.tar.bz2
[tal@dom0 tmp]$ tar tf mirage-firewall-bin-0.1.tar.bz2
mirage-firewall/
mirage-firewall/vmlinuz
mirage-firewall/initramfs
mirage-firewall/modules.img
[tal@dom0 ~]$ cd /var/lib/qubes/vm-kernels/
[tal@dom0 vm-kernels]$ tar xf /tmp/mirage-firewall-bin-0.1.tar.bz2
```
The tarball contains vmlinuz, which is the unikernel itself, plus a couple of dummy files that Qubes requires to recognise it as a kernel (modules.img and initramfs).
Create a new ProxyVM named “mirage-firewall” to run the unikernel:
You can use any template, and make it standalone or not. It doesn’t matter, since we don’t use the hard disk.
- Set the type to ProxyVM.
- Select sys-net for networking (not sys-firewall).
- Click OK to create the VM.
- Go to the VM settings, and look in the “Advanced” tab.
- Set the kernel to mirage-firewall.
- Turn off memory balancing and set the memory to 32 MB or so (you might have to fight a bit with the Qubes GUI to get it this low).
- Set VCPUs (number of virtual CPUs) to 1.
(This installation mechanism is obviously not ideal; hopefully future versions of Qubes will be more unikernel-friendly.)
You can run mirage-firewall alongside your existing sys-firewall and you can choose which AppVMs use which firewall using the GUI. For example, to configure “untrusted” to use mirage-firewall:
You can view the unikernel’s log output from the GUI, or with sudo xl console mirage-firewall in dom0 if you want to see live updates.
If you want to explore the code, see the qubes-mirage-firewall repository linked above.
I initially tested with Qubes 3.0 and have just upgraded to the 3.1 alpha. Both seem to work.
Qubes runs on Xen, and a Mirage application can be compiled to a Xen kernel image using mirage configure --xen. However, Qubes expects a VM to provide three Qubes-specific services and doesn’t consider the VM to be running until it has connected to each of them. They are qrexec (remote command execution), gui (displaying windows on the dom0 desktop) and QubesDB (a key-value store).
I wrote a little library, mirage-qubes, to implement enough of these three protocols for the firewall (the GUI agent does nothing except handshake with dom0, since the firewall has no GUI).
Here’s the full boot code in my firewall, showing how to connect the agents: