Firecracker on ThinkPad X220
Today I was working on setting up Firecracker on the X220 and figuring out how to run its microVMs in a Kubernetes cluster.
Firecracker is an open-source, lightweight virtual machine monitor written in Rust. It leverages the Linux Kernel-based Virtual Machine (KVM) to isolate multi-tenant cloud workloads like containers and functions. Two major points make Firecracker microVMs different from containers:
- Containers are considered less secure than VMs, since they share the host kernel. The Firecracker process is statically linked and can be launched inside a jailer for additional isolation. Firecracker takes advantage of KVM acceleration, which means it runs on Linux only.
- Firecracker replaces QEMU with a minimalistic VMM that provides only the resources a guest actually needs, reducing memory overhead. Simply put, you can boot a Firecracker instance in less than 250 ms.
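firectl (used below) is a convenience wrapper around Firecracker's REST API; the same microVM can also be configured by hand over the API socket. A rough sketch of that flow, assuming the `firecracker` binary is on your PATH and using the kernel and rootfs images downloaded later in this post:

```shell
# Start Firecracker listening on a Unix socket (run this in one terminal)
rm -f /tmp/firecracker.sock
firecracker --api-sock /tmp/firecracker.sock &

# In another terminal: point the microVM at a kernel image
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/boot-source' \
  -H 'Content-Type: application/json' \
  -d '{"kernel_image_path": "/tmp/hello-vmlinux.bin",
       "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"}'

# Attach the root filesystem as a block device
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/drives/rootfs' \
  -H 'Content-Type: application/json' \
  -d '{"drive_id": "rootfs", "path_on_host": "/tmp/hello-rootfs.ext4",
       "is_root_device": true, "is_read_only": false}'

# Boot the microVM
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/actions' \
  -H 'Content-Type: application/json' \
  -d '{"action_type": "InstanceStart"}'
```

firectl does essentially this for you in a single command, which is why the rest of the post uses it.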
For the most part, it’s relatively simple to run Firecracker on a ThinkPad X220.
Download and compile firectl:

```
$ git clone https://github.com/firecracker-microvm/firectl
$ cd firectl
$ make
```
Download the demo kernel and root filesystem images:
```
$ curl -fsSL -o /tmp/hello-vmlinux.bin https://s3.amazonaws.com/spec.ccfc.min/img/hello/kernel/hello-vmlinux.bin
$ curl -fsSL -o /tmp/hello-rootfs.ext4 https://s3.amazonaws.com/spec.ccfc.min/img/hello/fsfiles/hello-rootfs.ext4
```
Start Firecracker using firectl:

```
$ firectl \
    --kernel=/tmp/hello-vmlinux.bin \
    --root-drive=/tmp/hello-rootfs.ext4 \
    --kernel-opts="console=ttyS0 noapic reboot=k panic=1 pci=off nomodules rw"
```
Once you see a prompt, log in using root/root. To exit the process, type reboot.
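If the boot fails, the first thing to rule out is KVM access, since Firecracker needs read/write access to /dev/kvm. A quick sanity check (this assumes an Intel CPU, as on the X220; AMD hosts load kvm_amd instead):

```shell
# The KVM modules should be loaded (kvm_intel on Intel hardware)
lsmod | grep kvm

# /dev/kvm must exist and be readable/writable by your user
ls -l /dev/kvm
test -r /dev/kvm && test -w /dev/kvm && echo "KVM access OK" \
  || echo "no access, try: sudo setfacl -m u:${USER}:rw /dev/kvm"
```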
I spent the rest of the day trying to figure out how to marry Firecracker with Kubernetes. Firecracker is not a container, but there’s the Kata Containers project. In Kata, each container runs in its own virtual machine, providing container isolation via hardware virtualization.
The recent addition of the CRI (Container Runtime Interface) to Kubernetes means Kata Containers can be controlled by any OCI (Open Container Initiative)-compatible CRI implementation, with CRI-O (a lightweight alternative to using Docker as the Kubernetes runtime) being the main one.
The key difference between the Kata approach and other container engines is that Kata uses hardware-backed isolation as the boundary for each container or collection of containers in a Kubernetes container pod.
To run Kata + Firecracker, there are a few mandatory requirements your host system/container stack will need to support:

- Your host must support the vhost_vsock kernel module
- Your container stack must provide block-based storage (a ‘graph driver’), such as devicemapper

Without these prerequisites, Kata + Firecracker will not work.
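Both prerequisites are easy to verify on the host before going further. A quick check, using the standard upstream module and device names:

```shell
# 1. vsock support: load the module and confirm the device node appears
sudo modprobe vhost_vsock
lsmod | grep vhost_vsock
ls -l /dev/vhost-vsock

# 2. block-based storage: the device-mapper kernel module should be available
sudo modprobe dm_mod
grep -i device-mapper /proc/devices
```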
First of all, let’s test that we can run Kata + Firecracker + Docker.
To configure Docker for devicemapper and Kata, set /etc/docker/daemon.json with the following contents:

```
$ cat /etc/docker/daemon.json
{
  "experimental": true,
  "runtimes": {
    "kata-fc": {
      "path": "/opt/kata/bin/kata-fc"
    }
  },
  "storage-driver": "devicemapper"
}
```
Then restart Docker:
```
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
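After the restart, it’s worth confirming that Docker actually picked up both settings before running anything. One way to check, using `docker info` Go templates:

```shell
# Storage driver should now report devicemapper
docker info --format '{{.Driver}}'

# kata-fc should appear among the configured runtimes
docker info --format '{{json .Runtimes}}' | grep -o kata-fc
```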
Get the Kata static binaries. The tarball is designed to be decompressed into /, placing all of its files within /opt/kata/. The runtime configuration is expected to land at /opt/kata/share/defaults/kata-containers/configuration.toml. The Ubuntu packages are out of date, and I didn’t risk installing them:

```
$ wget https://github.com/kata-containers/runtime/releases/download/1.13.0-alpha0/kata-static-1.13.0-alpha0-x86_64.tar.xz
$ sudo tar -xvf kata-static-1.13.0-alpha0-x86_64.tar.xz -C /
$ /opt/kata/bin/kata-fc --version
kata-runtime  : 1.13.0-alpha0
   commit     : cd63aacc9eaf6b59d32f900fe875d949a55e1b4d
   OCI specs  : 1.0.1-dev
```
Assuming vsock is supported, run a Kata container:
```
$ docker run --runtime=kata-fc -itd --name=oh-sweet-fc alpine sh
```
You’ll see that firecracker is now running on your system, as well as a kata-shim process:

```
$ ps -ae | grep -E "kata|fire"
43857 ?        00:00:00 firecracker
43864 ?        00:00:00 kata-shim
```
You can exec into the container, getting a shell inside a container that is running within a Firecracker-based virtual machine:
```
$ docker exec -it oh-sweet-fc sh
#
```
So far, so good. However, Kubernetes turned out to be more challenging to set up.
For Kata Containers to work under a Minikube VM, your host system must support nested virtualization. I had already set it up in KVM on the ThinkPad X220.
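You can confirm nested virtualization is actually enabled before starting Minikube. On Intel hosts the flag lives on the kvm_intel module and reads Y (or 1 on older kernels) when enabled:

```shell
# 'Y' or '1' means nested virtualization is on; 'N' or '0' means off
cat /sys/module/kvm_intel/parameters/nested

# If it is off, reload the module with nesting enabled:
# sudo modprobe -r kvm_intel
# sudo modprobe kvm_intel nested=1
```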
Run Minikube:
```
$ minikube start --driver kvm2 --memory 6144 --cni=bridge --container-runtime=cri-o --bootstrapper=kubeadm
😄  minikube v1.17.1 on Ubuntu 20.04
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=6144MB, Disk=20000MB) ...
🎁  Preparing Kubernetes v1.20.2 on CRI-O 1.19.0 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```
Clone https://github.com/kata-containers/packaging.git to install Kata containers into Minikube:
```
$ git clone https://github.com/kata-containers/packaging.git
$ cd packaging/kata-deploy
$ kubectl apply -f kata-deploy/base/kata-deploy.yaml
$ kubectl apply -f kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f k8s-1.18/kata-runtimeClasses.yaml
$ podname=$(kubectl -n kube-system get pods -o=name | fgrep kata-deploy | sed 's?pod/??')
$ echo $podname
kata-deploy-rd6b5
$ kubectl -n kube-system exec ${podname} -- ps -ef | fgrep infinity
root          49       1  0 19:30 ?        00:00:00 sleep infinity
$ kubectl apply -f examples/test-deploy-kata-fc.yaml
```
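The test-deploy-kata-fc.yaml example schedules pods onto Firecracker via the kata-fc RuntimeClass installed by kata-runtimeClasses.yaml. If you want to try a pod of your own, the key field is runtimeClassName; a minimal sketch (the pod and container names here are my own, not from the Kata examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fc-test                # hypothetical name
spec:
  runtimeClassName: kata-fc    # RuntimeClass installed by kata-runtimeClasses.yaml
  containers:
    - name: shell
      image: alpine
      command: ["sleep", "infinity"]
```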
Alternatively, you can install Kubernetes inside a Vagrant Clear Linux environment, which I also found very helpful:
- Download the latest Vagrant from https://www.vagrantup.com/downloads
- Install the libvirt provider from https://github.com/vagrant-libvirt/vagrant-libvirt#installation
- Get the setup scripts:
```
$ git clone https://github.com/clearlinux/cloud-native-setup
$ cd ./cloud-native-setup/clr-k8s-examples
$ # Ensure the vagrant environment is current
$ vagrant destroy -f
$ vagrant box update
$ vagrant box prune
$ # Create a vagrant VM to run kubernetes
$ NODES=1 CPUS=8 vagrant up --provider=libvirt
$ # ssh into the vagrant VM
$ vagrant ssh clr-01
$ # Bring up a minimal kubernetes stack
$ cd clr-k8s-examples/
$ ./create_stack.sh minimal
$ watch kubectl get po --all-namespaces
$ # Run a Kata POD using firecracker
$ kubectl apply -f ./tests/deploy-svc-ing/test-deploy-kata-fc.yaml
```
Unfortunately, both the Minikube and Vagrant Firecracker setups failed on my machine with the same error. I assume my hardware may be missing some required feature, but I didn’t figure out which one.
```
$ kubectl describe pod
  Warning  FailedCreatePodSandBox  9s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = container create failed: rpc error: code = Unknown desc = rootfs (/run/kata-containers/shared/containers/c0ca607e7091b836cd90a09be11a553f62679de538bbcf84570f99d0e2a25ba0/rootfs) does not exist
```
I’ll be testing it on new hardware soon.