What is VSOCK and Why Use it with Unikernels?

VSOCK refers to the AF_VSOCK address family. You might be more familiar with the AF_INET socket family. At its core, it provides a way for a guest VM to talk to its host, or vice-versa. You can utilize both SOCK_STREAM (connection-oriented) and SOCK_DGRAM (connectionless) sockets.

An address is composed of a 32-bit CID (context identifier) and a port number. It can be used even when your VM has no network interface, or when you don't want the security/performance overhead associated with one. Some use cases involve logging or configuration, while others involve talking with things like AWS Nitro Enclaves. We'll look at the logging use case in particular today: if you aren't using a syslog klib, you'll find that relying on serial output is insanely slow.
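To make the addressing concrete, here's a sketch of what a vsock address looks like in C (the VMADDR_* constants come from <linux/vm_sockets.h>):

#include <sys/socket.h>
#include <linux/vm_sockets.h>

struct sockaddr_vm addr = {
    .svm_family = AF_VSOCK,
    .svm_cid = VMADDR_CID_HOST, /* CID 2 is always the host; 0 is the hypervisor */
    .svm_port = 123,            /* vsock ports are their own namespace, separate from TCP/UDP */
};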

Using with QEMU:

Let's take a look at a simple example to show how we might communicate from the guest to the host. Perhaps we'd like to send some logging information out.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#include <linux/vm_sockets.h>

int main(int argc, char **argv)
{
    int vs;
    struct sockaddr_vm addr;
    const char hello[] = "Hello from Nanos!\n";

    /* create a connection-oriented vsock socket */
    vs = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (vs < 0) {
        perror("socket");
        return 1;
    }

    /* address the host (CID 2) on port 123 */
    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = VMADDR_CID_HOST;
    addr.svm_port = 123;

    if (connect(vs, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(vs);
        return 1;
    }
    write(vs, hello, sizeof(hello) - 1);
    close(vs);
    return 0;
}

Pretty basic. We create a socket, connect it, write to it and shut everything down. Straight outta Beej's. Now how do we use this?
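If you want to follow along, build it as an ordinary Linux binary (assuming you saved the file as vsock.c):

gcc -o vsock vsock.c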

Let's take a look at using QEMU first. For that we'll want a Rust program called vhost-user-vsock, which lives in the rust-vmm vhost-device repository. It allows us to connect the VMM to the guest VM. However, before we build it, we need to grab the latest libgpiod version as a dependency:

git clone --depth 1 --branch v2.0.x https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git/
cd libgpiod
./autogen.sh --prefix="$PWD/install/"
make install
export PKG_CONFIG_PATH="/home/eyberg/libgpiod/install/lib/pkgconfig/"

Now we should be able to build vhost-device:

git clone https://github.com/rust-vmm/vhost-device/
cd vhost-device && cargo build
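That builds every device in the workspace. If you only care about vsock, building just that package should also work (assuming the package is still named vhost-device-vsock):

cargo build -p vhost-device-vsock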

This program sits on the host. Basically you have one socket that talks to the VMM (in this case, QEMU) and another socket that talks to the guest (your unikernel). You can have multiple pairs as well, but for simplicity's sake we are only showcasing one.

./target/debug/vhost-user-vsock --vm guest-cid=3,socket=/tmp/vhost3.socket,uds-path=/tmp/vm3.vsock

Then we can start our netcat listener like so:

nc -l -U /tmp/vm3.vsock_123

The '_123' suffix is important here - remember, 123 is the port our guest program connects to. vhost-user-vsock forwards each guest-initiated connection to the unix socket named uds-path plus '_' plus the port.
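Netcat works fine for a demo, but since the host side is just a unix socket, your log collector can be any program that binds the same path. A rough sketch in C, with error handling omitted:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd, conn;
    struct sockaddr_un addr;
    char buf[256];
    ssize_t n;

    /* listen where vhost-user-vsock will forward guest port 123 */
    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/vm3.vsock_123", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);              /* clear any stale socket file */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, 1);

    /* accept one guest connection and dump whatever it sends */
    conn = accept(fd, NULL, NULL);
    while ((n = read(conn, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout);

    close(conn);
    close(fd);
    return 0;
}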

We haven't integrated any vsock code into ops yet because most Nanos users rely on the cloud for orchestration* and so this wouldn't be used there. It would be more useful if you were building your own cloud or your own orchestrator. (If you want this feature in ops, bring it up in a ticket with your use case - we also accept PRs.) So we'll need to take the qemu command line ops produces and tack our vsock flags onto it. You can get it by cutting/pasting the output from the following:

ops run -v --show-debug

*Note: Most Nanos users do not do their own orchestration, and that is confusing for people coming from a k8s/containers world. Most Nanos users rely on the cloud for their orchestration needs. This does not mean spinning up a Linux VM on AWS and using ops there. It means just using 'image create' and 'instance create'. If this is confusing to you, we highly encourage you to download ops and, after running a hello world on your laptop, deploy to the cloud. It'll answer your orchestration questions immediately.
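For reference, that cloud workflow is roughly the following - treat this as a sketch, since exact flags depend on your target cloud and config:

ops image create vsock -c config.json
ops instance create vsock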

Finally let's boot it like so:

qemu-system-x86_64 -machine q35 \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x3 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x3.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x3.0x2 \
-device virtio-scsi-pci,bus=pci.2,addr=0x0,id=scsi0 \
-device scsi-hd,bus=scsi0.0,drive=hd0 -vga none -smp 1 -device isa-debug-exit \
-m 2G -device virtio-rng-pci -machine accel=kvm:tcg -cpu host -no-reboot \
-cpu max -drive file=/home/eyberg/.ops/images/main,format=raw,if=none,id=hd0 \
-device virtio-net,bus=pci.3,addr=0x0,netdev=n0,mac=76:b2:8e:38:d2:ec \
-netdev user,id=n0 -display none -serial stdio \
-chardev socket,id=char0,reconnect=0,path=/tmp/vhost3.socket \
-device vhost-user-vsock-pci,chardev=char0 \
-object memory-backend-file,share=on,id=mem0,size=2G,mem-path=bob \
-numa node,memdev=mem0

en1: assigned 10.0.2.15
booting

The extra flags you want to append if you are cutting/pasting from ops are the following:

-chardev socket,id=char0,reconnect=0,path=/tmp/vhost3.socket \
-device vhost-user-vsock-pci,chardev=char0 \
-object memory-backend-file,share=on,id=mem0,size=2G,mem-path=bob \
-numa node,memdev=mem0 

Now if you flip back to your window with the nc listener you'll see your message:

nc -l -U /tmp/vm3.vsock_123
Hello from Nanos!

There are a variety of memory backends you can use, like hugepages or memfd. Here we are being lazy and just backing guest memory with the file 'bob'.
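The shared backend is what lets the vhost-user-vsock daemon see guest memory, hence share=on. If you'd rather not leave a file lying around, something like the memfd backend should work too (assuming your QEMU build has memory-backend-memfd):

-object memory-backend-memfd,id=mem0,size=2G,share=on \
-numa node,memdev=mem0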

Using with Firecracker:

Now let us continue the journey and look at how we might utilize vsock with Firecracker.

If this is the first time you have run unikernels under Firecracker, you'll want to check out this tutorial or the documentation first. Doubly so if you've never used Firecracker even without unikernels.

You just need to add a stanza to your Firecracker config like so:

"vsock": {
    "guest_cid": 3,
    "uds_path": "/tmp/vsock.sock"
  }
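In context, a complete vm_config.json might look something like the following - the kernel and disk paths are placeholders for whatever your Nanos build produced:

{
  "boot-source": {
    "kernel_image_path": "kernel.img",
    "boot_args": ""
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "disk.img",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 2048
  },
  "vsock": {
    "guest_cid": 3,
    "uds_path": "/tmp/vsock.sock"
  }
}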

First launch our netcat listener:

nc -l -U /tmp/vsock.sock_123

Then we can launch our image like so:

~/fc/release-v1.3.3-x86_64/firecracker-v1.3.3-x86_64 --api-sock /tmp/firecracker.socket --config-file vm_config.json
warning: ACPI MADT not found, default to 1 processor
booting

If we flip back to our other terminal we can see our message:

nc -l -U /tmp/vsock.sock_123
Hello from Nanos!

This was a short tutorial on using vsock with Nanos. If you learn better via video, check out our YouTube channel where we walk through these examples and more.

Deploy Your First Open Source Unikernel In Seconds

Get Started Now.