A Primer on Isolation
The two main abstractions in modern cloud computing, when it comes to isolation, are containers and VMs. Yesterday I was thinking about hypervisors and the key role they play in enabling isolation down at the hardware level. In practice, you install a hypervisor on your server, and its management tooling gives you control over the virtualization. Simply put, it lets you decide how resources are divided up and accessed by the virtual machines you create.
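As a tiny illustration, on Linux the kernel's built-in hypervisor is KVM, and it exposes itself through a device node. Here's a minimal sketch (assuming a Linux host) that checks whether hardware virtualization is available to VM managers:

```go
// kvmcheck.go — a minimal sketch: does this Linux host expose KVM,
// the kernel's hypervisor interface that VM managers build on?
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/dev/kvm"); err == nil {
		fmt.Println("KVM is available: VMs can use hardware virtualization")
	} else {
		fmt.Println("No /dev/kvm: hardware virtualization is unavailable or disabled")
	}
}
```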
If you missed it last time, they are virtual because they are software-based. Each VM can then run its own kernel and OS on top of the hardware that was virtualized for it.
This brings one of the strongest levels of isolation. Since the kernel and OS are not shared, a vulnerability stays contained within a given VM. This is important to remember when we talk about containers. Historically, VMs come with a layer of programs and libraries that might go unused by the workload running on them. Additionally, some studies have shown slower workload performance compared to containers (I hate to do this, but I can't find the reference anymore; the idea is to highlight the nimble nature of containers compared to VMs). The portability of containers provides an incredible level of flexibility too, which is vital in modern cloud computing.
This is part of the reason why containers are an appetizing approach. They are packaged conveniently, including just the bare minimum needed by the application or workload you're running. They are lighter than VMs, and their access to OS features and hardware resources is controlled by Linux namespaces and cgroups (I might expand on this another day).
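To make namespaces concrete, here's a minimal sketch (Linux-only, run as root) that launches a shell in its own UTS, PID, and mount namespaces, which is the same primitive container runtimes build on:

```go
// namespaces.go — a minimal sketch of Linux namespace isolation.
// Run as root on Linux: go run namespaces.go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Spawn a shell in new UTS, PID, and mount namespaces.
	// Inside it, `hostname something-else` won't affect the host,
	// and the shell sees itself as PID 1.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Namespaces control what the process can see; cgroups then cap how much CPU and memory that process tree can consume.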
This is all nice, but remember the first law of Software Architecture:
Everything in Software Architecture is a trade-off.
When it comes to isolation, containers share the same kernel and underlying OS, which means that a vulnerability affecting that kernel can allow a process to escape its boundaries. Granted, it might take quite some effort to do so, but it's still a possibility. VMs, on the other hand, rely on isolation enforced by the hypervisor, so while they are still vulnerable to attacks like Spectre, an exploit would have to bypass the hypervisor in order to cross from one VM to another. The level of isolation is stricter.
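You can see the shared kernel for yourself. Here's a minimal sketch (Linux-only) that prints the kernel release; run it on a host and inside a container on that same host and you'll get the same value, whereas inside a VM you'd see the guest's own kernel:

```go
// kernelinfo.go — a minimal sketch: print the kernel release.
// In a container this matches the host; in a VM it's the guest kernel.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var uts syscall.Utsname
	if err := syscall.Uname(&uts); err != nil {
		panic(err)
	}
	// Utsname fields are fixed-size C char arrays; convert to a string.
	release := make([]byte, 0, len(uts.Release))
	for _, c := range uts.Release {
		if c == 0 {
			break
		}
		release = append(release, byte(c))
	}
	fmt.Println("kernel release:", string(release))
}
```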
It looks like you can't win! This was the state of the world until at least 2016: you either had slower but more isolated workloads with VMs, or flexible, lightweight, but less isolated workloads with containers. A lot of work has been done since to make this distinction less jarring, with the main goal of providing the security and isolation of a VM without the overhead. Kata Containers, gVisor, and Firecracker are part of the modern tooling (as far as I can tell) that has made strides in running secure serverless workloads.
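As a taste of how these slot in, gVisor ships a runtime called runsc that Docker can use in place of its default one. This is a minimal sketch, assuming Docker is installed and runsc is registered as a Docker runtime on the machine:

```go
// runsc_demo.go — a minimal sketch: run a container under gVisor's
// runsc runtime instead of the default one. Assumes Docker is
// installed and runsc is registered in Docker's runtime config.
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Equivalent to: docker run --rm --runtime=runsc alpine uname -r
	// Under runsc, the container's syscalls go through gVisor's
	// user-space kernel rather than straight to the host kernel.
	cmd := exec.Command("docker", "run", "--rm", "--runtime=runsc", "alpine", "uname", "-r")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```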
Thoughts
Not that we needed more proof that the first law of Software Architecture holds, but this is a nice reminder. I've been digging into each of the more modern approaches to serverless workload execution to understand what the landscape looks like. This made me realize that my Linux knowledge needs some brushing up, so I'm excited to explore more of it every day. Knowing me, I'll probably end up buying a good book to also use as a reference.