Emulation, Virtualization & Compatibility Layers

I'm fascinated by the topic of emulation as a way to preserve old things and keep using them in the present. Sometimes it feels a bit like doing digital archaeology. I've used it at work, to play old videogames, to sign documents with a Windows-only Java application (an amazing feat to achieve!), and even to keep using my scanner, as it wasn't compatible with Linux.

I consider that there are four broad categories of emulating or replicating a system (more, really, if you include "simulation"), so I thought it'd be nice to write a summary and leave some links with additional details.

Emulator

Let's begin with the simplest and probably most well-known term, emulation. Quoting Wikipedia, "an emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest)". Whether you're emulating old hardware (e.g. an old 8086), a videogame console (e.g. the great GameBoy), or an API that emulates a service, the goal is always to reproduce the source system in as much detail as possible.

The main goal of emulation is accuracy, although depending on whether an emulator aims for high- or low-level emulation this might not be totally true, and we might drift into compatibility layer territory (explained later). Note also that unless an emulator is complete, it can be missing sub-systems or certain fragments of the original.
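To make that idea concrete, here is a minimal sketch in Python of the fetch-decode-execute loop that sits at the heart of most emulators. The three-instruction CPU here is invented purely for illustration; no real hardware is being modeled:

```python
# Minimal sketch of an emulator's core: a fetch-decode-execute loop.
# The 3-instruction ISA is hypothetical, not any real CPU.

def run(memory):
    pc = 0    # program counter
    acc = 0   # single accumulator register
    while pc < len(memory):
        opcode = memory[pc]
        if opcode == 0x01:      # LOAD imm: acc = operand
            acc = memory[pc + 1]
            pc += 2
        elif opcode == 0x02:    # ADD imm: acc += operand
            acc += memory[pc + 1]
            pc += 2
        elif opcode == 0xFF:    # HALT
            break
        else:
            raise ValueError(f"unknown opcode {opcode:#x} at {pc:#x}")
    return acc

# "Guest program": LOAD 2, ADD 3, HALT -> returns 5
print(run([0x01, 2, 0x02, 3, 0xFF]))
```

A real emulator adds memory-mapped I/O, timing, interrupts and much more, but the loop above is the skeleton everything else hangs from.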

If you want a deeper but still introductory dive into the topic, I recommend reading the article from Retro Reversing on how emulators work.

Virtual Machine

A virtual machine or VM is a system (commonly, but not only, software) that implements the capacity of running a certain computer machine (guest) inside another (host). While an emulator can cover only a certain piece (e.g. a file system emulator), a virtual machine always represents a complete machine. Also, a VM does not need to represent a real machine. For example, Another World is an old videogame whose creator decided to implement a VM that interpreted its own bytecode, making it easier to port the title to many different systems thanks to that common layer (more details, fascinating reading!).
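As a rough illustration of the Another World idea, here is a tiny stack-based VM with an invented bytecode. The opcodes are hypothetical, but the point stands: porting a "game" written in this bytecode only requires porting the small interpreter, not the game logic itself:

```python
# Sketch of an Another World-style approach: a made-up, stack-based
# bytecode that corresponds to no real CPU. Only this interpreter
# needs to be rewritten for each new host platform.

PUSH, ADD, PRINT, HALT = range(4)   # hypothetical opcodes

def interpret(bytecode):
    stack, pc = [], 0
    while True:
        op = bytecode[pc]
        pc += 1
        if op == PUSH:              # push the next literal onto the stack
            stack.append(bytecode[pc])
            pc += 1
        elif op == ADD:             # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == PRINT:           # pop and display the top of the stack
            print(stack.pop())
        elif op == HALT:
            return

interpret([PUSH, 40, PUSH, 2, ADD, PRINT, HALT])  # prints 42
```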

Virtual machines are very mature and have evolved a lot. Today, concepts like hypervisors and kernel samepage merging (KSM), plus hardware virtualization extensions like Intel VT, all allow very efficient management of multiple VMs running on a single physical machine. Still, for quite some time now you could already do your daily work perfectly well inside a VM; I did it long ago with Microsoft Virtual PC. And let's be honest: compared with the difficulty of setting up containers for non-trivial scenarios, it's still one of the best options.

OS-Level Virtualization

Operating system level virtualization became widely known after Docker brought the concept of containers to the masses, but the concept of resource groups and restrictions existed before. It basically consists of providing mechanisms to isolate resources, so each container has restricted access to them, while multiple containers run at once to better utilize all the available hardware. It is not as safe as virtual machines (where you can fully control almost all boundaries) nor emulators (where you can decide not to emulate certain features, or provide means to disable them), but it is very lightweight.
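To get a small taste of the kernel primitives that containers build on, here is a hedged sketch using Linux namespaces. It assumes Linux, root privileges, and Python 3.12+ (which added os.unshare); real runtimes like Docker combine many namespaces (pid, mount, network) with cgroups for CPU and memory limits:

```python
# Give this process its own UTS (hostname) namespace, so changing
# the hostname here does not affect the host system.
# Linux only; requires root and Python 3.12+ for os.unshare.
import os
import socket

os.unshare(os.CLONE_NEWUTS)        # new hostname namespace for this process
socket.sethostname("my-container") # only visible inside the namespace
print(socket.gethostname())        # prints "my-container" here...
# ...while running `hostname` in another shell still shows the host's name.
```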

Note that when using containers, if the guest container targets a different operating system than the host (e.g. Linux containers on Windows or macOS), it wouldn't be able to run at all, because there is no emulation involved. So in practice, what platforms like Docker do is boot up a small virtual machine with access to all your configured resources, and then run the containers inside it. There is a small and sometimes noticeable performance penalty, but it is still faster and more convenient than running a fully virtualized OS via a VM.

Compatibility Layer

Last comes the least-known category. Despite having used Wine for a while, I had never stopped to understand how it worked if it wasn't an emulator. That's when I learned about software compatibility layers, which mainly provide two things: a runtime environment that translates calls from the source system into calls for the destination system, and a set of reimplemented libraries that keep the original interface while adapting the implementation to the new host.

Keeping with the Wine example, it neither emulates nor virtualizes Windows; instead, it provides a runtime that converts Windows API calls into POSIX calls, and re-implements libraries like DirectX and pieces like the Windows filesystem layer. It is mind-blowing how little you really need to change when compared with emulation 🤯.
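As a toy illustration of the idea (this is not how Wine actually does it; Wine is written in C and is enormously more involved), here is a sketch that keeps a Windows-flavoured API surface while translating each call to its POSIX equivalent underneath:

```python
# Toy compatibility layer: preserve a Win32-style interface, but
# implement it with POSIX calls. Purely illustrative; the function
# names mirror Win32, the bodies are plain os-module calls.
import os

GENERIC_READ = 0x80000000  # mirrors the real Win32 constant

def CreateFileA(path, access):
    """Windows-style entry point, implemented with POSIX open()."""
    flags = os.O_RDONLY if access == GENERIC_READ else os.O_RDWR
    return os.open(path, flags)   # returns a POSIX fd instead of a HANDLE

def CloseHandle(handle):
    os.close(handle)

# A "Windows program" calling the familiar API, unaware it runs on POSIX:
handle = CreateFileA("/etc/hostname", GENERIC_READ)
print(os.read(handle, 64))
CloseHandle(handle)
```

The caller keeps using the interface it was written against; only the plumbing underneath changes.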

A compatibility layer's goal is, as the name implies, compatibility. It might not work or look exactly the same as on the original system, but as long as it works on the destination, it serves its purpose.

Note that there are also hardware compatibility layers, but those seem to be related to hardware emulation, and I have almost no knowledge of them.
