The internet of the 90s was largely run by powerful servers nurtured by small teams, each holding arcane knowledge about their machines’ configuration and quirks. Typically, a server would only have one role, so that if it glitched out the failure would have much less of an impact on other services. Updates might take days, and if they went awry then there would be yet more downtime while things were slowly restored from backup tapes.
The mid-2000s saw widespread adoption of virtual machines, which (if deployed properly) made much better use of server capacity. Services could be isolated from one another on the same machine. In the event of hardware failure, virtual machines could easily be restored from backup on new hardware, without the need for reconfiguration. In the event of software failure, VMs could be effortlessly restored to a known-good snapshot.
However, there was still room for improvement. By the 2010s enterprises were operating at a much larger scale, thinking in terms of data centres rather than individual servers, whether physical or virtual. Poring over the logs of individual machines to diagnose faults is not an operation that scales well, and neither is installing a whole operating system for each VM. So the next evolution was containers. These were popularised by Docker in 2013, but ultimately owe their existence to the coming of age of a whole collection of technologies (cgroups and namespaces in the Linux kernel, for example).
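Those kernel facilities are not exotic: on any modern Linux system, every process already sits inside a set of namespaces and a cgroup hierarchy, and both are visible under /proc. A minimal sketch (assuming a Linux host, since /proc/self/ns and /proc/self/cgroup are Linux-specific) of peeking at them:

```python
# Inspect the kernel building blocks that containers are made of.
# Assumes a Linux host: /proc/self/ns and /proc/self/cgroup do not
# exist on macOS or Windows.
import os

# Each entry in /proc/self/ns names one namespace this process occupies
# (pid, net, mnt, uts, ...). A container runtime's job is largely to
# start a process with fresh copies of these instead of the host's.
namespaces = sorted(os.listdir("/proc/self/ns"))
print("namespaces:", namespaces)

# /proc/self/cgroup shows where this process sits in the control-group
# hierarchy, which is what bounds its CPU, memory, and I/O usage.
with open("/proc/self/cgroup") as f:
    print("cgroup:", f.read().strip())
```

Running this inside a container versus on the bare host shows different namespace inodes and cgroup paths, which is a quick way to see that "a container" is just an ordinary process wearing a different set of these labels.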