Physical shipping containers, originally created to ease the transportation of goods and materials on cargo ships, standardized the way things are packed. Whether a sports car from Italy or coffee from South Africa, goods were packed and shipped the exact same way. The simplification this provided sparked an explosion in international trade and economic growth.
Likewise, roughly half a century later, when Docker's engineers brought container technology to software applications, they did it to simplify shipping software from the developer's laptop to the production environment. Containers package everything an application needs to run, including libraries and system tools, into a single image that can be deployed across multiple environments—just like physical containers that are easily loaded by cranes and forklifts onto cargo ships, planes, and trains.
But a similar capability already existed in the form of virtual machines. So why not stick with those? Why containers?
Containers are, first and foremost, a packaging tool. You take an application and all its dependencies, put them in a container image, drop it onto any system, and it runs exactly as expected. A virtual machine, by contrast, boots a full guest operating system and layers the application and its dependencies on top of it, which brings significant overhead from hardware virtualization. Containers avoid that overhead by sharing the host's kernel and isolating processes at the operating-system level, so they start faster and consume far fewer resources.
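To make the packaging idea concrete, here is a minimal Dockerfile sketch. It is illustrative only: the application file `app.py`, the `requirements.txt` manifest, and the base image tag are assumptions for the example, not details from the text.

```dockerfile
# Start from a small base image that provides the runtime.
FROM python:3.12-slim

# Copy the application and its dependency manifest into the image,
# then install the dependencies inside the image itself.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# The image now carries the app plus every library it needs.
CMD ["python", "app.py"]
```

Building this image with `docker build` and running it with `docker run` produces the same behavior on any host with a container runtime, which is exactly the portability the physical-container analogy describes.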