Containers vs. container images
When techies talk about containers, they usually mean Linux containers. This is technology implemented in recent versions of the Linux kernel (mainly namespaces and control groups) that permits the creation of isolated subsystems inside a computer. Such a subsystem is called a container. It is not something physical, nor a piece of software. It is just a process with certain properties running on your computer.
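A rough way to see that containment is a per-process property of the kernel (assuming a Linux system with /proc mounted) is to look at the namespaces that every process, containerized or not, already belongs to:

```shell
# Every running process is associated with a set of kernel namespaces,
# visible under /proc. A container is simply a process whose namespaces
# differ from those of the rest of the system.
ls /proc/self/ns/
```

The entries listed there (mnt, net, pid, uts, and so on) are the isolation boundaries a container runtime manipulates when it creates a container.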
A central feature of a container is the files it contains. These files include runnable software, but also data to be accessed. When a container is created, those files must come from somewhere. One way to provide a container's initial files is a container image, which is just a bunch of files packed together, conceptually not very different from a zip file. But there are other ways. The Guix package manager, for example, gives containers access to a precise set of packages, which are either already available on the host computer or else downloaded or built when the container is assembled.
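The "bunch of files packed together" idea can be made concrete with a toy example using plain tar. This is only an illustration: real image formats such as OCI add metadata and layering on top of such archives, and the file names below are made up for the demo.

```shell
# Build a toy "image": a directory tree packed into a single archive,
# conceptually similar to a zip file.
mkdir -p image-root/bin image-root/etc
echo 'hello from inside the image' > image-root/etc/motd
tar -cf toy-image.tar -C image-root .

# Listing the archive shows the initial files a container
# created from this "image" would start with.
tar -tf toy-image.tar
```

Unpacking such an archive into a directory and starting a suitably isolated process inside it is, in essence, what a container runtime does with a real image.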
Container images are files that can become quite large. Some people store them in generic data archives such as Zenodo. Others submit them to sharing platforms such as DockerHub. Unlike containers, which are a bit abstract, container images are datasets that occupy storage and take time to download. They feel a lot more real than the containers based on them.
That's probably why people frequently say "container" when they really mean "container image". That shorthand is often convenient, but it can also lead to confusion, for example when you need to explain how some systems can create containers without using any image at all.