The concept of the software container continues to gain ground in the virtualization of information systems, offering simplified installation and low overhead.
Born in the open-source (Linux) world, the software container is a kind of virtual envelope: an independent, self-contained "execution box". This virtualization technique makes it possible to develop and run different applications on the same physical system, each in isolation and without embedding its own operating system.
Virtualization platforms, by contrast, are much heavier: they rely on a hypervisor, on top of which several operating systems (Windows, Linux) can run simultaneously on the same physical machine. Each virtual machine (VM) they host therefore embeds its own complete operating system, with its graphical interface and its system administration tools (management of ports, peripherals, etc.).
Very minimalist, containers share the same operating-system kernel (often Linux) and isolate application processes from the rest of the system. Linux containers are generally the most portable, as long as they are compatible with the underlying system. They are said to "emulate" system environments through "software images" that contain every element the application needs, such as the pieces of a Linux distribution, including RPM packages (Red Hat package manager) used to install a program or library.
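The idea of a software image can be sketched as a minimal Dockerfile. Everything here is illustrative: the base image, the `app.py` script, and the dependency file are assumptions, not details from the article.

```dockerfile
# Sketch of a software image: a base layer plus the application's own files.
FROM python:3.12-slim                  # base layer: a slim Linux distribution with Python
WORKDIR /app                           # working directory inside the container
COPY requirements.txt .                # dependency list copied into the image
RUN pip install -r requirements.txt    # libraries baked into the image itself
COPY app.py .                          # the application code
CMD ["python", "app.py"]               # process launched when the container starts
```

Because the libraries are installed inside the image, the same image runs identically on any compatible host, which is exactly the portability argument made above.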
A container therefore bundles all the configuration files and software libraries needed to develop and run an application. It is portable regardless of the system configuration: it moves from one environment to another without rewriting and without patches. Installation is simplified and resource consumption is kept to a minimum. There is no longer any need to worry about compatibility with different versions of the operating system.
A standard interface (startup, shutdown, environment variables, etc.) provides application isolation, while persistent data is kept safe in "volumes".
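A minimal Docker Compose file can illustrate this interface. The service name, image tag, variable, and volume path are hypothetical, chosen only to show where environment variables and volumes fit.

```yaml
# Hypothetical docker-compose.yml showing the standard interface elements.
services:
  web:
    image: example-app:1.0        # illustrative image name
    environment:
      - APP_ENV=production        # configuration passed in via environment variables
    volumes:
      - app-data:/var/lib/app     # data written here survives container restarts

volumes:
  app-data:                       # named volume managed by the container engine
```

Stopping and removing the container leaves the `app-data` volume intact, which is how the data stays "sheltered" as described above.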
Thus, a container can be shared within a team of developers, providing a common development environment very close to the production one while remaining easy to set up. Any application developed this way can be migrated seamlessly from the development environment to the production environment, and large-scale deployments can be accelerated with additional tools.
Even better: applications distributed across multiple containers can be orchestrated across different clouds. Three building blocks make this possible. Kubernetes, an open-source orchestrator designed by Google, manages containerized applications, scales them, and automates their deployment. Docker solutions (Docker Engine, etc.) form an overlay that eases development and deployment. And above them, OpenShift modules handle rights, authentication, project-by-project separation, and so on.
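The scaling role of Kubernetes can be sketched with a minimal Deployment manifest. The application name, image tag, and port are illustrative assumptions.

```yaml
# Hypothetical Kubernetes Deployment: the orchestrator keeps three
# replicas of the same container image running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                     # desired number of identical containers
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example-app:1.0  # illustrative image name
          ports:
            - containerPort: 8080
```

If a container crashes or a node disappears, Kubernetes restarts replicas elsewhere to restore the declared count, which is what "scale and automate deployment" means in practice.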