The key point is:
> Other (not yet existing) subsystems could use containers to enforce
> limits on CPU time, I/O bandwidth usage, memory usage, filesystem
> visibility, and so on. Containers are hierarchical, in that one
> container can hold others.
Right now, all resource management is done globally or per process/thread, but not much in between. Process containers make it possible to group a bunch of processes and do resource allocation for them as a group (think ulimit, but more). Which resource is being controlled doesn't matter at this point, as this article is about the basic infrastructure being put into place to make all of that possible.
This is useful for multi-purpose and multi-user machines. If, for example, you want your server to spend 50% of its CPU time, disk I/O, and/or memory on a webserver and a database, 25% on finding aliens, and the rest on reading LWN, it can be done.
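For what it's worth, this infrastructure eventually grew into what mainline kernels now call control groups (cgroups). A rough sketch of the 50/25/25 split above, assuming a modern kernel with the cgroup v2 hierarchy mounted at /sys/fs/cgroup, root privileges, and made-up group names:

```shell
# Enable the CPU controller for children of the root group.
echo +cpu > /sys/fs/cgroup/cgroup.subtree_control

# Create three sibling groups (the names are arbitrary).
mkdir /sys/fs/cgroup/web /sys/fs/cgroup/seti /sys/fs/cgroup/lwn

# cpu.weight divides CPU time proportionally among siblings under
# contention: 200:100:100 gives the web group half, the others a quarter each.
echo 200 > /sys/fs/cgroup/web/cpu.weight
echo 100 > /sys/fs/cgroup/seti/cpu.weight
echo 100 > /sys/fs/cgroup/lwn/cpu.weight

# Move an existing process into a group by writing its PID.
echo $WEBSERVER_PID > /sys/fs/cgroup/web/cgroup.procs
```

Memory and I/O have analogous knobs (memory.max, io.weight, etc.) once their controllers are enabled; the weights are relative, so idle groups don't waste their share.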
It seems containers can also function as a sort of jail, limiting which parts of the filesystem and the process namespace a group of processes can see and access.
(I might be mixing multiple things though.)
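(In today's kernels that jail-like part is indeed a separate mechanism, namespaces, but the effect can be sketched with util-linux's unshare, given sufficient privileges:)

```shell
# Start a command in fresh PID and mount namespaces with a private /proc,
# so "ps ax" inside the namespace lists only processes started within it.
unshare --pid --fork --mount-proc ps ax
```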
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds