
The Open Container Project

The Open Container Project has announced its existence. "Housed under the Linux Foundation, the OCP’s mission is to enable users and companies to continue to innovate and develop container-based solutions, with confidence that their pre-existing development efforts will be protected and without industry fragmentation. As part of this initiative, Docker will donate the code for its software container format and its runtime, as well as the associated specifications. The leadership of the Application Container spec (“appc”) initiative, including founding member CoreOS, will also be bringing their technical leadership and support to OCP."


The Open Container Project

Posted Jun 22, 2015 19:20 UTC (Mon) by landley (guest, #6789) [Link] (2 responses)

While I'm very interested in container technology, what is the Linux Foundation _saying_ here? "Continue to innovate and develop", "existing development efforts will be protected", "without industry fragmentation". So "go ahead and change stuff, nothing will be rendered obsolete, everything will change in lockstep."

I know the Linux Foundation is a giant pointy-haired bureaucracy, but promising change without change is kinda impressive even for them.

The Open Container Project

Posted Jun 22, 2015 20:00 UTC (Mon) by k8to (guest, #15413) [Link]

An aging young rebel / Called What's His Name
Wanted to be different / While he stayed the same

The Open Container Project

Posted Jun 22, 2015 21:48 UTC (Mon) by dowdle (subscriber, #659) [Link]

After parsing the text a bit... it appears that The Linux Foundation will take over libcontainer... and libcontainer will be a lower-level standard that everyone else builds on top of.

The Open Container Project

Posted Jun 22, 2015 22:35 UTC (Mon) by ssmith32 (subscriber, #72404) [Link] (2 responses)

The write once, run everywhere dream marches on.
First C attempted write once, build everywhere.
Then Java (and a slew of others) tried write once, run everywhere.
Now we can build once, run everywhere.
It never works out as promised, but it does get a little easier each time :)

The Open Container Project

Posted Jun 23, 2015 17:12 UTC (Tue) by rahvin (guest, #16953) [Link]

Write once would work wonderfully if everyone would stop using OSes other than Linux. We are quite a ways toward that goal. Certainly further along than I thought we would be 20 years ago.

The Open Container Project

Posted Jun 25, 2015 20:49 UTC (Thu) by Jandar (subscriber, #85683) [Link]

If you write pure C (ANSI version, or ISO?) it is easy. Only write to stdout and read from stdin. Compute with double, int and pointer.

Java is a bit trickier but doable.

The next step is next to impossible but nevertheless eventually possible.

Resistance is futile ;-)
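
A throwaway example of the kind of program that stays portable under those rules (my sketch, not Jandar's): only stdin/stdout, only int and double arithmetic, nothing outside hosted ISO C.

    #include <stdio.h>

    /* Read numbers from stdin, print their count and mean to stdout. */
    int main(void)
    {
        double sum = 0.0, x;
        int n = 0;

        while (scanf("%lf", &x) == 1) {
            sum += x;
            n++;
        }
        printf("count=%d mean=%f\n", n, n ? sum / n : 0.0);
        return 0;
    }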

The Open Container Project

Posted Jun 23, 2015 0:43 UTC (Tue) by augustz (guest, #37348) [Link] (1 responses)

I'm very happy for consolidation, but would have liked to see some of the various efforts play out a bit further to see if a given approach had some real legs / benefits.

I'm also a bit worried about how vague the thing is. We've seen standards designed by committee that include everyone's desires. All feature variations for all people for all needs can lead to messy cumbersome projects.

Hopefully internally they can be a bit clearer on the vision side, especially efforts like appc.

The Open Container Project

Posted Jun 23, 2015 2:51 UTC (Tue) by raven667 (subscriber, #5198) [Link]

I think this is about trying to nip the infighting between CoreOS and Docker in the bud as much as can be done at this stage to keep from further bifurcating the container marketplace into two incompatible ecosystems of companies and tools. I don't know how that is going to work out as both companies are writing code as fast as they can to have more engineering effort behind their respective systems such that they are difficult to cheaply replace so as to put themselves in a privileged position as a standard part of the container ecosystem and demand higher prices because of it. Kind of like VMware in the traditional virtualization market where they were able to charge more for a while by being a couple of years ahead in engineering effort although now the market is now getting comoditized, moving on to EC2-compatible systems and containers.

The Open Container Project

Posted Jun 23, 2015 5:54 UTC (Tue) by tzafrir (subscriber, #11501) [Link] (1 responses)

Open Container: Isn't this a leaky abstraction?

The Open Container Project

Posted Jun 23, 2015 11:08 UTC (Tue) by tjasper (subscriber, #4310) [Link]

Like the comment... and I would add that most containers have a hole of some sort, in order to facilitate putting things inside them :).

The Open Container Project

Posted Jun 23, 2015 12:25 UTC (Tue) by kloczek (guest, #6391) [Link] (51 responses)

I'm still curious why Linux needs containerization if most of the separation between processes running on the same system can be done without full caging?

Maybe Linux developers should learn something about RBAC?
Like on http://docs.oracle.com/cd/E36784_01/html/E37123/rbactask-... ??

The same separation can be done using SELinux.
The only problem with SELinux is that it is so complicated that it is hard to use even in simple cases.
The second problem with SELinux is that the overhead it adds is usually up to 10% of CPU time.
The impact of the same separation using extended RBAC on Solaris is usually hard to measure.

So maybe, instead of raising the entropy inside the kernel code yet again, it would be better to redesign SELinux?

The Open Container Project

Posted Jun 23, 2015 14:48 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (5 responses)

How are you measuring 10%? I haven't seen any recent benchmarks with anything close to that number.

The Open Container Project

Posted Jun 23, 2015 16:16 UTC (Tue) by kloczek (guest, #6391) [Link] (4 responses)

This is not a matter of benchmarking.
In real-world scenarios benchmarks are useless.
They only give you a kind of baseline for what you can possibly expect.

The point is that such a slowdown can be observed in some cases (especially with applications interacting over the network).

The Open Container Project

Posted Jun 23, 2015 16:23 UTC (Tue) by drag (guest, #31333) [Link] (3 responses)

> In real-world scenarios benchmarks are useless.

It depends on the benchmark, on the real-world scenario, and on your goals for the benchmarking. Sometimes things are too complicated to model in a single benchmark, and then benchmarks are only good for establishing baselines; other times they will be very accurate.

The Open Container Project

Posted Jun 23, 2015 16:28 UTC (Tue) by kloczek (guest, #6391) [Link] (2 responses)

Really, the result of your benchmark is irrelevant as long as I can find an exact scenario with a big enough slowdown.
Do you see it now?

The Open Container Project

Posted Jun 23, 2015 17:16 UTC (Tue) by rahvin (guest, #16953) [Link] (1 responses)

And such anecdotal evidence could be evidence of misconfiguration not the speed impact of SELinux. The entire point of a benchmark is to standardize the comparison so that any misconfiguration, hardware differences, etc affects all the systems the same.

Anecdotal evidence is just that, anecdotal. It's very foolish to use it for any type of evaluation.

The Open Container Project

Posted Jun 23, 2015 19:36 UTC (Tue) by kloczek (guest, #6391) [Link]

> And such anecdotal evidence could be evidence of misconfiguration not the speed impact of SELinux.

SQ (Stupid Question): Do you know anything about SELinux tunables impacting the speed of using SELinux?

The Open Container Project

Posted Jun 23, 2015 14:58 UTC (Tue) by ledow (guest, #11753) [Link] (2 responses)

Containers are portable, SELinux configs are not necessarily so.

Kind of matters when you're pushing your containers around a server farm / between datacenters and have little idea (or care) about what the base hardware / layout / distro / VM host / etc. actually is.

The Open Container Project

Posted Jun 23, 2015 16:17 UTC (Tue) by kloczek (guest, #6391) [Link] (1 responses)

RBAC is portable and is available on *BSD and Linux as well (a few people are still working on this).

I'm not a fan of SELinux :)

The Open Container Project

Posted Jun 23, 2015 16:37 UTC (Tue) by kloczek (guest, #6391) [Link]

Really, the problem again is with people who are not able to analyze what has already been done to handle some issue or fulfill some need.
It is such a common case with Linux: even if someone first reviews the available technologies and/or implemented approaches, in the end the biggest LinuxWorld(tm) force is NIH syndrome, and everything ends up in reimplementation of the wheel, again and again.

In this case it is even worse.
It seems some people were able to convince some sponsors to do some work together. They announced the project, and yet "The specification" links on the main page of https://www.opencontainers.org/ point to a 404 page.
The funny part is that github does not know anything about "opencontainers" :D

https://github.com/search?q=opencontainers

The Open Container Project

Posted Jun 23, 2015 16:35 UTC (Tue) by drag (guest, #31333) [Link] (40 responses)

> I'm still curious why Linux needs containerization if most of the separation between processes running on the same system can be done without full caging?

Containerization _IS_ the separation between processes running on the same system. And define 'full caging'... it's pretty useful sometimes to have 'containers' that are not completely 'caged'. For example, they share the same file systems, but writes go off into a snapshot.

And I doubt that 'fully caged' really is possible anyway, which is where SELinux and RBAC come in.

It's not really useful to think of containers as 'Linux installs you run software in', but rather think of them as just application runtimes. They are more like a 'jar' file from Java than anything else. Just much more flexible and not tied to a specific programming language.

So if you look at it that way it's very obvious that you are not eliminating the need for things like RBAC or SELinux when you are dealing with containers.

The reality is that containers are used not because of strict technical necessity, but because they make it vastly easier to manage software than the traditional approach of 'just using apt-get' or whatever. You don't have to worry about system dependencies or conflicts. You don't have to worry about one application wanting a different version of Ruby than another. You don't have to worry about applications getting pissy because they both want port 8080, or other junk like that. Yes, this sort of thing is relatively easy to deal with by editing configs or whatever, but it's nice not having to deal with it in the first place!

It just makes things easier and faster to deal with. Even when you have your system all managed by something like Chef or Puppet, where you can flip a switch to install software and set up applications on your OSes, this takes minutes to happen. With a container you can get a new application deployed and running in milliseconds: very literally you are just executing an application, not configuring an entire environment. All the configuration was done before, it's always going to be the same, and you can run a simple command to roll back changes and make a container 'all new' again.

It's all about breaking away from the traditional Linux software distribution model and making something that 'just works'.
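
For what drag describes, the kernel primitive underneath is just clone(2) with namespace flags. Here is a minimal sketch (mine, not Docker or runc code; it needs root or user namespaces, and error handling is trimmed) of a child that gets its own mount, UTS and PID namespaces and then simply executes a program:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[1024 * 1024];

    static int child(void *arg)
    {
        (void)arg;
        sethostname("sandbox", 7);           /* visible only inside CLONE_NEWUTS */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }

    int main(void)
    {
        pid_t pid = clone(child, child_stack + sizeof(child_stack),
                          CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD,
                          NULL);
        if (pid < 0) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }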

The Open Container Project

Posted Jun 23, 2015 16:56 UTC (Tue) by kloczek (guest, #6391) [Link] (37 responses)

> For example, they share the same file systems, but writes go off into a snapshot.

Sorry... do you want to say that those snapshots are available on a per-PID basis at exactly the same mountpoint?

> It's not really useful to think of containers as 'Linux installs you run software in', but rather think of them as just application runtimes

Ja, ja, natürlich... and to provide full namespace separation for every process we are going to introduce a CPU-cache-made-out-of-rubber??

Only recently has Linux had decent CPU time slicing like the FSS provided on Solaris/BSD. Other parts are still missing, like projects, tasks and contracts, which allow delegating those resources to a single thread/process or a bunch of processes, not at the cage layer but at the runtime stage.

RBAC does exactly what you wrote: it provides "application runtime" separation, and all this without creating yet another instance of userspace namespaces each time.

Just look again at the Solaris doc link which I sent to see how, at the system management layer, you can specify exact RBAC rules in an SMF manifest.
It is easily scriptable, transferable, processable... and OtherAble as well (as the configuration is presented to the admin in the form of XML).

The Open Container Project

Posted Jun 23, 2015 17:13 UTC (Tue) by nix (subscriber, #2304) [Link] (9 responses)

> Ja, ja, natürlich... and to provide full namespace separation for every process we are going to introduce a CPU-cache-made-out-of-rubber??

Given that the CPU cache has to be flushed whenever you switch between *processes* -- i.e. containers make this no worse -- I see no relevance in this comment.

The Open Container Project

Posted Jun 23, 2015 20:35 UTC (Tue) by kloczek (guest, #6391) [Link] (8 responses)

Maybe you are not aware of this, but a context switch is not the only thing that pollutes CPU caches.

The Open Container Project

Posted Jun 23, 2015 22:36 UTC (Tue) by nix (subscriber, #2304) [Link] (7 responses)

Since containers share pagecache pages for similar executables (especially if you turn on KSM so that identical copies of executables and things get their pages shared), containers don't add much to this -- a *lot* less than VMs, since the page cache etc is properly shared in the container world.

Obviously the sharing is worse than for a totally non-containerized system but, tbh, the CPU cache is *not* the problem here, nor is it where the work is going on (look to cgroups and controllers for *that*).
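
For reference, KSM is opt-in on both sides: the global /sys/kernel/mm/ksm/run switch must be 1, and the kernel only scans anonymous regions that a process has marked with madvise(MADV_MERGEABLE). A minimal sketch of that opt-in (mine, Linux-specific):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 64 * 1024 * 1024;
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(buf, 'x', len);                       /* many identical pages */
        if (madvise(buf, len, MADV_MERGEABLE) != 0)  /* offer them to KSM */
            perror("madvise(MADV_MERGEABLE)");
        pause();    /* keep the mapping alive; watch /sys/kernel/mm/ksm/pages_shared */
        return 0;
    }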

The Open Container Project

Posted Jun 24, 2015 2:08 UTC (Wed) by kloczek (guest, #6391) [Link] (6 responses)

> Since containers share pagecache pages for similar executables (especially if you turn on KSM so that identical copies of executables and things get their pages shared), containers don't add much to this -- a *lot* less than VMs, since the page cache etc is properly shared in the container world.

On Solaris, the page cache is the old caching infrastructure. More and more signs say that it will be dropped at some point in time.
Now the same is done via the ARC, which provides more effective caching.

Be aware that in the present day, when systems sometimes have terabytes of memory, traditional pageable memory is more and more problematic, and more and more OS developers are thinking about dropping this model.
Why? Try to calculate the TLB size with 4K pages when your kit is equipped with (let's say) 16TB of RAM.

Paging was good when applications' memory needs were bigger than the available RAM. These days you can count such applications on your fingers. More and more signs show that the traditional separation between storage and RAM will be blurring as well.

Where do glibc(s) ABI bugs/problems fit into this picture? I have no idea...

The Open Container Project

Posted Jun 24, 2015 2:23 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

ARC is a kludge. Aaaand it doesn't solve the TLB problem - while the file cache gets to use raw hugepages, individual mappings are still at 4k-page level granularity.

Linux has hugepage support and getting it for the file cache is one interesting future task.

The Open Container Project

Posted Jun 24, 2015 8:31 UTC (Wed) by kloczek (guest, #6391) [Link] (3 responses)

> ARC is a kludge.

I don't even know how to respond to this.
Technology which was developed from scratch, accumulating the consequences of all the bad scenarios found where existing OS caching was not enough... a kludge?
What do you know about ARC that allows you to say that it is a kludge?

The Open Container Project

Posted Jun 24, 2015 8:38 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

Uhm, not really. ARC was designed because ZFS needs a lot of RAM for dedup and making use of the file cache for it was pretty much the only choice.

Oh, and of course by ARC I don't mean the algorithm itself, just its use in ZFS.

The Open Container Project

Posted Jun 24, 2015 12:53 UTC (Wed) by kloczek (guest, #6391) [Link]

Yet another reply completely detached from the subject or from the text being commented on.

The Open Container Project

Posted Jun 24, 2015 16:24 UTC (Wed) by nye (subscriber, #51576) [Link]

>Uhm, not really. ARC was designed because ZFS needs a lot of RAM for dedup and making use of the file cache for it was pretty much the only choice.

That is definitely not true. ARC has always been a part of ZFS, whereas deduplication is a feature that was added on after quite some years (and despite being talked about a lot is not all that widely used AFAICT because *most* of the time, people who think they might want it find that the downsides are too much to bear).

The Open Container Project

Posted Jun 24, 2015 14:59 UTC (Wed) by nix (subscriber, #2304) [Link]

> Where do glibc(s) ABI bugs/problems fit into this picture? I have no idea...

Nowhere. Nobody but you is claiming that 'glibc(s) ABI bugs/problems' even *exist*. Supporting multiple ABIs is a *feature*.

The Open Container Project

Posted Jun 23, 2015 18:39 UTC (Tue) by drag (guest, #31333) [Link] (26 responses)

> RBAC does exactly what you wrote: it provides "application runtime" separation, and all this without creating yet another instance of userspace namespaces each time.

RBAC doesn't do anything to make software easier to install and manage. That was my main point.

Containers don't solve the problem that things like SELinux are designed for.

The Open Container Project

Posted Jun 23, 2015 20:27 UTC (Tue) by kloczek (guest, #6391) [Link] (25 responses)

> RBAC doesn't do anything to make software easier to install and manage. That was my main point.

RBAC per se... of course not.
RBAC tools that allow you to easily prepare a scriptable setup... of course yes.

Again: try to have a look at the SMF documentation with examples of extended privileges.
To set up anything related to such caging you will need one command: svccfg.

Look... in the case of opencontainers and the already-implemented runc, you must organize your own containers of data which will be passed to the runc command as a JSON string. Importing, exporting, version control, access control... who cares?

In the case of SMF it is possible to use the central SMF database, which allows you to create any property of the service you want, and you don't need to think about where those keys and values are stored (it could even be on some library pages of some bunch of dwarves).
As another consequence, runc must come with a JSON parser.
Question: how many JSON parsers, in the form of shared libraries and different scripting languages, does a base Linux system have today (just a typical @core kickstart profile), and why is it necessary to have yet another one written in yet another modern language like Go? Just "because we can"?

Maybe it would be better to first organize a central configuration database as part of systemd?
With that, building things like runc would be way easier, wouldn't it?
A transactional BerkeleyDB/libdb (as in the case of SMF) is really enough.

This project solves roof issues. The problem is that this roof is built without foundations and walls. For this reason alone it has a very good chance of collapsing.

Another issue is that the current implementation of runc, instead of using some kind of abstraction allowing this software to be easily ported to other OSes, is built straight on top of Linux-specific configuration features.
As a consequence, opencontainers is a Linux-specific approach, and other OSes may implement some other common method of handling such things.
According to what is on the main page https://www.opencontainers.org/:

> not bound to higher level constructs such as a particular client or orchestration stack not tightly associated with any particular commercial vendor or project portable across a wide variety of operating systems, hardware, CPU architectures, public clouds, etc.

Which, after verifying what is already implemented in the current source code, is total bol*cks/BS.

After reading the opencontainers source code, my impression is that this project, instead of solving some general configuration needs, is trying to solve only containerization configuration needs.
Everything else with similar needs should go on /dev/tree, because WeDontCare(tm) about this and ThisIsNotOurProblem(tm).

If in a year or two someone says "we need to stop doing those nasty configuration things" and at last comes up with a decent general configuration framework for storing whatever configuration in a common OS DB, allowing versions, instances, access control or delegation of permissions to be tracked (as in the case of Solaris SMF), this project will be doomed, because it will be necessary to rewrite most of the project's code.
Because it is not a product with regular support, the current maintainers will by then be working on SomethingMoreInteresting(tm) than opencontainers, and the people already using it will be left like a monkey with its fist closed around the peanuts inside the jar.

The Linux development model has a fundamental flaw.
The way it is organized, it never allows something to be improved by learning what is/was wrong with the current state of some software.
Instead of trying to solve tomorrow's problems, it creates new ones by solving yesterday's or today's problems.

For me, the people working on this project (probably honest and good people) are trying to solve something by answering the wrong questions...

The Open Container Project

Posted Jun 23, 2015 21:18 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (24 responses)

> RBAC per se... of course not.
> RBAC tools that allow you to easily prepare a scriptable setup... of course yes.
So how does RBAC make sure that glibc is the correct version to run an app?

The Open Container Project

Posted Jun 23, 2015 21:53 UTC (Tue) by kloczek (guest, #6391) [Link] (23 responses)

Sorry, but I have no idea how to use different versions of (g)libc providing the user-space API on top of the same (running) kernel.

How is your question related to containerization running on top of a single kernel?
If you have such pathological needs, you really should clean up your "backyard" first, instead of asking how to maintain your garbage at any distribution/containerization/isolation level.
A typical deb/rpm based distribution does not even allow installing different glibcs. Using different versions of glibc on top of the same kernel does not make any sense.

The Open Container Project

Posted Jun 23, 2015 22:05 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (22 responses)

> Sorry, but I have no idea how to use different versions of (g)libc providing the user-space API on top of the same (running) kernel.
So basically, you're just Solaris-bloviating as usual, without actually stopping to think if it has any relation to the task in question.

The Open Container Project

Posted Jun 23, 2015 22:33 UTC (Tue) by kloczek (guest, #6391) [Link] (21 responses)

Do you know of any Linux kernel which provides a different ABI to different glibcs?

The Open Container Project

Posted Jun 23, 2015 22:41 UTC (Tue) by nix (subscriber, #2304) [Link] (16 responses)

That's got nothing to do with Cyberax's question. The problem is not that the kernel provides a different ABI (though in fact the kernel ABI grows constantly, and there are lots of variants of it for 32-bit and 64-bit and halfway and o32 and whatever, and which variants are available on a given kernel depends on the kernel). The problem is that *glibc grows*. If a binary is linked against glibc version $foo, it cannot always be run on an older glibc. This problem exists with any shared library, and indeed on Solaris too (heck, Solaris *invented* symbol versioning to make the inverse of this problem, forced libc soname changes, go away).

The problem is -- how does RBAC note 'hey, this binary depends on these things, at least these versions of these things, make sure the right versions are installed', which kind of matters if you're spinning up a container, just as it does when you're spinning up any kind of distro-like thing. Of course it doesn't: that would require a decent package manager, and until very recently (Solaris 11? 12? I can't recall, I do most of my work on 12 these days) Solaris didn't have one of those.

The Open Container Project

Posted Jun 23, 2015 22:55 UTC (Tue) by dlang (guest, #313) [Link] (1 responses)

> The problem is that *glibc grows*. If a binary is linked against glibc version $foo, it cannot always be run on an older glibc

Say "compiled against" rather than "linked against" and you are right.

The Open Container Project

Posted Jun 24, 2015 14:40 UTC (Wed) by nix (subscriber, #2304) [Link]

The linking is really what does the binding of symbols to versions etc, but since .o files are not guaranteed portable across glibc releases, your point stands too.

The Open Container Project

Posted Jun 24, 2015 1:52 UTC (Wed) by kloczek (guest, #6391) [Link] (13 responses)

Originally, containerization/separation was implemented for resource control and access control reasons.
It seems you want to tell me that Linux needs NewContainerization(tm) to solve glibc development problems.

So why must the Open Container Project be (according to its description) portable across different OSes if other OSes have no such problems?
Why do you want to petrify some glibc bugs, treating them as features and creating even more issues?

You may not be aware of this, but there is no ABI which is "64-bit and halfway and o32 and whatever".

> Of course it doesn't: that would require a decent package manager, and until very recently (Solaris 11? 12? I can't recall, I do most of my work on 12 these days) Solaris didn't have one of those

Sol8/sol9 branded zones have been fully supported as production solutions since the early sol10 days. Now sol10 branded zones are available as well.
When using them you don't need different libc versions, because everything is built on top of the kernel interface, which provides an exact kernel space<>user space ABI. Using the same technique it was possible to develop Linux branded zones in which you can execute Linux binaries directly.

Branded zones are not there to solve libc bugs.

This has nothing to do with package management and was not introduced "very recently".
IPS as a package manager was introduced in 2009. Maybe for you that is "recent" (?).

Zones were working even on Solaris 10. Everything worked well with the old packaging.

The introduction of IPS was a logical consequence of problems in many other areas combined.
For example, IPS without boot environments is only half of the biscuit.

Interestingly, yum has recently been trying to follow the same path.

> If a binary is linked against glibc version $foo, it cannot always be run on an older glibc

Why are people trying to do things which have never been part of the whole design and never will be part of such a design?
Why are some people (in their own ignorance) even trying to do something to make it possible!?!?

The Open Container Project

Posted Jun 24, 2015 2:02 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

> Originally, containerization/separation was implemented for resource control and access control reasons.
No, it wasn't. It was implemented to allow a 'lightweight virtualization' where you can run multiple 'operating systems' on top of one actual kernel.

Resource use accounting is a natural extension. However, up until recently there had been little interest in supporting complicated access control systems for containers. Each container was a separate 'system' and that was that.

The Open Container Project

Posted Jun 24, 2015 2:15 UTC (Wed) by kloczek (guest, #6391) [Link] (9 responses)

> No, it wasn't. It was implemented to allow a 'lightweight virtualization' where you can run multiple 'operating systems' on top of one actual kernel.

You may not be aware of this, but the fundamental functionality of the kernel is to provide separation and resource control.
The kernel by definition provides "lightweight virtualization" (whatever you want to understand by this).

It is really strange or even sad to observe that some people, after almost half a century of Unix history, think that virtualization is something new which we are only recently dealing with.

The Open Container Project

Posted Jun 24, 2015 3:24 UTC (Wed) by drag (guest, #31333) [Link]

> You may not be aware of this, but the fundamental functionality of the kernel is to provide separation and resource control.

I think he is well aware.

> It is really strange or even sad to observe that some people, after almost half a century of Unix history, think that virtualization is something new which we are only recently dealing with.

Yes, and let's thank IBM and their CP-40 for paving the way in 1963.

The Open Container Project

Posted Jun 24, 2015 3:28 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (7 responses)

> The kernel by definition provides "lightweight virtualization" (whatever you want to understand by this).
No, it doesn't.

If "App A" wants '/etc/timezone' to point to '/usr/share/timezone/UTC' and "App B" wants '/etc/timezone' to point to '/usr/share/timezone/PDT' then there's NOTHING a classical Unix kernel can do about it.

Virtualization or containers solve this problem. RBAC doesn't.

The Open Container Project

Posted Jun 24, 2015 3:39 UTC (Wed) by kloczek (guest, #6391) [Link] (6 responses)

> If "App A" wants '/etc/timezone' to point to '/usr/share/timezone/UTC' and "App B" wants '/etc/timezone' to point to '/usr/share/timezone/PDT' then there's NOTHING a classical Unix kernel can do about it.

> Virtualization or containers solve this problem. RBAC doesn't.

Really?
OK so to execute:

$ TZ=UTC /usr/bin/AppA
$ TZ=PDT /usr/bin/AppB

I need virtualization or containers??

Maybe you are not aware of this, but the timezone is not a resource.
It is part of the settings which can be localized per process, and to solve the propagation of such a setting you don't need virtualization or containers.
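
A small C sketch of that per-process behavior (illustrative only; the zone names are just examples): the timezone is process-local state derived from the TZ environment variable, so one kernel happily serves processes that disagree about local time.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);
        char buf[64];

        setenv("TZ", "UTC", 1);          /* per-process setting, no container needed */
        tzset();                         /* re-read TZ for this process */
        strftime(buf, sizeof(buf), "%F %T %Z", localtime(&now));
        printf("UTC view:         %s\n", buf);

        setenv("TZ", "America/Los_Angeles", 1);
        tzset();
        strftime(buf, sizeof(buf), "%F %T %Z", localtime(&now));
        printf("Los Angeles view: %s\n", buf);
        return 0;
    }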

The Open Container Project

Posted Jun 24, 2015 4:22 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

Did I say that the TZ variable is sufficient?

And yes, I've seen such apps. In other cases it's a hardcoded PHP interpreter path or a dependency on a certain version of glibc (which can't really be replaced).

In short - classic Unix can't deal with this.

The Open Container Project

Posted Jun 24, 2015 8:21 UTC (Wed) by kloczek (guest, #6391) [Link] (4 responses)

> In short - classic Unix can't deal with this

Sorry, but why should Unix care about a badly written application? If it is an application bug, or an application developed totally outside the rules, why should the OS adapt to such a situation? Were the developer(s) of this application unable to read the documentation and understand it correctly? If so, why should other people (OS/library developers) care about writing such documentation, if any idiot can write bad code and OS developers (in your opinion) must take responsibility for it?
If development pathologies are accepted as features, how will it be possible to develop any OS or library?

Please explain this.

The Open Container Project

Posted Jun 24, 2015 8:34 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> Sorry, but why should Unix care about a badly written application? If it is an application bug, or an application developed totally outside the rules, why should the OS adapt to such a situation?
Sure. You can gaze at your navel, remarking about how well designed ARC is on Solaris. Or drink tea, discussing how DTrace is invaluable for finding the origin of species. Or meditate on how RBAC is a path to true enlightenment and the total mindless bliss.

Out there, though, some of us have to deal with the RealWorld(tm) which is full of imperfect software. Some of it is not just imperfect, but downright shoddy. Yet the real users actually NEED it. And it turns out that containers are one of the easiest ways to distribute and run such software.

The Open Container Project

Posted Jun 24, 2015 12:51 UTC (Wed) by kloczek (guest, #6391) [Link]

Could you please read my question one more time and stop writing about me, Solaris, ARC or DTrace?

The Open Container Project

Posted Jun 24, 2015 14:47 UTC (Wed) by nix (subscriber, #2304) [Link]

> Or drink tea, discussing how DTrace is invaluable for finding the origin of species.

As a DTrace for Linux developer I would like to note that I drink coffee most of the time, not tea (thus proving myself to be not a True Englishman).

(And it is invaluable (to me, anyway) for finding the origin of *bugs*. Does that count? We could add a species provider but it seems unnecessarily limiting when we could just add a provider that allows you to probe the entire biosphere! disclaimer: may require hardware such as pervasive monitoring nanomachinery that is not yet developed. May need to wait umpty million years for biosphere::species:start probe firings, though biosphere::species:end seems easy enough to test...)

The Open Container Project

Posted Jun 24, 2015 20:22 UTC (Wed) by flussence (guest, #85566) [Link]

>Sorry, but why should Unix care about a badly written application?

Computers exist to serve users, and the same logical ordering follows for each step in between.

Makers of user-hostile Unices will find themselves out of business. That's why the only Unix anyone uses in this decade is OS X - not the one you're shilling for.

The Open Container Project

Posted Jun 24, 2015 14:56 UTC (Wed) by nix (subscriber, #2304) [Link]

Answering only things which others have not already knocked into burning shards...

> It seems you want to tell me that Linux needs NewContainerization(tm) to solve glibc development problems.

Hardly. More that if you want to be able to rapidly populate containers with dependencies needed for specific application binaries, you need to be able to take account of the versions of shared libraries that those tools were compiled and linked against. A major such binary is glibc (mostly because of the glibc 2.14 bump of the memcpy() symbol version on x86-64, so that most applications linked against a later version than that cannot run against glibcs older than that).

You do not need containers to develop glibc, indeed it hardly helps: glibc testing has been possible for decades longer than containers have existed on commodity hardware.

> Why do you want to petrify some glibc bugs, treating them as features and creating even more issues?

The existence of symbol versioning, and the fact that programs linked against older glibc keeps using compatibility symbols while newer glibcs use the latest versions, is not a 'glibc bug': it's a *feature* that originated on, oh yes, *Solaris*, to allow binaries that *depend* on glibc bugs or behaviours that were subsequently fixed or changed to keep working. How you can be a raging Solaris fanboy and not be aware of the fact that Solaris libc contains versioned symbols is beyond my understanding: a lot of noise was made in the release notes about this stuff when it came in (though that was years ago: maybe you are too young to have been using machines big enough to run Solaris that long ago. That would explain your, ahem, excessive enthusiasm, at least.)

> You may not be aware of this, but there is no ABI which is "64-bit and halfway and o32 and whatever".

I never claimed that there was *one ABI* that did all of those things at once. I claimed that many kernels (e.g. those for x86 and oh yes SPARC) can provide multiple ABIs to userspace programs: glibc needs to adapt. (Generally there is a different glibc for each such ABI, indicated in the ELF header, forming independent universes of programs such that every ELF object in the address space conforms to only one ABI.) Again, how you can be a Solaris fanboy and not know this, given the predominance of 32-bit programs with a very few 64-bit ones on SPARC, is beyond my understanding. This, again, is something Solaris *originated*, but you're blasting Linux for doing the same thing!
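
For readers who have not hit the memcpy@GLIBC_2.14 issue: the usual build-side workaround is to pin the reference to the older versioned symbol explicitly. A sketch of that trick (mine, x86-64 specific, where GLIBC_2.2.5 is the baseline version; an optimizing compiler may inline small fixed-size copies, so a runtime-sized copy makes the effect visible):

    #include <stdio.h>
    #include <string.h>

    /* Ask the linker to resolve our memcpy references to the pre-2.14
       version, so a binary built against a newer x86-64 glibc can still
       load on an older one. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(int argc, char **argv)
    {
        char dst[64] = "";
        const char *src = (argc > 1) ? argv[1] : "versioned";
        size_t n = strlen(src) + 1;

        if (n > sizeof(dst))
            n = sizeof(dst);
        memcpy(dst, src, n);          /* binds to memcpy@GLIBC_2.2.5 */
        dst[sizeof(dst) - 1] = '\0';
        puts(dst);
        return 0;
    }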

The Open Container Project

Posted Jun 27, 2015 12:50 UTC (Sat) by ms_43 (subscriber, #99293) [Link]

> Sol8/sol9 branded zones have been fully supported as production solutions since the early sol10 days. Now sol10 branded zones are available as well.

And why do branded sol8/sol9/sol10 zones exist at all? If Solaris 10 were 100% backward compatible, one could run any application that ran on Solaris 8/9 in an ordinary native zone on Solaris 10, no?

> When using them you don't need different libc versions, because everything is built on top of the kernel interface, which provides an exact kernel space<>user space ABI.
> Branded zones are not there to solve libc bugs.

Well the different branded zones contain different libc versions, that's (one of the reasons) why they're different branded zones in the first place.

The Open Container Project

Posted Jun 24, 2015 8:52 UTC (Wed) by paulj (subscriber, #341) [Link] (3 responses)

Linux does have the ability to do that: binaries can have ABI 'personalities'.

(I wonder why some of the trickier ABI changes aren't handled that way).

The Open Container Project

Posted Jun 24, 2015 14:58 UTC (Wed) by nix (subscriber, #2304) [Link] (2 responses)

Mostly because personalities are disruptive and annoying because of their split kernel/userspace nature: to get them in for an ABI change, you'd have to prevent glibc from running on any kernel too old to support that personality. This is rarely considered desirable.

The Open Container Project

Posted Jun 24, 2015 16:45 UTC (Wed) by paulj (subscriber, #341) [Link] (1 responses)

Sure, but you can't run a binary that depends on a changed syscall on an older kernel either. There's never been a guarantee that old kernels would support new binaries - only that old binaries should continue to run.

I'm thinking specifically of the time_t changes: http://lwn.net/Articles/643234/

I still don't understand why we can't come up with some kind of scheme to version the kernel ABI though. I just wonder why it wouldn't be easier to have a single version field that controlled the version of syscall number that applications got, instead of having to define new syscall numbers for each.

I.e., at present, the kernel has only syscall-specific "versions" (i.e. you define a whole new syscall). Seems strange, for updates that affect several/many calls, not to have a more global flag. Looking at the current ELF header spec, EI_VERSION would be most suitable (doesn't seem used at present), though it's limited to 255, but that could be changed if the EI_VERSION was being bumped anyway.

Anyway, I guess the kernel devs have their reasons for using only the very fine-grained, per-syscall "versioning".

The Open Container Project

Posted Jun 25, 2015 20:48 UTC (Thu) by kleptog (subscriber, #1183) [Link]

> Sure, but you can't run a binary that depends on a changed syscall on an older kernel either. There's never been a guarantee that old kernels would support new binaries - only that old binaries should continue to run.

Very few binaries do syscalls directly, most of them do so via glibc. And glibc has various tricks to deal with running on older kernels. If it executes a new version of a syscall and finds that it's not implemented, it will actually set a flag and fall back to the old interface, possibly even emulating it. clock_gettime() was an example, but there have been others (getresuid). Most of those cases got cleaned up with the move to 64-bit.

> Seems strange, for updates that affect several/many calls, not to have a more global flag.

Such changes just don't happen very often, if ever. Generally you're fixing a fubar in a single syscall. About the only time you change a lot of syscalls is when you add a new architecture, and then backward compatibility is not an issue.
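
A rough sketch of the fallback pattern kleptog describes (mine, not glibc's actual code, which also caches the ENOSYS result rather than retrying on every call):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <sys/time.h>
    #include <time.h>
    #include <unistd.h>

    /* Try the newer syscall first; if the running kernel answers ENOSYS,
       fall back to the older interface. */
    static int portable_gettime(struct timespec *ts)
    {
        if (syscall(SYS_clock_gettime, CLOCK_REALTIME, ts) == 0)
            return 0;
        if (errno != ENOSYS)
            return -1;

        struct timeval tv;               /* old kernel: emulate with gettimeofday() */
        if (gettimeofday(&tv, NULL) != 0)
            return -1;
        ts->tv_sec  = tv.tv_sec;
        ts->tv_nsec = tv.tv_usec * 1000L;
        return 0;
    }

    int main(void)
    {
        struct timespec ts;
        if (portable_gettime(&ts) == 0)
            printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }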

The Open Container Project

Posted Jun 23, 2015 23:41 UTC (Tue) by angdraug (subscriber, #7487) [Link] (1 responses)

I think you're too generous when you compare containers to Java jars. A better analogy would be a static binary that can include anything you normally find in a random subfolder of C:\Program Files\ on Windows. Which means it's easier to deal with than apt-get, but doesn't solve any of the reasons that have caused us to invent shared libraries, FHS, and, well, APT.

The Open Container Project

Posted Jun 24, 2015 3:15 UTC (Wed) by drag (guest, #31333) [Link]

I think that Jar files are actually very close to containers.

It's an archive of files in zip format that is used to run software in a virtual machine environment. It's very much like what people do with KVM/Qemu, except that instead of using the x86_64 architecture, Java uses its own idealized concept of what a computer is.

The biggest difference is that you string jar files together to get the classes you need, while containers tend to be self-contained.

> Which means it's easier to deal with than apt-get, but doesn't solve any of the reasons that have caused us to invent shared libraries, FHS, and, well, APT.

Nobody has abandoned FHS/libraries/apt. They are all used in containers. It's just that those approaches do not really solve the problems they purport to solve (making it easy to install software), and now people are learning that it's better just to develop a system where they can pull software directly from vendors and developers rather than go through traditional distributions.

Or at least they are trying to develop that system.

Containers really are just 'next-gen Linux OS'. How well it works remains to be seen.

The Open Container Project

Posted Jun 30, 2015 19:03 UTC (Tue) by Baylink (guest, #755) [Link]

To keep people from having to deal with (or find ways to avoid) systemd? <GDRVVF>


Copyright © 2015, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds