Ubuntu’s snap apps are coming to distros everywhere (Ars Technica)
Posted Jun 15, 2016 14:56 UTC (Wed) by drag (guest, #31333)
In reply to: Ubuntu’s snap apps are coming to distros everywhere (Ars Technica) by mjthayer
Parent article: Ubuntu’s snap apps are coming to distros everywhere (Ars Technica)
For containers intended to run network services (i.e. Docker), the ideal approach is to ruthlessly strip out anything you don't need.
This is one of the reasons why golang is popular (besides being specifically adapted for network services). You can compile statically, so you know for a fact it will have everything it needs in its binary. The 'container' then consists of just dropping the binary into a directory.
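As a minimal sketch of what that looks like in practice (the service name "myservice" is a placeholder, not from the original post):

```shell
# Build a fully static Go binary: disabling cgo avoids any libc dependency.
CGO_ENABLED=0 go build -o myservice .

# The "container" is then just the binary in an otherwise empty image.
cat > Dockerfile <<'EOF'
FROM scratch
COPY myservice /myservice
ENTRYPOINT ["/myservice"]
EOF

docker build -t myservice .
```

The resulting image contains nothing but the one binary, which is why pulling and starting it takes so little time compared to an image carrying a full distro userland.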
This way, when launching a new service or performing an update, the downtime caused by downloads is measured in a few KB rather than hundreds of MB. The container's major advantage over full VMs is just how fast it is. If you are using a proper 'CI' service like Jenkins to pull code and send it to containers for unit testing, it's very nice to get results in seconds rather than minutes. Also, if you are doing on-demand services, then even if a container has never been deployed before on a physical host, the time it takes for that host to copy down the container and respond is critical. If you can reduce the response time to a handful of seconds or less, that cuts out a lot of the infrastructure you otherwise need in place to handle user requests and such things.
With Python you can do similar things. There are various modules for doing static building; 'pyinstaller' is one, but there are others.
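A rough sketch of the PyInstaller route (the script name "myscript.py" is a placeholder):

```shell
# Install PyInstaller into the active Python environment.
pip install pyinstaller

# Bundle the script, its imports, and the interpreter into one executable.
pyinstaller --onefile myscript.py

# The result lands under dist/ and runs on machines without Python installed,
# provided their glibc is not older than the build machine's.
./dist/myscript
```

That glibc caveat is why building on the oldest platform you intend to support (EL6 in the example below) matters.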
So, for example...
I can use pyenv to set up multiple python installations in my homedir. I can switch between them and set python versions for 'shell', 'local' (directory/project), and 'global'. I can then use 'pip' to install any library or python program that pip provides. Each python version is independent. Then I can add pyenv support to Emacs and have Emacs know which version of python to use with whatever script I am editing.
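The workflow above looks roughly like this (version numbers and paths are illustrative):

```shell
# Build two independent interpreters under ~/.pyenv/versions/.
pyenv install 2.7.11
pyenv install 3.5.1

# 'global' sets the default interpreter for this user.
pyenv global 3.5.1

# 'local' pins a directory/project via a .python-version file it drops there.
cd ~/projects/legacy-tool
pyenv local 2.7.11

# 'shell' overrides both, for the current shell session only.
pyenv shell 3.5.1

# pip installs into whichever version is currently active, and nowhere else.
pip install requests
```

Because each version keeps its own site-packages, switching versions also switches the whole set of installed libraries.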
I can then set up a similar environment, but built for RHEL6, on a remote machine. I can copy the python scripts I want over to it and then build a static binary for my python script. Because my scripts are written for other people to use on machines I don't control, I can just give them this binary and have them execute it on anything newer than EL6, and it should 'just work'.
That is extremely convenient. I can use python 2.7.11 or 3.whatever and have it 'just work'.
Now there are problems, of course. Python wasn't designed to do this, so I have occasional problems with complicated modules or C modules; 'requests', for example, is something I have trouble with. This is one of the reasons I am interested in golang.
How does all that translate to desktop applications though?
I don't know. The use of containers for network services is increasingly well understood, but for desktop applications on Linux it's still very much an unknown.
I have also been thinking about things like IPFS, 'the InterPlanetary File System'. This is something that is meant to augment the WWW. Files are referenced by hashes, and those hashes are made available by anybody who downloads a file. This way, if you used a website or accessed a file while your machine was online, it doesn't go away when you disconnect temporarily. If somebody deletes the file, it doesn't go away as long as somebody else still has a copy. It's very P2P, very distributed, very scalable. It's currently fast enough that you can stream HD media files and seek in them as they are being downloaded, and it just works fine.
It can be exposed to posix-land via a FUSE file system.
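With go-ipfs that FUSE exposure is roughly this (the hash is a placeholder; a running daemon and the fuse package are assumed):

```shell
# Start the IPFS daemon with FUSE mounts enabled; by default this
# mounts the read-only /ipfs and /ipns trees.
ipfs daemon --mount &

# Any object is now addressable by content hash as an ordinary file.
cat /ipfs/<content-hash>
```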
So what if, in the future, 'application installation' in Linux consists of logging into a web page and mounting a FUSE file system? All possible containers of all possible versions of all possible software are available for you to use immediately: a read-only IPFS FUSE mount with a read-write layer on top of it. You have a website to help you manage the apps you want to use, and performing a 'security update' is as easy as killing your running program and starting it back up. Updates are atomic, rollbacks are instantaneous. If a dependency is missing or breaking something, it can be fixed upstream and instantly made available to anybody running any Linux with an internet connection. Containers are only downloaded when you use them, and getting disconnected from the internet doesn't mean you lose anything you've already started running.
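One way to sketch the "read-only store plus read-write layer" part with stock kernel overlayfs (all paths are placeholders, and whether overlayfs accepts a FUSE lowerdir depends on the kernel):

```shell
# upper = writable layer, work = overlayfs scratch space, merged = the view
# applications actually see.
mkdir -p /tmp/apps/{upper,work,merged}

sudo mount -t overlay overlay \
    -o lowerdir=/ipfs/<apps-root>,upperdir=/tmp/apps/upper,workdir=/tmp/apps/work \
    /tmp/apps/merged

# Programs run from /tmp/apps/merged; all writes land in the upper layer,
# while the content-addressed lower layer stays pristine and shareable.
```

Rolling back then amounts to discarding the upper layer, which is what makes updates atomic and rollbacks instantaneous in this scheme.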
I think that containers with file system images, like what you can do with snappy or flatpak, could go a long way toward making it so that no Linux user will ever need to 'install' software again.
