Bassi: Dev v Ops
Posted Aug 11, 2017 15:53 UTC (Fri) by jhoblitt (subscriber, #77733) [Link]
There are pros and cons either way for distro- vs. upstream-packaged software. The biggest concern I have about upstream/vendor-packaged software, which presumably bundles the whole world, is keeping the dependencies up to date. Earlier this week, I had to rebuild a Docker image published by a well-known search engine, which arguably does know how to ship software, because it contained an out-of-date version of OpenSSL with known vulnerabilities.
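A rebuild like that can often be a two-line derived image. This is only a sketch: the base image name is a placeholder for the vendor image in question, and the package names assume a Debian-based image.

```dockerfile
# Placeholder base image standing in for the vendor image with the stale library.
FROM vendor/app:1.0

# Pull in the distro's patched OpenSSL without touching the rest of the image.
RUN apt-get update \
 && apt-get install -y --only-upgrade openssl libssl1.0.0 \
 && rm -rf /var/lib/apt/lists/*
```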
Desktop browsers are a great example of vendor-packaged software with a track record of timely updates -- and of getting those updates out faster than most distros could package them. However, I'm pretty sure there are tens to hundreds of small applications on the typical desktop/server, with modest maintainer teams, that would not be able to keep up with that standard.
Bassi: Dev v Ops
Posted Aug 11, 2017 16:21 UTC (Fri) by Otus (subscriber, #67685) [Link]
There is the flip side that a few applications I want to manage on my own, but I don't see why the distro model would in any way prevent that. I just uninstall the distro version and install an upstream build or one of my own.
I.e. maybe *application developers* do not want a middle man, but as a user I certainly do. And you see this even with more closed stores: people blame Apple/Google if they have not vetted something in their app stores.
Bassi: Dev v Ops
Posted Aug 11, 2017 16:38 UTC (Fri) by josh (subscriber, #17465) [Link]
Exactly. It's purely that app developers on other platforms don't provide that option, because they have more control.
Bassi: Dev v Ops
Posted Aug 11, 2017 16:43 UTC (Fri) by boudewijn (subscriber, #14185) [Link]
Currently, for a full Krita release, we build:
* source tarball, of course
* Linux x64 AppImage -- I don't think an x32 AppImage makes sense, but we get requests for that
* Windows x64 installer and zip
* from which I create a package for the Windows Store: people from Microsoft were really helpful in setting this up, and now that it works, it's a breeze. The Windows Store is also a source of a useful amount of money.
* from the x64 installer, Stuart creates the Steam build
* Windows x86 installer and zip
* macOS/OS X (10.9 is the lowest we go) 64-bit dmg
* Alexey builds the Ubuntu PPA
It's a crazy amount of work for me, and I really should farm all of that out to platform maintainers... And if someone who really groks the openSUSE Build Service wants to help out creating RPMs for SUSE and Fedora, that would be too awesome for words.
Bassi: Dev v Ops
Posted Aug 11, 2017 17:01 UTC (Fri) by ntnn (subscriber, #109693) [Link]
Bassi: Dev v Ops
Posted Aug 15, 2017 11:53 UTC (Tue) by NightMonkey (subscriber, #23051) [Link]
Bassi: Dev v Ops
Posted Aug 11, 2017 17:43 UTC (Fri) by einar (guest, #98134) [Link]
FTR, openSUSE has the current state of git master in the KDE:Unstable:Extra repository and the latest release in KDE:Extra (and in Tumbleweed).
Bassi: Dev v Ops
Posted Aug 15, 2017 18:20 UTC (Tue) by davexunit (subscriber, #100293) [Link]
Bassi: Dev v Ops
Posted Aug 15, 2017 18:49 UTC (Tue) by jdulaney (subscriber, #83672) [Link]
Bassi: Dev v Ops
Posted Aug 15, 2017 19:14 UTC (Tue) by boudewijn (subscriber, #14185) [Link]
Anyway, of course, my project is easy to build for distributions; but distributions don't release often enough, and backport repos are only a sticking plaster, especially for applications aimed at non-technical people. But there's something that's hardly been discussed in this thread: if I get a bug report for distribution X, I first have to figure out which exact versions of the dependencies it has, which flags it thinks it needs to build with, what other Qt QImageIO plugins are present (like Deepin's), and so on. In fact, it's come to the point where I cannot give support for Krita on certain distributions, like Gentoo or Arch.
It's also come to the point where I hesitate to recommend that people who want to use Krita give Linux a try. I sometimes feel we are trying, with fewer people than ever, to keep more setups/configurations/distributions/desktops than ever spinning in the air.
Bassi: Dev v Ops
Posted Aug 15, 2017 22:06 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link]
I doubt that perspective is anywhere near universal. Lots of major projects are moving to Meson these days, mostly from autotools rather than CMake, but we won't have a single leader in this space anytime soon.
Bassi: Dev v Ops
Posted Aug 16, 2017 5:16 UTC (Wed) by rodgerd (guest, #58896) [Link]
This may be the most extraordinarily arrogant thing I've ever seen on LWN.
The job of an upstream software developer could reasonably be said to be making software that pleases them. It could reasonably be said to be making software that pleases the users of that software.
There is no way, outside of the most remarkably toxic and self-centred mental state, that it could reasonably be said to be pleasing distribution maintainers. And distribution maintainers who can't grasp that will, I guess, continue to be angry and baffled as they become less and less relevant.
Bassi: Dev v Ops
Posted Aug 16, 2017 8:40 UTC (Wed) by anselm (subscriber, #2796) [Link]
There are upstream developers with a “my way or the highway” attitude (DJB comes to mind) and there are upstream developers who go out of their way to make the life of distribution maintainers easier. Guess whose packages are generally better supported in distributions, and thereby get wider exposure. Hence, if you want your software to be widely used, accommodating distribution maintainers is probably not the worst strategy.
Bassi: Dev v Ops
Posted Aug 16, 2017 10:52 UTC (Wed) by pizza (subscriber, #46) [Link]
Oh, come now. This barely registers on the scale.
While I personally disagree that "making their software easier to build by others" is the _most_ important thing an upstream developer can do, it's fairly high up there, because everyone benefits from more frictionless builds, including (and especially) the original developer, who gains all manner of side benefits.
It's in everyone's interest -- upstream, downstream, and, indirectly, end users too -- that the software be easy to build. This reduces the burden on everyone: upstream doesn't have to put as much work into building it themselves, downstream can rebuild it effortlessly, and the end user benefits as releases/builds can be made more frequently without undue burden.
A great example of this is LibreOffice (and its predecessor, Go-OO). They put a lot of effort into making the original OpenOffice codebase easier to build, which in turn allowed them to vastly speed up development cycles, to the point where they generate full nightly (regression-tested!) builds for every platform they support. Meanwhile, the complex original build process left Apache OpenOffice unable to perform complete builds for several months, and even after they finally got their security-fix release out, a year after the code change was made, the underlying situation is no better, greatly raising the bar for would-be contributors.
Given the choice, why would anyone choose to contribute to AOO over LO, given the latter's vastly superior developer experience? And as an end user, can anyone say with a straight face that AOO's approach has led to any sort of benefit?
So yes, you're correct in that an upstream developer's primary job is to make software that scratches their own itch, and F/OSS or not, they owe nothing more to anyone (I've said something similar here many times, including earlier in this thread). But making something easy for others to build (and thus, contribute to) greatly improves the developer experience, which surely improves the end-user experience if only because it leads to better software, delivered more rapidly. That is hardly an arrogant position to take.
> There is no way outside of the most remarkably toxic and self-centred mental state, that it could be reasonably be said to be to please distribution maintainers. And distribution maintainers who can't grasp that will, I guess, continue to be angry and baffled by the way they become less and less relevant.
Nine times out of ten, the stuff distribution maintainers ask for will, in the long run, disproportionately reduce the support burden on the upstream author. Or are you seriously saying that shouldering all the work that distribution maintainers currently do is somehow less effort?
Bassi: Dev v Ops
Posted Aug 19, 2017 15:29 UTC (Sat) by flussence (subscriber, #85566) [Link]
When it's framed as an isolated statement without context, it certainly does come across that way. Don't do that?
Bassi: Dev v Ops
Posted Aug 16, 2017 11:14 UTC (Wed) by NAR (subscriber, #1313) [Link]
I disagree very much. The upstream developer wants to create software for its users first and foremost, not for some third-party entity. It should be easy for the end user to install and run the software. "Here's the tarball; it's up to you to get those 30+ dependencies to actually run it" is less than satisfactory -- it's an insult to the user. The fact is that distributions have the resources to package maybe 0.1% of the software out there. Distributions don't add value nowadays; they just keep alive the fantasy that "we can take care of your dependencies" -- they can't.
> autotools takes care of most of this for free
And what about Java? Python? JavaScript? Ruby? Does autotools find the dependencies for these languages? What if a dependency needs a newer C compiler than the one included in the distribution (it happened to me)? In the Erlang world it is common to specify GitHub repositories in rebar.config files, so the build process downloads the dependencies -- which is, of course, susceptible to bitrot. Bundling solves this issue.
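For illustration, a rebar.config dependency entry of the kind described might look like the following; the repository and tag are examples chosen for the sketch, not taken from the thread.

```erlang
%% rebar3-style dependency pinned to a specific tag on GitHub; the build
%% tool clones it at build time, which works until the repo moves or the
%% tag disappears -- the bitrot mentioned above.
{deps, [
    {jsx, {git, "https://github.com/talentdeficit/jsx.git", {tag, "v2.8.0"}}}
]}.
```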
Bassi: Dev v Ops
Posted Aug 16, 2017 12:25 UTC (Wed) by niner (subscriber, #26151) [Link]
Bassi: Dev v Ops
Posted Aug 16, 2017 17:22 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]
Bassi: Dev v Ops
Posted Aug 16, 2017 17:37 UTC (Wed) by niner (subscriber, #26151) [Link]
Bassi: Dev v Ops
Posted Aug 16, 2017 23:24 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]
However, this is a very slim advantage. CoreOS is becoming quite popular exactly because it removes much of the regular distro stuff.
Bassi: Dev v Ops
Posted Aug 16, 2017 21:15 UTC (Wed) by lsl (subscriber, #86508) [Link]
Only if you simultaneously archive the set of source files and some records about the environment you used for that particular build. You know, like distributions do for every single package they build, if only to be able to point at the source corresponding to a particular binary package.
If you don't do that, bundling just the object code actually makes the problem worse. People might start to rely on the program just to get hit by the shit hammer months down the road when they realize it can't be rebuilt from sources anymore and no one really knows what it was built from anyway because the build happened on some developer's personal laptop that suffered a coffee-related death long ago.
If you have the resources to pull these things off (and others, like running your builds and test suites on all the kinds of hardware that people care about), then good for you. Many upstreams don't, despite producing useful software. That's where distributions tend to add a lot of value.
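A minimal sketch of the record-keeping described above might look like this; the fields are illustrative, and real systems (e.g. distributions' buildinfo records) capture far more.

```python
import json
import platform
import sys
import time

def build_manifest(source_commit):
    """Capture just enough about a build to answer, months later:
    'what exactly was this binary built from, and where?'"""
    return {
        "source_commit": source_commit,          # exact source snapshot used
        "toolchain": sys.version.split()[0],     # interpreter/compiler version
        "build_host": platform.platform(),       # OS and architecture of the builder
        "built_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

# Archive this next to the artifact, not on someone's laptop.
print(json.dumps(build_manifest("0123abcd"), indent=2))
```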
Bassi: Dev v Ops
Posted Aug 17, 2017 15:21 UTC (Thu) by NAR (subscriber, #1313) [Link]
> they realize it can't be rebuilt from sources anymore
It's 2017, not 1997. Most software is used by non-developers. They do not want to rebuild the software; they probably don't even know (or care) about the concept of building software.
> the build happened on some developer's personal laptop that suffered a coffee-related death long ago
Bundling libraries does not exclude sane software development practices like using source control, automated builds, etc.
> running your builds and test suites on all the kinds of hardware that people care about
I seriously doubt distributions can do this. Wasn't there an article here lately saying that Fedora can't be installed on a Mac because they don't have the hardware? There may be a few packages where the distributions can do meaningful work, but in most cases they can only check that the software can overcome the obstacles the distributions themselves introduce.
Bassi: Dev v Ops
Posted Aug 15, 2017 18:47 UTC (Tue) by jdulaney (subscriber, #83672) [Link]
Bassi: Dev v Ops
Posted Aug 11, 2017 17:09 UTC (Fri) by pizza (subscriber, #46) [Link]
Ah, it must be nice to have multi-billion dollar budgets to throw at problems. Though only a fool would ignore the problems those other OSes have as a result of their software delivery mechanism.
> Of course, now we in the Linux world are in the situation of reimplementing the past 20 years of mistakes other platforms have made; of course, there will be growing pains, and maybe, if we’re careful enough, we can actually learn for somebody else’s blunders, and avoid falling into common traps. We’re going to have new and exciting traps to fall into!
"Other platforms" in this case also includes "Linux distributions" -- Blithely dismissing what they built without understanding how/why the status quo came to pass is not a recipe for success.
> We will need to grow up a lot, and in little time; adopt better standards than just “it builds on my laptop” or “it works if you have been in the business for 15 years and know all the missing stairways, and by the way, isn’t that a massive bear trap covered with a tarpaulin on the way to the goal”. Complex upstream projects will have to start caring about things like reproducibility; licensing; security updates; continuous integration; QA and validation. We will need to care about stable system services, and backward compatibility. We will not be shielded by a third party any more.
So... they sort of admit that the reason things are so bad is their own practices, and that things would have been much worse if not for "third-party shielding" (aka distributions doing what they do).
Bassi: Dev v Ops
Posted Aug 11, 2017 18:16 UTC (Fri) by amacater (subscriber, #790) [Link]
Maybe the app builders should learn from 20+ years of distributions - who hit some of this a long time ago.
[Personal bias: I'm involved with Debian and have dealt with devs trying to build in Node, Ruby gems etc ]
well said
Posted Aug 11, 2017 19:36 UTC (Fri) by h2 (guest, #27965) [Link]
I keep reading about this 'problem', but the real problem to me is the people who want my free desktop or server to be more like the other, unfree stuff out there. Maybe that comes from so many posters here working for corporations; I can't say what the cause is.
There is a fine use case for very specific programs you might want to bundle into more universal packages, but that case is very small, no more than a handful at most on most systems, while preserving a well produced and curated base install of almost everything else.
After years of doing heavy distro support, my view on why desktop GNU/Linux has not 'caught on' has never changed: it's because the kernel and the major desktops keep breaking their fundamental APIs, which makes upgrades very problematic, and real-world users don't want problematic upgrades. In particular, they have never wanted, and will never want, to reinstall their OS to do an upgrade, which was the model that all but a few distributions (Gentoo, Arch, Debian) followed until quite recently. Pretending that real-world users don't use these systems because of packaging issues is, to me, absurd; they don't use them because these systems (BSDs, GNU/Linux) are made by and for engineers, and cater to the tastes of engineers, with a smattering of other users on Mint, Ubuntu, etc.
Red Hat can keep trying to 'fix' this stuff all they want, or Ubuntu, but it's never going to matter: a good OS is not going to look the same to every user. Some will prefer the Apple walled-garden approach, some will prefer the radically insecure Android approach, some will prefer the Windows approach, and the rest of us, all roughly 2%, will prefer the free approach.
Having a bunch of out-of-date and insecure packages, containing likewise out-of-date and insecure dependencies, cluttering up your system just means you've become as bad as the corporate servers/desktops. That's not a goal I find desirable; maybe corporate Linux does, but I certainly don't.
well said
Posted Aug 12, 2017 8:47 UTC (Sat) by Wol (guest, #4433) [Link]
Keep breaking their APIs? Not the kernel, certainly! Yes, the desktops are a pain in the neck; I ditched Amarok for Clementine for that reason, I couldn't upgrade to KDE 4 for ages because of the "the login screen takes days to appear" problem, etc., etc.
Oh, and as for upgrades? Well, SuSE has had exactly the same model as Windows for as long as I can remember (and I remember the days before Windows :-). Do an upgrade-install over the top, and it will migrate the settings forward. Don't forget, for most people, upgrading Windows means buying a new machine...
imho the biggest reason Linux never caught on is Windows 95 -- MS took the "user experience" away from the PC makers, ruined it for people who used premium suppliers like HP, Compaq et al., and now no PC manufacturer can afford to break from the herd and take the user experience back. It would pay off in the long run, but in the short term they'd go bust -- witness all the attempts by the likes of Dell to provide Linux desktops that just don't really go anywhere :-(
Cheers,
Wol
well said
Posted Aug 12, 2017 16:48 UTC (Sat) by jospoortvliet (subscriber, #33164) [Link]
well said
Posted Aug 17, 2017 21:10 UTC (Thu) by ssmith32 (guest, #72404) [Link]
In other words, I agree with the general direction distros seem to be taking - keeping package management at the core, but also giving the option of something like flatpak for when it counts.
Bassi: Dev v Ops
Posted Aug 11, 2017 17:23 UTC (Fri) by smoogen (subscriber, #97) [Link]
1. They want someone to blame when something doesn't work the way they want
2. They don't want to be blamed when something doesn't work
3. They want someone to fix it when that happens.
4. They want someone to make something easily available for them.
5. They want something to always work correctly
6. They want something to be regularly new and interesting.
7. They want nothing to change in the something because they have 'muscle' memory.
8. They want something to work on what they use.
9. They want something to be available when they want it.
10. They want it to update when they want it.
10+. ... the list can go on for a long time, so let's call it quits here.
Depending on the people, the time of day, and whatever else is going on, the importance of the various items changes. For a while, 1-4 were important to a lot of people, and distributions were the ones who got to be the 'someone'. When the various conflicting items from 5 onward occurred, the distribution was the one who got blamed and was expected to fix things. This causes the distribution people to start putting in slowdowns and roadblocks to try to make sure that when 6 happens, 5 and 7 are not happening. Which, of course, peeves off the people who like 6 more than 7 or 5, and they complain about distributions being too slow/conservative/broken. [The variations on this are endless, because people are constantly changing. I might want 7 in my emacs but not care about it in my PandaDancePanda game. Or vice versa.]
The store model just moves the distribution role to a different player. The Windows, Apple, and Android stores are the places where people get things, and they are the ones who get blamed first when PandaDancePanda isn't working on someone's phone. They then start putting in all the tools and barriers that OS distributors have, to cut down on that blame. This then causes people to create yet another "solution", with some "technological solution" that supposedly fixes the problems that happened with OSes and vendor stores... which will end up becoming the new thing everyone must use, but turns out to just become a new "distribution", reinventing all the roadblocks, because humans are mercurial.
Bassi: Dev v Ops
Posted Aug 11, 2017 18:17 UTC (Fri) by drag (subscriber, #31333) [Link]
The real problem is that distribution package managers are losing their relevance because they are stuck on a mindset that says that the _ONLY_ way that they can ensure quality on the OS is to be the gatekeepers between what application developers make and what end users consume.
Because of this mentality people have just started ignoring the distributions altogether. And it's only going to get worse unless some compelling universal solution comes together.
The classic example of this is that Linux was a total basket case as a gaming OS until Steam came along. It's not because distributions didn't try... Fedora and other distributions really did try to make the OS better, but all their efforts involved just working their asses off packaging new software in the traditional manner. They didn't really try to solve the core issue... which was users wanted to run games and keep up with the newest stuff and Linux distributions couldn't do this for them. It's not because they are stupid or malicious or that developers resented them... it's just because the model and approach they are using can not solve these sorts of issues.
Linux developers were developing open source games with open source libraries and open source gaming engines on their open source Linux OSes... and the only people that could easily download and play them were Windows users! I ran into this sort of thing constantly.
I would sometimes spend days trying to figure out how to get the right C++ libraries, or whatever else I needed, compiled in the right versions. Builds would work, but tests would fail... Was it because the devs sucked, or because something I was using was too old or too new? Is it really worth it to screw around with compiling this or that just to play a game? This is something I would do over and over and over again.
Bassi: Dev v Ops
Posted Aug 11, 2017 18:41 UTC (Fri) by pizza (subscriber, #46) [Link]
Except... the distributions are actually correct. The only way _they_ can ensure quality on the OS is to be the gatekeepers. Which, oddly enough, is a similar model to the one Apple has always used with iOS, the one MS is pushing hard toward with their Windows Store, and the one Google has been stealthily pursuing with their Play Services (and overtly with Chrome/ChromeOS) for the past few years.
(And let's be honest here -- if not for those gatekeepers ensuring some minimal level of quality, nobody would, least of all the typical application developer)
Those game developers you were going on about made a deliberate choice to work outside the distro model -- that means _they alone_ (and not the distros) were responsible for how well things worked (or didn't) in the end. For all of this essay's faults, it at least acknowledges this point -- Namely, application developers have to *considerably* step up their game, and on an ongoing basis at that.
Bassi: Dev v Ops
Posted Aug 11, 2017 19:25 UTC (Fri) by boudewijn (subscriber, #14185) [Link]
That's pretty insulting bullshit.
Anyway, you're wrong all the way down: an OS is something for applications to run on, not the combination of the thing the applications run on and the applications themselves.
Bassi: Dev v Ops
Posted Aug 11, 2017 20:28 UTC (Fri) by pizza (subscriber, #46) [Link]
Insulting, absolutely (and deservedly so!) but bullshit, nope -- two decades' experience in the trenches has shown me that "quality" is nearly always at the very bottom of the priority pile -- especially if it has implications outside the application writer's own (metaphorical) sandbox or will impact short-term schedule/deliverables (and remember, time is money!)
> Anyway, you're wrong all the way down: an OS is something for applications to run on, not the combination of the thing the applications run on and the applications themselves.
You're correct -- but only in the same way that the ISO/OSI model relates to how network stacks are actually written. In the real world, the "operating system" is everything that you can count on being present as an application writer.
This includes an ever-growing pile of libraries that aren't technically the OS (according to the textbook definition) but aren't part of the application either -- minor stuff like language runtimes, graphics/multimedia libraries, UI toolkits (including web browser widgets), database engines, location services.. the list goes on and on, and they're all part of the "operating system" as actually shipped to an end-user, with each successive version of said OS bundling ever more stuff.
Bassi: Dev v Ops
Posted Aug 14, 2017 15:53 UTC (Mon) by fknuckles (guest, #112874) [Link]
Is it too much to ask to let me ship *almost everything* my app depends on, so that I only need to build one package, or make only very minimal modifications for it to run everywhere? I don't understand why anyone can be so concerned about "efficiency" that they would make the experience so archaic. I have no qualms with downloading a 300MB app, even if it mostly duplicates libraries I already have and will waste RAM, as long as the app itself does what I need it to do stably and reliably. Internet connections are getting faster, disks are getting bigger and faster, and even mobile phones commonly ship with 4GB of RAM these days... efficiency of computing-resource use is good, but it's no longer the barrier it was 25 years ago.
Shipping one bare deb that makes maximal use of system libraries and then requires "15 years of experience" to make work on all but my (the developer's) machine simply causes potential users to "go back to Windows" or "buy a Mac". This isn't helped by the "25 different command-line text editors and 15 different IRC clients" -- a real phenomenon, possibly fuelled by the fact that endless fragmentation makes it commercially unviable to sell a high-quality email client in the Linux ecosystem (for example).
Bassi: Dev v Ops
Posted Aug 14, 2017 17:13 UTC (Mon) by pizza (subscriber, #46) [Link]
You can do that today, and always have been able to do so -- although our definitions of "easily" likely differ.
> This isn't helped by the "25 different command line text editors, and 15 different IRC clients" which is also a real fact, possibly one fuelled by the fact that endless fragmentation...
Wait, I thought the point here is to give users more choices, above and beyond what's pre-packaged/bundled with their OS? Surely you don't believe that other platforms don't have massive duplication/overlap/fragmentation of user applications?
> ...it's not commercially viable to sell a high quality email client in the linux ecosystem (for example).
For quite some time now, it hasn't been commercially viable to sell a high quality email client in _any_ ecosystem. E-mail isn't even terribly unique in that regard, because what folks really care about is X-the-service, not X-the-software. One can't compete with a service by only providing software.
Bassi: Dev v Ops
Posted Aug 14, 2017 17:53 UTC (Mon) by mathstuf (subscriber, #69389) [Link]
Bassi: Dev v Ops
Posted Aug 21, 2017 7:27 UTC (Mon) by fknuckles (guest, #112874) [Link]
People don't download the same copy of their 300MB app every day, and the updates to the app don't have to be 300MB. If you need an app, you don't refuse to use it because it will take two hours to download on your 5 Mbps DSL, and you don't refuse to use it because you have a 200GB monthly data cap. It's not app downloads that eat your data; it's YouTube and Netflix.
Bassi: Dev v Ops
Posted Aug 15, 2017 6:53 UTC (Tue) by mpr22 (subscriber, #60784) [Link]
If your 10MB application has 290MB of library dependencies, 270MB of which are already present on my system, I want to download 30MB, not 300MB. (Your hosting provider probably also wants me to download 30MB, not 300MB.)
Now, if your 10MB application with 290MB of library dependencies comes with 600MB of contentful data, then I stop caring about the 290MB of libraries because I'm going to have to wait for my download anyway.
Bassi: Dev v Ops
Posted Aug 16, 2017 3:39 UTC (Wed) by zblaxell (subscriber, #26385) [Link]
There are two costs being traded here: the cost of regression testing the 270MB of dependencies, multiplied by the number of supported (distribution, release) tuples, and the cost of pushing a third of a gigabyte to each user on a new install to reduce or eliminate the tuple-count factor. One of those costs is orders of magnitude larger than the other, and it also costs things that aren't money, like end-user satisfaction, vendor reputation, maybe even consequential damages (not everyone gets to clickwrap those away).
If we have a million users, the bandwidth cost is $0.02 per user. Can we spend this money elsewhere to save more? To use system library dependencies, we need to hire people to verify the app on every Linux distribution (assuming they all have differences in build toolchain, source patches, enabled services, etc.). For a distro with 1000 users of our app, that works out to a marginal QA budget of $18.00 -- not even one hour of one QA engineer's salary -- to solve the set of problems occurring on that distro that could have been avoided by simply *not* making the download 90% smaller. No puny human QA engineer is going to reliably solve GTK dependency problems in an hour, so the most realistic alternatives to making the download 10x larger are to say "join the other 999,000 users on the distro we developed it for" or "your business isn't important to us."
Bundling has 99 problems, but bandwidth cost ain't one.
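The arithmetic above can be sketched as a quick back-of-the-envelope calculation; the per-GB egress price is an assumption chosen to reproduce the thread's $0.02-per-user figure, not a quoted rate.

```python
# Back-of-the-envelope: bundled-download bandwidth vs. per-distro QA budget.
EGRESS_PER_GB = 0.06      # USD per GB -- assumed CDN egress price (illustrative)
DOWNLOAD_GB = 0.30        # full bundled download: 10 MB app + ~290 MB libraries

cost_per_user = DOWNLOAD_GB * EGRESS_PER_GB        # bandwidth cost per install
fleet_cost = 1_000_000 * cost_per_user             # total for a million users
small_distro_budget = 1_000 * cost_per_user        # the QA budget "freed up" by
                                                   # not bundling, for a distro
                                                   # with 1,000 users of the app

print(f"per user:       ${cost_per_user:.3f}")
print(f"1M users total: ${fleet_cost:,.0f}")
print(f"1k-user distro: ${small_distro_budget:.2f}")
```

With these assumptions, the whole fleet's bandwidth bill is around $18,000, while the per-distro "savings" from sharing system libraries amount to about $18 of QA time.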
Bassi: Dev v Ops
Posted Aug 15, 2017 7:08 UTC (Tue) by jani (subscriber, #74547) [Link]
Of course, that's just hypothetical, because very few of the app developers will have any kind of tracking of vulnerabilities in the libraries they bundle.
Bassi: Dev v Ops
Posted Aug 15, 2017 8:02 UTC (Tue) by mjg59 (subscriber, #23239) [Link]
Bassi: Dev v Ops
Posted Aug 15, 2017 8:25 UTC (Tue) by jani (subscriber, #74547) [Link]
But would you care to elaborate? If the developers bundled specific versions of libraries for any number of reasons, how could they not have to care about the libraries being updated?
Bassi: Dev v Ops
Posted Aug 17, 2017 14:03 UTC (Thu) by jschrod (subscriber, #1646) [Link]
As long as you are fully responsible for the security of your whole application - no.
That means that you must track all security issues of all dependencies that you ship, and that you provide timely application updates in case of any security update to any of these dependencies.
I don't care about "efficiency". For me, your argument that this is not important is a straw man.
The case against bundling of dependencies is that almost none of its proponents explain how they'll handle security updates -- or, if they do, tracking security issues in dependencies is not mentioned at all. See, I don't trust application developers to do this tracking -- but I trust distribution maintainers that each of them tracks the package they are responsible for.
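To make the tracking requirement concrete, here is a minimal sketch of the bookkeeping a bundling upstream would need. The library names, versions, and advisory list are all made up for illustration; a real tracker would populate the advisory data from CVE/security feeds and use scheme-aware version comparison.

```python
from itertools import takewhile

# Hypothetical inventory of bundled libraries (names/versions are made up).
BUNDLED = {"openssl": "1.0.2k", "zlib": "1.2.8"}

# Hand-maintained advisory list: library -> first fixed version.
ADVISORIES = {"openssl": "1.0.2l", "zlib": "1.2.11"}

def parse(version):
    """Split '1.0.2k' into ((1, ''), (0, ''), (2, 'k')) so numeric parts
    compare numerically and trailing letters compare alphabetically."""
    parts = []
    for piece in version.split("."):
        digits = "".join(takewhile(str.isdigit, piece))
        parts.append((int(digits or 0), piece[len(digits):]))
    return tuple(parts)

def stale_libraries(bundled, advisories):
    # Every bundled library older than its first fixed version needs a rebuild.
    return sorted(lib for lib, ver in bundled.items()
                  if lib in advisories and parse(ver) < parse(advisories[lib]))

print(stale_libraries(BUNDLED, ADVISORIES))
```

Even this toy version shows the ongoing cost: someone has to keep the advisory list current and cut a new application release whenever it flags a bundled library.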
Bassi: Dev v Ops
Posted Aug 17, 2017 15:14 UTC (Thu) by NAR (subscriber, #1313) [Link]
> you must track all security issues of all dependencies that you ship
The thing is: for many (if not most) applications, in many (if not most) installations, it doesn't matter. Do I care if there's a "security" bug in xterm? In gitk? In Vim? In VLC? In LibreOffice? Absolutely not. What are the chances that I actually get hacked through a vulnerability in one of these? Probably less than those of dying from a brick falling on my head from a building, and yet I don't wear a hard hat when I go out on the street.
I want my browser to be as up to date as possible, but I absolutely don't want to upgrade e.g. LibreOffice or Vim just because the browser is updated -- and this is where the distributions fail.
Bassi: Dev v Ops
Posted Aug 17, 2017 15:24 UTC (Thu) by pizza (subscriber, #46) [Link]
xterm and gitk are probably not likely, but VLC, LibreOffice, browsers, email clients, and so forth? abso-effin-lutely!
> Probably less than dying from a brick falling on my head for a building and yet I don't wear hard hat when I go out to the street.
That's because there are laws about putting up protected pedestrian paths next to buildings that are likely to shed bricks.
Bassi: Dev v Ops
Posted Aug 17, 2017 16:53 UTC (Thu) by nybble41 (subscriber, #55106) [Link]
Both xterm and gitk are expected to handle untrusted data. I know that xterm has had security issues in the past—"cat" the wrong "plain text" file with embedded control codes and it could overwrite arbitrary files owned by the user running the terminal (e.g. "\e]46;/path/to/file\a\e[?46h", which would set the log file name and start logging... if it weren't generally disabled at compile-time).[1] Note that this applies to the output of any command which copies from an untrusted file to the terminal without a filter, not just "cat".
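The filtering alluded to above can be sketched in a few lines. This is a minimal illustration, not a complete terminal-escape sanitizer (real terminals accept many more sequences than these): it strips OSC sequences like the xterm log-file example, simple CSI sequences, and stray control bytes before untrusted text reaches the terminal.

```python
import re

# Minimal filter for untrusted text headed to a terminal: strips OSC
# sequences (ESC ] ... BEL), simple CSI sequences (ESC [ ... letter),
# and raw C0 control bytes other than tab, newline, and carriage return.
_CONTROLS = re.compile(
    r"\x1b\][^\x07]*\x07"          # OSC, e.g. the xterm log-file sequence
    r"|\x1b\[[0-9;?]*[A-Za-z]"     # CSI, e.g. ESC [ ? 46 h
    r"|[\x00-\x08\x0b\x0c\x0e-\x1f]"  # other control bytes
)

def sanitize(text):
    """Return text with terminal control sequences removed."""
    return _CONTROLS.sub("", text)
```

Anything that copies untrusted bytes to a terminal (cat, git log, grep output) would need an equivalent filter to close this class of hole.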
The input for gitk can consist of arbitrary git repositories cloned from who-knows-where; the scope for an infection (though not the attack surface) is akin to a web browser.
Bassi: Dev v Ops
Posted Aug 17, 2017 17:18 UTC (Thu) by jschrod (subscriber, #1646) [Link]
I.e., I cannot follow your argument.
Well, maybe I really don't care about vim. But I do care that my Emacs is up-to-date cf. security issues. ;-)
Bassi: Dev v Ops
Posted Aug 18, 2017 14:00 UTC (Fri) by NAR (subscriber, #1313) [Link]
If I use LibreOffice only to edit a few files I create myself, how can an attacker exploit any vulnerability in it? How many LibreOffice documents have you received that contained an exploit?

By the way, distributions only protect against known attacks. If you are really, really afraid of malicious code in cloned GitHub repositories, you should be using a throwaway virtual machine to open any files you've received anyway. Security is always a compromise. On one hand there's the extremely limited amount of software (and software combinations!) the distributions provide, along with forced upgrades - on the other hand there's the perceived security and QA they provide.
Bassi: Dev v Ops
Posted Aug 19, 2017 22:04 UTC (Sat) by DOT (guest, #58786) [Link]
I receive them via email daily. Granted, most of them are caught by my spam filter. But what if one makes it through and I get fooled into opening it?
Bassi: Dev v Ops
Posted Aug 21, 2017 16:52 UTC (Mon) by Wol (guest, #4433) [Link]
"This is a document. It's a DATA file". "This is a macro file. It's possibly dangerous. Do you want to run it?".
It was easy to do stuff like that in Word. You could easily use a macro to load or create a document and edit it. But you could NOT put macros inside a document.
Cheers,
Wol
Bassi: Dev v Ops
Posted Aug 11, 2017 20:42 UTC (Fri) by drag (subscriber, #31333) [Link]
Yes because they enjoy making their lives hell just because. They love the fact that Linux users cannot use Linux software developed on Linux. Do you think there could be a reason why they _could not use_ the distro model?
> The only way _they_ can ensure quality on the OS is to be the gatekeepers.
These gatekeepers are going to find more and more people jumping the fence and ignoring them completely. They have the benefit of a community willing to put a tremendous amount of effort into the distribution, and the good will and thanks of users. But eventually people are just going to keep making more and more elaborate work-arounds until somebody stumbles on a solution that allows users to bypass distribution repositories completely for most of their software.
Putting your head in the sand and continuously beating on the drum of 'quality' isn't actually going to get anybody quality or solve these issues. It's just ignoring reality at this point.
> Apple has always used with IOS, the model that MS is pushing hard towards with their Windows Store, and the model Google has been stealthily pursuing with their Play Services
Reviewing and curating the software is not remotely close to what the problem is here. If distributions want to have a hand in ensuring application quality, they are going to have to adapt to the world rather than demanding the world adapt to them.
Bassi: Dev v Ops
Posted Aug 11, 2017 21:36 UTC (Fri) by pizza (subscriber, #46) [Link]
Oh, absolutely. Depending on who you ask, the distro model is either over-specified or under-specified. Or both.
> But eventually people are just going to continue making more and more elaborate work-arounds until somebody stumbles on a solution that will allow users to by-pass distribution repositories completely for most of their software.
That's not going to happen, for one very simple reason -- because it's a great deal of work, from a whole lot of people, that needs to be done in a coordinated manner -- And none of that is going to do or fund itself.
> Putting your head in the sand and continuously beating on the drum of 'quality' isn't actually going to get anybody quality or solve these issues. It's just ignoring reality at this point.
And it's willfully idiotic to pretend that the 'quality' drum didn't come out of utter necessity -- something that even TFA acknowledges.
> Reviewing and curating the software is not remotely close to what the problem is here. If distributions want to have a hand in making sure that application quality is ensured they are going to have to adapt to the world rather then demanding the world adapt to them.
Let me remind you what you wrote: "The real problem is that distribution package managers are losing their relevance because they are stuck on a mindset that says that the _ONLY_ way that they can ensure quality on the OS is to be the gatekeepers between what application developers make and what end users consume."
Literally everything you wrote in these last two paragraphs is what all major/mainstream platforms are either already doing (Apple, MS) or attempting to do (Google). They even proclaim their gatekeeper status as a feature, necessary "to ensure quality" on their platform. And I might add, all three of those players are forcing the world to adapt to them, rather than the other way around! (Apple is the most egregious of these players; Witness every new iOS version!)
So, please explain how distros' "quality" practices are different (in principle) from Apple, MS, or Google's, and how eliminating them will result in a better end-user experience -- because a better end-user experience is supposedly the whole point.
Bassi: Dev v Ops
Posted Aug 11, 2017 22:43 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]
Stores don't dictate which version of libraries must be used and how the app is built. And stores provide stable foundation on which reliable apps can be built to run for at least several years without doing the dependency churn treadmill exercise.
Bassi: Dev v Ops
Posted Aug 12, 2017 0:12 UTC (Sat) by lsl (subscriber, #86508) [Link]
At least Apple doesn't have any qualms dictating these things to you.
> And stores provide stable foundation on which reliable apps can be built to run for at least several years without doing the dependency churn treadmill exercise.
Not that long ago, the macOS version of a program I maintain started spewing out "deprecated" warnings because Apple decided *yet again* to change the preferred way of doing socket activation with launchd. This is now what, the third time? This program is only a couple of years old and has seen them all. For something as trivial as getting the number of inherited file descriptors from launchd.
With the Linux systemd version, the initial implementation still works and will very likely continue to work as long as systemd exists. And it's still nicer to use than the monstrosities Apple came up with for something that could be as simple as getenv("NUM_FDS").
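For comparison, here is roughly what the stable systemd side of that protocol looks like; a minimal sketch of the LISTEN_FDS/LISTEN_PID convention that sd_listen_fds(3) documents (the descriptor numbering starting at 3 is part of the documented interface):

```python
import os

SD_LISTEN_FDS_START = 3  # first passed fd, per the sd_listen_fds(3) docs

def listen_fds():
    """Return the file descriptors handed to us by systemd socket activation."""
    # LISTEN_PID guards against child processes that inherit the
    # environment but were not the intended recipient of the fds.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + count))
```

A program written against this in 2010 still works today, which is the stability point being made above.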
Bassi: Dev v Ops
Posted Aug 12, 2017 0:24 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]
> Not that long ago, the macOS version of a program I maintain started spewing out "deprecated" warnings because Apple decided *yet again* to change the preferred way of doing socket activation with launchd. This is now what, the third time? This program is only a couple of years old and has seen them all. For something as trivial as getting the number of inherited file descriptors from launchd.
But it still works, doesn't it? Though Apple is definitely becoming less and less stable. Windows is still OK.
My recent personal story: I recently spent several hours trying to compile https://github.com/KranX/Vangers - one of my favorite games. The version of clunk (an audio library) was not working properly with the recent SDL. I had to build it with an older version of SDL to make it work, but then the game crashed during startup with an exception stack trace pointing somewhere into Mesa.
Before that, I had tried to install a deb package of it, but it was released for Ubuntu 7.10 and was not even close to an installable state now.
I ended up giving up and downloading it from Steam.
Bassi: Dev v Ops
Posted Aug 12, 2017 0:45 UTC (Sat) by pizza (subscriber, #46) [Link]
Lucky you. Meanwhile, Gutenprint has had all manner of hell because every OSX release between 10.4 and 10.9 broke something that Gutenprint depended upon, requiring code or packaging changes in order to remain functional. Supporting it is more of a headache than every other platform put together.
Bassi: Dev v Ops
Posted Aug 12, 2017 20:37 UTC (Sat) by simoncion (guest, #41674) [Link]
Odd. Clunk builds just fine against Gentoo's =media-libs/libsdl-2.0.5 and friends. Vangers builds just fine with this clunk. Were you running into compilation issues that caused you to roll back to an earlier SDL, or runtime issues? Your comment is a little unclear on this point.
Bassi: Dev v Ops
Posted Aug 14, 2017 13:09 UTC (Mon) by ibukanov (subscriber, #3942) [Link]
Bassi: Dev v Ops
Posted Aug 12, 2017 0:48 UTC (Sat) by pizza (subscriber, #46) [Link]
Neither has been true for Apple/iOS for many years now -- and with each new iOS release, Apple gets stricter about what you can and cannot do and/or use.
Bassi: Dev v Ops
Posted Aug 12, 2017 16:45 UTC (Sat) by drag (subscriber, #31333) [Link]
I didn't say that quality doesn't matter. I am also not saying they didn't do it to improve quality.
What I am saying is this: doing XYZ to improve quality does not mean that doing XYZ will actually improve quality. It's the difference between theory and practice. It's entirely possible, and very easy, to fall into the trap of 'doing all the wrong things for all the right reasons'.
Linux distributions and package management were originally created because of the tremendous difficulty that went into finding and compiling software. Their design is oriented towards taking tens of thousands of different tarball source releases and turning them into a cohesive operating system for easy user consumption. That's the problem they were designed to solve, and it was a vast improvement over having to collect software from a thousand different ftp servers. It's still a valid approach, totally necessary, and I see no reason why it has to go away. However, it's not a panacea. It's not going to solve all issues for everybody. The ecosystem of software that runs on Linux has outrageously outstripped the ability of package managers to keep up. People absolutely need the ability to install and manage, in an easy way, software not specifically built for distributions by distributions.
Years ago it was true that unless it was packaged it probably wasn't worth your time. Nowadays if I depended on only distro packaged software I could not do my job.
In my professional life I run into this sort of stuff _constantly_. You have something that worked for many years, but is increasingly becoming an impediment because of architectural issues that were never resolved. Technical debt piles up and is never paid down. When you try to improve it or replace it with something more effective, a thousand barriers are thrown up about why you can't do it. Impediments to processes are always created in reaction to something bad happening, and they are always justified for valid reasons. Whether or not they actually solve those issues is an entirely different question, and there is always the possibility of a better approach.
This is the classic problem with 'Dev vs Ops'. It's not unusual for companies to have sunk millions of dollars into infrastructure and servers for developers to use, only to find out that developers have completely ignored it in favor of VMs on their laptops and the cloud. Not because the cloud is inherently superior or cheaper, but because the people running those systems are not married to their infrastructure. They don't have emotional attachments and commitments to their design that prevent them from improving it or making changes. The bean counters then decide that the investment in infrastructure is unwarranted and people lose their jobs. It's totally and completely possible to run a datacenter for a good-sized business that will be much more cost-effective, secure, and reliable than what you can get from Amazon or Rackspace or whatever other cloud provider. But you can't do it if you are not willing to improve, take intelligent risks, and make changes.
Quality in software happens through extensive testing, incremental changes, and tight release schedules. By making small changes continuously, implementing automated testing, keeping change sets between releases small, and getting those releases out to the users as soon as possible all serve to increase quality.
When you have impediments between users and developers that slow the process down, the tendency is always going to be towards monolithic releases involving large numbers of changes. Large changesets released semi-annually are, all other things being equal, going to result in bigger and more numerous bugs than proportionally smaller change sets released monthly, weekly, or even daily.
The speed people want isn't about getting features done faster or getting software written faster. People who claim that it's all about just developing features faster are dangerously confused. Some part of that is true, but it's much more holistic. Faster turnaround, shorter development cycles, and better testing are what improve quality. And the purpose of a piece of software and its maturity dictate the best schedule and approach. What works for GCC isn't going to work for the kernel, isn't going to work for Emacs plugins, and isn't going to work for Firefox.
It's the same sort of thing that caused nightmares for users and developers dealing with people who jealously controlled commit access to CVS and SVN repositories before the popularity of Git changed things. For the most part the privileged committers had valid reasons for denying and controlling access, but it still caused a lot of drama, damage and headaches to big projects. It still got better when forking became encouraged and copying repositories became trivial. It's certainly not perfect now, but nobody really wants to go back to the old way.
> because it's a great deal of work, from a whole lot of people, that needs to be done in a coordinated manner -- And none of that is going to do or fund itself.
It doesn't have to fund itself. There are business reasons. You can't get the software you need by depending entirely on distribution releases. So you need to have something else.
> Let me remind you what you wrote: "The real problem is that distribution package managers are losing their relevance because they are stuck on a mindset that says that the _ONLY_ way that they can ensure quality on the OS is to be the gatekeepers between what application developers make and what end users consume."
If all distributions did was to take packages from upstream, evaluate them, grade them, and make them available to end users then we would not be having this discussion. This is something that they should actually be doing.
But that is not what is going on.
If Android had followed the same release requirements, and required that Google employees were the only ones who could build software for end-user consumption, and required that all software be written, bug-free, and 100% ready just prior to Google's Android release dates, then Android would have never gotten off the ground. It would have died immediately. People would have justifiably rejected it out of hand.
If Google tried to move to that model now people would flip out. They would simply remove Google from Android. There is nothing Google does that is so valuable that people are going to be willing to tolerate seeing Android go down in flames. Not when it's open source, not when they can do something about it.
Bassi: Dev v Ops
Posted Aug 12, 2017 19:53 UTC (Sat) by pizza (subscriber, #46) [Link]
That's a nice strawman you've built up then shredded. You're arguing an irrelevant point, as no distro that matters operates in the way you've described.
Anyone is free to write and package anything they want for any distro, and if they so choose, can deliver it in a manner that will seamlessly integrate with the distro's software discovery/management tools (through use of third party repositories) without requiring any permission, endorsement, or "quality judgement" from said distro.
If folks can't be bothered to do even that minimal amount of work (and let's be honest, all other approaches require even more, not less effort) how can anyone expect them to undertake an approach (which the TFA sorta outlines) that requires even more effort (both immediate and long term) on their parts?
I mean, come on, if you want to try a new approach to delivering software, by all means, have at it -- but you'd better expect a healthy dose of scepticism when you say that with this new approach, all of the problems discovered, and lessons learned, over the past 20 years somehow magically go away, while simultaneously not incurring any of the problems inherent to other existing instances of the "new" model being proposed.
TFA is another example of the sixth law of networking truth (see RFC1925) all over again.
Bassi: Dev v Ops
Posted Aug 13, 2017 8:50 UTC (Sun) by tpo (subscriber, #25713) [Link]
With most (all?) distros you have something like "stable", "testing", "unstable" distribution versions/stages. From hearsay stability of those stages appears to be varying a lot between distros.
A lot of people are running Debian's testing, where you can receive "fresh" stuff after a stabilization period of a week (?) or so. In the "devs vs. ops" duality, a "testing" distribution offers a solution to the "not the latest release" problem. "Testing" creates the problem for the user of needing to keep track of a constant stream of package updates.
That problem in turn can be resolved by selective "backports" repositories or PPAs. However "backport" repositories at this time are a bit second class citizens.
Another variation are "rolling distros", yet another attempt to resolve the tension between stability/an integrated machinery/freshness.
When attacking the problem from the other, application side, possible solutions are f.ex. statically binding applications to include the whole stack of dependencies or applications that bring with them their own private copy of /usr.
A variation of that solution is what Bassi/Flatpack seem to be proposing.
The essence of ops is balancing the influx of instability against petrification in stability.
If you lean to the side of instability then you get as a result much larger migration and integration costs (application X was dropped, now what? Feature Y was dropped. API Z changed in an incompatible way...) and you get insecurity (devs rushed M out into production but for some reason are not maintaining it any more even though known security problems exist).
As far as I can see, Bassi and Flatpack do propose a solution to many of the stated problems; however, they move them from the distro to the application (stack, group).
I think the crucial point here is, that the "Flatpack" model has not yet had the time to conclusively prove itself in practice, so we at this point can not jump to definitive conclusions about its un/usability.
Each of the solutions seems to have its own niche where it matches a certain sweet spot between instability and stability and is useful. I wouldn't want to miss backports, or PPAs, or ESR Firefox in Debian, or fresh and up-to-date Chrome monoliths.
So why make this discussion an either-or choice? Why not investigate how, and whether, useful syntheses of those extreme poles can be had?
I think the proposed solutions are very interesting and I think it's a promising sign that f.ex. Debian is examining without apparent controversy (!) the possibilities of having *both* approaches: https://www.mail-archive.com/debian-devel@lists.debian.or...
Bassi: Dev v Ops
Posted Aug 13, 2017 12:06 UTC (Sun) by pizza (subscriber, #46) [Link]
It's a more fundamental problem than that. The "Flatpack runtime" is an-entire-distro-by-another-name, which will face the same maintenance and instability/petrification problems (to borrow your terms) that are supposedly the reasons why existing distros are so unsuitable for the modern era.
Furthermore, nobody's figured out how to sanely build Flatpacks or their runtimes; AFAICT Fedora is by far the furthest along in their efforts, but their approach requires a traditional dependency-managed distro _and_ applications that are properly packaged for (but not necessarily distributed by) that distro.
Those are just two of the major reasons for scepticism with respect to Flatpack. The end-goal is laudable, but to get there application writers need to undertake work (and then some) that they're already unwilling to do.
> However "backport" repositories at this time are a bit second class citizens.
That's not fair; they have the same status as any other third-party repository. Where these "backport" repos fall down is that they sit outside a distro's purview, and as such tend to be maintained, or stick around, only until their creator migrates to the newer stuff.
Lots of people may clamour for these backports, but nearly nobody is willing (or able) to put in even minimal ongoing effort to keep them current. (As a great example of this effect in practice, read up on what happened to Fedora Legacy sometime)
Bassi: Dev v Ops
Posted Aug 13, 2017 17:17 UTC (Sun) by tpo (subscriber, #25713) [Link]
>> I think the crucial point here is, that the "Flatpack" model has not yet had the time to conclusively prove itself in practice, so we at this point can not jump to definitive conclusions about its un/usability.
>
> It's a more fundamental problem than that. The "Flatpack runtime" is an-entire-distro-by-another-name, which will face the same maintenance and instability/petrification problems (to borrow your terms) that are supposedly the reasons why existing distros are so unsuitable for the modern era.
True however quantitative differences between a "full distro" and "a little distro" could effect a crucial *qualitative* improvement:
* "little flatpack distros" could be relatively independent of the release schedule of the full distro below them. Thus, as drag argued, that could lead to much quicker feedback and innovation cycles for applications, with possibly huge gains for application development.
* also "little flatpack distros" could have massively reduced dependency graphs in total as compared with "full distros", reducing release friction a lot. This could also possibly take off pressure from distributions and let them concentrate on a smaller problem set.
So I do agree with you that the "little flatpack distros" will face the same problems as "full distros", however on possibly very different scales. And that could give us artefacts of a very different quality and indeed solve some important problems (application "freshness", parallel installability, etc.).
But certainly, "little flatpack distros" won't be just sunshine and roses, as exemplified by the putrid smell of rotting bits in some docker images. So of course some kind of reputation for quality flatpack sources will need to ripen over time, maybe including distros as sources themselves.
Bassi: Dev v Ops
Posted Aug 15, 2017 5:32 UTC (Tue) by xophos (subscriber, #75267) [Link]
Bassi: Dev v Ops
Posted Aug 15, 2017 6:41 UTC (Tue) by mpr22 (subscriber, #60784) [Link]
glibc - the standard C runtime library on traditional Linux systems - does not support being statically linked. So "indigenous" developers won't statically link because for a long time glibc was the only viable game in town for a Linux program, and "foreign" developers won't statically link because Windowsy folks are used to shipping the DLLs they dynamically linked against.
Bassi: Dev v Ops
Posted Aug 15, 2017 9:44 UTC (Tue) by roc (subscriber, #30627) [Link]
Bassi: Dev v Ops
Posted Aug 16, 2017 5:06 UTC (Wed) by lsl (subscriber, #86508) [Link]
Bassi: Dev v Ops
Posted Aug 15, 2017 9:51 UTC (Tue) by NAR (subscriber, #1313) [Link]
> The solution was invented even before dynamic linking and is called static linking.

How do you statically link Java? Python? JavaScript? Ruby? Erlang? It's not 1997 anymore; C is not the only language that applications are developed in.
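For the interpreted languages named above, the closest analogue is bundling the application code, not the runtime. As one hedged illustration, Python's standard-library zipapp module builds a single-file archive that still depends on a system interpreter -- which is exactly why it is not an answer to static linking:

```python
import os
import subprocess
import sys
import tempfile
import zipapp

with tempfile.TemporaryDirectory() as tmp:
    # A trivial "application": a directory with a __main__.py.
    src = os.path.join(tmp, "app")
    os.mkdir(src)
    with open(os.path.join(src, "__main__.py"), "w") as f:
        f.write("print('hello from the bundle')\n")

    # Bundle it into a single-file archive...
    target = os.path.join(tmp, "app.pyz")
    zipapp.create_archive(src, target=target)

    # ...which still needs an external interpreter to run it.
    out = subprocess.run([sys.executable, target],
                         capture_output=True, text=True)
    print(out.stdout.strip())
```

The archive carries the app and its pure-Python dependencies, but the interpreter, its stdlib, and any C extensions remain the host system's problem.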
Bassi: Dev v Ops
Posted Aug 15, 2017 14:45 UTC (Tue) by smckay (guest, #103253) [Link]
Bassi: Dev v Ops
Posted Aug 17, 2017 2:40 UTC (Thu) by zblaxell (subscriber, #26385) [Link]
Indeed. A small team of competent developers can easily maintain a forked distro optimized for the needs of a single user. This is so easy that many individual users do it without realizing it (everyone who installs or customizes a few third-party packages effectively does this).
Packing an application binary with its dependencies is expanding slightly on this, to build a forked distro for users of a single application (possibly by picking a distro and copying bits of it into a package that can then be unpacked on another distro under different paths--i.e. the sort of thing LD_LIBRARY_PATH is designed for). This is much easier for the app developer than supporting either a full distro or integrating an application deeply into several different distros.
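The LD_LIBRARY_PATH approach just mentioned can be sketched as a small launcher that points the dynamic linker at the bundle's private library directory before exec'ing the real binary. The bundle layout (`bin/app`, `lib/`) is hypothetical:

```python
import os

def build_env(bundle_dir, base_env):
    """Environment for launching a bundled binary: prefer its private libs."""
    env = dict(base_env)
    libdir = os.path.join(bundle_dir, "lib")
    prev = env.get("LD_LIBRARY_PATH")
    # Prepend the bundled libraries so they win over system copies.
    env["LD_LIBRARY_PATH"] = libdir if not prev else libdir + ":" + prev
    return env

# A launcher script would then do something like (paths hypothetical):
#   app = os.path.join(bundle_dir, "bin", "app")
#   os.execve(app, [app], build_env(bundle_dir, os.environ))
```

This is the same trick many commercial Linux applications and AppImage-style bundles use; the trade-off discussed below is that those private copies age silently.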
The security trade-off is non-trivial without a robust sandboxing mechanism in place, probably combined with routine eviction of non-updated apps from package repos/apps stores. We can learn some things from Apple here: keeping apps up to date is the app developer's problem, all the store has to do is define some technical standards and say "no" effectively (or "no, your app is built with an ancient version of libCVEfarm.so, and we won't distribute it any more until you upload a fixed version.").
Bassi: Dev v Ops
Posted Aug 11, 2017 21:59 UTC (Fri) by anton (subscriber, #25547) [Link]
> (And let's be honest here -- if not for those gatekeepers ensuring some minimal level of quality, nobody would, least of all the typical application developer)

Really? Just today I received a message that Debian is closing a 10-year-old bug report (for a bug that should be easy to fix), not because they have finally fixed it, but because they are removing emacs24 from unstable. Is it fixed in emacs25? Who knows? Would I get the same bug if I got the package from upstream? Probably not.
Bassi: Dev v Ops
Posted Aug 12, 2017 12:54 UTC (Sat) by amacater (subscriber, #790) [Link]
CVS behaviour was fixed first. When Emacs 21 was removed, the bug was reassigned to Emacs 24. There's a note that says that the problem wasn't found again when it was tested with Emacs 24 and a note that it was fixed by the time of Emacs 24.4.
There's been a tidy up of the bug tracking system and a mass closure of old bugs that no longer apply to older versions - there's not a lot of point in keeping bugs open that were not release critical for obsolete versions of Debian. So yes, this was fixed a few years ago.
If you were using the upstream version - yes, this might still be a bug, depending on when you self-installed Emacs, whether you built it from source, etc. But this is where the distributions' maintaining stuff and tracking issues adds value, and is different from, say, grabbing a random GitHub repository and building it / a small app to use my Android phone as a compass or whatever else you might consider.
Bassi: Dev v Ops
Posted Aug 12, 2017 17:05 UTC (Sat) by anton (subscriber, #25547) [Link]
It was a bug in the Debian package emacs21-common, which comes with its own version of rcs2log instead of using the (fixed) one from the CVS package. This separate copy of rcs2log also exists in Debian 9 (in emacs24-bin-common). I just tested it on some versions of Debian, and the bug is present in Debian 4.0 (Emacs 21.4), Debian 5.0 (Emacs 22.2), and Debian 6.0 (Emacs 23.2), and fixed in Debian 8 (Emacs 24.4) and Debian 9 (Emacs 24.5). So this bug existed at least until 2011 (and probably until 2014) -- not a shining example of quality by the distribution, but, admittedly, it has finally been fixed (and I probably have to thank J Smith for that).

Would it have been fixed any more slowly if I had used an upstream-packaged version of Emacs (if that bug was even present in the upstream version of Emacs; otherwise CVS, if that bug was actually present there), and reported the bug to them? I doubt it.
Bassi: Dev v Ops
Posted Aug 11, 2017 20:29 UTC (Fri) by smoogen (subscriber, #97) [Link]
1. Developers try to distribute software directly.
2. Users start having various dependency issues between various apps or just want it to work.
3. The developers in trying to get more work done start putting things together in some sort of 'group'. It might be a distribution or an OS or a "We are the KDE (or GNOME) group".
4. Users start blaming the group whenever something is broken, and no one likes being blamed.
5. The group starts putting in rules to make sure that blaming is cut down.
6. Developers feel hemmed in and try to distribute software directly.
7. Goto 2.
OS distributions put more rules in to try to make sure that less stuff was broken by 'stupid stuff', or at least that things worked for the most people at one time.
App stores are doing this because they have become the new distributor and get blamed.
Whatever the next trendy way is to get apps directly to the user without the wait of getting store-approved, it will, after a year or so of becoming popular, start doing the same thing once the amount of blaming rises above some sort of group-psychology limit.
Bassi: Dev v Ops
Posted Aug 11, 2017 22:51 UTC (Fri) by roc (subscriber, #30627) [Link]
That is not necessarily true. Proof: this hasn't happened for Web applications.
Bassi: Dev v Ops
Posted Aug 12, 2017 8:04 UTC (Sat) by tpo (subscriber, #25713) [Link]
> That is not necessarily true. Proof: this hasn't happened for Web applications.
What about all the standardisation groups? Aren't they just that - an effort to agree on common rules?
Also, the average quality of web applications is quite low: you don't have the same browser on the same OS and the same device as the funky dev? Too bad, because you should /upgrade/ (i.e. throw away your device). Or downgrade to a "stable" browser. With "stable" Java. Or Flash. Or whatever. It's quite a colorful mess IMHO so I'm not sure the web serves as a useful example to demonstrate how things "are working" despite there not being a gatekeeper that mandates quality standards.
Bassi: Dev v Ops
Posted Aug 12, 2017 8:54 UTC (Sat) by Wol (guest, #4433) [Link]
So why, when I go onto the login page, is there no code behind the login button, but as soon as I make a change to one of the data-entry fields that code appears?
Cheers,
Wol
Bassi: Dev v Ops
Posted Aug 12, 2017 21:05 UTC (Sat) by roc (subscriber, #30627) [Link]
The Web lets developers get apps in front of users without going through per-app gatekeeping. Nothing you mentioned alters that.
Bassi: Dev v Ops
Posted Aug 13, 2017 1:07 UTC (Sun) by HenrikH (subscriber, #31152) [Link]
Bassi: Dev v Ops
Posted Aug 15, 2017 5:43 UTC (Tue) by xophos (subscriber, #75267) [Link]
Bassi: Dev v Ops
Posted Aug 15, 2017 13:16 UTC (Tue) by davecb (subscriber, #1574) [Link]
Bassi: Dev v Ops
Posted Aug 11, 2017 18:31 UTC (Fri) by flussence (subscriber, #85566) [Link]
Bassi: Dev v Ops
Posted Aug 11, 2017 19:41 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]
Do I have such a feedback system with Debian?
Bassi: Dev v Ops
Posted Aug 11, 2017 20:40 UTC (Fri) by flussence (subscriber, #85566) [Link]
Bassi: Dev v Ops
Posted Aug 11, 2017 22:44 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]
Stores have feedback mechanisms driven by users (vendors can't remove users' comments in Google Store or iStore).
Bassi: Dev v Ops
Posted Aug 11, 2017 23:09 UTC (Fri) by raegis (subscriber, #19594) [Link]
Sourceforge has done ratings for years, and now Gnome has its own "Software" app with a rating system. I never found these useful, but a person new to free software might, I guess, before they realize it's not necessary.
Bassi: Dev v Ops
Posted Aug 14, 2017 5:02 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]
> Packages in Debian already go through an extensive review process. Totally useless garbage packages never make it in
Yes they do. I'd hardly call "sl" useful: https://packages.debian.org/stretch/sl
Or maybe ddate?
> whereas the app stores are full of garbage. The ratings system you suggest is not as useful here. Fart apps don't happen on Debian.
Oh really? How about "fortune -o"? Or a couple of religious apps in Debian?
And anyway, that's a minus for Debian. Fart apps actually got decent sales - many users WANT them, yet Debian doesn't have them. Why do you think YOUR opinion of "quality" should matter?
Bassi: Dev v Ops
Posted Aug 14, 2017 8:37 UTC (Mon) by anselm (subscriber, #2796) [Link]
> And anyway, that's a minus for Debian. Fart apps actually got decent sales - many users WANT them, yet Debian doesn't have them.
The main reason why there is no fart app in Debian is that so far nobody seems to have bothered to package and upload one. In principle there is nothing that would keep a fart app out of Debian if its license complies with the DFSG and there is someone who is willing to maintain it.
Bassi: Dev v Ops
Posted Aug 14, 2017 8:40 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]
Bassi: Dev v Ops
Posted Aug 17, 2017 3:09 UTC (Thu) by zblaxell (subscriber, #26385) [Link]
Well, that, and Debian doesn't collect and redistribute money from fart users (or fart advertisers) to attract fart developers and to fund fart reviewers and distribution infrastructure. Only a privileged few bother facing (justifiable) opposition from those who would not see their donations to the project squandered on digital flatulence.
Also, apt would explode given the number of packages in a non-trivial commercial app store (it barely keeps up with the growth of Debian now).
Both are solvable problems, if anyone cared enough. Some startup could just roll out an app store on top of straight Debian (maybe giving each app developer a unique URL and injecting files in /etc/apt/sources.list.d), and distribute all the fart apps they can find users to fetch.
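The sources.list.d mechanism suggested above is simple enough to sketch. Here is a minimal illustration in Python; the store domain, keyring path, and developer ID are all invented for the example:

```python
# Hypothetical sketch of a per-developer apt source, as suggested above.
# The domain, keyring path, and developer ID are invented for illustration.

def apt_source_entry(developer_id: str, suite: str = "stable") -> str:
    """Build the contents of /etc/apt/sources.list.d/<developer_id>.list,
    pointing apt at that one developer's own package repository."""
    url = f"https://store.example.org/{developer_id}/debian"
    keyring = f"/usr/share/keyrings/{developer_id}.gpg"
    # signed-by limits trust in that repository to the developer's own key
    return f"deb [signed-by={keyring}] {url} {suite} main\n"

print(apt_source_entry("fartapps-inc"), end="")
```

Each developer's packages would then install and upgrade through apt like any other source, with the per-repository keyring keeping one vendor's key from signing anyone else's packages.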
Bassi: Dev v Ops
Posted Aug 17, 2017 4:42 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]
And I actually agree that it makes total sense for system-level software and the basic desktop environment, after all I don't really want to watch ads in "ls" command output.
The problem is that there's NO reliable way to distribute software in "classic" desktop Linux other than through distros.
Bassi: Dev v Ops
Posted Aug 12, 2017 22:05 UTC (Sat) by flussence (subscriber, #85566) [Link]
Can these vendors find the important comments? Can other users? Can you do *anything* useful with a multi-gigabyte-long shoutbox, 5 comments per page and no search function?
Can you find the Play Store version of Debian's “Chromium keeps my computer's microphone hot at all times without warning” whistleblowing thread?
No, the UI's built to “optimize engagement” or some other meaningless SEO word salad, while keeping all the cries for help safely tucked “below the fold” alongside a sea of unintelligible trash and automated spam. These systems are designed to throw upset people into a tarpit, so as to prevent them from going out and actually making a difference.
By the way, this “proles and animals are free” mentality you're espousing is downright terrifying. We have enough of that empathy-devoid insanity in the world as it is, keep it the hell out of Free Software.
Bassi: Dev v Ops
Posted Aug 13, 2017 2:43 UTC (Sun) by mathstuf (subscriber, #69389) [Link]
Bassi: Dev v Ops
Posted Aug 14, 2017 4:55 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]
> Can other users? Can you do *anything* useful with a multi-gigabyte-long shoutbox, 5 comments per page and no search function?
Yes. To see the overall quality of an app, along with the rating.
I don't know; billions of people are reasonably happy with app stores. Perhaps they just lack someone like you to explain to them how they are suffering from the inability to experience dependency hell and an "extensive review process".
> Can you find the Play Store version of Debian's “Chromium keeps my computer's microphone hot at all times without warning” whistleblowing thread?
Chrome doesn't keep the microphone turned on at all times on phones. Moreover, I can see in real time on my phone which applications have elevated permissions.
BTW, I don't think it's even possible on regular Linux distros to deny applications access to the microphone with an easy interface for temporary access.
> By the way, this “proles and animals are free” mentality you're espousing is downright terrifying. We have enough of that empathy-devoid insanity in the world as it is, keep it the hell out of Free Software.
Yeah. Is that tinfoil hat too tight for you?
Bassi: Dev v Ops
Posted Aug 21, 2017 7:52 UTC (Mon) by fknuckles (guest, #112874) [Link]
Comments are sorted by date, which means you will see the ones that apply to the most recent version.
Developers and users don't need to see the 6-month-old comments. Developers get notified of reviews in the app store. If a review contains feedback (such as "your app crashes on startup on my {phone}"), then it is just a matter of whether or not they have the relevant hardware to test on, or they can build and ship an updated version of their app with extra instrumentation so that the next crash offers an opportunity for the user(s) to upload more contextual log data, etc.
The system doesn't look elegant but it is quite effective.
Bassi: Dev v Ops
Posted Aug 21, 2017 7:48 UTC (Mon) by fknuckles (guest, #112874) [Link]
This is a shining example of the disconnect in expectations that devs (in Linux especially) tend to have. For one thing, you can't expect your users to know how to give you a proper bug report, and you should welcome all feedback, even if it's not actionable. How many times have you been to a website that was full of broken links, not because it's unmaintained, but because the developers themselves never exercise that part of the website? It's the same with apps. If you have no way for users to give you feedback, or you set such high hurdles (because you're a bit arrogant and misinformed about how much time your users have to spend on you), then you will simply get no feedback, and your website/app will stay broken for a very long time indeed, with the attendant decay feedback loop that comes with it. Make it easy to give you feedback, even if it's just "hey, this page/feature is broken". It's better for you to receive more feedback than you can deal with than to receive none at all.
If an app I'm using crashes and I go to tell you about it, and you expect me to surrender considerable amounts of time and effort to report it "the way you want it (tm)", then you're terribly misinformed. On every platform other than {Insert Linux Distro here}, I can simply uninstall your app and try the next one that might do what I want.
Developers try to put in all sorts of mechanisms to get you to give them feedback before you leave a negative review (and I personally make an effort to use these), but the app-store review is the ultimate feedback. If your app sucks, or your recent update broke stuff, I go and edit my rating and give it fewer stars. When you fix it, and if I'm still using it, I eventually update my rating accordingly.
I actively avoid even testing apps that have fewer than 3 stars on average.
Bassi: Dev v Ops
Posted Aug 11, 2017 20:17 UTC (Fri) by davecb (subscriber, #1574) [Link]
This almost describes Multics, Solaris and (Linux) glibc, except for the "retired".
> ...of a core set of OS services; a parallel installable blocks of system dependencies shipped and retired by the OS vendor; and a bundling system that allows application developers to provide their own dependencies, and control them.
If you leave retirement in, you've created an NP-hard problem. If you don't retire things (i.e., if you can have old and new versions of version-numbered stuff), the difficulties are greatly reduced. They're still difficult, they're different from what you're used to, but they're smaller.
This got discussed here and in the golang world, and I described how older OSs and the glibc team dodge the bullet of “DLL Hell” and avoid an NP-complete problem.
IMHO, we're trying different approaches to solving an evil problem, and should instead be choosing not to have that particular problem at all. Have a solvable problem or two instead (:-))
Bassi: Dev v Ops
Posted Aug 13, 2017 16:35 UTC (Sun) by swilmet (subscriber, #98424) [Link]
In GNOME and GTK+, two different major versions of a library are completely parallel-installable, see:
https://developer.gnome.org/programming-guidelines/stable...
For example GTK+ 2 is installed in parallel to GTK+ 3.
For glibc, I think it could work too: glibc 2 and 3 could be installed in parallel, and higher-level libraries/components/apps ported to glibc 3 layer by layer, from bottom to top. But it requires that every higher-level library is also parallel-installable, and that they bump their major version when porting to glibc 3. It would probably take many years to port all the Linux userland libraries to glibc 3, and we would realize that some low-level stuff is not that well maintained and that nobody works on those modules anymore.
But with containers, an application can bundle older versions of libraries and use an older runtime, if the work is not yet done to port the code to the new versions. So even without parallel-installability nor the mechanism to have several versions of the same symbol in a library (like currently implemented in glibc), two different containers can have two different (and incompatible) versions of the glibc or any other library.
With containers the NP-complete problem doesn't exist because the upstream developers have already chosen the right runtime and bundled the right versions of additional libraries.
Bassi: Dev v Ops
Posted Aug 13, 2017 17:41 UTC (Sun) by davecb (subscriber, #1574) [Link]
Thanks for reminding me of the container approach: we used a container approach at Sun as well, typically to make it easy for someone to use a Solaris-8-only application on Solaris 10 without recertifying it.
If a customer put two applications with private libraries that clashed in one container, it failed, so we strongly recommended against it.
If you strictly put only one application per container, that worked. Well, until the application got complicated enough (think node.js) that it was internally inconsistent. Then the vendors had to solve their NP-complete problem for you. You didn't notice it, but they sure did (:-))
We eventually preached "only one app per container", and the vendors shouldered the problem of keeping their apps sane on what looked to them like a "bare" machine.
Bassi: Dev v Ops
Posted Aug 20, 2017 21:18 UTC (Sun) by nix (subscriber, #2304) [Link]
That doesn't work because glibc provides shared data structures that are used by many consumers at once (a major example being the heap, and malloc()). All glibcs in a process's address space must therefore agree on the format of all such shared data structures, locking rules, etc. In practice this means that you can only have one soname of glibc linked into a process's address space at once (thus, a single copy of the library, which necessarily agrees with itself about all that stuff). This therefore means that soname bumps for widely-used libraries of this nature cannot be done incrementally, and are generally agonizing as a result.
Bassi: Dev v Ops
Posted Aug 21, 2017 2:27 UTC (Mon) by mathstuf (subscriber, #69389) [Link]
Bassi: Dev v Ops
Posted Aug 21, 2017 5:18 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]
Bassi: Dev v Ops
Posted Sep 1, 2017 15:10 UTC (Fri) by nix (subscriber, #2304) [Link]
Bassi: Dev v Ops
Posted Aug 14, 2017 8:04 UTC (Mon) by vasvir (guest, #92389) [Link]
It's what the Linux kernel does (as acknowledged in the article), albeit on a smaller scale.
Deprecation exists, but it is not immediate and does not create friction with app developers.
The problem is that the same guarantees the kernel provides must also be provided by libraries. This is not an easy sell, but I don't believe any solution is possible without some price to pay. What could help ease the pain for library authors is:
* An automated way to warn library authors about broken compatibility (during compilation).
* A distribution-provided testing framework, like Jenkins, that runs the tests of all applications depending on a library - running the test suite of the library itself goes without saying.
This leaves out a smaller elephant in the room: textual interfaces.
It is common for scripts to depend entirely on the textual output of programs to do their work. For example, when mplayer or ffmpeg (I don't remember which) changed the way it reported audio quality, a script I was using (any2dvd) stopped working. There is no way to guard against such loosely coupled interfaces, but they too contribute their share to the instability of the system going forward.
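The breakage described above can be sketched in a few lines. The output strings below are invented, not actual mplayer/ffmpeg output, but they show why scraping human-readable text is fragile while a structured format tolerates additions:

```python
import json
import re

# Invented example lines standing in for a media tool's status output:
OLD_OUTPUT = "Audio: mp3, 44100 Hz, stereo, 128 kb/s"
NEW_OUTPUT = "Stream #0:1: Audio: mp3, 44100 Hz, stereo, s16p, 128 kb/s"

def scrape_bitrate(line: str):
    # Brittle: assumes the bitrate is the fourth comma-separated field.
    fields = [f.strip() for f in line.split(",")]
    m = re.match(r"(\d+) kb/s", fields[3]) if len(fields) > 3 else None
    return int(m.group(1)) if m else None

print(scrape_bitrate(OLD_OUTPUT))   # works: 128
print(scrape_bitrate(NEW_OUTPUT))   # silently breaks: None

# A structured interface survives a new field being added to the output:
REPORT = '{"streams": [{"codec": "mp3", "bit_rate": 128000}]}'
print(json.loads(REPORT)["streams"][0]["bit_rate"])
```

This is why tools that offer a machine-readable output mode make life much easier for the scripts downstream: the contract is the field names, not the position of words on a line.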
Bassi: Dev v Ops
Posted Aug 14, 2017 12:44 UTC (Mon) by pizza (subscriber, #46) [Link]
Ah yes, but who will actually pay that price? On what schedule? Who will enforce it? And how?
What's to keep said library authors from saying "I have other/better things to do", and if the naysayers persist, pointedly remind them that the library documentation clearly states "This was released in the hopes that it will prove useful, but WITHOUT ANY WARRANTY", and if they don't like it, they can return it for a full refund of the $0.00 they paid?
Don't get me wrong; software authors generally strive to do the right thing. But as you said, there's a price to be paid, and nobody has the right to expect someone to work on something they don't want to, especially when no money is involved.
Bassi: Dev v Ops
Posted Aug 14, 2017 14:28 UTC (Mon) by vasvir (guest, #92389) [Link]
What distributions could do is pay part of the price: set up infrastructure to ease __testing__ and maybe packaging, especially for library authors. I think SUSE has a build service, but I have never used it.
The goal is to start having libraries that don't break backwards compatibility on a whim. Libraries that follow these best practices (outlined in the article) are promoted and considered safe for 3rd-party use. If a library doesn't follow these best practices, then 3rd-party applications are warned against linking to it. Distribution-packaged applications are allowed to use the library normally.
If there is enough tooling, developers may come and use it (think github). I am not referring to 3rd-party application developers; I am referring to library developers for starters, because they are the ones who have to pay the other part of the price.
Is this feasible for a group of volunteers such as Debian? Who knows? Is this feasible at all? I don't know...
In the other case the individual developer pays the price - and gets to decide how it's done - i.e. creates a monolithic app.
The problem is mostly political, not technical: namely, who is going to pay, and whose code is dependable enough to link against.
Bassi: Dev v Ops
Posted Aug 14, 2017 15:10 UTC (Mon) by pizza (subscriber, #46) [Link]
Fedora, SUSE, and Ubuntu have such infrastructure for others to use. I believe there's no "private" option for the first two, though I think Ubuntu has some sort of commercial infrastructure-as-a-service offering.
> The goal is to start having libraries that don't break backwards compatibility at a whim. So libraries that are using these best practices (outlined in the article) are promoted and considered safe to use for 3rd party.
Application writers can and will use whatever libraries they want. If it's not part of the distro/runtime/whatever, they'll just bundle it instead. And even if the library is already there, they may want to modify it in some incompatible-with-upstream way and bundle it anyway. (I'm looking at you, Chrome!)
Bassi: Dev v Ops
Posted Aug 15, 2017 1:04 UTC (Tue) by Conan_Kudo (subscriber, #103240) [Link]
Fedora and SUSE's infrastructure can be deployed privately for free. Fedora's Koji[1] and SUSE's Open Build Service[2] are both freely available and easy to set up. Their infrastructure is fully documented and fully reproducible. I personally manage an OBS system at my workplace for building software efficiently and effectively.
I believe SUSE even offers a commercial version of their Open Build Service project for people who'd like to pay for it under their DevOps solutions banner.
Bassi: Dev v Ops
Posted Aug 15, 2017 10:39 UTC (Tue) by pizza (subscriber, #46) [Link]
Basically, software developers have long had the tools needed to do everything properly -- and the more competent ones actually do so because automated systems that perform your builds, packaging, and testing are not only necessary to deliver quality software, but greatly reduce the amount of overall workload.
But for every LibreOffice (with nightly builds and regression runs) there are a hundred AOOs -- which couldn't even _compile_ their codebase for many months, and took over a year to get a single security fix released.
Bassi: Dev v Ops
Posted Aug 15, 2017 1:44 UTC (Tue) by khim (subscriber, #9252) [Link]
> The goal is to start having libraries that don't break backwards compatibility on a whim. Libraries that follow these best practices (outlined in the article) are promoted and considered safe for 3rd-party use. If a library doesn't follow these best practices, then 3rd-party applications are warned against linking to it. Distribution-packaged applications are allowed to use the library normally.
Why do you think application developers wouldn't use non-recommended libraries?
Let me tell you an Android story (I happen to know a bit because I was working there recently). Think ICU. That library changes its API in incompatible ways, so its developers invented a clever hack: its functions, at the binary level, carry a version suffix. There is no u_islower function in the libicuuc.so.55 library, no. There is a u_islower_55 function. And it would be u_islower_59 in libicuuc.so.59. And the library is not present in the Android NDK. Has that stopped application writers? Of course not! They developed clever schemes which load libicuuc.so.XX, try to detect the "suffix" for the u_islower_XX functions, and provide a "clean" u_islower API to the application instead. A not-insignificant percentage of applications do that.
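A rough sketch of that probing scheme, reduced to pure Python: the export table and version range below are stand-ins for a real libicuuc.so symbol table, not actual ICU bindings.

```python
# Simulation of the suffix-probing hack described above. FAKE_EXPORTS
# stands in for the symbols a dlopen()'d libicuuc.so would expose.

FAKE_EXPORTS = {"u_islower_55": lambda c: c.islower()}  # invented table

def find_u_islower(exports: dict):
    """Probe for u_islower_<NN>: the app can't know at build time which
    ICU release the platform ships, so it guesses over a version range."""
    for ver in range(44, 80):
        fn = exports.get(f"u_islower_{ver}")
        if fn is not None:
            return fn       # expose this as the "clean" u_islower API
    return None

u_islower = find_u_islower(FAKE_EXPORTS)
print(u_islower("a"))  # True
```

The real-world versions of this do the same probing with dlopen()/dlsym() over mangled C symbol names, which is exactly why the platform can never quietly drop or renumber those exports.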
Or another "nice" idea: take dlopen. It returns an "opaque" pointer, and the only thing you can do with it is call the "dlsym" function, right? Well, on Android it [used to] return a pointer to the soinfo structure. And a number of applications used that to implement their own linker for "streamlined" binaries! Was that wise? Probably not. But now, if a new version of Android were to break them, people would complain about defective Android, not a defective application (hey, that application worked for years, why should it stop working now?)
The solution was to provide the old, binary-compatible ABI for old applications (that's where all those #if defined(__work_around_b_24465209__) come from) but also to provide a more opaque pointer to new applications, to make sure it wouldn't be so easy to access.
You have to understand that application developers are not going to follow "best practices" just because you asked them to. They can and will do nasty stuff - and then they will abandon their creations (the vast majority of software on Google Play is abandonware... yet people are using it anyway).
I don't think you can convince developers to care about that - and distribution developers are also ill-equipped to deal with all this. Someone who actually cares about the ability to run applications should do it... and I have no idea who that could be...
Bassi: Dev v Ops
Posted Aug 15, 2017 14:44 UTC (Tue) by vasvir (guest, #92389) [Link]
I can empathize with every word you are saying. True! But...
There is no point in forbidding things. People will find ways around all obstacles, and actually that's a good thing. Even when Windows was The Empire, you cannot claim with a straight face that there were no applications abusing the system. So the point is not to forbid things. The point is to enable things.
In a platform race, the platform that wins is the one that is best at (hardware aside):
a) making things as easy as possible
b) providing more benefits
It can be any combination of the two. Maybe a platform is a bitch to program for but it pays good money. Maybe it is super easy to get your feet wet, like javascript... Who knows.
My suggestion was along the lines of a): easy to build, good tools, warns early if you do something bad, doesn't break in the immediate future. For b), the benefit would be that if you follow best practices, your application will run for the next 10 years on all major Linux distributions regardless of local upgrades.
Bassi: Dev v Ops
Posted Aug 15, 2017 15:56 UTC (Tue) by pizza (subscriber, #46) [Link]
"A classic is something everybody wants to have read, but no one wants to read." --Mark Twain
There's probably a clever way to paraphrase that for "best practices" -- They are rules that nobody actually wants to follow but still expects everyone else to follow.
Bassi: Dev v Ops
Posted Aug 15, 2017 16:25 UTC (Tue) by farnz (subscriber, #17727) [Link]
I liked Raymond Chen's "good advice comes with a rationale so you can tell when it becomes bad advice". Generalising from there, a "best practice" is good advice that's lost its rationale.
Bassi: Dev v Ops
Posted Aug 15, 2017 16:31 UTC (Tue) by davecb (subscriber, #1574) [Link]
Back in the Sun days, "good practices" described things we were sorta sure were good, and we couldn't see a downside to.
Conversely, "best practices" was a phrase that really meant "if you don't do this, you're an <expletive deleted/>, and when it breaks, we'll look real hard for a way to prove you voided your warranty"
Bassi: Dev v Ops
Posted Aug 15, 2017 13:43 UTC (Tue) by davecb (subscriber, #1574) [Link]
I'd suggest that the amount of work is really pretty small in the Solaris (Multics, glibc) model. As an example, assume there is a specification change, required for a bugfix, in a library...
The library developer (who used to be me!) fixes the bug, gives the fixed code a different version number, and then writes a release note. The fix exists in parallel with the buggy code in this model.
If the fix is voluminous, I may write an "updater" or "downdater" as part of refactoring, to get rid of copy-paste code.
In our organization, I also wrote a "port" file that a tool could use, for any consumer of the library to use to refactor their code: an example of port is at https://github.com/davecb/port [see also https://github.com/davecb/port/blob/master/src/Port_Tutorial.odt ]
The latter is strictly for the convenience of people using the library, who want to switch to what is arguably the more correct behavior.
Bassi: Dev v Ops
Posted Aug 15, 2017 19:38 UTC (Tue) by vasvir (guest, #92389) [Link]
Here is an entertaining thought.
If the convention followed by the Linux kernel was followed by library authors, Debian could drop the stable/testing/unstable branches and adopt a rolling release strategy.
After all, the kernel does not have stable/development branches; it is to all intents and purposes a rolling thing.
Yes, I know there are older supported kernels, but that isn't a dev thing like it was with 1.2/1.3 or 2.0/2.1. I think the kernel adopted a rolling release strategy with version 2.4.
just a thought...
Bassi: Dev v Ops
Posted Aug 11, 2017 21:22 UTC (Fri) by fratti (subscriber, #105722) [Link]
Because I really think we shouldn't.
Bassi: Dev v Ops
Posted Aug 12, 2017 15:24 UTC (Sat) by imMute (subscriber, #96323) [Link]
Is this any different from asking "Do we list web developers as people to listen to now?"? What makes an "Electron developer" different from a "[full stack] web developer"?
>Because I really think we shouldn't.
Depends on the topic being discussed. If we're talking about JS frameworks, web standards, etc - yes absolutely. If we're discussing Linux package management techniques - probably not.
Now, flip it around and answer those two questions from the perspective of the web developer.
Bassi: Dev v Ops
Posted Aug 12, 2017 19:05 UTC (Sat) by Frogging101 (guest, #113180) [Link]
Very little. And that's why their opinions should have little importance in a discussion about real desktop applications.
Bassi: Dev v Ops
Posted Aug 12, 2017 21:14 UTC (Sat) by roc (subscriber, #30627) [Link]
Bassi: Dev v Ops
Posted Aug 13, 2017 17:26 UTC (Sun) by pboddie (guest, #50784) [Link]
I also don't think it is "hilarious" that people are pushed into having to use GMail because various mail/groupware applications were almost abandoned by otherwise well-resourced organisations. Maybe you can think of at least one example.
Or that people are obliged to work on platforms like Google Docs (here I obviously also include whatever Microsoft's product is), where data protection regulations are most likely being broken, through genuine or feigned ignorance, the latter sometimes through agreements that tell the customer what they/their lawyers want to hear.
Bassi: Dev v Ops
Posted Aug 13, 2017 21:29 UTC (Sun) by roc (subscriber, #30627) [Link]
Bassi: Dev v Ops
Posted Aug 13, 2017 14:29 UTC (Sun) by evad (subscriber, #60553) [Link]
Please try to avoid calling something a 'real desktop application' and deciding that countless applications that users want to use are not relevant. If we want to produce an open source desktop for end users we need to be non-biased.
Bassi: Dev v Ops
Posted Aug 12, 2017 14:51 UTC (Sat) by Tara_Li (subscriber, #26706) [Link]
*shrugs* Now, there's no actual clear definition of "an operating system". My OS class in college had one - what we call the "kernel" now - but admitted that a whole lot of other things were commonly considered part of the OS but really weren't: the assembler, the JCL interpreter, etc.
Now, OS just means "a collection of software put together by someone". Bash might be part of one variation of the linux family of OSes, but others might not have it at all, having busybox instead. Or ash. Or zsh.
I think we've gone a long way down a wrong path. But that's just me.
Bassi: Dev v Ops
Posted Aug 12, 2017 20:16 UTC (Sat) by pboddie (guest, #50784) [Link]
Where GNU/Linux distros differ from other approaches (like the whole Sun Freeware thing or whatever it was called, and newer "megapackage" solutions employing virtualisation-style technologies) is in the way that the distros provide layers of software rather than stacks of software, and the way that dependency management then has to become a central feature of the resulting solution as it is deployed.
As for things like network stacks, I'd argue that it is specifically the Linux kernel development model that reduces the choice here. If such stacks were supported by the kernel (or a microkernel alternative) as separate modules then the pressure to eliminate "redundancy" in the deployed solution wouldn't be as great, and people might have a greater incentive to offer alternatives.
Bassi: Dev v Ops
Posted Aug 13, 2017 11:56 UTC (Sun) by alfille (subscriber, #1631) [Link]
As a developer, I find rpm and debian packaging arcane and very poorly documented. Eventually you hope that one of your users will be passionate enough to do that platform for you. Building from source is a great alternative until you hit all the libtool challenges, including different versions of all the build tools and deprecated macros. Not to mention different versions of standard libraries like libusb.
As a user, when it works, it's a pleasure. Bare metal to working system is relatively painless - until you need something that is newer or not included, e.g. gnuradio. Then it's weeks of dependency hell.
This is a matter of cognitive load and convenience. The interesting part of software is creating or using it, not deploying or installing it. I think the Linux distribution mindset is mired in the small hard disks of a few decades ago, when sharing many components was prized. The current packaging/dependency model is too granular. Applications should include almost everything.
Let the system get creative about deduplicating libraries.
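Content-addressed storage is one way to do that deduplication - OSTree works on this principle, storing each file once under its checksum. A minimal sketch, with invented bundle contents:

```python
# Sketch of content-addressed de-duplication of bundled libraries.
# Bundle names and file contents below are invented for illustration.
import hashlib

def dedup(bundles: dict) -> dict:
    """Store each distinct file content once, keyed by its hash; every
    bundle then references shared blobs instead of carrying duplicates."""
    store, refs = {}, {}
    for app, files in bundles.items():
        refs[app] = {}
        for name, content in files.items():
            digest = hashlib.sha256(content).hexdigest()
            store.setdefault(digest, content)   # first copy wins
            refs[app][name] = digest            # bundle keeps a reference
    return {"store": store, "refs": refs}

result = dedup({
    "app-a": {"libz.so": b"zlib-1.2.11"},
    "app-b": {"libz.so": b"zlib-1.2.11", "libfoo.so": b"foo"},
})
print(len(result["store"]))  # 2 blobs stored for 3 bundled files
```

On a real system the "store" would be files on disk and the "refs" hardlinks or reflinks, so identical bundled libraries cost disk space only once.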
Bassi: Dev v Ops
Posted Aug 13, 2017 14:23 UTC (Sun) by Wol (guest, #4433) [Link]
When I used the linker on Pr1mos, it only resolved unlinked symbols, so with circular references, you had to link the same library several times to resolve all the links.
So when you build a new app or library or whatever, it always links to the latest version - and remembers it! The snag with that, of course, is that you can't upgrade (or break :-) all your apps just by upgrading the library. So you then need some way of updating an app to use a new library. But it's under your control.
And of course you need some way of trying to find which apps use which libraries so you know which ones are safe to delete. Although you could always have the fall-back that if a library is deleted it just grabs the newest version as a replacement, although of course that brings the risk of unplanned breakage.
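The bookkeeping this model needs - each app remembering which library version it linked against, and a sweep finding versions nothing references - might look something like this (all names invented):

```python
# Sketch of the pin-at-link-time model: apps record the library version
# they were built against; a sweep finds versions safe to delete.

def unreferenced(installed_versions: dict, app_pins: dict) -> set:
    """installed_versions: lib -> set of versions on disk.
    app_pins: app -> {lib: pinned_version}.
    Returns the (lib, version) pairs no installed app references."""
    in_use = set()
    for pins in app_pins.values():
        for lib, ver in pins.items():
            in_use.add((lib, ver))
    on_disk = {(lib, ver)
               for lib, vers in installed_versions.items()
               for ver in vers}
    return on_disk - in_use

print(unreferenced(
    {"libfoo": {"1.0", "2.0"}},
    {"app-a": {"libfoo": "2.0"}},
))  # {('libfoo', '1.0')}
```

Deleting only what this sweep reports avoids the unplanned-breakage risk of guessing which old versions are still needed.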
As always, though, it seems that fixing one problem involves creating a new one (or breaking an old fix :-)
Cheers,
Wol
Bassi: Dev v Ops
Posted Aug 13, 2017 18:53 UTC (Sun) by dps (subscriber, #5725) [Link]
FYI, I have used Linux since 0.99pl13 and have had very few problems caused by upgrading libraries. Maybe people are used to other systems which don't have good support for updates.
Given Windows Update et al., I think that applications containing vulnerable versions of everything they require are going out of fashion. I don't think Linux needs them. Windows is supported by very specialized applications which are only available on Windows.
Bassi: Dev v Ops
Posted Aug 13, 2017 21:33 UTC (Sun) by roc (subscriber, #30627) [Link]
Bassi: Dev v Ops
Posted Aug 14, 2017 7:16 UTC (Mon) by HelloWorld (guest, #56129) [Link]
And actually, the Flatpak maintainers know this, as they did provide a dependency mechanism which they call "runtimes". Except that unlike current mechanisms, you can only have one "runtime" and need to bundle everything else. It's a crapfest, really.
Bassi: Dev v Ops
Posted Aug 14, 2017 8:33 UTC (Mon) by swilmet (subscriber, #98424) [Link]
Then the new Flatpak is uploaded to a "testing" OSTree repo, an email is sent to the people interested to test new versions, and if the new version works fine, the Flatpak can be moved to production.
Should be entirely doable.
Bassi: Dev v Ops
Posted Aug 14, 2017 13:33 UTC (Mon) by HelloWorld (guest, #56129) [Link]
Bassi: Dev v Ops
Posted Aug 14, 2017 14:04 UTC (Mon) by pizza (subscriber, #46) [Link]
That model is entirely doable, and is essentially the approach that Fedora is taking. But they're leveraging an entire distro's existing package set, dependency trees, and build infrastructure in order to do that, and that won't help someone that doesn't do their stuff from within Fedora (or wants to use it for non-F/OS software -- IOW, the ones who stand to benefit the most from Flatpak and its ilk!)
If these application authors already had the CI systems in place to track dependencies and rebuild binary packages as appropriate, then they will already have the stuff in place to generate any number of "fat" distro packages, with flatpak becoming merely another packaging format on the list. (The final packaging is probably the easiest part of the whole process...)
However, if the application authors lack automated build systems, then they should probably not be delivering binary software to begin with -- packaging is probably the least of their worries.
Bassi: Dev v Ops
Posted Aug 14, 2017 12:47 UTC (Mon) by roc (subscriber, #30627) [Link]
> Suddenly you need to build a new Flatpak when your dependencies have changed
Quite the contrary, you generally *don't* need to do anything because you update dependencies on *your* schedule, i.e. when you need specific features or fixes. And when you do those updates, you test them thoroughly ... which distro vendors don't do.
The exception is security updates; but then, if you're an application vendor that supports at least one platform where you have to bundle dependencies (i.e. pretty much anything other than desktop Linux), then you have to track security issues in your dependencies and make releases containing those fixes for your non-desktop-Linux users. And if you're doing that, doing releases for your desktop Linux users at the same time is not that much extra work.
Bassi: Dev v Ops
Posted Aug 14, 2017 19:19 UTC (Mon) by dps (subscriber, #5725) [Link]
You might assume things would get thoroughly tested by automated test suites, but sometimes those don't actually exist.
Sometimes bugs are drastic enough that they actually have to be fixed, but customers might be told nothing and no security fix issued. Suffice it to say this is unfortunately all non-fiction.
Bassi: Dev v Ops
Posted Aug 13, 2017 22:40 UTC (Sun) by ballombe (subscriber, #9523) [Link]
Believe it or not there are people interested in deployment and installation.
The point of community-developed distribution is to allow them to work together.
What happens is that open source software developers feel less and less part of a collective (free software) and want more and more power over their work. Often the reason is that they derive income from software development. As a consequence they want to remove the middle(wo)men, but without taking over the middle(wo)men's work, which they find tedious.
Money corrupts everything.
Bassi: Dev v Ops
Posted Aug 14, 2017 8:17 UTC (Mon) by boudewijn (subscriber, #14185) [Link]
Bassi: Dev v Ops
Posted Aug 14, 2017 7:24 UTC (Mon) by vasvir (guest, #92389) [Link]
Seriously? You are a developer but find deb and rpm difficult to use?
I have had to create deb packages for obscure software, or to carry custom patches, so it's definitely doable; even a breeze. Nothing compared to the original work that went into the software itself.
At least that way I know when my packaged software will break, or when it blocks a big change in the underlying dependencies.
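For anyone who hasn't tried it: the core of a quick-and-dirty binary .deb is a single DEBIAN/control file alongside the installed file tree. A hypothetical example (proper Debian packages use debhelper and dpkg-buildpackage instead):

```
Package: myapp
Version: 1.0-1
Architecture: amd64
Maintainer: Example Maintainer <dev@example.org>
Depends: libc6 (>= 2.17)
Description: Example application packaged by hand
 Longer description lines are indented by one space.
```

With that file in `myapp_1.0-1/DEBIAN/control` and the binaries under e.g. `myapp_1.0-1/usr/bin/`, `dpkg-deb --build myapp_1.0-1` produces an installable package.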
Bassi: Dev v Ops
Posted Aug 14, 2017 8:41 UTC (Mon) by swilmet (subscriber, #98424) [Link]
Bassi: Dev v Ops
Posted Aug 14, 2017 9:09 UTC (Mon) by vasvir (guest, #92389) [Link]
Bassi: Dev v Ops
Posted Aug 14, 2017 12:39 UTC (Mon) by alfille (subscriber, #1631) [Link]
So if you don't use that particular distribution, it's a big hassle.
But more to the point, you are describing the process from a professional developer's viewpoint. You have the tooling all set up.
Imagine the process from an amateur's perspective. You are writing software for a particular use in which you have domain-specific interest and expertise. You get it working and well tested on your system. You figure out all the dependencies. Now you want to offer it to the world. What dependencies does each distro need? How do they label them? It's a lot of work and not much pleasure, and a big disincentive to improve your application, since not only do you need to test functionality, you also need to test each packaging.
Clearly it's worth doing, but it could be much easier.
Bassi: Dev v Ops
Posted Aug 14, 2017 14:35 UTC (Mon) by vasvir (guest, #92389) [Link]
The process is easy if a) you are a developer, so you understand the build process, and b) you are determined to do it.
Requirement b) is not obvious, because it implies that you are either getting paid to do it (highly improbable) or you have a sysadmin streak.
So chances are that packaging is a hassle you don't want to give time to; hence it is moderately difficult, as you said.
The amateur case also stands; I hadn't thought about it.
Changes in the last 20 years
Posted Aug 14, 2017 10:14 UTC (Mon) by NAR (subscriber, #1313) [Link]
I think the distribution model worked fine 20 years ago, but since then the world has changed. Now the number of repositories on GitHub is in the tens of millions, while e.g. the number of packages in Debian is in the tens of thousands. Of course, not everything is developed for Linux, but still, the apps in the Android/Apple stores also number in the millions. Distributions cannot cope with the sheer volume of software out there. They are not a "little behind", they are three orders of magnitude behind. The distributions' job should be to enable the user to easily install (and uninstall) the millions of pieces of software they don't (and can't) package. This is where they fail.
And don't get me started on the QA work they do. I'm sure the packagers have all the right intentions. I'm also sure that there are actual packagers that do meaningful QA, work with upstream, etc. But I'm also pretty sure most of the packaging work is only making sure it compiles and starts. I remember a case when the cdrecord(?) maintainer had some problems with the Debian(?) packager, so he created a version that only worked on the packager's computer and didn't work anywhere else. The packager didn't notice the strange-looking array; I do think this is the QA level that can be expected from most packagers.
One also cannot expect software authors to build packages. For example, the software I'm working on has 31 dependencies. To my surprise, about half of them are actually included in Debian, so I'd only need to package the other 15 or so, plus my application. Then do the same for Fedora, CentOS, OpenSuSE, Ubuntu, RHEL and SLES, some of which release multiple times a year. And generally authors can't expect users to do the packaging, because software is no longer used only by IT people (even my mother and mother-in-law, both well into their 60s, use computers).
And there's the big bogeyman, the security argument: "if you bundle libraries, you'll be hacked". Well, the thing is, for most software it doesn't actually matter. If it doesn't face the network and sits in a closed data center (if the malicious user can get to the software, the organization is already hacked), it doesn't matter. It's a lot easier to attack the vulnerability between the keyboard and the back of the chair than to actually exploit vulnerabilities in code. That Stack Clash bug was there for half a decade and apparently the world didn't end.
Changes in the last 20 years
Posted Aug 14, 2017 11:55 UTC (Mon) by pizza (subscriber, #46) [Link]
You do realize that you contradicted yourself?
You say that you can't expect [application] software authors to build packages of their stuff, while you seem to have an expectation that [library] software authors will build packages of their stuff for the application authors to use.
If those library authors don't package their stuff, the application authors will have to do it -- and in this brave new/old world, distribute it all within the application package. Either way, the application author now has to do that work! (And shoulder the obligations of distributing third-party software along with theirs, too.)
Changes in the last 20 years
Posted Aug 14, 2017 13:42 UTC (Mon) by NAR (subscriber, #1313) [Link]
Bassi: Dev v Ops
Posted Aug 15, 2017 11:57 UTC (Tue) by NightMonkey (subscriber, #23051) [Link]
You have RancherOS on one end, with *everything* a container, above the kernel.
Then, you have CoreOS, which is *almost* everything above the kernel is a container.
Then, you have the traditional glibc-based distribution userspace. This is where the old distributions are.
Not that I like this situation, mind you. :)
Bassi: Dev v Ops
Posted Aug 20, 2017 9:17 UTC (Sun) by mcortese (guest, #52099) [Link]
> It is not a mystery, though, why it's a dying model.
Well, it might not be a mystery, but I haven't found any explanation in the paper. Actually, I haven't even found any evidence that the current model is dying. Only that some other platforms use a different model. Is there a trend to switch away from a distro-based approach? The paper seems to assume so, but I fail to see the evidence.
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds