Distributors entering Flatpakland
Posted Jul 9, 2022 12:39 UTC (Sat) by bojan (subscriber, #14302)
Parent article: Distributors entering Flatpakland
Anyhow, the regular FF profile did not get picked up - everything was under ~/.var. There are probably tricks I'm not aware of to get this installed differently. Or not. No idea.
But, if this was a regular user, just looking for the latest FF in a format that is not a tarball - sure - they could get it pretty easily. I didn't test for long, but it looked and behaved just like regular FF at first glance.
Posted Jul 11, 2022 1:11 UTC (Mon)
by bojan (subscriber, #14302)
[Link] (12 responses)
Reminds me of those articles written by Ulrich Drepper: https://lwn.net/Articles/250967/
Did we collectively forget about all that?
Posted Jul 11, 2022 11:08 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (11 responses)
The tradeoffs have changed considerably in the 15 years since Ulrich wrote that series of articles, to the point that wasting memory isn't as big a deal as it used to be.
In 2007, low end brand new systems came with 2 GiB of RAM, and the population of common PCs would include a significant number with under 1 GiB of RAM; going beyond 4 GiB was the realm of high-end systems. A comparable new system now has 8 GiB of RAM, and very few PCs still in use have under 4 GiB, while you have to go beyond 32 GiB to get into the realm of high-end systems.
Memory throughput has kept pace with memory size; in 2007, top end memory was DDR3-2133 with a throughput of 17 GB/s per DIMM, while common memory of the era had a throughput around 6.4 GB/s per DIMM. Today, common memory is DDR4-2666, with a throughput of 21 GB/s per DIMM, while top end memory is DDR5-7200 at 57.6 GB/s per DIMM.
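(Those per-DIMM figures are just the transfer rate multiplied by the 64-bit bus width - a quick back-of-the-envelope sketch in Python, theoretical peak only, ignoring real-world overheads:)

    # Theoretical peak bandwidth of a 64-bit (8-byte-wide) DIMM:
    # megatransfers per second times 8 bytes per transfer.
    def peak_gb_per_s(megatransfers_per_s):
        return megatransfers_per_s * 8 / 1000  # MB/s -> GB/s

    for name, mt_s in [("DDR2-800", 800), ("DDR3-2133", 2133),
                       ("DDR4-2666", 2666), ("DDR5-7200", 7200)]:
        print(f"{name}: ~{peak_gb_per_s(mt_s):.1f} GB/s per DIMM")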
At the same time, the jobs we want our systems to do haven't changed much in that time; the only thing that's had significant growth is framebuffer sizes, which have gone from typically 3 MiB for a 32bpp 1024x768 framebuffer to 8 MiB for a 1920x1080 framebuffer or 32 MiB for a 4K framebuffer. If your workload fitted comfortably in 2 GiB RAM in 2007, then it'll need at most 4 GiB RAM (allowing for the expansion in framebuffer sizes) in 2022, but your system almost certainly has 8 GiB or more RAM.
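(The framebuffer numbers are just width x height x 4 bytes at 32 bpp, for a single buffer - a quick sketch:)

    # Size of a single framebuffer at 32 bpp (4 bytes per pixel).
    def fb_mib(width, height, bytes_per_pixel=4):
        return width * height * bytes_per_pixel / (1024 * 1024)

    for name, (w, h) in [("1024x768", (1024, 768)),
                         ("1920x1080", (1920, 1080)),
                         ("4K", (3840, 2160))]:
        print(f"{name}: {fb_mib(w, h):.1f} MiB")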
That, in turn, means much more room to waste RAM than in 2007 - there's 4x the RAM to play with, but most users haven't doubled the amount they want their system to do.
Posted Jul 11, 2022 11:55 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (2 responses)
4GB - are you sure? I'm not sure how long my wife has had her new laptop - a year or two? - and your typical cheap laptop then was still only 4GB. A quick look on the Currys website implies that's still the case - loads of brand new laptops with 4GB.
Cheers,
Wol
Posted Jul 11, 2022 12:14 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (1 responses)
Under 4 GiB; most machines sold in the last couple of years have 4 GiB or more RAM.
Posted Jul 11, 2022 12:35 UTC (Mon)
by Wol (subscriber, #4433)
[Link]
I make sure my machines now have more :-) But even 4GB is still tight as a Windows machine ages - it just gets horribly slow.
Cheers,
Wol
Posted Jul 11, 2022 11:56 UTC (Mon)
by bojan (subscriber, #14302)
[Link] (7 responses)
A brute-force approach, which is essentially what is going on here, is almost always wrong.
Just take the example of that FF flatpak installation. Most of the stuff that was installed as a runtime already existed on my system and was likely loaded into memory.
Distributions would be better off spending time on making sure things work across them. I don't know - maybe even consider having one packaging system that doesn't bring in tons of duplication when this is not necessary. Surely, over this many years, sufficient lessons have been learnt by the likes of Debian, Fedora, Ubuntu, SUSE etc. to allow some kind of cooperation.
When Linux distributions started, there was no cloud and having a build system for cross distro applications was not easy. Today, if major distributions started talking to each other about unifying some of the stuff, everyone would walk away a winner. Without the need for all this duplication.
If things were made accessible enough, even proprietary vendors like Google, Adobe, Microsoft etc. could easily build their own stuff there.
Runtimes are generally a good idea. Every OS already has one. Why not target levels of those instead? Distributions have been doing it for years individually anyway.
In the end, what's the point of me running F36, for example, if some random package delivers some other obsolete junk along with it?
Posted Jul 11, 2022 12:28 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (6 responses)
Caches have also grown, however, so while the absolute amount of cache used inefficiently has gone up, the proportion of the cache that's used inefficiently has not.
Intel's 10th Generation Core processors are still current (despite the 11th and 12th generation existing), and have typically 8 MiB or 12 MiB of LLC (plus higher levels that are affected more by working set size than by total size); in 2007, the Core 2 Duo was the latest hotness, with 2 MiB of LLC (plus higher levels).
All the resources have grown significantly since 2007, and thus the "ideal" tradeoff point between needing infinite human time to get minimum resource usage, and needing infinite resources to get minimum human time has moved.
Posted Jul 11, 2022 13:42 UTC (Mon)
by bojan (subscriber, #14302)
[Link] (1 responses)
Installing another copy of, for example, mesa, on a system that already has one is simply sub-optimal. The only reason this is supposedly required in an open source ecosystem is because people refuse to come together and co-operate. And that will always be to everyone's detriment.
I understand that flatpak developers are trying to solve a real problem, which is amazing fragmentation of the open source ecosystem. The current result, unfortunately, has pretty serious side effects.
Posted Jul 11, 2022 15:31 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
But the underlying shift is that because resources have become more abundant, it's easier to consume extra resources than it is to solve the social problems within the ecosystem - requiring people to have at least 3 GiB RAM has gone from "rules out most people" in 2007 to "virtually everyone meets this requirement" in 2022.
So that moves the balance point for what sub-optimal technology decisions you can accept in order to allow you to avoid dealing with people who aren't constructively helping. It would be nice if everyone involved in open source was constructive, but unfortunately, there exist people in all walks of life who consider it more important to be "right" by their own definition than to be constructive.
Posted Jul 11, 2022 13:45 UTC (Mon)
by rbtree (guest, #129790)
[Link] (3 responses)
Oh god.
One of the most significant advantages of Linux has always been how efficient it is and how well it runs with limited computing resources. Personally, I'm mostly using it on a desktop with an i5-4460, and it runs as well as it did in 2014.
In my country, a lot of low-powered *new* laptops are being sold to this day. Things with CPUs like the Celeron N4020, 4 GB of RAM, and HDDs, and people are buying them en masse, because what else are you going to buy with a salary of $200-300 a month after taxes (which is what a cook or a construction worker typically makes)?
Teachers, doctors, and bureaucrats of all sorts are not doing much better.
Windows is pretty much unusable on those kinds of machines (although people do run it on them, and it's painful to watch). I'm sure there will be some distributions that will not go down the flatpak road and will still care about performance, but if all the popular ones do, I'm afraid I will have nothing left to recommend (you're not going to point a newbie at some student's side project that may be declared EOL a couple of days from now).
Posted Jul 11, 2022 13:59 UTC (Mon)
by Wol (subscriber, #4433)
[Link]
Cheers,
Wol
Posted Jul 11, 2022 14:41 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
The comparison is even more stark if we move down from the high end CPUs to the low end; back in 2007, if you were in the market that today buys a Celeron N4020 (4 MiB LLC) with 4 GiB RAM, you'd have been buying a Celeron 365 or equivalent, with 512 KiB (0.5 MiB) LLC, and 512 MiB or 1 GiB of RAM.
Indeed, the gap between "as cheap as possible" and "low end for rich customers" machines has been shrinking over the last 15 years; an "as cheap as possible" machine has gone from having 1/4 the RAM and 1/4 the LLC of a "low end" machine to having 1/2 the RAM and 1/2 to 1/3 the LLC. The remaining gap is in mass storage - where a "low end" machine will have an SSD nowadays, a "cheap as possible" machine will still have a HDD because of the capacity differential between a $30 HDD and a $30 SSD.
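(Putting rough numbers on that shrinking gap, using the figures quoted in this sub-thread - I'm taking 12 MiB as the "low end" LLC for 2022, which is my own assumption based on the 10th-generation Core figures mentioned earlier:)

    # Ratio of "as cheap as possible" to "low end" machine resources,
    # using figures quoted in this thread; the 12 MiB low-end LLC for
    # 2022 is an assumption based on the 10th-gen Core numbers above.
    machines = {
        "2007": {"cheap": {"ram_gib": 0.5, "llc_mib": 0.5},
                 "low_end": {"ram_gib": 2.0, "llc_mib": 2.0}},
        "2022": {"cheap": {"ram_gib": 4.0, "llc_mib": 4.0},
                 "low_end": {"ram_gib": 8.0, "llc_mib": 12.0}},
    }
    for year, m in machines.items():
        ram = m["cheap"]["ram_gib"] / m["low_end"]["ram_gib"]
        llc = m["cheap"]["llc_mib"] / m["low_end"]["llc_mib"]
        print(f"{year}: cheap machine has {ram:.2f}x the RAM and "
              f"{llc:.2f}x the LLC of a low-end one")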
That said, that gap may change, too - HDDs are limited in how cheap they can get because they are precision mechanical systems, whereas an SSD scales down in cost; it was already true 8 years ago that if 30 GiB was sufficient space, an SSD was the cheaper option, and the capacity of really cheap SSDs is likely to grow over time.
Posted Jul 11, 2022 21:01 UTC (Mon)
by bartoc (guest, #124262)
[Link]