Some unlikely 2021 predictions
With luck, the world will emerge from the depths of the pandemic this year. In many ways, the effects of the pandemic on the free-software community were relatively minor, so it is to be expected that the recovery will not change much either. We will continue to create great software as always. The recovery may mean, though, that we can start meeting in-person later in the year, which is important for the long-term health of our community. That said, many people will likely remain reluctant to travel, and companies may discover a reluctance to pay for a return to travel. So many of our meetings will have, at least, an online component for the entire year.
The other thing that may happen, though, as the world opens up, is a conclusion by some in our community that life is too short and precarious to spend it tied to a keyboard. It would not be surprising to see retirements increase over the course of the next year or two.
Support for CentOS 8 will end at the end of the year; users will have to transition to CentOS Stream or find another solution altogether. For all the screaming, CentOS Stream may well turn out to be good enough for many of the deployments that are currently using a stable CentOS build. Others are likely to find that, in this era of cloud computing, a long-term-stable distribution isn't as important as it used to be. If the "machines" running the distribution will not last for years, why does the distribution they run need such a long life? The end of CentOS could have the unintended effect of undermining the demand for ultra-stable "enterprise" distributions in general.
There will be attempts to recreate CentOS as it was, of course; most or all of them are likely to fail. Maintaining a stable distribution for years takes a lot of work — and tedious, unrewarding work at that. CentOS struggled before Red Hat picked it up; there is no real reason to believe that its successors will have an easier time of it. The fact that the alternative with the most mindshare currently, Rocky Linux, has no publicly archived discussions and only seems to communicate on the proprietary Slack platform is also worrisome.
For better or for worse, the Fedora project has a well-established relationship with Red Hat. The status of openSUSE is nowhere near as clear, which is one of the causes of the ongoing strife on its mailing lists over the last year. OpenSUSE will need to better define its relationship with SUSE in 2021, even if additional stresses, such as the creation of the independent openSUSE Foundation or the rumored public offering by SUSE, don't happen. Like Fedora, openSUSE is the descendant of one of our earliest and most influential distributions; it will be with us for a long time yet, but exactly how that will happen needs to be worked out.
It will become possible to submit kernel patches without touching an email client — but few people will do that in 2021. The kernel community will, eventually, be dragged into more contemporary ways of doing development. The kernel project's current processes are there for a reason, though; few other projects have anything near the kernel's developer or patch counts. Some significant thought will have to go into making "modern" development processes work at kernel scale. As has happened in the past, the result may be innovations that are felt far beyond the kernel community.
The commercial side of BPF will become more prominent in 2021. BPF, which allows code to be loaded into and executed within a running kernel, has been growing rapidly in power and adoption over the last several years. This year, we'll see how companies are using it to build their products and services. BPF makes it much easier to add interesting functionality to the kernel, but it also serves to keep code implementing that functionality separate from the kernel source. Our systems in the future may be more flexible and capable, but they may also thus be a bit more proprietary, even if all the code is ostensibly free.
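For readers who have not experimented with BPF, a minimal sketch of what this looks like in practice may help; it assumes the BCC Python bindings (and root privileges), and the probe below simply logs every execve() call — an illustration, not anything from a particular product mentioned here.

    # Minimal BCC sketch: compile a tiny C program, load it into the
    # running kernel, and print a line whenever a process calls execve().
    from bcc import BPF

    prog = r"""
    int trace_exec(struct pt_regs *ctx)
    {
        bpf_trace_printk("execve() called\n");
        return 0;
    }
    """

    b = BPF(text=prog)                                    # compile and load
    b.attach_kprobe(event=b.get_syscall_fnname("execve"),
                    fn_name="trace_exec")                 # hook the syscall
    b.trace_print()                                       # stream trace output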
GNOME 40 will bring a new shell interface (described here). It's a GNOME interface change, so a fair amount of loud complaining is inevitable; people had finally started to make peace with the current GNOME shell, after all. And yes, it will be GNOME 40 rather than GNOME 3.40.
Python developers will have to think hard about the future of the language. The Python 3.0 release happened twelve years ago now, and most of the angst over moving from Python 2 is behind us. There is a reasonable case to be made that, to a great extent, the Python 3 language is "done" and need not continue to undergo significant change. On the other hand, proposed features like structural pattern matching show that some developers still have an appetite for big changes. One can safely predict that there will be no disruptive Python 4 release anytime soon; what is harder to predict is when developers wanting stability will start to put the brakes on attempts to continue to evolve Python 3.
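As a concrete illustration of the scale of change still being proposed, here is roughly what structural pattern matching (PEP 634 and its companions) looks like; this is a sketch written against the proposal, with an invented function, and it requires an interpreter that implements the feature.

    # Sketch of structural pattern matching as proposed in PEP 634.
    def handle(command):
        match command.split():
            case ["go", direction]:
                return f"moving {direction}"
            case ["drop", *items]:
                return f"dropping {', '.join(items)}"
            case ["quit"] | ["exit"]:
                return "goodbye"
            case _:
                return "unknown command"

    print(handle("go north"))             # moving north
    print(handle("drop sword shield"))    # dropping sword, shield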
Software supply-chain attacks will be a serious threat for the community this year. The SolarWinds attack, which was used to compromise a number of US government agencies, was carried out by slipping the malware payload into a routine software update. We can read this SolarWinds blog entry from 2019 with amusement; it claimed that open-source software makes one's chance of downloading malicious software "much higher". That post has not aged well, but this attack could also happen with free software, which is distributed in binary form through a large number of trusted channels. Malicious code inserted into one of those supply chains could be used with devastating effect; we can only hope that the suppliers we trust are truly trustworthy.
Antitrust enforcement against companies like Facebook and Google in the US and Europe will pick up in 2021. The long-term result of these efforts could be huge, and not necessarily good or bad for our community, but the glacial pace of the courts will keep anything serious from happening this year. Being under the antitrust microscope may make those companies, which are significant contributors to the free-software community, more circumspect in their actions, though. If a software contribution could look like an attempt to further entrench a monopoly, it may just not happen.
OpenStreetMap will continue to grow in importance as companies realize that it provides the best way to compete with Google Maps. As a result, welcome resources will pour into the project; indeed, it would appear that corporate-sponsored contributions are already the majority of the edits going into the OpenStreetMap database. Inevitably, there will be clashes with the hobbyists who built OpenStreetMap up from the beginning, but the end result should be good for everybody involved. As with software, free data is better when everybody works to improve it.
Through all of this, Linux and free software will just be stronger at the end of 2021. That trend has held for decades, through economic crises, terrorist attacks, a pandemic, the dotcom crash, and more; predicting its continuation should be a safe thing to do.
Finally, these predictions will be reviewed and duly mocked in the December 23, 2021 LWN Weekly Edition — the other half of this tradition of ours. That is a small part of our larger mission of reporting from within the free-software community — a tradition that is about to begin its 24th year. Certainly none of us predicted that back at the start, but it is safe to say we'll still be at it when the year ends. As always, many thanks to all of you who have supported our work for all of these years; you are the reason that we are still at it. Best wishes to all LWN readers for a safe and rewarding 2021.
Posted Jan 6, 2021 21:23 UTC (Wed)
by pebolle (guest, #35204)
[Link] (15 responses)
- The Documentation Foundation will collapse;
- Mozilla will decline even further. By the end of 2021 it's basically limping along;
- another episode or two of Debian's refusal-to-end-the-systemd-soap will be aired;
- RedHat will finally stop funding half-a-dozen-of-the-same-thing and force Fedora to only work on a single desktop offering (ie, Gnome);
- related to the four predictions above: the Free Software world will still struggle to cope with a world where computing is basically done on (locked down) smart appliances, (locked down) phones/tablets, and (corporate controlled) clouds.
(Yes, I'm a glass-is-half-empty person.)
Posted Jan 7, 2021 5:58 UTC (Thu)
by re:fi.64 (subscriber, #132628)
[Link] (9 responses)
Posted Jan 7, 2021 19:43 UTC (Thu)
by pebolle (guest, #35204)
[Link] (8 responses)
And, as is probably clear, I still think the refusal to focus is costly for little gain. Actually I fear there's a loss: fewer people interested in a less well developed main desktop.
I really find it hard to grok why Fedora (and Ubuntu, for that matter) refuse to focus instead of fracturing their minuscule market share. (Debian being Debian, I expect nothing else.)
Posted Jan 7, 2021 21:29 UTC (Thu)
by rgmoore (✭ supporter ✭, #75)
[Link] (7 responses)
I think Wol's response to that post gets at the point very well: volunteers are not fungible. Focusing more tightly won't somehow convince people to work on what you want them to work on. Instead, it will convince them to move elsewhere.
And having a broader focus can help give the project flexibility. We all know that some desktops have had bad spells where they tried something new and it wasn't great when it first came out, e.g. GNOME 3.0, early KDE Plasma, etc. By having more than one desktop, the distribution isn't in deep trouble when the one desktop it uses suddenly sucks. Similarly, having builds that target desktop, server, cloud, IOT, etc. means the distribution is capable of moving into new spaces as people become interested.
Posted Jan 7, 2021 21:45 UTC (Thu)
by pebolle (guest, #35204)
[Link] (6 responses)
Which I think is a good outcome.
> By having more than one desktop, the distribution isn't in deep trouble when the one desktop it uses suddenly sucks.
Hasn't Fedora defaulted to Gnome for well over one decade now? Or is it two decades? Likewise for Suse and KDE. So this looks quite theoretical to me. And moreover, by focusing on one desktop, the chances of it suddenly sucking for a distribution using it should get even lower, because more developers and users are actually using it and improving it (directly or through feedback).
Posted Jan 7, 2021 22:05 UTC (Thu)
by rahulsundaram (subscriber, #21946)
[Link] (1 responses)
> - RedHat will finally stop funding half-a-dozen-of-the-same-thing and force Fedora to only work on a single desktop offering (ie, Gnome);
Red Hat isn't funding multiple desktops. It is focusing on one. It just so happens that Fedora (which has a lot more folks involved) has volunteers interested in working on others and they handle the other desktop environments. For the most part, what gets downloaded and used is Fedora Workstation.
Posted Jan 7, 2021 22:19 UTC (Thu)
by pebolle (guest, #35204)
[Link]
Processing power, storage and bandwidth are not free. Not at all. Neither are employees (one full-time employee per hundred volunteers?).
And, even if all of the above were free, my point is that other desktop environments add negative value.
Posted Jan 8, 2021 20:55 UTC (Fri)
by Wol (subscriber, #4433)
[Link]
My main desktop was an Athlon Thunderbird (K8?), and I simply couldn't log in. I don't know how long it took to go from login screen to usable desktop - I think the longest I waited was 24-36 hours before I gave up.
I went back once I managed to fix it (gentoo is wonderful here :-), but disasters like that cost you users...
Cheers,
Wol
Posted Jan 11, 2021 10:32 UTC (Mon)
by marcH (subscriber, #57642)
[Link] (2 responses)
This would be true if divas^H developers liked to "focus" on fixing bugs. But they prefer refactoring and (re)writing on a regular basis. So some redundancy and alternatives are useful.
Posted Jan 11, 2021 18:35 UTC (Mon)
by pebolle (guest, #35204)
[Link] (1 responses)
So the developers of the default desktop prefer refactoring and rewriting, on a regular basis actually, and the developers of the alternatives don't? Apparently the incentives for the developers of alternative desktops are totally different.
(Because rhetorical questions suck: I think the developers of the default desktop and the alternatives are basically subject to the same incentives. Except some of the developers of the default are actually _paid_ to fix bugs. And I expect the default to have fewer bugs than the alternatives anyhow, as it is being used much more often.)
Posted Jan 26, 2021 16:51 UTC (Tue)
by marcH (subscriber, #57642)
[Link]
This is not about "default" vs "alternatives", it's only about having more choice and more competition.
> Because rhetorical questions suck
Especially the ones put in others' mouth.
Posted Jan 7, 2021 9:37 UTC (Thu)
by smurf (subscriber, #17840)
[Link] (4 responses)
The discussion in https://bugs.debian.org/975075 seems to confirm this.
Posted Jan 7, 2021 20:15 UTC (Thu)
by pebolle (guest, #35204)
[Link] (3 responses)
I can't read threads like that anymore. Way too many messages, most of them way too long. Some of the names involved are all too familiar from watching their systemd train wreck from the sidelines.
Debian is clearly filled with people with a lot of experience and energy. It's their project and they can run it as they like. But some of them really seem to love playing mock United-Nations-sponsored organization: a "Constitution", "General Resolutions", and whatever more they excel at. It all looks a bit silly...
Posted Jan 8, 2021 0:30 UTC (Fri)
by NYKevin (subscriber, #129325)
[Link] (2 responses)
1. The NetworkManager maintainer removed its (working) sysvinit script and documented the change in the changelog. Someone filed a bug to add it back.
2. In a separate bug, it was requested to modify NM's dependencies so it could be installed with elogind instead of systemd.
3. The maintainer set the bugs to wishlist priority and otherwise ignored them.
4. Someone uploaded an NMU (and later went on to file the bug linked upthread). The maintainer asked that it be rejected, and stated that the sysvinit script had been removed intentionally.
5. Much ado over what the GR means, how it applies to cases like this, whether the TC is empowered to interpret the GR, etc.
6. My read of that thread is that the sysvinit script is probably going to end up living in a separate package. The offending Depends has already been downgraded to a Recommends (but I'm not sure if they are going to do anything else with that problem).
Frankly, I think the Debian people brought this on themselves by voting for the most ambiguous and wishy-washy option available (except perhaps for proposal G). Several of the other options had clearly-spelled-out policies for what to do with bugs like this (or at least implicated existing policies in standardized ways), but proposal B was pretty vague on what a maintainer is expected to do in this situation (it literally just says to "use their normal procedures").
Posted Jan 8, 2021 1:59 UTC (Fri)
by pebolle (guest, #35204)
[Link] (1 responses)
So true! Debian's systemd saga borders on the bizarre. Their refusal to make a choice is what fuelled this soap opera. Honestly, I think it's impossible to explain all this to people outside our Free Software bubble.
Thank you Debian, for making us look fringe!
Posted Jan 21, 2021 16:38 UTC (Thu)
by mstone_ (subscriber, #66309)
[Link]
Posted Jan 7, 2021 0:33 UTC (Thu)
by marshallm900 (guest, #140779)
[Link]
Posted Jan 7, 2021 0:37 UTC (Thu)
by jedix (subscriber, #116933)
[Link] (15 responses)
Nvidia will realise they can avoid the GPL by writing their graphics drivers in BPF and start accepting cryptocurrency as payment for devices.
Someone will write a Twitter to BPF translation layer.
Systemd will be reimplemented in BPF.
Processors and phones will be sold with known hardware bugs to ensure upgrading.
Posted Jan 7, 2021 1:43 UTC (Thu)
by hisdad (subscriber, #5375)
[Link] (2 responses)
that doesn't block when it runs out of zeros.
..
Posted Jan 7, 2021 8:26 UTC (Thu)
by edeloget (subscriber, #88392)
[Link] (1 responses)
> that doesn't block when it runs out of zeros.
> ..
I would oppose such a change, as it seems it would break my own workflow and it looks like a kernel ABI regression...
Posted Jan 7, 2021 15:10 UTC (Thu)
by georgm (subscriber, #19574)
[Link]
Posted Jan 7, 2021 17:04 UTC (Thu)
by jafd (subscriber, #129642)
[Link] (7 responses)
I'm sure you meant "will continue to be sold", but otherwise concur with the sentiment.
Posted Jan 10, 2021 3:44 UTC (Sun)
by NYKevin (subscriber, #129325)
[Link] (6 responses)
I'm not a hardware expert, but my simplified and (probably) wrong understanding is as follows: You can't economically turn out perfect silicon every time. Instead, the production process has a certain percentage of errors, and you try to lower this percentage as much as possible, but eventually you start running into diminishing returns. The manufacturer (hopefully) discovers those errors autonomously with a standardized testing process, but then you have to decide what to do with the bad silicon:
1. Throw it away (or recycle it, to the extent that it is practical to do so).
2. Turn off the bad parts of the silicon, and sell the portion you didn't turn off at a discount.
3. Use firmware or microcode to correct the problem, and (if performance is impacted) sell it at a discount.
4. Sell it as-is and hope nobody notices.
Option #1 is obviously the least profitable option, and options 2-4 can all be accurately described as "selling silicon with known hardware bugs." But IMHO only option #4 is actually dishonest. I also tend to imagine that #4 is rare, at least for the serious manufacturers. Consider how easily the FDIV bug was found - there's always going to be some person who is stress testing your hardware in a "weird" configuration, and if there's a bug, that person has a fairly good chance of finding it.
(OTOH, Spectre took multiple decades to find, and was endemic across the entire industry for that whole time, so maybe I'm a bit optimistic here.)
Posted Jan 10, 2021 9:46 UTC (Sun)
by Wol (subscriber, #4433)
[Link] (4 responses)
For example, I have a 3-core Athlon. I suspect it's actually a 4-core, but the fourth core failed QA and was disabled. Fine, I bought and paid for a 3-core, so I got what was advertised.
Spectre, on the other hand, was a design flaw - the chip did exactly what was claimed. The problem is it ALSO did something else ...
Similar to the Zilog Z80, actually, except they found the bug before it shipped - how many people remember (or realised) that although it had a RightShift instruction, it didn't have a LeftShift? They deleted it from the spec because it was buggy.
So some people investigated, and discovered that the instruction code for LeftShift actually did a LeftShiftAndIncrement :-) Which - I believe - found its way into the spec because people used it ... :-)
Cheers,
Wol
Posted Jan 10, 2021 11:14 UTC (Sun)
by jem (subscriber, #24231)
[Link] (3 responses)
You are again repeating this myth that the "Shift Left Logical" instruction is missing from the Z-80 because it was found to be buggy. Yes, the Z-80 only has three shift instructions, SLA, SRA, and SRL. The SLL instruction is missing because it is not needed: the SLA (Shift Left Arithmetic) does the same thing. Only right shifts differ between arithmetic and logical shifts.
The Z-80 has a lot more undocumented instructions, none of which are bugs. They just work the way they do because of how the processor was wired. Zilog did not intend them to be used, but also did not want to waste silicon to disable them in such a simple processor.
Posted Jan 10, 2021 16:59 UTC (Sun)
by Wol (subscriber, #4433)
[Link] (2 responses)
So the question remains. WHY did Zilog not want SLL to be used? And I hate to say it, but the *obvious* explanation is it was buggy. Especially as the instruction quite clearly existed, and *almost* worked. Three instructions is not "logically complete". And ime, not being logically complete is *asking for trouble*. What happens if an assembler-writer uses SRL, and then also uses SLL because it's the logically obvious instruction? PEOPLE DON'T READ SPECS - THEY JUMP TO CONCLUSIONS, and it's very likely people will just assume SLL exists.
(I'm aware there was a whole bunch of 16-bit instructions that happily worked on pairs of 8-bit registers but weren't documented, and also I think 8-bit instructions that worked on the 16-bit registers, but I'm not aware of any other instruction that clearly should have existed and didn't.)
Do you have any *evidence* as to why SLL wasn't documented? Or are you just looking at what happened after the chip was released, and jumping to a different conclusion from other people?
Cheers,
Wol
Posted Jan 10, 2021 18:39 UTC (Sun)
by jem (subscriber, #24231)
[Link]
Tell that to CPU architects. Motorola's 6800 had the same three shift instructions as the Z-80. The 80x86 processors also only have three shift instructions. The 8086 User's Manual states: "SHL and SAL (Shift Logical Left and Shift Arithmetic Left) perform the same operation and are physically the same instruction." ARM has LSR, LSL, and ASR. RISC-V has SRLI, SLLI, and SRAI (shift right/left logical/arithmetic immediate). MIPS? Same thing.
It is a well known fact that an arithmetic shift only differs from a logical shift when shifting "right", i.e. towards the less significant bit positions. This has been known for decades, and I doubt the engineers at Zilog were stupid when they added these instructions. So to me, the *obvious* explanation was that Zilog copied the three instructions from the Motorola 6800, and left the op code that was left unused to do what it happened to do.
What Zilog *should* have done was to do what Intel did later with the 8086: document the assembler mnemonic SLL as an alias for SLA. Or used the name SLL instead of SLA.
>And ime, not being logically complete is *asking for trouble*. What happens if an assembler-writer uses SRL, and then also uses SLL because it's the logically obvious instruction?
What *I* think is asking for trouble is when an assembler writer jumps to conclusions about the instruction set. "Ah, there is an error in the documentation, Zilog forgot to document the SLL instruction. Ok, let's use the *obvious* opcode for that." And then ships the assembler without testing it.
>Do you have any *evidence* as to why SLL wasn't documented?
Do *you* have any evidence for your original claim? That Zilog didn't intend to design their processor according to industry practice at the time, and they really intended to define four shift operations, of which two are identical? And with a stroke of extremely good luck they could just drop the SLL instruction from the documentation, when they realized at the last moment that it didn't work as expected. What are the odds of that?
Posted Jan 12, 2021 22:43 UTC (Tue)
by floppus (guest, #137245)
[Link]
Sure, you can argue it'd be more parsimonious for the two opcodes to do exactly the same thing, but the undocumented function is occasionally useful to save a byte and four clock cycles.
Whether the undocumented function was intentional or not, who knows... but if it were unintentional, you'd expect that the result would be (x<<1)|(x>>7), or (x<<1)|CF, or (x<<1)|(x&1), or something even weirder, rather than (x<<1)|1.
If you simply want your assembler to emit an SLA opcode whenever you write SLL, that's an issue with the assembler, not the hardware. ;)
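To make the arithmetic-versus-logical distinction in this subthread concrete, here is a small Python sketch of 8-bit shifts; the function names are invented for illustration, and the last one mimics the (x<<1)|1 behaviour described above.

    # Left arithmetic and logical shifts produce the same result; only the
    # right shifts differ. The last function mimics the behaviour reported
    # for the Z-80's undocumented SLL opcode: (x << 1) | 1.
    def sla(x):                 # shift left arithmetic
        return (x << 1) & 0xFF

    def sll(x):                 # shift left logical -- identical to sla()
        return (x << 1) & 0xFF

    def srl(x):                 # shift right logical: zero fills from the left
        return (x & 0xFF) >> 1

    def sra(x):                 # shift right arithmetic: sign bit is duplicated
        return ((x & 0xFF) >> 1) | (x & 0x80)

    def sll_undocumented(x):    # reported behaviour of the undocumented opcode
        return ((x << 1) | 1) & 0xFF

    assert sla(0b10010110) == sll(0b10010110)    # left shifts agree
    assert srl(0b10010110) != sra(0b10010110)    # right shifts differ
    assert sll_undocumented(0b00000001) == 0b00000011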
Posted Jan 10, 2021 13:44 UTC (Sun)
by excors (subscriber, #95769)
[Link]
Also not an expert, but I think you're conflating two different classes of issue:
1) Errors in the design. Often bugs in the HDL code that defines the chip's behaviour and that were not detected by static analysis or by testing in simulation; sometimes bugs that are outside of the logic you were testing, e.g. a crypto block that leaks secrets to an attacker who's precisely monitoring the power lines.
If you discover those issues while running tests on real silicon, it takes many months and millions of dollars to fix the design and produce a new revision of the silicon. It's usually possible, and far cheaper and quicker, to develop a workaround in software(/microcode/firmware/etc), so CPUs typically come with lists of dozens or hundreds of errata that the relevant software developers need to be aware of. In extreme cases the workaround might be to disable a major hardware feature (like TSX in Haswell), though usually the workarounds are much less painful.
If you're producing a new revision of the silicon anyway (e.g. to fix an unworkaroundable bug, or for a new chip with new features), but you have a good software workaround for a particular errata, you might still choose not to fix that errata. 'Fixing' the design risks introducing new bugs (which may delay the new revision for months and cost millions more), so it's safer to leave it alone, and the errata can persist for a long time.
2) Random defects in the fabrication of each chip, because you're pushing the manufacturing technology to the limit and it's a messy physical process and the atoms don't all go where you want them to. In that case you can turn off (i.e. configure the hardware/microcode/firmware/software/etc to not use) the bad parts of the silicon, or run at a lower frequency / higher voltage that doesn't trigger the fault. You get chips with a lot of variation in capabilities and power efficiency, then the marketing people work out how to divide them up neatly and sell them at different prices.
All of those practices will continue because they're sensible economic tradeoffs, and because computers would be unaffordable if customers insisted on perfect hardware.
Posted Jan 7, 2021 20:03 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (3 responses)
The gaming industry already does this at the software level. For now, they have not yet realized that they can charge for the day 1 patch, but it's only a matter of time IMHO.
(Related: Even now, they will sell you a game for $60, then sell you the remaining 3/4 of the game for N easy payments of $1.99.)
Posted Jan 9, 2021 6:51 UTC (Sat)
by nilsmeyer (guest, #122604)
[Link] (2 responses)
I'm not sure about that. So far the software world has been mostly untouched by it, but you have a legal right to get a product that is in working order when you pay for it. At the moment, software quality issues are often seen as force majeure, an accident. Even the language used is often invoking that image, "computer bug", "data leak" and so on. What it really is is often human error. And there are real costs associated - I'm mostly talking about the b2b sphere here but what would happen if everyone who bought a buggy video game demanded a refund?
Posted Jan 9, 2021 12:22 UTC (Sat)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
It depends. How buggy is "buggy"?
Posted Jan 9, 2021 17:30 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
I bought a Terry Pratchett game for Win95 - I don't think I got much further than the first few "rooms" whatever they were. And inasmuch as I could find reports for it, it was very temperamental about the hardware, and advice was "re-install 95, install the game, don't install anything else". Not good news if you don't have a spare PC ...
Cheers,
Wol
Posted Jan 7, 2021 9:57 UTC (Thu)
by MortenSickel (subscriber, #3238)
[Link]
"Nonetheless, we've been doing this since 2002" Thanks for the link - what a stroll down memory lane - and even with the "This week in Linux history" LWN page... That would be cool to revive, but I see the amount of work related to that, so you are perfectly excused.
One thing that could be doable - maybe related to the next round of predictions - is to make a page with pointers to the old predictions and yearly summaries.
Posted Jan 7, 2021 11:19 UTC (Thu)
by k3ninho (subscriber, #50375)
[Link] (1 responses)
I'm still a bit 2020-shy on promising anything 50 weeks away, but may you have solid wealth, great health and move safely free from tragedy throughout the coming months!
(On the wealth front, is there still a need for more subscribers?)
K3n.
Posted Jan 7, 2021 15:27 UTC (Thu)
by kpfleming (subscriber, #23250)
[Link]
Posted Jan 7, 2021 11:45 UTC (Thu)
by jnareb (subscriber, #46500)
[Link] (10 responses)
I hope that we will get a game-changer such as Git and Mercurial were, though perhaps it will arrive more slowly: the problem is less well understood, and hopefully there will be no "we must do it" event like the BitKeeper license change (the end of its free-for-OSS license).
People are working on it; see for example "Patches carved into developer sigchains"... though whether it will be in non-experimental use before the end of 2021 is a question.
Posted Jan 7, 2021 20:10 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (7 responses)
I find it difficult to imagine what such an event would look like. It's not as if SMTP is owned by anyone.
One could imagine the companies whose developers are actually contributing most of the patches issuing an ultimatum, but I tend to think this would be an incredibly foolish and short-sighted move on their part (and I think most of them are aware of that fact, so it probably won't happen).
Posted Jan 8, 2021 4:27 UTC (Fri)
by ecree (guest, #95790)
[Link] (6 responses)
But if the big mail-providers EE&E in a way that screws us over, how many of us know how to set up our own mail servers? And how likely are we to learn in the meantime, given that interoperating with the big networks is already a massive PITA? (Alexei was already worrying about this back at netconf ’19, see http://vger.kernel.org/netconf2019_files/netconf2019_slid... — "Email is dying".)
So yes, if Gmail etc. become unusable for kernel development, it would be *possible* for us to set up an entirely parallel network of "engineers' email", that doesn't follow Gmail et al's lead on whichever of DKIM, SPF and DMARC it is that makes things hard. But it wouldn't be *easy* and it wouldn't happen overnight. Company IT departments won't support it on their email servers, because their first priority is keeping the executives' email working. (Maybe some of them will run a second server for the engineers. But I think we've all encountered enough corporate IT to be pessimistic about that.) And we can't start switching now, because we need to keep supporting developers who are on the existing systems (which are very picky about whom they accept mail from).
I like the email workflow and want it to continue. But unless we can square this circle, sooner or later it is going to blow up in our faces.
Posted Jan 8, 2021 22:27 UTC (Fri)
by dskoll (subscriber, #1630)
[Link] (2 responses)
As of January 2021, it's not that hard to set up your own mail server, and configure SPF/DKIM/DMARC in a way that gives you good deliverability into Gmail/Hotmail/etc. I've had my own mail server for years.
I can't see the Big Providers changing this any time soon. There are still tons of businesses running their own mail server or using something other than the Big Ones; businesses that do use Google, etc. will not be amused if they can't communicate with clients or vendors.
This may change in the future, but not for a while, I think. There... that's my reckless prediction for 2021.
Posted Jan 9, 2021 1:31 UTC (Sat)
by zlynx (guest, #2285)
[Link]
But, I do remember some useful Linux Journal articles. There may have been a LWN article or two as well.
I do mine on Fedora Linux with Exim, Cyrus IMAP, and SpamAssassin. Exim needs to be configured to deliver to Cyrus properly. At one point I had to adjust the SELinux rules for that but I think that's part of the Fedora packages now. There's also SSL certificate configuration that has to be turned on and adjusted to work with LetsEncrypt.
And then there is DNS with MX, SPF and DKIM and DMARC stuff. And I enabled DNSSEC on my domain which I've heard gives you another few bonus reliability points.
And if you cannot get a static IP then you have to do all of this in the cloud, and optionally configure a VPN and forward your email servers to your home network.
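For anyone who has not set this up before, the DNS side that dskoll and zlynx describe amounts to a handful of records; the sketch below uses zone-file syntax with placeholder values (example.com, the "mail" DKIM selector, the report address), not recommended settings.

    ; MX: where mail for the domain is delivered
    example.com.                  MX   10 mail.example.com.
    ; SPF: only the listed MX host may send mail for the domain
    example.com.                  TXT  "v=spf1 mx -all"
    ; DKIM: public key published under the chosen selector
    mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public-key-data>"
    ; DMARC: policy and an address for aggregate reports
    _dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"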
Posted Jan 9, 2021 7:00 UTC (Sat)
by nilsmeyer (guest, #122604)
[Link]
I run my own e-Mail server; it's usually on auto-pilot, with me just doing upgrades. But every once in a while an issue crops up: for example, I only noticed that there was an issue with IPv6 after a friend told me and provided the logs from his server. I won't get that kind of service from Gmail etc.
One of the first things I did as a young apprentice was to set up a new e-Mail / spam filter cluster for a small ISP so I have almost 20 years of this under my belt. The software stack didn't change much, still using Exim, still using Dovecot, replaced spamassassin with rspamd some time ago. Dovecot especially has been a joy with barely any need to do major changes.
> There are still tons of businesses running their own mail server or using something other than the Big Ones; businesses that do use Google, etc. will not be amused if they can't communicate with clients or vendors.
As long as Microsoft still offers Exchange on Premise that should remain the case. They could of course at any time decide that they really want customers to use their cloud instead. That worries me. At the same time Microsoft Outlook ruined e-Mail for a lot of people; I can always tell when I'm communicating with an Outlook user - full quote of all prior e-Mails, HTML signatures, the German version for some reason not using Re: but AW: for replies...
Posted Jan 9, 2021 2:26 UTC (Sat)
by liam (guest, #84133)
[Link] (2 responses)
> But if the big mail-providers EE&E in a way that screws us over, how many of us know how to set up our own mail servers?
Every problem can be rewritten as a marketing opportunity.
Posted Jan 9, 2021 2:35 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Posted Jan 14, 2021 3:50 UTC (Thu)
by liam (guest, #84133)
[Link]
Posted Jan 17, 2021 13:34 UTC (Sun)
by Hi-Angel (guest, #110915)
[Link] (1 responses)
Wow! This is an amazing article! FTR, the article is here: https://people.kernel.org/monsieuricon/patches-carved-int... I really hope such an instrument would put an end to the disagreement between people who like gitlab/github-based workflows and those who prefer mailing lists.
> though if it would be in non-experimental use before the end of 2021 is a question.
The article was from 2019, and since then the author hasn't posted anything more on the matter. If you know whether and where further discussions on that matter are happening, I'd love to know.
Posted Jan 25, 2021 11:37 UTC (Mon)
by Hi-Angel (guest, #110915)
[Link]
So, I contacted the author and they pointed to this project as the one working on that: https://radicle.xyz/
I have also asked on their matrix channel and they confirmed. Quoting them:
> hi-angel:
> Hi, just wondering, are you folks the ones working to make this workflow real? https://people.kernel.org/monsieuricon/patches-carved-int...
>
> lftherios:
> Hey hi-angel Yes we are the ones trying to bring this vision to reality. Konstantin (the author of the post) has been very helpful to us during this process and heavily influenced our design with his thoughts and comments.
>
> One note: in comparison to the original post we don't use the SSB protocol but instead we designed a peer-to-protocol we call radicle-link that borrows a number of ideas from SSB.
Posted Jan 7, 2021 12:10 UTC (Thu)
by funderburg (guest, #3750)
[Link]
A little pessimistic I think? Rocky Linux has a large number of volunteers lined up, and they're aware of the Slack issue and are making moves to use open source Mattermost with IRC integration.
Posted Jan 7, 2021 15:46 UTC (Thu)
by evgeny (subscriber, #774)
[Link] (2 responses)
I hope this prediction is not based on some "insider knowledge" from LWN :)
Posted Jan 8, 2021 4:34 UTC (Fri)
by ecree (guest, #95790)
[Link]
Unless it's Jon. I don't know if I could get through the week without a dose of his wry yet informative humour :)
Posted Jan 10, 2021 17:10 UTC (Sun)
by jezuch (subscriber, #52988)
[Link]
Posted Jan 8, 2021 3:53 UTC (Fri)
by mtaht (subscriber, #11087)
[Link]
#1 - the concept of bufferbloat and the fixes for it will finally make it across to a general audience, including the US government and FCC, mandating ISP deployment of RFC8290
#2 - Binary blobs will be banned as a result of the in both wifi and lte
#3 - The endless debate over L4S in the IETF will finally die in favor of SCE
#4 - some vendor will actually pay former bufferbloat.net volunteers to port the right code into all the network co-processors
#5 - Openwrt 2021 will ship with all the right stuff, actually working, for anyone smart enough to burn 10 minutes to flash and install it.
Obviously, of these, the only one I know will come true is #5. But I would love to be wrong.
Posted Jan 8, 2021 4:18 UTC (Fri)
by notriddle (subscriber, #130608)
[Link] (9 responses)
Posted Jan 9, 2021 20:36 UTC (Sat)
by ms-tg (subscriber, #89231)
[Link] (8 responses)
+1 to this. Year of the ARM desktop, laptop, server, and mobile. In other words, a fundamental realignment where today's non-ARM x86 starts a slide from dominant to legacy to niche.
Would love to hear and understand contrary predictions!
Posted Jan 11, 2021 8:06 UTC (Mon)
by jezuch (subscriber, #52988)
[Link] (3 responses)
Posted Jan 11, 2021 13:11 UTC (Mon)
by pizza (subscriber, #46)
[Link] (2 responses)
There's nothing inherently superior about Arm or RISC-V (or even x86); CPU cores of equivalent performance require about the same overall internal complexity (and transistor count) and therefore cost about the same amount of money to develop and manufacture. Of course, it helps to re-use portions of previous designs (and their test suites).
In order for developing your own CPU core to be economically feasible, you have to have an astronomically high production volume to make the NRE cheaper than licensing an Arm (or one of the higher-performing RISC-V) core. Only a handful of players have this volume, especially on the high end.
Meanwhile, the rest of the SoC will cost about the same amount of NRE no matter what the CPU core is, and of course the actual fabrication costs only depend on area/yield. (The actual CPU cores are only a small portion of a typical SoC's area/complexity)
Posted Jan 11, 2021 17:55 UTC (Mon)
by plugwash (subscriber, #29694)
[Link] (1 responses)
After the actions of the Trump administration I would expect the Chinese to be wary of relying on western tech.
Posted Jan 11, 2021 18:51 UTC (Mon)
by pizza (subscriber, #46)
[Link]
And recently, the whole thing has been scandal-plagued:
https://asia.nikkei.com/Business/China-tech/Arm-China-ask...
https://www.reuters.com/article/us-arm-china-lawsuit/arm-...
https://www.ft.com/content/f86a7ecf-8a6c-4be1-8c96-567f3d...
Posted Jan 24, 2021 14:53 UTC (Sun)
by ms-tg (subscriber, #89231)
[Link] (2 responses)
https://www.percona.com/blog/2021/01/22/postgresql-on-arm...
Posted Jan 26, 2021 23:59 UTC (Tue)
by intgr (subscriber, #39733)
[Link] (1 responses)
Percona is misleading a lot of people with this article.
Posted Jan 27, 2021 0:02 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Jan 25, 2021 7:34 UTC (Mon)
by jcm (subscriber, #18262)
[Link]
Posted Jan 8, 2021 10:45 UTC (Fri)
by idealista (guest, #121682)
[Link]
Posted Jan 8, 2021 15:26 UTC (Fri)
by rhdxmr (guest, #44404)
[Link]
Posted Jan 8, 2021 16:59 UTC (Fri)
by ber (subscriber, #2142)
[Link]
your predictions are a good idea, because they help to step back a bit and get ideas about the larger picture! Thanks for so many years of LWN and its predictions!
(I know there was some irony in the introduction about this not being a good idea. But nobody stated why the predictions are useful - they are and it is not about being correct.)
Posted Jan 12, 2021 13:59 UTC (Tue)
by tao (subscriber, #17563)
[Link]
Posted Jan 13, 2021 15:37 UTC (Wed)
by pabs (subscriber, #43278)
[Link]