LWN: Comments on "Some unlikely 2021 predictions" https://lwn.net/Articles/840632/ This is a special feed containing comments posted to the individual LWN article titled "Some unlikely 2021 predictions". en-us Sat, 27 Sep 2025 09:25:14 +0000 Sat, 27 Sep 2025 09:25:14 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Some unlikely 2021 predictions https://lwn.net/Articles/844108/ https://lwn.net/Articles/844108/ Cyberax <div class="FormattedComment"> On the other hand, it is cheaper on AWS than comparable Intel CPUs. <br> </div> Wed, 27 Jan 2021 00:02:21 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/844106/ https://lwn.net/Articles/844106/ intgr <div class="FormattedComment"> While this article demonstrates good value for the Arm-based AWS offering, the performance is pretty appalling. The Intel CPU has half the cores, was designed in 2017 and fabricated on 14nm technology, compared to 7nm for Graviton2. So with two hands tied behind its back, x86 still has comparable performance.<br> <p> Percona is misleading a lot of people with this article.<br> </div> Tue, 26 Jan 2021 23:59:58 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/844058/ https://lwn.net/Articles/844058/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; So the developers of the default desktop prefer refactoring and rewriting, on a regular basis actually, and the developers of the alternatives don&#x27;t? 
</font><br> <p> This is not about &quot;default&quot; vs &quot;alternatives&quot;, it&#x27;s only about having more choice and more competition.<br> <p> <font class="QuotedText">&gt; Because rhetorical questions suck</font><br> <p> Especially the ones put in others&#x27; mouths.<br> </div> Tue, 26 Jan 2021 16:51:58 +0000 It will become possible to submit kernel patches without touching an email client https://lwn.net/Articles/843775/ https://lwn.net/Articles/843775/ Hi-Angel <div class="FormattedComment"> <font class="QuotedText">&gt; The article was in 2019, and since then the author haven&#x27;t posted on the same matter anything. If you know if and where further discussions on that matter are happening, I&#x27;d love to know it.</font><br> <p> So, I contacted the author and they pointed to this project as the one working on that: <a rel="nofollow" href="https://radicle.xyz/">https://radicle.xyz/</a><br> <p> I have also asked on their Matrix channel and they confirmed. Quoting them:<br> <p> <font class="QuotedText">&gt; hi-angel:</font><br> <font class="QuotedText">&gt; Hi, just wondering, are you folks the ones working to make this workflow real? <a rel="nofollow" href="https://people.kernel.org/monsieuricon/patches-carved-into-developer-sigchains">https://people.kernel.org/monsieuricon/patches-carved-int...</a></font><br> <font class="QuotedText">&gt;</font><br> <font class="QuotedText">&gt; lftherios:</font><br> <font class="QuotedText">&gt; Hey hi-angel Yes we are the ones trying to bring this vision to reality.
Konstantin (the author of the post) has been very helpful to us during this process and heavily influenced our design with his thoughts and comments.</font><br> <font class="QuotedText">&gt;</font><br> <font class="QuotedText">&gt; One note: in comparison to the original post we don&#x27;t use the SSB protocol but instead we designed a peer-to-peer protocol we call radicle-link that borrows a number of ideas from SSB.</font><br> <p> </div> Mon, 25 Jan 2021 11:37:03 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/843763/ https://lwn.net/Articles/843763/ jcm <div class="FormattedComment"> I&#x27;m expecting Arm to have a clean sweep by 2030 of everything from phone to tablet to laptop to server. It will be a glorious day!<br> </div> Mon, 25 Jan 2021 07:34:51 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/843729/ https://lwn.net/Articles/843729/ ms-tg <div class="FormattedComment"> AWS instances with Arm looking good!<br> <p> <a href="https://www.percona.com/blog/2021/01/22/postgresql-on-arm-based-aws-ec2-instances-is-it-any-good/">https://www.percona.com/blog/2021/01/22/postgresql-on-arm...</a><br> </div> Sun, 24 Jan 2021 14:53:37 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/843441/ https://lwn.net/Articles/843441/ mstone_ <div class="FormattedComment"> Yeah, free software&#x27;s decision-making process is completely incomprehensible, compared to the corporate world where, say, Google can decide on a simple messaging app and stay on message with no drama at all. <br> </div> Thu, 21 Jan 2021 16:38:00 +0000 It will become possible to submit kernel patches without touching an email client https://lwn.net/Articles/842928/ https://lwn.net/Articles/842928/ Hi-Angel <div class="FormattedComment"> <font class="QuotedText">&gt; People are working on it, see for example &quot;Patches carved into developer sigchains&quot;</font><br> <p> Wow! This is an amazing article!
FTR, the article is here <a rel="nofollow" href="https://people.kernel.org/monsieuricon/patches-carved-into-developer-sigchains">https://people.kernel.org/monsieuricon/patches-carved-int...</a> I really hope such an instrument would put an end to the disagreement between people who like gitlab/github-based workflows and those who prefer mailing lists.<br> <p> <font class="QuotedText">&gt; though if it would be in non-experimental use before the end of 2021 is a question.</font><br> <p> The article is from 2019, and since then the author hasn&#x27;t posted anything further on the matter. If you know if and where further discussions on that matter are happening, I&#x27;d love to know it.<br> </div> Sun, 17 Jan 2021 13:34:39 +0000 It will become possible to submit kernel patches without touching an email client https://lwn.net/Articles/842623/ https://lwn.net/Articles/842623/ liam <div class="FormattedComment"> I didn&#x27;t say they would succeed:)<br> </div> Thu, 14 Jan 2021 03:50:30 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842509/ https://lwn.net/Articles/842509/ pabs <div class="FormattedComment"> Comments from Hacker News:<br> <p> <a href="https://news.ycombinator.com/item?id=25758976">https://news.ycombinator.com/item?id=25758976</a><br> </div> Wed, 13 Jan 2021 15:37:21 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842425/ https://lwn.net/Articles/842425/ floppus <div class="FormattedComment"> But the opcode that would correspond to &quot;SLL&quot;, despite being undocumented, does have a well-defined function: it multiplies the value by two and adds one.
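In other words (a sketch with plain Python ints standing in for the 8-bit register; flags are ignored, so this is an illustration rather than an emulator-accurate model):

```python
# Documented SLA versus the undocumented "SLL" opcode, modelled on
# 8-bit values. Flags are ignored; illustrative only, not an
# emulator-accurate model of the Z80.

def sla(x: int) -> int:
    """Shift Left Arithmetic: a 0 is shifted into bit 0."""
    return (x << 1) & 0xFF

def sll(x: int) -> int:
    """Undocumented opcode ('SLIA'/'SL1'): shift left, set bit 0 to 1."""
    return ((x << 1) | 1) & 0xFF

print(bin(sla(0b0101)))  # 0b1010: plain doubling
print(bin(sll(0b0101)))  # 0b1011: doubling, plus one
```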
This is fairly well known among Z80 hackers (some people refer to this opcode as &quot;SLIA&quot; for &quot;shift left inverted arithmetic&quot;, or &quot;SL1&quot; for &quot;shift left and add 1&quot;.)<br> <p> Sure, you can argue it&#x27;d be more parsimonious for the two opcodes to do exactly the same thing, but the undocumented function is occasionally useful to save a byte and four clock cycles.<br> <p> Whether the undocumented function was intentional or not, who knows... but if it were unintentional, you&#x27;d expect that the result would be (x&lt;&lt;1)|(x&gt;&gt;7), or (x&lt;&lt;1)|CF, or (x&lt;&lt;1)|(x&amp;1), or something even weirder, rather than (x&lt;&lt;1)|1.<br> <p> If you simply want your assembler to emit an SLA opcode whenever you write SLL, that&#x27;s an issue with the assembler, not the hardware. ;)<br> <p> </div> Tue, 12 Jan 2021 22:43:45 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842343/ https://lwn.net/Articles/842343/ tao <div class="FormattedComment"> Marble Machine X will finally be completed and Wintergatan will embark on a world tour.<br> </div> Tue, 12 Jan 2021 13:59:03 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842314/ https://lwn.net/Articles/842314/ pizza <div class="FormattedComment"> A few years ago, &quot;Arm China&quot; went from a standard wholly-owned subsidiary of Arm to a joint venture which intentionally has 51% Chinese ownership. It was set up this way to help ensure China would not be reliant upon &quot;Western tech&quot;, and to that effect it has what amounts to irrevocable licenses for already-developed cores as well as architecture licenses if they wish to develop their own cores. 
Plus a great deal of operational autonomy.<br> <p> And recently, the whole thing has been scandal-plagued:<br> <p> <a href="https://asia.nikkei.com/Business/China-tech/Arm-China-asks-Beijing-to-intervene-in-row-with-UK-parents">https://asia.nikkei.com/Business/China-tech/Arm-China-ask...</a><br> <a href="https://www.reuters.com/article/us-arm-china-lawsuit/arm-china-investor-sues-company-escalating-ceo-spat-amid-sale-idUSKBN26725D">https://www.reuters.com/article/us-arm-china-lawsuit/arm-...</a><br> <a href="https://www.ft.com/content/f86a7ecf-8a6c-4be1-8c96-567f3dd424fd">https://www.ft.com/content/f86a7ecf-8a6c-4be1-8c96-567f3d...</a><br> <p> <p> </div> Mon, 11 Jan 2021 18:51:23 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842312/ https://lwn.net/Articles/842312/ pebolle <div class="FormattedComment"> <font class="QuotedText">&gt; This would be true if divas^H developers liked to &quot;focus&quot; on fixing bugs. But they prefer refactoring and (re)writing on a regular basis. So some redundancy and alternatives are useful.</font><br> <p> So the developers of the default desktop prefer refactoring and rewriting, on a regular basis actually, and the developers of the alternatives don&#x27;t? Apparently the incentives for the developers of alternative desktops are totally different.<br> <p> (Because rhetorical questions suck: I think the developers of the default desktop and the alternatives are basically subject to the same incentives. Except some of the developers of the default are actually _paid_ to fix bugs.
And I expect the default to have fewer bugs than the alternatives anyhow, as it is being used much more often.)<br> </div> Mon, 11 Jan 2021 18:35:33 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842311/ https://lwn.net/Articles/842311/ plugwash <div class="FormattedComment"> The question IMO is whether the Chinese will decide to fund RISC-V development in an attempt to get it to the level where it is competitive with Arm&#x27;s higher-end cores. And if so, whether they will be successful.<br> <p> After the actions of the Trump administration I would expect the Chinese to be wary of relying on western tech.<br> </div> Mon, 11 Jan 2021 17:55:03 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842236/ https://lwn.net/Articles/842236/ pizza <div class="FormattedComment"> Is it possible? Of course. Is it likely? Nope.<br> <p> There&#x27;s nothing inherently superior about Arm or RISC-V (or even x86); CPU cores of equivalent performance require about the same overall internal complexity (and transistor count) and therefore cost about the same amount of money to develop and manufacture. Of course, it helps to re-use portions of previous designs (and their test suites).<br> <p> In order for developing your own CPU core to be economically feasible, you have to have an astronomically high production volume to make the NRE cheaper than licensing an Arm (or one of the higher-performing RISC-V) core. Only a handful of players have this volume, especially on the high end. <br> <p> Meanwhile, the rest of the SoC will cost about the same amount of NRE no matter what the CPU core is, and of course the actual fabrication costs only depend on area/yield.
(The actual CPU cores are only a small portion of a typical SoC&#x27;s area/complexity)<br> <p> <p> </div> Mon, 11 Jan 2021 13:11:41 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842234/ https://lwn.net/Articles/842234/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; And moreover, by focusing on one desktop the chances of it suddenly sucking, to a distribution using it, should get even lower. </font><br> <p> This would be true if divas^H developers liked to &quot;focus&quot; on fixing bugs. But they prefer refactoring and (re)writing on a regular basis. So some redundancy and alternatives are useful.<br> <p> <p> <p> </div> Mon, 11 Jan 2021 10:32:50 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842231/ https://lwn.net/Articles/842231/ jezuch <div class="FormattedComment"> Would it be possible for RISC-V to steal ARM&#x27;s lunch?<br> </div> Mon, 11 Jan 2021 08:06:11 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842205/ https://lwn.net/Articles/842205/ jem <div class="FormattedComment"> <font class="QuotedText">&gt;Three instructions is not &quot;logically complete&quot;.</font><br> <p> Tell that to CPU architects. Motorola&#x27;s 6800 had the same three shift instructions as the Z-80. The 80x86 processors also only have three shift instructions. The 8086 User&#x27;s Manual states: &quot;SHL and SAL (Shift Logical Left and Shift Arithmetic Left) perform the same operation and are physically the same instruction.&quot; ARM has the LSR, LSL, and ASR. RISC-V has SRLI, SLLI, and SRAI (shift right/left logical/arithmetic immediate). MIPS? Same thing.<br> <p> It is a well-known fact that an arithmetic shift only differs from a logical shift when shifting &quot;right&quot;, i.e. towards the less significant bit positions. This has been known for decades, and I doubt the engineers at Zilog were stupid when they added these instructions.
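To spell that out with a quick sketch (plain Python ints standing in for 8-bit registers; not modelled on any particular CPU's flag behaviour):

```python
# Why three shift instructions suffice: on an 8-bit value, arithmetic
# and logical LEFT shifts are the same operation; only the RIGHT
# shifts differ, and only when bit 7 (the sign bit) is set.

def srl(x: int) -> int:
    """Shift Right Logical: a 0 is shifted in from the left."""
    return (x >> 1) & 0xFF

def sra(x: int) -> int:
    """Shift Right Arithmetic: bit 7 is replicated (sign-preserving)."""
    return ((x >> 1) | (x & 0x80)) & 0xFF

def shl(x: int) -> int:
    """The one left shift: 'arithmetic' and 'logical' coincide."""
    return (x << 1) & 0xFF

x = 0b10010110            # a "negative" byte: bit 7 set
print(bin(srl(x)))        # 0b1001011  - zero shifted in
print(bin(sra(x)))        # 0b11001011 - sign bit kept
print(bin(shl(x)))        # 0b101100   - the only sensible left shift
```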
So to me, the *obvious* explanation was that Zilog copied the three instructions from the Motorola 6800, and left the unused opcode to do whatever it happened to do.<br> <p> What Zilog *should* have done was to do what Intel did later with the 8086: document the assembler mnemonic SLL as an alias for SLA. Or used the name SLL instead of SLA.<br> <p> <font class="QuotedText">&gt;And ime, not being logically complete is *asking for trouble*. What happens if an assembler-writer uses SRL, and then also uses SLL because it&#x27;s the logically obvious instruction?</font><br> <p> What *I* think is asking for trouble is when an assembler writer jumps to conclusions about the instruction set. &quot;Ah, there is an error in the documentation, Zilog forgot to document the SLL instruction. Ok, let&#x27;s use the *obvious* opcode for that.&quot; And then ship the assembler without testing it.<br> <p> <font class="QuotedText">&gt;Do you have any *evidence* as to why SLL wasn&#x27;t documented?</font><br> <p> Do *you* have any evidence for your original claim? That Zilog didn&#x27;t intend to design their processor according to industry practice at the time, and they really intended to define four shift operations, of which two are identical? And with a stroke of extremely good luck they could just drop the SLL instruction from the documentation, when they realized at the last moment that it didn&#x27;t work as expected. What are the odds of that?<br> <p> </div> Sun, 10 Jan 2021 18:39:44 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842202/ https://lwn.net/Articles/842202/ jezuch <div class="FormattedComment"> Maybe yes, maybe no, but having gone on a two-week cycling trip this summer, I definitely understand where it&#x27;s coming from.
There are *lots* of days when I&#x27;d rather be back on the trail than &quot;trapped in front of a keyboard&quot;.<br> </div> Sun, 10 Jan 2021 17:10:10 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842200/ https://lwn.net/Articles/842200/ Wol <div class="FormattedComment"> Okay, SLA and SLL may have the same EFFECT. But as a mathematician I like symmetry. I would document both. And the instruction code for SLL can be deduced from the other three. Except it doesn&#x27;t quite do an SLL.<br> <p> So the question remains. WHY did Zilog not want SLL to be used? And I hate to say it, but the *obvious* explanation is it was buggy. Especially as the instruction quite clearly existed, and *almost* worked. Three instructions is not &quot;logically complete&quot;. And ime, not being logically complete is *asking for trouble*. What happens if an assembler-writer uses SRL, and then also uses SLL because it&#x27;s the logically obvious instruction? PEOPLE DON&#x27;T READ SPECS - THEY JUMP TO CONCLUSIONS, and it&#x27;s very likely people will just assume SLL exists.<br> <p> (I&#x27;m aware there was a whole bunch of 16-bit instructions that happily worked on pairs of 8-bit registers but weren&#x27;t documented, and also I think 8-bit instructions that worked on the 16-bit registers, but I&#x27;m not aware of any other instruction that clearly should have existed and didn&#x27;t.)<br> <p> Do you have any *evidence* as to why SLL wasn&#x27;t documented? 
Or are you just looking at what happened after the chip was released, and jumping to a different conclusion from other people?<br> <p> Cheers,<br> Wol<br> </div> Sun, 10 Jan 2021 16:59:14 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842198/ https://lwn.net/Articles/842198/ excors <div class="FormattedComment"> <font class="QuotedText">&gt; I&#x27;m not a hardware expert, but my simplified and (probably) wrong understanding is as follows: You can&#x27;t economically turn out perfect silicon every time. Instead, the production process has a certain percentage of errors, and you try to lower this percentage as much as possible, but eventually you start running into diminishing returns.</font><br> <p> Also not an expert, but I think you&#x27;re conflating two different classes of issue:<br> <p> 1) Errors in the design. Often bugs in the HDL code that defines the chip&#x27;s behaviour and that were not detected by static analysis or by testing in simulation; sometimes bugs that are outside of the logic you were testing, e.g. a crypto block that leaks secrets to an attacker who&#x27;s precisely monitoring the power lines.<br> <p> If you discover those issues while running tests on real silicon, it takes many months and millions of dollars to fix the design and produce a new revision of the silicon. It&#x27;s usually possible, and far cheaper and quicker, to develop a workaround in software(/microcode/firmware/etc), so CPUs typically come with lists of dozens or hundreds of errata that the relevant software developers need to be aware of. In extreme cases the workaround might be to disable a major hardware feature (like TSX in Haswell), though usually the workarounds are much less painful.<br> <p> If you&#x27;re producing a new revision of the silicon anyway (e.g. to fix an unworkaroundable bug, or for a new chip with new features), but you have a good software workaround for a particular errata, you might still choose not to fix that errata. 
&#x27;Fixing&#x27; the design risks introducing new bugs (which may delay the new revision for months and cost millions more), so it&#x27;s safer to leave it alone, and the errata can persist for a long time.<br> <p> 2) Random defects in the fabrication of each chip, because you&#x27;re pushing the manufacturing technology to the limit and it&#x27;s a messy physical process and the atoms don&#x27;t all go where you want them to. In that case you can turn off (i.e. configure the hardware/microcode/firmware/software/etc to not use) the bad parts of the silicon, or run at a lower frequency / higher voltage that doesn&#x27;t trigger the fault. You get chips with a lot of variation in capabilities and power efficiency, then the marketing people work out how to divide them up neatly and sell them at different prices.<br> <p> All of those practices will continue because they&#x27;re sensible economic tradeoffs, and because computers would be unaffordable if customers insisted on perfect hardware.<br> </div> Sun, 10 Jan 2021 13:44:23 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842195/ https://lwn.net/Articles/842195/ jem <div class="FormattedComment"> <font class="QuotedText">&gt;Similar to the Zilog Z80, actually, except they found the bug before it shipped - how many people remember (or realised) that although it had a RightShift instruction, it didn&#x27;t have a LeftShift? They deleted it from the spec because it was buggy.</font><br> <p> You are again repeating this myth that the &quot;Shift Left Logical&quot; instruction is missing from the Z-80 because it was found to be buggy. Yes, the Z-80 only has three shift instructions, SLA, SRA, and SRL. The SLL instruction is missing because it is not needed: the SLA (Shift Left Arithmetic) does the same thing. Only right shifts differ between arithmetic and logical shifts.<br> <p> The Z-80 has a lot more undocumented instructions, none of which are bugs. 
They just work the way they do because of how the processor was wired. Zilog did not intend them to be used, but also did not want to waste silicon to disable them in such a simple processor.<br> </div> Sun, 10 Jan 2021 11:14:31 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842193/ https://lwn.net/Articles/842193/ Wol <div class="FormattedComment"> I think you&#x27;re missing a trick here, namely the difference between production errors and design errors.<br> <p> For example, I have a 3-core Athlon. I suspect it&#x27;s actually a 4-core, but the fourth core failed QA and was disabled. Fine, I bought and paid for a 3-core, so I got what was advertised.<br> <p> Spectre, on the other hand, was a design flaw - the chip did exactly what was claimed. The problem is it ALSO did something else ...<br> <p> Similar to the Zilog Z80, actually, except they found the bug before it shipped - how many people remember (or realised) that although it had a RightShift instruction, it didn&#x27;t have a LeftShift? They deleted it from the spec because it was buggy.<br> <p> So some people investigated, and discovered that the instruction code for LeftShift actually did a LeftShiftAndIncrement :-) Which - I believe - found its way into the spec because people used it ... :-)<br> <p> Cheers,<br> Wol<br> </div> Sun, 10 Jan 2021 09:46:54 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842189/ https://lwn.net/Articles/842189/ NYKevin <div class="FormattedComment"> To be fair, this is a natural outcome of the economics of modern hardware production.<br> <p> I&#x27;m not a hardware expert, but my simplified and (probably) wrong understanding is as follows: You can&#x27;t economically turn out perfect silicon every time. Instead, the production process has a certain percentage of errors, and you try to lower this percentage as much as possible, but eventually you start running into diminishing returns.
The manufacturer (hopefully) discovers those errors autonomously with a standardized testing process, but then you have to decide what to do with the bad silicon:<br> <p> 1. Throw it away (or recycle it, to the extent that it is practical to do so).<br> 2. Turn off the bad parts of the silicon, and sell the portion you didn&#x27;t turn off at a discount.<br> 3. Use firmware or microcode to correct the problem, and (if performance is impacted) sell it at a discount.<br> 4. Sell it as-is and hope nobody notices.<br> <p> Option #1 is obviously the least profitable option, and options 2-4 can all be accurately described as &quot;selling silicon with known hardware bugs.&quot; But IMHO only option #4 is actually dishonest. I also tend to imagine that #4 is rare, at least for the serious manufacturers. Consider how easily the FDIV bug was found - there&#x27;s always going to be some person who is stress testing your hardware in a &quot;weird&quot; configuration, and if there&#x27;s a bug, that person has a fairly good chance of finding it.<br> <p> (OTOH, Spectre took multiple decades to find, and was endemic across the entire industry for that whole time, so maybe I&#x27;m a bit optimistic here.)<br> </div> Sun, 10 Jan 2021 03:44:51 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842178/ https://lwn.net/Articles/842178/ ms-tg <div class="FormattedComment"> <font class="QuotedText">&gt; 2021 will be remembered as the year of the ARM desktop</font><br> <p> +1 to this. Year of the ARM desktop, laptop, server, and mobile. 
In other words, a fundamental realignment where today&#x27;s non-ARM x86 starts a slide from dominant to legacy to niche.<br> <p> Would love to hear and understand contrary predictions!<br> </div> Sat, 09 Jan 2021 20:36:55 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842175/ https://lwn.net/Articles/842175/ Wol <div class="FormattedComment"> &quot;It only runs on a clean install&quot;?<br> <p> I bought a Terry Pratchett game for Win95 - I don&#x27;t think I got much further than the first few &quot;rooms&quot; whatever they were. And inasmuch as I could find reports for it, it was very temperamental about the hardware, and advice was &quot;re-install 95, install the game, don&#x27;t install anything else&quot;. Not good news if you don&#x27;t have a spare PC ...<br> <p> Cheers,<br> Wol<br> </div> Sat, 09 Jan 2021 17:30:44 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842159/ https://lwn.net/Articles/842159/ mpr22 <div class="FormattedComment"> <font class="QuotedText">&gt; what would happen if everyone who bought a buggy video game demanded a refund?</font><br> <p> It depends. How buggy is &quot;buggy&quot;?<br> </div> Sat, 09 Jan 2021 12:22:27 +0000 It will become possible to submit kernel patches without touching an email client https://lwn.net/Articles/842145/ https://lwn.net/Articles/842145/ nilsmeyer <div class="FormattedComment"> <font class="QuotedText">&gt; As of January 2021, it&#x27;s not that hard to set up your own mail server, and configure SPF/DKIM/DMARC in a way that gives you good deliverability into Gmail/Hotmail/etc. I&#x27;ve had my own mail server for years. </font><br> <p> I run my own e-Mail server; it&#x27;s usually on auto-pilot with me just doing upgrades. But every once in a while an issue crops up. For example, I only noticed that there was an issue with IPv6 after a friend told me and provided the logs from his server. I won&#x27;t get that kind of service from Gmail etc.
<br> <p> One of the first things I did as a young apprentice was to set up a new e-Mail / spam filter cluster for a small ISP so I have almost 20 years of this under my belt. The software stack didn&#x27;t change much, still using Exim, still using Dovecot, replaced spamassassin with rspamd some time ago. Dovecot especially has been a joy with barely any need to do major changes. <br> <p> <font class="QuotedText">&gt; There are still tons of businesses running their own mail server or using something other than the Big Ones; businesses that do use Google, etc. will not be amused if they can&#x27;t communicate with clients or vendors. </font><br> <p> As long as Microsoft still offers Exchange on Premise that should remain the case. They could of course at any time decide that they really want customers to use their cloud instead. That worries me. At the same time Microsoft Outlook ruined e-Mail for a lot of people, I can always tell when I&#x27;m communicating with an Outlook User - full quote of all prior e-Mails, HTML signatures, the German version for some reason not using Re: but AW: for replies... <br> </div> Sat, 09 Jan 2021 07:00:37 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842144/ https://lwn.net/Articles/842144/ nilsmeyer <div class="FormattedComment"> <font class="QuotedText">&gt; The gaming industry already does this at the software level. For now, they have not yet realized that they can charge for the day 1 patch, but it&#x27;s only a matter of time IMHO.</font><br> <p> I&#x27;m not sure about that. So far the software world has been mostly untouched by it, but you have a legal right to get a product that is in working order when you pay for it. At the moment, software quality issues are often seen as force majeure, an accident. Even the language used is often invoking that image, &quot;computer bug&quot;, &quot;data leak&quot; and so on. What it really is is often human error. 
And there are real costs associated - I&#x27;m mostly talking about the b2b sphere here but what would happen if everyone who bought a buggy video game demanded a refund? <br> </div> Sat, 09 Jan 2021 06:51:34 +0000 It will become possible to submit kernel patches without touching an email client https://lwn.net/Articles/842142/ https://lwn.net/Articles/842142/ Cyberax <div class="FormattedComment"> There are companies that sell you pre-configured mail servers. They are not doing well.<br> </div> Sat, 09 Jan 2021 02:35:07 +0000 It will become possible to submit kernel patches without touching an email client https://lwn.net/Articles/842141/ https://lwn.net/Articles/842141/ liam <blockquote>But if the big mail-providers EE&amp;E in a way that screws us over, how many of us know how to set up our own mail servers?</blockquote> <br><br> Every problem can be rewritten as a marketing opportunity. Sat, 09 Jan 2021 02:26:08 +0000 Email Servers https://lwn.net/Articles/842137/ https://lwn.net/Articles/842137/ zlynx <div class="FormattedComment"> For those of us that have done email for years setting up everything does seem pretty easy. But coming into it cold is hard. New people don&#x27;t know everything or how it needs to work together.<br> <p> But, I do remember some useful Linux Journal articles. There may have been a LWN article or two as well.<br> <p> I do mine on Fedora Linux with Exim, Cyrus IMAP, and SpamAssassin. Exim needs to be configured to deliver to Cyrus properly. At one point I had to adjust the SELinux rules for that but I think that&#x27;s part of the Fedora packages now. There&#x27;s also SSL certificate configuration that has to be turned on and adjusted to work with LetsEncrypt.<br> <p> And then there is DNS with MX, SPF and DKIM and DMARC stuff. 
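For a hypothetical example.com, those records might look roughly like this (the selector name and the policies shown are illustrative, not recommendations):

```
example.com.                  MX   10 mail.example.com.
example.com.                  TXT  "v=spf1 mx -all"
sel1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```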
And I enabled DNSSEC on my domain, which I&#x27;ve heard gives you another few bonus reliability points.<br> <p> And if you cannot get a static IP then you have to do all of this in the cloud, and optionally configure a VPN and forward your email servers to your home network.<br> </div> Sat, 09 Jan 2021 01:31:06 +0000 It will become possible to submit kernel patches without touching an email client https://lwn.net/Articles/842130/ https://lwn.net/Articles/842130/ dskoll <p>As of January 2021, it's not that hard to set up your own mail server, and configure SPF/DKIM/DMARC in a way that gives you good deliverability into Gmail/Hotmail/etc. I've had my own mail server for years. <p>I can't see the Big Providers changing this any time soon. There are still tons of businesses running their own mail server or using something other than the Big Ones; businesses that <i>do</i> use Google, etc. will not be amused if they can't communicate with clients or vendors. <p>This may change in the future, but not for a while, I think. There... that's my reckless prediction for 2021. Fri, 08 Jan 2021 22:27:00 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842127/ https://lwn.net/Articles/842127/ Wol <div class="FormattedComment"> As a SUSE/KDE user, KDE4 was to me a disaster - I abandoned KDE.<br> <p> My main desktop was an Athlon Thunderbird (K8?), and I simply couldn&#x27;t log in. I don&#x27;t know how long it took to go from login screen to usable desktop - I think the longest I waited was 24-36 hours before I gave up.<br> <p> I went back once I managed to fix it (gentoo is wonderful here :-), but disasters like that cost you users...<br> <p> Cheers,<br> Wol<br> </div> Fri, 08 Jan 2021 20:55:08 +0000 LWN's Predictions are useful food for thought https://lwn.net/Articles/842116/ https://lwn.net/Articles/842116/ ber <div class="FormattedComment"> Jon,<br> <p> your predictions are a good idea, because they help to step back a bit and get ideas about the larger picture!
Thanks for so many years of LWN and its predictions!<br> <p> (I know there was some irony in the introduction about this not being a good idea. But nobody stated why the predictions are useful - they are, and it is not about being correct.)<br> <p> <p> </div> Fri, 08 Jan 2021 16:59:29 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842097/ https://lwn.net/Articles/842097/ rhdxmr <div class="FormattedComment"> BPF will gain more popularity in container environments and there will be a few BPF conferences offline.<br> </div> Fri, 08 Jan 2021 15:26:32 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842056/ https://lwn.net/Articles/842056/ idealista <div class="FormattedComment"> The prediction from a year ago:<br> <p> <a href="https://lwn.net/Articles/808260/">https://lwn.net/Articles/808260/</a><br> </div> Fri, 08 Jan 2021 10:45:51 +0000 Some unlikely 2021 predictions https://lwn.net/Articles/842046/ https://lwn.net/Articles/842046/ ecree <div class="FormattedComment"> Even if it is, I think our communities are robust enough to cope if someone key retires.<br> <p> Unless it&#x27;s Jon. I don&#x27;t know if I could get through the week without a dose of his wry yet informative humour :)<br> </div> Fri, 08 Jan 2021 04:34:35 +0000 It will become possible to submit kernel patches without touching an email client https://lwn.net/Articles/842044/ https://lwn.net/Articles/842044/ ecree <div class="FormattedComment"> <font class="QuotedText">&gt; It&#x27;s not as if SMTP is owned by anyone.</font><br> <p> But if the big mail-providers EE&amp;E in a way that screws us over, how many of us know how to set up our own mail servers? And how likely are we to learn in the meantime, given that interoperating with the big networks is already a massive PITA?
(Alexei was already worrying about this back at netconf ’19, see <a href="http://vger.kernel.org/netconf2019_files/netconf2019_slides_ast.pdf">http://vger.kernel.org/netconf2019_files/netconf2019_slid...</a> — &quot;Email is dying&quot;.)<br> <p> So yes, if Gmail etc. become unusable for kernel development, it would be *possible* for us to set up an entirely parallel network of &quot;engineers&#x27; email&quot;, that doesn&#x27;t follow Gmail et al&#x27;s lead on whichever of DKIM, SPF and DMARC it is that makes things hard. But it wouldn&#x27;t be *easy* and it wouldn&#x27;t happen overnight. Company IT departments won&#x27;t support it on their email servers, because their first priority is keeping the executives&#x27; email working. (Maybe some of them will run a second server for the engineers. But I think we&#x27;ve all encountered enough corporate IT to be pessimistic about that.) And we can&#x27;t start switching now, because we need to keep supporting developers who are on the existing systems (which are very picky about whom they accept mail from).<br> <p> I like the email workflow and want it to continue. But unless we can square this circle, sooner or later it is going to blow up in our faces.<br> </div> Fri, 08 Jan 2021 04:27:33 +0000