New leadership for Asahi Linux
Today's news is bittersweet. We are grateful to marcan for kicking off this project and tirelessly working on it these past years. Our community will miss him. Still, with your support, the project has a bright future to come. Martin has explained his reasons for leaving at length in this blog post.
Posted Feb 13, 2025 16:10 UTC (Thu)
by kragil (guest, #34373)
[Link] (7 responses)
Posted Feb 13, 2025 17:51 UTC (Thu)
by Lennie (subscriber, #49641)
[Link] (2 responses)
That does not seem normal (or at least it should not be).
Posted Feb 16, 2025 8:59 UTC (Sun)
by ssmith32 (subscriber, #72404)
[Link] (1 responses)
Or if it was just an unrelated, tragic, coincidence.
It seems unlikely to me that online discussions about the Linux kernel could result in stalking, no matter how harsh or toxic.
So I'm assuming the latter?
And I get why details wouldn't be shared, but it would be healthy for the community to know if LKML discussions triggered stalking or not.
It was a curious paragraph.
Posted Feb 16, 2025 10:41 UTC (Sun)
by ck (subscriber, #158559)
[Link]
Posted Feb 13, 2025 21:29 UTC (Thu)
by milesrout (subscriber, #126894)
[Link] (3 responses)
Posted Feb 14, 2025 8:44 UTC (Fri)
by kragil (guest, #34373)
[Link]
Posted Feb 14, 2025 15:19 UTC (Fri)
by marcan (guest, #103032)
[Link] (1 responses)
Posted Feb 14, 2025 17:46 UTC (Fri)
by jhe (subscriber, #164815)
[Link]
Posted Feb 13, 2025 16:14 UTC (Thu)
by koverstreet (✭ supporter ✭, #4296)
[Link] (181 responses)
Posted Feb 13, 2025 16:38 UTC (Thu)
by mb (subscriber, #50428)
[Link] (170 responses)
I think it's time for *all* good non-abusive people to leave the Linux project and start a new kernel project on the base of modern development technologies.
Posted Feb 13, 2025 16:58 UTC (Thu)
by alx.manpages (subscriber, #145117)
[Link] (13 responses)
You're right now accusing Kent of being an asshole, without explicitly saying so. I find that unacceptable. I was on the verge of calling you exactly that a few hours ago. You certainly earned it. I preferred to respond to you a few hours later, once I had calmed down.
I don't like being yelled at. Yet I have deserved it more than once, and I appreciate that people have yelled at me. Maybe it wasn't the nicest way they could have told me, but I learned from them anyway. People aren't perfect, and they might express their voice in different ways. For example, Theo de Raadt has yelled at me repeatedly, and I appreciate it; he made me see his points, and he was right most of the time (not always, but that's another story).
Whoever is free of sin should cast the first stone.
Posted Feb 13, 2025 17:06 UTC (Thu)
by mb (subscriber, #50428)
[Link] (8 responses)
Pardon?
>You're right now accusing Kent of being an asshole
... Wow.
> I was at the verge of calling you explicitly that a few hours ago.
You seem to be a very lovely person.
Posted Feb 13, 2025 17:06 UTC (Thu)
by corbet (editor, #1)
[Link] (1 responses)
Posted Feb 14, 2025 1:39 UTC (Fri)
by jmalcolm (subscriber, #8876)
[Link]
Posted Feb 13, 2025 17:09 UTC (Thu)
by alx.manpages (subscriber, #145117)
[Link] (5 responses)
I'm referring to <https://lwn.net/Articles/1009296/>.
> I have never said or even implied that.
Then I misunderstood. I'm very sorry for that. Given the context (link above, and Kent's recent issues with heated discussions), that is how I interpreted it. I'm happy that it was a misunderstanding.
Posted Feb 13, 2025 17:26 UTC (Thu)
by mb (subscriber, #50428)
[Link] (4 responses)
Ok, I guess I should have written this more elaborately yesterday. So I'll try to do that now:
I read a couple of your comments and basically all of them had the "typical" AI structure to me. That might have been coincidence or not. I didn't know, of course.
I'm not reading AI articles or AI comments or articles and comments that *look* like it.
I'm very sorry that this has apparently upset you a lot.
> Given the context (link above; and Kent's recent issues for heated discussions),
Of course I implicitly referred to these issues.
I fundamentally disagree with you that being yelled at is a good thing or is even remotely acceptable.
Posted Feb 13, 2025 22:01 UTC (Thu)
by alx.manpages (subscriber, #145117)
[Link] (3 responses)
Thanks!
> I read a couple of your comments and basically all of them had the "typical" AI structure to me.
I think we should not accuse humans of looking like AIs, at least in certain contexts where we can assume there's a certain person behind the username. If we start having to prove that we're humans, we'll be in trouble.
Of course, if you're talking to a random reddit new user, you might very well do that, but I think we shouldn't do that here.
> That's all. Really.
Apologies accepted. It was just a misunderstanding. I'm happy now. :)
Please accept mine for the heated response from earlier today.
> I fundamentally disagree with you that being yelled at is a good thing
I don't think it's always the best thing, but for example, today it was useful, I think. If I hadn't yelled at you, you wouldn't have explained your point, and I would have continued thinking you're an asshole, and you'd have continued thinking I might be a chatbot. Me yelling at you has been beneficial in this case, I think. Things aren't black and white, but grey.
Having said that, I think yelling is usually not useful, and I personally don't use it often (you might have gotten a bad impression earlier today, but it's rather unusual for me to yell). I just understand that some people might have different thresholds for yelling (and for being yelled at), and I accept them.
Cheers,
Posted Feb 15, 2025 23:24 UTC (Sat)
by alx.manpages (subscriber, #145117)
[Link] (2 responses)
Posted Feb 13, 2025 17:07 UTC (Thu)
by koverstreet (✭ supporter ✭, #4296)
[Link] (3 responses)
I didn't feel I was being called an asshole, so let's just let that go.
We've got some real and legitimate frustrations about the way things are operating, but pulling a BSD isn't the answer. We need to get better at working together; giving up and leaving isn't the answer either.
We do need leadership that is a bit less out of touch - Christoph's been a problem for a long time, and him getting a pass while Hector got chewed out wasn't good.
But it'll also help if we can avoid venting too much at the maintainers who have been around the longest and done the most, so - perhaps a more respectful attitude all around.
Posted Feb 13, 2025 17:34 UTC (Thu)
by mb (subscriber, #50428)
[Link]
Posted Feb 14, 2025 2:05 UTC (Fri)
by jmalcolm (subscriber, #8876)
[Link]
Thank you for the filesystem by the way. I wish it was a bit easier to use on a root partition but I know we will get there.
Posted Feb 14, 2025 6:32 UTC (Fri)
by joib (subscriber, #8541)
[Link]
I think a large part of the reason for the dominance of Linux in so many areas today is that disparate interests have been able to collaborate, thus saving a lot of wheel-reinvention and building up momentum. Whether you attribute this to the copyleft license, Linus's personal leadership, some kind of "development culture", or whatever, surely the BSD-style solving of disagreements by taking your ball and going home is one of the big reasons why the *BSDs are relatively marginal today, and Linux isn't.
I'm not a kernel developer so I don't have a part in this fight, but it seems that Linus needs to show some leadership here: either overruling Christoph, or declaring the RfL experiment failed so the people involved in it don't need to waste more effort on a futile endeavor.
Posted Feb 13, 2025 18:31 UTC (Thu)
by ferringb (subscriber, #20752)
[Link] (153 responses)
What would that even actually look like, let alone the 'how'? That's not a "this is stupid", it's an actual "htf would that pig gain wings" question.
Frankly, trying to deal with kernel code makes me want to drink a liter of whisky, but there's a ton of momentum and knowledge bound up in the current culture, and a *lot* of that code is fairly brutal to try to learn: both the inherent complexity, and just stupid shit like documentation being nonexistent. That's a lot to displace with an alternative.
All of this I know you know, hence wondering what realistic alternative there could even be at this stage, sans something like FANG funding a fork, which has its own issues.
Posted Feb 13, 2025 19:02 UTC (Thu)
by mb (subscriber, #50428)
[Link] (152 responses)
Also, big rewrites have always happened frequently. Linux itself is such a project. Or a smaller but more recent example: coreutils/uutils.
I have deliberately not detailed the "how". But I would not be surprised at all if at some point a project suddenly pops up that does it *somehow*.
New operating systems pop up and vanish all the time. They are laughed at, maybe. Including Linux in 1991.
>Frankly trying to deal w/ kernel code makes me want to drink a liter of whisky
Yes, but that doesn't mean it has to be like that.
>hence wondering what realistic alternative there could even be
Would it have looked realistic to talk about an xfree86 fork in the early 2000s? Or OO.org? Not to me. It still eventually happened somehow, though.
Posted Feb 13, 2025 20:12 UTC (Thu)
by ferringb (subscriber, #20752)
[Link] (150 responses)
Possibly debating for the sake of debating, but my recollection from then is that both communities were basically disgruntled and then the owners made a forced error which kicked it off.
Xfree86 did the license crap; Oracle abandoned OOo but kept the reins. My point is that both communities, from what I recall, were ripe for bailing, and the owners basically forced it. Akin to redis and valkey, or terraform and opentofu. Your recollection of those events, specifically your vantage point, isn't mine; I was on the corp FOSS/distro side for that. IE, I could be talking out of my ass.
> I think a huge part of that is due to the programming language that some of the maintainers love so much.
There is a deep irony in the complexity of kernel tooling, macro and attribute usage to try to make C safer and more streamlined and to avoid gotchas, contrasted against the opposition to a language that fundamentally eliminates half that crap while providing a stronger framework for building even more crazed safety/robustness tooling. Baking RAII, state progression, all of that into typing and ownership is a no-brainer. It's not even "chuck it all and rewrite"; it's a pretty smooth onramp that yields gains as things go. A bit more complex than jumping between shitty C standards, but that's how I view it.
Either way, even if it's doomed to fail I'd love to see a space where R4L (and other kernel culture/process improvements) seem like a collaborative effort across the board. Including a reset of what's considered acceptable behavior as cultural norms, in particular.
No argument for any of your other points, even if I disagree w/ the manpages/coreutils maintainer's view of "I can do C safely" vs my view of "I make mistakes, I want tooling that prevents the bulk of it". It's a bit blunt in phrasing, but again, I come at this from a tooling angle.
Posted Feb 13, 2025 23:00 UTC (Thu)
by butlerm (subscriber, #13312)
[Link] (149 responses)
That said it is probably always going to be an uphill battle to persuade a large development community to abandon or even gradually abandon an existing well established language ecosystem for an entirely new one, no matter what the advantages are.
And this is especially the case because it is far from certain that C compilers, optimizers, and linkers will not be developed that remove the vast majority if not all of C's and C++'s undefined behaviors without a performance penalty - not only without a performance penalty but with a performance increase. There are people working on that right now that have been mentioned here on LWN.net and from what I can see they have been making pretty good progress.
Furthermore, it is far from established that Rust would be the best memory-safe target language to convert something as large as the Linux kernel code base to anyway. There are much better-known languages, probably easier to learn than either Rust or contemporary C++, that may be better choices for that. A Pascal derivative like Modula-3, with the appropriate improvements, is a good example.
I can think of a number of other possibilities that could work including appropriately statically compiled, linked, and optimized C# or other languages from the .NET world. Perhaps someone should try that as an experiment as well. Or something like Ada with the appropriate extensions perhaps. Or a new language designed and developed for the purpose. Ada is forty years old after all. The Go language with the appropriate extensions comes to mind as well.
Posted Feb 13, 2025 23:30 UTC (Thu)
by khim (subscriber, #9252)
[Link] (34 responses)
We can be 100% sure that it'll never happen. It's impossible to eliminate all undefined C and C++ behaviors. Mathematically impossible. And elimination of the majority, while possible, is not happening either. That, essentially, means that C would, eventually, be replaced. Some people, like marcan, just couldn't accept the fact that it would only be replaced, ultimately, together with a significant percentage of C developers. That's the usual way such things happen, which is sad, but also, unfortunately, inevitable.

Where can we read about said progress?

The core thing, an affine type system, would be there; it's simply inevitable: we don't have any realistic alternative. But yes, Rust may not be the best choice… only we don't have any other, right now. Modula-3 is not memory safe, thus it's a non-starter. Ada may qualify; it got a way to be memory safe around five years ago. But I seriously doubt it would be better accepted than Rust. The same is true for Swift. What other alternatives are there?

100% non-starter. These languages base their memory safety on an entirely different foundation, tracing GC. Not something you may bring into a kernel (at least not into the Linux kernel).

Is this an attempt to write a post with an AI generator or something? To include so many "non-starter" ideas in one plausibly-sounding post… I have no idea how to achieve that without AI.
Posted Feb 13, 2025 23:51 UTC (Thu)
by koverstreet (✭ supporter ✭, #4296)
[Link] (4 responses)
Automatic memory management, and it's still pretty tied to Apple.
Posted Feb 13, 2025 23:56 UTC (Thu)
by khim (subscriber, #9252)
[Link] (3 responses)
That's what I meant: while Rust has acceptance issues… the chances that Linux developers would accept something under the full control of Apple are much smaller. Sure, that's less interesting than affine types – but it's still memory safety without tracing GC and thus, at least in theory, compatible with the needs of a kernel in the current "thou shalt not use memory-unsafe languages" era. But the previous item makes it a non-starter.
Posted Feb 14, 2025 14:04 UTC (Fri)
by emk (subscriber, #1128)
[Link] (2 responses)
Posted Feb 14, 2025 14:06 UTC (Fri)
by koverstreet (✭ supporter ✭, #4296)
[Link] (1 responses)
Posted Feb 14, 2025 17:35 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Feb 14, 2025 0:20 UTC (Fri)
by dskoll (subscriber, #1630)
[Link] (2 responses)
I don't know Rust at all and I'm not a kernel programmer, but khim's comment above made me go and research what "affine types" are, and led me down quite an enjoyable little rabbit hole, to the point where I finally get (I think) what the big deal about Rust is.
So thank you!
Posted Feb 14, 2025 8:29 UTC (Fri)
by 4761 (guest, #165801)
[Link] (1 responses)
Posted Feb 15, 2025 2:31 UTC (Sat)
by ebiederm (subscriber, #35028)
[Link]
https://blog.adacore.com/when-formal-verification-with-sp...
For what is possible I recommend looking at separation logic. One of the original papers includes a memory safety proof of an algorithm that reverses a linked list and manually manages memory.
The references in the Wikipedia page are a good starting place to find out more about separation logic. https://en.m.wikipedia.org/wiki/Separation_logic
Last I looked, it is still an open question how to convert separation logic into a type system for a language that manually manages memory.
Still, separation logic gets used in lots of interesting proofs about manual memory management. So any serious solution to manual memory management and security is going to need to interact with separation logic at some point.
The key property of separation logic is being able to show that one piece of code and data does not interact with another piece of code and data. That property is needed for any kind of reasoning that scales to the millions of lines of code in current code bases.
If the solution for showing separation is sufficiently granular, a proof that "free" and the data passed to it do not interact with anything else in the code base is enough to show that calling "free" is memory safe. Which potentially allows machine-checked memory safety for all of the existing data structures in the kernel.
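For the curious, the two separation-logic rules behind this kind of local reasoning can be stated compactly (standard notation, sketched from the literature rather than quoting any particular paper):

```latex
% The frame rule: a proof about a command C's small footprint P extends
% to any larger heap P * R, provided C does not modify variables free in R.
\frac{\{P\}\ C\ \{Q\}}
     {\{P * R\}\ C\ \{Q * R\}}
\qquad \mathrm{mod}(C) \cap \mathrm{fv}(R) = \emptyset

% A local specification of free: it consumes exactly the one heap cell
% it is passed, leaving the empty heap -- nothing else can be affected.
\{\, x \mapsto v \,\}\ \texttt{free}(x)\ \{\, \mathsf{emp} \,\}
```

The separating conjunction `*` asserts that its two halves describe disjoint heap regions, which is exactly the "does not interact" property described above.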
Posted Feb 14, 2025 3:04 UTC (Fri)
by butlerm (subscriber, #13312)
[Link] (25 responses)
As far as progress goes, I believe Google is your friend. Articles on the subject have appeared here and on a couple of other sites within the past month or two. As far as Modula-3's memory safety is concerned, same answer as with C - specify exact implementation defined behavior so code that is memory safe can be written and demonstrated to be memory safe. As far as all the suggested languages are concerned I suggested that they would be worth trying as experiments - or even research projects. I can't quite see the great harm that would come from that.
As far as languages that normally use a tracing garbage collector are concerned, I am not sure where you get the idea that they must use tracing garbage collection instead of something like an internal implementation of reference counting with extensions to break cycles, or why they would require embarrassing pauses, or have performance characteristics worse than RCU or a slab allocator for that matter.
The languages I mentioned are just the beginning. With a little bit of software engineering you could translate the entire Linux code base into an appropriate dialect of Python and with appropriate compiler technology compile it to have performance greater than that exhibited by the C code base right now. I am pretty sure that would also be possible with languages like Fortran, Java, and Eiffel with the appropriate extensions.
Not to mention assembly language with automated translation of assembly code from one CPU architecture to another. With a little creativity one could write and maintain the entire kernel in a close derivative of 68000 assembly language and automatically cross-assemble it into every target architecture the Linux kernel supports - *and* check for and maintain all the safety guarantees expected of a modern system-level programming language. That is probably not the best way to maintain a kernel, but it would be hard to beat a kernel written in assembly language for performance. Netware 386 was written in assembly language for a reason.
None of this stuff is impossible or has been shown to be impossible. Anyone who thinks it is simply lacks imagination or is too dedicated to one possible solution to carefully consider any of the others.
Posted Feb 14, 2025 4:12 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Erm... What?
Posted Feb 14, 2025 8:30 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
???
Forth, anyone?
Cheers,
Posted Feb 14, 2025 11:51 UTC (Fri)
by khim (subscriber, #9252)
[Link]
Every layer of abstraction adds overhead. Forth is no exception. It's slower and less efficient than assembler… but also more compact! That's a remarkable, but not unique, quality. Microsoft allegedly used some variant of p-code in its early software for the same reason.
Posted Feb 14, 2025 8:54 UTC (Fri)
by taladar (subscriber, #68407)
[Link]
Posted Feb 14, 2025 10:33 UTC (Fri)
by khim (subscriber, #9252)
[Link] (20 responses)
Who told you I'm here "to win friends and influence people"? I'm here to learn… if others have anything worthwhile to teach me. It happens.

This phrase just means that you are an idiot or you think I'm an idiot… is it an insult? You can treat it as one, if you want, but that's just the fact. Rice's theorem couldn't be "fixed" in the idiotic way that you propose because it, fundamentally, means that it's impossible to transform a program in any way (except purely syntactically) and then ask "does this program still perform in the exact same way or not". The question of how big the program has to be before that question becomes unanswerable is interesting, but only in a purely theoretical sense: in practice verifiers start to require more power and time than we may ever have very quickly, before the size of the program reaches anything resembling an OS kernel, even a primitive OS kernel (like MS-DOS).

Because an "internal implementation of reference counting with extensions to break cycles" is simply one way to implement a tracing GC? The fundamental property of a tracing GC is that it insists it should know everything about all references from and to all objects anywhere in the system. "Extensions to break cycles" share that property and thus split the whole world in two: the part handled by the GC and the part that's not. And then we are back to square one, because someone, somehow, has to write and support that "world that's not handled by GC".

I would rather say they are the end. Of the discussion, not of Rust or Rust for Linux, of course. They show that you don't understand why Rust exists, why Rust for Linux exists, and what problem they are solving. Because they are not trying to solve any actual technical problem; rather, their existence and popularity have shown us that we hit something that's impossible to solve by technical means and it's time to "redefine the task".

In particular, it's impossible to create a language for the "we code for hardware" guys which would both produce optimal (or close to optimal) results and not turn the program into a pile of goo when certain hard-to-test-and-detect conditions are violated. The solution to that social problem is social and very well known, too: the one-funeral-at-a-time approach works reliably… others… not so much. Rust for Linux is an interesting experiment to see if that problem can be solved without throwing out the work of thousands of people and starting from scratch. It's very interesting, and it would be cool to see if Linus could pull off a third revolution (after Linux itself and then Git), but for all his successes in years past I wouldn't expect him to mass-convert Sauls to Pauls… that's not how humans work. Some people would be converted… but most wouldn't. Would that be enough to save Rust for Linux (and, by extension, Linux)? I have no idea, really. But I know that expecting technical solutions to social problems is foolish, and expecting to change a social problem by decree of the top guys simply wouldn't work.

Yet another idiotic proclamation not justified by anything. I guess if you define "appropriate dialect of Python" as "something that looks like a Python program from the outside but can be syntactically translated to C" (because a purely syntactical change sidesteps Rice's theorem) then it may even be true, but I doubt you meant it like that.

These things exist, but you lose anywhere between 50% and 90% of performance. Precisely because you have to spend a lot of resources emulating behavior that's not needed at all – and it's not possible to detect that and skip the emulation (back to Rice's theorem, again). The more efficient and tricky your assembler program, the larger the slowdown. To faithfully emulate all aspects of a 3.58 MHz SNES, and make all assembler programs with all the nasty tricks their authors invented work, one needs a 3 GHz CPU.

Hardly a way to produce an OS that real people would use for real work. Another iAPX 432? It would work exactly as "efficiently" (that is: very inefficiently) and would achieve exactly the same level of success as the original. Possible? Maybe. Feasible? Most likely no. The majority of the Linux kernel code is in drivers, and a 3x-10x slowdown and drivers don't play well together: hardware loses patience and state, and then nothing works.

Transpiled from one CPU architecture and thus at least 3x to 10x slower? Easy. Not all languages would be able to beat it, but many are, and that's enough.

Sure. And that reason was: it had no need to support anything beyond the 386 and no need for anything beyond file and printer sharing. When people started demanding more things from their network OSes it died extremely quickly. There are lots of OSes written in assembly language still in use. You never hear about them because, these days, they need emulators and thus are much slower than anything else.

And anyone who tells tales that are not justified by history, experience or even math is just a buffoon. And you sound like someone who has never done any work in any relevant area, be it binary translation, language development or OS kernel development… what makes you sure you can teach anything worthwhile to anyone who has done such work?
Posted Feb 14, 2025 14:32 UTC (Fri)
by james (subscriber, #1325)
[Link] (19 responses)
Incidentally, on Netware performance, he visited them in "1992 or so" and reported:
Posted Feb 14, 2025 16:41 UTC (Fri)
by branden (guest, #7029)
[Link] (18 responses)
I found this part to be a major takeaway:
"...if some human mind created something then your human mind can understand it. You should always assume that, because if you assume it, it throws away all the doubts that are otherwise going to bother you, and it's going to free you to just concentrate on 'what am I seeing, how is it working, what should I do about it, what am I trying to learn from this'. Never, ever think that you're not smart enough; that's all nonsense." -- Robert P. Colwell
Fortunately for cowboys and rock stars in kernel programming as elsewhere--and the managers who enable them--too few people hear or heed this advice. In these circles, it is believed that one's odds of professional advancement improve as one more convincingly illustrates that one's colleagues are intellectually hamstrung. To put it nicely.
I'm no fan of Intel as a firm but have found much to admire architecturally in their failed experiments, like the 432, the i960, and, yes, the notorious IA64 (Itanium)--I gather that much of what the last demanded of compilers, drawing furious opposition that labeled the machine dead on arrival, is now done by compilers as a matter of routine. If that's correct, it's not too different from the Ada/i432 situation.
Observing the tenor of LKML's most heated disputes, we might say that only a second-rate hacker learns from his own mistakes, let alone draws a lesson from anyone else's. A consensus that certain mistakes didn't happen, or won't be aired before an unvetted audience, may be a defining trait of broligarchy.
Posted Feb 14, 2025 17:20 UTC (Fri)
by khim (subscriber, #9252)
[Link] (6 responses)
Nope. Not even remotely close. VLIW was built on the premise that scheduling could be done in the compiler. In advance. Before anything is known about the actual availability of data. That's what crashed and burned Pentium 4 and the Itanic, and even for GPUs it proved not to be feasible once the pipelines there became diverse enough. And no, compilers couldn't do that. They can kinda-sorta-maybe a tiny bit simulate it with PGO and other tricks (used by BOLT, etc.), but these are still very far from the precision needed to make VLIW viable.

It's only marginally different. The i432 created an architecture which couldn't be fast, and the compiler was supposed to, somehow, make it fast. And the Ada team just completely bombed that and didn't even try. With the Itanic the compiler team actually tried. And they even had some achievements… but they could never approach the efficiency of a hardware scheduler, because the hardware scheduler is much more primitive but has much more precise data. When access to memory (main RAM, not L2 or L3 cache) takes 500 CPU cycles… mistakes in scheduling are extremely costly, and compilers couldn't prevent them.

Yet you apply it entirely wrongly. Instead of trying to understand how things work (and yes, everything that a human invented can be understood by another human), you just look at things, pat yourself on the back telling yourself that you can understand things… and then, without actually taking the time and effort to understand them (hey, that's a long and complicated process, let's skip it)… you produce output "as if" you actually understood them.
That's, essentially, what LLMs are doing: they couldn't understand what they are saying, and their goal is not to understand anything but to create a sequence of words that a human reader would perceive as having some meaning… the same means produce the same results; only in the case of LLMs are we permitted to call that sequence of words "hallucinations", but if a human does the same then it's an "opinion"… and we have to respect it — but why?
Posted Feb 14, 2025 18:21 UTC (Fri)
by branden (guest, #7029)
[Link] (5 responses)
>> I found this part to be a major takeaway
> Yet you apply it entirely wrongly.
Uhh, thanks for the insult. The earlier part of your response had utility, though you offer more than baseline reasons to not swallow your claims uncritically. In practically every comment you post to LWN, you wear your motivated reasoning on your sleeve.
> Instead of trying to understand how things work (and yes, everything that human invented could be understood by another human) you just look on things, pat yourself on the back telling yourself that you can understand things… and then, without actually taking time and effort to understand them (hey, that's long and complicated process. let' skip it)… you produce the output “as if” you actually understood them.
Yup, you got me, that's what I do all the time, in every case. You've discovered a completely general truth!
Fortunately it's one that applies only to other people (or maybe just me)--not to you.
> That's, essentially, what LLMs are doing: they couldn't understand what they are saying and their goal is not to understand anything but to create sequence of words that human reader would perceive as having some meaning… the same means produce the same results, only in case of LLMs we are permitted to call that sequence of worlds “hallucinations”, but if human does the same then it's “opinion”… and we have to respect it — but why?
I'm disinclined to think that LLMs are equivalent to what is (or used to be?) called "general intelligence". Perhaps inadvertently, you've supplied a reason to reconsider my presupposition. We can be confident that even if an LLM proves equivalent to a general intelligence (however defined), your own personal state of evolution will prove so advanced that it can't simulate you. You've got the spotlight--keep shreddin', bro!
Posted Feb 14, 2025 18:58 UTC (Fri)
by khim (subscriber, #9252)
[Link] (4 responses)
If that was an attempt at sarcasm, then you failed. Of course it doesn't apply "only to other people"… the human brain is hard-wired to skip thinking as much as it can (it's a costly, very power-consuming process, after all) and I'm not an exception. But I make a conscious effort to double-check what I write by looking at various sources (including, but not limited to, Wikipedia) and don't demand "respect for my opinion". My "opinions", like everyone's, are worth nothing. And if my facts or my reasoning are incorrect then they can be shown to be wrong by pointing to the appropriate contradiction in them. But please don't tell tales about how I should tolerate nonsense because… what would it give me? Friends? Why would I need friends on some remote forum, be it LWN or Reddit? I have them in "real life"™.

LLMs would never be able to "prove equivalent to a general intelligence". What they have achieved, at this point, is the ability to regurgitate things that were told by others. And it's fascinating (and a tiny bit scary) how often people do the same. But it's unclear when and if LLMs would get something that would teach them to do something more. Most likely that moment is still far away, most likely a decade or more away… and when it comes they wouldn't be called LLMs.

Yes. That's the whole point: you may see it and you may challenge it. If you can. The whole idea is not to convince someone to "respect my opinion", but to either accept my reasoning or show me where it is wrong.
Posted Feb 14, 2025 20:17 UTC (Fri)
by branden (guest, #7029)
[Link] (3 responses)
> If that was an attempt to sarcasm then you failed.
More like a transparent attempt to flatter your gargantuan ego, since the tenor of your remarks over several years on LWN is that this is a concession you demand in conversation--albeit one that you would seem to prefer not to be gauchely and overtly labelled as such.
> Of course it doesn't apply “only to other people”… the human brain is hard-wired to skip thinking as much as it can (it's costly, very power-consuming, process, after all) and I'm not an exception.
This sounds like the sort of trivial observation one might find at LessWrong, where the admission of universal limitations is a form of currency accepted as a proxy for personal humility. I don't buy it.
> But I make conscious effort to double check what I write by looking at the various sources (including, but not limited, to Wikipedia)
Good for you.
> and don't demand “respect to my opinion”. My “opinions”, like everyone's are worth nothing.
Your model of opinion utility is useless. You might as well call opinions "noise" in the information-theoretic sense; that would make your epistemology more clear to your interlocutors.
I offer, without expecting you to accept, an alternative model. Opinions are a form of knowledge the strength of which derives from the expertise and reliability of the authority uttering them. Expert opinions are valued as evidence in courtrooms because a vetting process is applied to experts. For academic expert witnesses, that process involves verification (well, recitation, at least) of their credentialing by accredited authorities on top of personally authored, peer-reviewed, topically applicable research. That is not the standard we apply to Internet discussion forums or chat around the water cooler in the office; nevertheless we similarly apply weight to a person's claims depending on what is known about their relevant expertise and their past record of reliability.
For example, I found myself reading Robert P. Colwell's oral history interview closely and attentively despite an active disinterest in x86 architecture because (1) he had evident, relevant expertise in the field of computer architecture generally, in which I'm intensely interested and professionally adjacent; and (2) he reached conclusions that seemed consistent with my own <mumble> years of experience in software engineering; and that, bluntly, I was inclined (or biased) to believe already.
I've had numerous encounters with people like you who punk on others for not knowing things. You might concede, in a LessWrong sense, that we are all creatures that wallow constantly in ignorance. Nevertheless when correcting others you do it with an excess of derogation, awarding to yourself the privilege of expressing contempt based solely (or sufficiently) on your "conscious effort to double check what you write by looking at the various sources (including, but not limited, to Wikipedia)". Well, bully for you. You checked Wikipedia. Maybe something else.
The contemptible thing about your comment was that you took the statement from Colwell that I quoted, claimed to agree with it, applied it with ludicrous specificity to a tentative claim I made about computer architecture and the history of compiler development, then proffered your (purportedly) superior and conflicting knowledge. Thus, rather than upholding Colwell's perspective as you claimed to do, you undermined it, by counseling others that their minds are inferior to yours--the opposite of Colwell's lesson.
> [I] don't demand “respect to my opinion” ... please don't tell tales about how I should tolerate nonsense
Who said you should? And incorrect statements are not always "nonsense". As noted above, your model confuses simple ignorance with carelessness, both of these with malicious deception, and all of the foregoing with noise in an information channel.
It's not the world's responsibility to give you a raw, noise-free information channel to inerrant truth. (Beware those offering one: they're building a cult.) It's your responsibility to identify imperfect paths to truths that will suffice. Your policy of nondiscrimination among the varieties of imperfect paths to knowledge leads you, apparently, to an intolerant tone and arrogant modes of expression. Alternatively, you already preferred that tone and mode, and developed (or borrowed) an epistemology to justify them.
> what would it give me? Friends? Why would I need friends on some remote forum, be it LWN or Reddit ? I have them in “real life”™.
Good for you for having friends in real life. The classes of relationship for which you seem to have no use, and yet on which a forum like LWN comments (which has a narrowly focused audience, demographically and topically) depends, are known as "peers" or "colleagues". (That in turn doesn't mean people have to be sickly sweet to each other--first, mandating such encourages belittling rhetoric to take on more baroque forms, crafted carefully to express contempt yet escape censure; second, engineers argue, sometimes passionately, about things all the time. Sometimes we learn new facts that sprout only from contested ground.)
Among peers and colleagues, just as among friends, it is okay to be mistaken or ignorant. What is valuable is to be able to offer, and accept, correction gracefully. Another valuable skill is the capacity to argue an issue, even a contentious one, without unnecessarily involving one's ego.
Some people will be unable or unwilling to associate with you on those bases; sometimes, it will be due to their own prejudices or defects. Other times, it will be solely down to you being supercilious and overbearing. One of these, you can control.
> That's the whole point: you may see it and you may challenge it. If you can. ... The whole idea is not to convince someone to “respect my opinion”, but to either accept my reasoning or show me where they are wrong.
Your point, and idea, need to be larger wholes to make communication with you more than minimally productive.
I concede that minimal productivity might be your objective.
(Meanwhile I'll count down to a scolding by one of the editors. :-/ )
Posted Feb 14, 2025 20:32 UTC (Fri)
by corbet (editor, #1)
[Link] (1 responses)
It's sad, honestly, that we have to keep asking this.
Posted Feb 14, 2025 21:14 UTC (Fri)
by khim (subscriber, #9252)
[Link]
I posted my answer before I saw this request. Sorry about that. I'll stop here.
Posted Feb 14, 2025 21:13 UTC (Fri)
by khim (subscriber, #9252)
[Link]
Means instead of trying to fight the confirmation bias you are embracing it. That's also normal for humans and hard to fight, but that's also what we are supposed to fight if we want to create predictions that are even remotely close to what would actually happen. One simple, yet efficient way to combat it is to compare your past predictions to what has actually happened. This approach has revealed that my picture of the world has a few definite blind spots — one example is Apple. Probably because my view of that company is similar to RMS's view (Remember? Steve Jobs, the pioneer of the computer as a jail made cool, designed to sever fools from their freedom, has died), while the majority of Apple users feel comfortable in that jail — and there are enough of them to ensure that Apple is able to do things that any other company couldn't. That's an interesting phenomenon, but it's, thankfully, limited: while Apple continues to be able to squeeze a lot of money from Apple users and force them to do things that human beings are not supposed to acquiesce to… this world is still limited and it looks like it will be dismantled at some point, anyway. There are some other blind spots where my predictions are not performing well, but these are mostly related to timings, and those are quite hard to predict (e.g. Google killed ChromeOS development to finally make sure it only has one platform across laptops, tablets and phones — but that happened in 2024… not a decade before that, as was my expectation… Why was Google pouring money into something that would never fly? I have no idea, the move was obvious… but something made it hard to do when it made sense to do it). But can you present similar predictions that have or have not been borne out? Do you even look at what you predicted 10 or 20 years ago? I think we are seeing two entirely different lessons there.
You look at the Never, ever think that you're not smart enough; that's all nonsense part, pat yourself on the back and tell yourself “hey, I shouldn't think I'm not smart enough and they all should respect me because I'm smart enough”! And I look at the part that you also cited, yet ignored: it throws away all the doubts that are otherwise going to bother you, and it's going to free you to just concentrate on “what am I seeing, how is it working, what should I do about it, what am I trying to learn from this”… these are the important things; respect is earned, not given — but you have the capability to earn it! And then he continues to the part where he gives that respect to others: He's still better at it than I'll ever be. I mean, I watched him do what I consider near miracles, like walking up to a refrigerator and telling you that the bearings and the compressor are bad, walking up to a car, telling you that there's something wrong with the carburetor just by listening to it… And I'm ready to do the same if you are actually better at doing such things: take a look at something where you are an expert and tell me what's wrong and what's going to happen… and then you can be respected for your abilities and knowledge. But… where is that area? Where have you “walked up to something” and amazed someone with your ability to draw the correct conclusion from what little you could observe there?
Posted Feb 14, 2025 17:25 UTC (Fri)
by jem (subscriber, #24231)
[Link] (1 responses)
What was wrong with the i960? Wikipedia says it was a "best-selling CPU" in its segment and calls it a "success". Maybe you are confusing it with the i860, which "never achieved commercial success."
Posted Feb 14, 2025 17:38 UTC (Fri)
by khim (subscriber, #9252)
[Link]
Please read the Wikipedia article more carefully: i960 was essentially a RISC with added pieces of iAPX432 on top (in the “object oriented” BiiN version). The “base” version (without the BiiN silliness) was a success, while all that “object oriented” craziness was only used in some government-related projects (and while the details are classified, we can assume that it was mostly for buzzword compliance, because there was no successor).
Posted Feb 14, 2025 17:29 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (8 responses)
But compiler developers aren't drooling idiots, and they used the analyses required to make Itanium perform adequately to help non-Itanium CPUs, too. As a result, instead of Itanium outperforming the Pentium III, as analyses of compiler output from the early 1990s would suggest, the Pentium III tended to outperform Itanium, because the compiler improvements needed to make Itanium perform well also improved the P6's performance.
Posted Feb 14, 2025 18:27 UTC (Fri)
by branden (guest, #7029)
[Link] (7 responses)
That summary is consistent with other accounts I've read. But then that would make the Itanium, rather than a stupid folly, more like the right mistake to make at the right time. Yet that is a far more generous characterization than it normally gets.
I suggest that redemptive readings of architectures that "failed in the market" are more instructive to engineers than x86 (or even ARM) partisanship.
Posted Feb 14, 2025 18:45 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (6 responses)
What Itanium demanded was that you regress compilers for x86, PowerPC etc so that they were back at the level that they were in 1993 or so when the project that became Itanium showed that hand-written EPIC code could compete with compiled output for OoOE machines.
Combine that with the 1998 chip releasing in 2001, having had to be ported forward a process and cut down to size because it was (quite literally) too big to fit on the maximum possible chip size for 180 nm, let alone the 250 nm it was supposed to come out on, and it was clearly a folly: they literally couldn't build the thing they promised for 1998, it never performed acceptably compared to other CPU designs, HP had to continue PA-RISC for several years because Itanium wasn't good enough to replace it, and the x86-32 compatibility never performed as well as promised.
Posted Feb 14, 2025 19:24 UTC (Fri)
by khim (subscriber, #9252)
[Link] (2 responses)
Note that Itanium wasn't designed by idiots. Like the Transputer, it was designed for a world where the development of single-threaded cores had “hit the wall” near 100MHz and thus new ways of faster execution were needed. In that imaginary world of slow CPUs and fast memory access, VLIW made perfect sense and was, in fact, one of the most promising designs. But after the Athlon hit 1GHz at the end of the 20th century… it became obvious that the Itanic simply made no sense in a world of fast CPUs and slow memory… but Intel had to push it, for marketing reasons, even though it was doomed to fail and it was obvious that it had no future.
Posted Feb 15, 2025 16:37 UTC (Sat)
by farnz (subscriber, #17727)
[Link] (1 responses)
Itanium was a bet that it would be hard to scale clock frequency, but that it would be trivial to go wider (both on external buses and internally). As a bet to take in 1994, that wasn't a bad choice; the failure at Intel was not cancelling Itanium in 1998 when it became clear that Merced would not fit into 250 nm, that process scaling wasn't going to allow it to fit into the next couple of nodes either, and that once you'd trimmed it down to fit the next node, it wasn't going to perform well.
Then, Pentium 4 was a bet that it would be hard to scale logic density, but that clock frequency would scale to at least 10 GHz. Again, wrong with hindsight, but at least this time the early P4 releases were reasonable CPUs; it's just that it didn't scale out as far as intended.
Posted Feb 15, 2025 17:40 UTC (Sat)
by khim (subscriber, #9252)
[Link]
What's notable is that both times the bets sounded perfectly reasonable. The Cray-1 reached 80MHz in the year 1975, and the PA-7100, Pentium, POWER2, SuperSPARC… all the fastest CPUs for almost two decades topped out at around that clock speed! Assuming that trend would continue was natural. Then, suddenly, during the Itanium fiasco, the 100MHz barrier was broken and clock speeds skyrocketed… assuming that this trend would continue wasn't too crazy, either! More importantly: when Intel realized that the P4 story was a bust, too, it quickly turned around and went with P6 descendants… Tejas was cancelled… but for some reason Intel kept Itanium on life support for years.
Posted Feb 14, 2025 20:21 UTC (Fri)
by branden (guest, #7029)
[Link] (2 responses)
> No, it doesn't, because the compiler changes were made before Itanium needed them, precisely because they helped other architectures, too.
I'll confess to being the sort of inferior mind that khim frequently laments, because I'm having a hard time inferring a coherent chronology from the foregoing.
This drooling idiot's going to need a good resource for the struggles of ISA and compiler development in the '90s. Anyone got any references to recommend?
Posted Feb 14, 2025 22:16 UTC (Fri)
by malmedal (subscriber, #56172)
[Link]
Search for EPIC. It's a frequently recurring topic on Usenet's comp.arch, you can try reading the archives or asking.
I don't think the compilers ever scheduled Itanium's instruction-bundles very well. Itanium had superior floating point performance, but I believe this was because the FPU was register-based. x87 used a stack-based instruction set, which did not play very well with superscalar scheduling.
Posted Feb 15, 2025 16:46 UTC (Sat)
by farnz (subscriber, #17727)
[Link]
You'll need a good understanding, also, of out-of-order execution (e.g. Tomasulo's algorithm) and register renaming, to understand why compiler developers were doing more instruction scheduling for IBM POWER CPUs even without Itanium's deep need for explicit scheduling.
The core idea, though, was that EPIC relied on the instruction stream being pre-scheduled in software with explicit dependency chains and speculation, making the core simpler. Out-of-order execution (OoOE) infers the dependency chains and speculation from the instructions in the "instruction window"; being able to schedule instructions the way EPIC requires allows OoOE to see more opportunities for parallel execution, since it can execute instructions in parallel wherever there isn't a dependency chain.
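To make the dependency-chain point concrete, here is a toy Rust sketch (mine, not from the thread; the function names are invented): the same reduction written as one serial chain versus two independent chains. The second form is the kind of explicit scheduling an EPIC compiler had to produce in the instruction stream, while an OoOE core can discover the parallelism from its instruction window in either form.

```rust
/// One serial dependency chain: each add must wait for the previous
/// one, so an in-order machine executes one add per step.
pub fn sum_one_chain(xs: &[u64]) -> u64 {
    let mut acc = 0u64;
    for &x in xs {
        acc = acc.wrapping_add(x);
    }
    acc
}

/// Two independent chains, combined at the end: the adds feeding `a`
/// and `b` have no dependency on each other, so they can execute in
/// parallel. This is the parallelism a compiler exposes by scheduling.
pub fn sum_two_chains(xs: &[u64]) -> u64 {
    let (mut a, mut b) = (0u64, 0u64);
    let mut chunks = xs.chunks_exact(2);
    for pair in &mut chunks {
        a = a.wrapping_add(pair[0]);
        b = b.wrapping_add(pair[1]);
    }
    // Fold in a possible leftover element when the length is odd.
    for &x in chunks.remainder() {
        a = a.wrapping_add(x);
    }
    a.wrapping_add(b)
}
```

Both functions compute the same sum; only the shape of the dependency graph differs, which is exactly the property the scheduling analyses above were about.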
Posted Feb 13, 2025 23:36 UTC (Thu)
by willy (subscriber, #9762)
[Link]
Rust is actually a good choice, stop trying to throw out alternatives.
Posted Feb 14, 2025 0:41 UTC (Fri)
by rolexhamster (guest, #158445)
[Link] (102 responses)
Speaking as someone who has developed in C and C++ for 25+ years, I can say that the above statements are demonstrably false.
There is no way to fix either C and C++ without breaking backwards compatibility. At that point it's more productive (and with far less technical debt going forward) to use a new language that has been developed from scratch with safety explicitly in mind.
All the diagnostic tooling (in conjunction with so-called restricted language subsets) are not sufficient to provide proper safety guarantees in C and C++. They're like band-aids to cover up symptoms rather than addressing the root cause. In less charitable terms, they are equivalent to putting lipstick on a pig; there is still a (very ugly) pig underneath.
The entire resistance to iteratively incorporating Rust in the Linux kernel is downright bizarre and counter-productive. It's as if certain maintainers are too set in their ways to learn a very useful new language, and are happy with half-measures and wishful thinking. This eventually leads to obsolescence.
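One toy illustration of the difference in kind being claimed here (my own invented example, nothing from the kernel): holding a pointer or iterator into a growing container is a classic C/C++ bug that compiles silently and that sanitizers catch only if the bad path actually executes; the direct Rust translation is rejected at compile time, so only the non-dangling version builds.

```rust
// In C++, keeping a reference into a std::vector across a push_back()
// compiles without complaint and is undefined behavior if the vector
// reallocates. The direct Rust translation is a compile error:
//
//     let first = &v[0];   // shared borrow of `v`
//     v.push(42);          // error[E0502]: cannot borrow `v` as
//                          // mutable because it is also borrowed
//
// so the programmer is pushed toward the version that cannot dangle:
pub fn first_then_grow(mut v: Vec<i32>) -> i32 {
    let first = v[0]; // copy the value out; no borrow survives the push
    v.push(42);       // any reallocation here is now harmless
    first
}
```

The point is not that the safe version is clever; it's that the unsafe one is unwritable, which is the "root cause" fix that after-the-fact diagnostic tooling can't provide.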
Posted Feb 14, 2025 4:22 UTC (Fri)
by wtarreau (subscriber, #51152)
[Link] (100 responses)
I think the root cause of the problem is precisely what is written above: there are people who consider that others do not *want* to learn their pet language, despite the fact that just about everyone agrees it's probably one of the most difficult languages around to learn. I tried. I failed. Too many cryptic symbols, the code is not pronounceable, etc. That simply doesn't work for me. As I said to Miguel during a discussion some time ago, it makes me feel like I'm trying to code with smileys. The effort is just too great for me. And I think that's actually the problem a number of other maintainers are facing. If you want to impose a very difficult language, it's normal to face resistance. But the resistance is not necessarily against the introduction of *a* new language; it comes from a perceived inability to learn *this one*. And this must be respected, and addressed in a way that doesn't require all these people to even have to parse it if they can't. Systematically accusing people of deliberately refusing to do something they can't do is particularly harsh; it's like attacking handicaps.
In an era where everything wants to be "inclusive", maybe start by making Rust inclusive? BPF used to be byte code only, used solely in tcpdump; it finally got a compiler and verifier that now make it compile from C into safe code. Maybe you need a similar C-to-Rust compiler that would allow developers to write their code using a much simpler C syntax and turn it into Rust, rejecting what is not proven safe? Just like with eBPF, it would put the effort on the developer, but using a language they at least understand.
Posted Feb 14, 2025 9:10 UTC (Fri)
by hunger (subscriber, #36242)
[Link] (1 responses)
What would help you?
If you kernel guys continue with Rust, you as a community will need to come to grips with the hard part of introducing a new language: you will need to work with people approaching problems in a different way than what you are used to. You will need to develop a basic understanding of the "rust way" in addition to "the C way". If you guys do not manage that, then you will continue to burn each other out. If you do pull this off, I think all of you can benefit a lot from the fresh wind this will bring.
The rust devs start with a slight advantage here: Many of them know C already. They also have a disadvantage though: They "got the rust way" and think it is superior, so they will be impatient, waiting for the old C guard to catch up to them. That won't happen: You will need to find common understanding somehow and that will neither be on the C nor the Rust side.
IMHO this Rust for Linux thing is a much more interesting social experiment than a technical one.
Posted Feb 14, 2025 15:02 UTC (Fri)
by Wol (subscriber, #4433)
[Link]
Which is hard. Which is why when I mention databases everyone groans :-)
But I just see things completely differently to most other people. I see objects, which contain attributes and relations. People "classically trained" in databases see relations, attributes and tuples, out of which they build ... well I'm not sure ...
And never (well hopefully not) the twain shall meet ...
Vi/Emacs. Declarative/Procedural. MultiValue/Relational. Rust/C. Just try not to pick a fight with the other side. Try to see things through their eyes. And remember - THEIR BRAINS MAY NOT BE WIRED THE SAME AS YOURS!
Cheers,
Posted Feb 14, 2025 12:02 UTC (Fri)
by taladar (subscriber, #68407)
[Link] (4 responses)
Syntax might be hard to figure out initially, especially in languages like Haskell that use a lot of operators instead of named functions you can search for in a general purpose search engine (but Haskell has Hoogle to help with that), it might be hard to break some habits when switching languages for a bit, sure. But those are all minor problems compared to semantic changes between languages in terms of actual learning effort.
Posted Feb 14, 2025 23:41 UTC (Fri)
by khim (subscriber, #9252)
[Link] (2 responses)
It's not as bizarre as you think if you view the whole thing from the mindset of someone who thinks all these fancy languages and other modern tools exist only to emit machine code in a certain sequence. For them, any language is only a thin (or thick) veil that hides their beloved machine code from them. In that mindset, syntax is, indeed, the biggest obstacle, and things besides syntax simply don't even exist. And that's the second part that puts Rust developers and kernel developers into an unsolvable bind: while some kernel developers have come to accept the fact that what they want (a compiler that makes it possible to predictably emit efficient machine code while ignoring any and all rules) couldn't exist and wouldn't exist… a significant part still wants that mythical tool and wouldn't agree to deal with anything else. But they don't see it like that! Machine code is machine code… what “semantic changes” are you talking about?
Posted Feb 15, 2025 22:14 UTC (Sat)
by kleptog (subscriber, #1183)
[Link] (1 responses)
It's like there's something about being an ingrained C programmer that makes it harder to switch to something higher level.
Like the comments related to it being hard to make complicated data structures in safe Rust. If you're coming from a language where the only data structures are arrays, maps and objects, you don't understand how this can be a problem.
Posted Feb 16, 2025 11:20 UTC (Sun)
by khim (subscriber, #9252)
[Link]
Not really. Rust attracted C++ developers when that wasn't in the initial plan at all. It's just that people use languages differently… and very often not in the way their creators intended these languages to be used! Rob Pike (and his team) saw C++ as a hugely complex, convoluted beast… and “solved” that problem by creating a simpler language… that moved error detection from compile time to runtime to achieve that… of course C++ developers would shun it! But for Python and Ruby developers… who were fed up with type-mismatch runtime errors… it was a perfect fit. Similarly with Rust: what was a tiny side-story initially, not even a central part of the language design… solved issues that C++ (and most other languages) have – and that made it popular among people who were fed up with endless fights against invalidated pointers and iterators… in all languages… but not among the ones who don't see anything wrong with what they are doing and think the fact that certain error-handling paths contain mistakes is not a good enough reason to raise all that racket about “Rewrite it in Rust”. Believe me, C++ has more data structures than you can list. And Rust can implement all the data structures that you may ever imagine. There are [intrusive lists](https://docs.rs/intrusive-collections/latest/intrusive_collections/), [arenas](https://docs.rs/bumpalo/latest/bumpalo/), [self-referential data structures](https://docs.rs/ouroboros/latest/ouroboros/) and many other things… No, the problem is different, this time, again. The problem lies on the C side, not the Rust side, this time. In most languages (be it C++, Ada, Java or even Go) it's relatively easy to separate data structures from the “business logic” – but C (especially “C used as portable assembler”) doesn't give you such luxury. That's why the natural inclination of a C user is not to import an implementation of some data structure, but to reimplement it. In ad-hoc fashion. Hey, that could be more efficient, and I wouldn't have to rely on someone else's work!
The “we code for the hardware” mindset. “C as portable assembler” is really as high as you can push it. And even that starts falling apart with modern compilers. A higher-level language means that you have to give up that control and, in particular, start using data structures prepared by someone else. That's flat-out unacceptable for these guys (the hardware doesn't have any pre-canned structures, which means their code shouldn't have them, either), even if it's easy to see that this approach doesn't work: most bugs that fuzzers and other tools find in the kernel are precisely in implementations of ad-hoc data structures, and it wouldn't be practical to write them in any language. Even if you use WUFFS, which makes it, in theory, possible to write ad-hoc implementations… you quickly find out that writing all these ad-hoc implementations and all these ad-hoc correctness annotations… is too painful not to reuse code.
Posted Feb 15, 2025 11:07 UTC (Sat)
by jengelh (guest, #33263)
[Link]
Please. I don't know of anyone, ever, who has produced ACME::EyeDrops-syntaxed code *and* kept maintaining it after the "syntax might be hard initially" part.
Posted Feb 14, 2025 21:25 UTC (Fri)
by roc (subscriber, #30627)
[Link] (13 responses)
It seems strange to me because I know a variety of people who have learned Rust and none of them found it particularly hard to get into:
Possible explanations? Could it be that all these people are smarter than the average kernel maintainer? I don't believe that for a second.
Maybe some kernel maintainers subconsciously (or consciously) don't want to use or like Rust, and that sabotages their efforts, possibly without them being aware of it?
Maybe for some kernel maintainers who write nothing but C for decades, their minds get locked into certain ways of reading and writing code, and it's really hard to break out of that? Sounds weird, but this corresponds most closely to what you described in your comment.
I can't think of any other possible explanations right now.
Posted Feb 15, 2025 0:20 UTC (Sat)
by khim (subscriber, #9252)
[Link] (12 responses)
No, it's not weird and, in fact, our discussions with Wol have shown the problem extremely acutely. These developers don't think in terms of C. For them, C is just a fancy way to generate machine code, and they think in terms of machine code. Which is then back-converted, in their mind, into C before the code is typed. Sure, C has a significantly different syntax, but it was, initially, created for that very task, and thus such use doesn't feel unnatural, at first. You have to remember that C wasn't, actually, a high-level programming language and maybe not even a language at all… it was a means to write machine code for different machines from one “macropackage”.
It was later turned into something resembling a high-level language by different people, members of the C (and, later, C++) committees… but for those developers who still think in terms of machine code… nothing has changed (except for the “evil compilers” that require more and more effort to stop them from breaking their “beautiful machine code”). They are not smarter, but they have a different background. For them, Rust actually exists. As in: it's something separate from the machine code that comes out of the compiler. It has separate rules, separate properties; it's something that you deal with not by tracking the path from “yet another weird syntax” to the machine code, but as something that can be separated from the machine. And that's exactly where the one funeral at a time vibe lives: to understand Rust you first have to understand that it doesn't try to invent yet another way to represent machine code in your program, but, in fact, gives you a way to represent your intent (and then the compiler has the freedom to transform said intent into machine code – and you may, mostly, simply ignore the details of that process). That's not something you can actually learn; that's something you have to accept… and it's easier to accept for someone with JS, Python, or Java experience than for someone who first wrote machine code with PEEK/POKE in BASIC, then “graduated” to assembler and “got a degree” in C. They try to apply the exact same rules to Rust… and they simply don't fit: Rust, especially safe Rust, is very limited and doesn't allow them to express many things that were expressible even in C… that's an awful experience for someone who thinks in assembler. P.S. The irony here is, of course, that the last real programmer had the exact same trouble trying to accept assembler. It, too, robbed him of some skills that he felt were quite valuable.
Kernel developers haven't lived in the era of drum memory, thus they never developed those skills and it wasn't hard for them to live with such a “sacrifice”. But Rust taxes their abilities in a very similar way: it just makes something that they feel is supposed to be in their toolbelt inaccessible… and that causes this strong rejection feeling.
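A small concrete instance of the "intent, not machine code" distinction (a toy example of mine, with invented function names): the same dot product written as explicit index manipulation versus as a statement of intent, where instruction-level decisions are left entirely to the compiler.

```rust
/// "Machine-code thinking": spell out the loop, the counter, and the
/// indexing by hand, mirroring the assembly you expect to get.
pub fn dot_indexed(a: &[f64], b: &[f64]) -> f64 {
    let n = a.len().min(b.len());
    let mut sum = 0.0;
    let mut i = 0;
    while i < n {
        sum += a[i] * b[i];
        i += 1;
    }
    sum
}

/// "Intent thinking": state *what* is computed; bounds-check
/// elimination, unrolling, and vectorization are the compiler's
/// business, not the programmer's.
pub fn dot_intent(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}
```

Both compile to comparable machine code; the difference is which mental model the source is written in, which is exactly the divide described above.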
Posted Feb 15, 2025 4:45 UTC (Sat)
by ebiederm (subscriber, #35028)
[Link] (8 responses)
I came to the conclusion very quickly that for the kinds of code I typically write, I can not express it in safe Rust.
Not that we are talking much of a challenge. I have to think hard to find any of the data structures from my intro to data structures and algorithms course that can be implemented in safe Rust.
At which point I do my research and I see that Rust dropped the ball: it is possible to machine-check the implementation of those algorithms. Rust just doesn't give me the tools I need.
I do a bit more research and realize that C has a lot of accidental/unnecessary complexity, and C++ is off the charts. Rust isn't as bad as C++, but there remains a lot of unnecessary complexity. Which all matters because, if we want to build secure software (the kind where we can stand a server up on the Internet and have to start worrying about what happens when the uptime counter wraps, because no security issues have been found in the software for years), complexity completely matters.
The more complex the foundation is the harder it is to analyze.
Which means Rust really isn't a programming language I desire to use. It complains because it can not understand my safely written code, and it complicates analysis of that code.
Rust seems an incremental step forward. But we are talking about an incremental step forward from code that, for the last 25 years, has been good enough to exceed the reliability of bleeding-edge hardware. I have stories.
That is, C really is good enough to get the job done.
I have seen people make the case that Rust can get the job done too. I haven't seen much support for the notion that Rust does the job better. I am pretty certain that if I were using Rust I would find it an over-complicated claptrap that I have to continually fight to get out of my way.
Posted Feb 15, 2025 5:28 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
I struggle to understand how you can reconcile these two statements. C has shown again and again and again and again and again and again that it's NOT possible to write secure non-trivial software in it. Pretty much every large network-facing product in C has seen its share of critical security issues.
> Rust seems an incremental step forward. But we are talking an incremental step forward from code that has been good enough for the last 25 years to exceed the reliability of bleeding edge hardware. I have stories.
What "reliability"?
Posted Feb 15, 2025 13:08 UTC (Sat)
by ebiederm (subscriber, #35028)
[Link] (3 responses)
I do not see Rust making it possible to do the above.
The Rust type system does not allow modelling what needs to be modelled to allow for data structures to be written in safe Rust. For me it is like working with epicycles in an Earth centered solar system model, when what we need are ellipses in a sun centered model.
It is a lot easier to show an ellipse is correct because the model is simpler.
Although honestly I don't think Rust has managed to be even as good as epicycles were in astronomy. Rust just says: it is unsafe, I give up.
If Rust only gave up on the implementation details of tricky things like RCU, and the fine details of spinlock implementations, sure. That stuff is hard. But intro to data structures 101? Rust seems to push pretty much every piece of code I write into unsafe land.
So my first statement was about where I would like an operating system to be, and how I don't see Rust making it easy to get there.
My second statement ( "C is good enough") is about practical utility. In the common cases. In the cases where nobody cares about security.
Try standing up to a boss who has a make-or-break-the-company feature they want to ship, and telling them that what you are building on is implemented insecurely and it will be another year before it has a proper foundation.
You say:
I will point out that every time it is some silly mistake. An off by one error, or not getting error handling quite right. Overall code paths that don't matter for what people are trying to do. Almost always a localized and simple fix will do.
Which is to say it isn't the hard stuff that C gets wrong; C simply does not give enough support to prevent errors in the easy stuff.
Telling people what they have been doing for their entire career is hopelessly broken, is not a selling point especially when they know better. They know what they have been doing gets the job done.
To actually achieve secure software I can leave on the Internet for decades without updating, I am pretty certain we will need support for statically verifying assertions about the code, as only machine validation can be thorough and patient enough to check every little corner case.
So far Rust makes some headway in that department, but then its type system sees an unsafe and bows out. I haven't seen anything in Rust beyond its type system that can be used to catch the rare mistakes conscientious people make.
Posted Feb 15, 2025 13:26 UTC (Sat)
by mb (subscriber, #50428)
[Link]
Nobody claims that you can implement RCU or spinlocks in safe Rust.
>I will point out that every time it is some silly mistake.
In Rust most of these silly mistakes are impossible to do. That's the point.
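To make that concrete, here is a sketch (a hypothetical example, not code from the thread) of how safe Rust removes the classic off-by-one: the C idiom `for (i = 0; i <= n; i++) buf[i]` compiles and silently reads past the array, while the idiomatic Rust version has no index to get wrong, and an out-of-bounds index like `buf[3]` on a 3-element array would be a deterministic panic, never silent corruption.

```rust
// Iterating by element instead of by index removes the opportunity
// for the <= vs < off-by-one entirely.
fn sum(buf: &[i32]) -> i32 {
    buf.iter().sum()
}

fn main() {
    let buf = [1, 2, 3];
    println!("{}", sum(&buf)); // prints 6
}
```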
>Almost always a localized and simple fix will do.
Not having to do a fix, because nothing is broken, will do better.
>Telling people what they have been doing for their entire career is hopelessly broken, is not a selling point
Sure. People tend to not like changes.
>They know what they have been doing gets the job done.
... with a steady stream of security issues.
>but then it's type system sees an unsafe and bows out.
unsafe blocks don't disable any of the type, borrow and safety checks.
>I haven't seen anything in Rust beyond it's type system that can be used to catch
Sure. But that's just because you didn't look. Not because they don't exist.
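A minimal illustration of the point about `unsafe` (hypothetical example): the type and borrow checks still run inside an `unsafe` block; the keyword merely permits a few extra operations, such as dereferencing a raw pointer.

```rust
// `unsafe` only unlocks specific extra operations; everything else
// in the block is still fully type- and borrow-checked.
fn read(p: *const i32) -> i32 {
    unsafe { *p } // this raw-pointer deref is the only part needing unsafe
}

fn main() {
    let x = 42;
    println!("{}", read(&x)); // &x coerces to *const i32
}
```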
Posted Feb 15, 2025 21:43 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Which data structures does Rust not allow to be modeled (perhaps with a bit of `unsafe` inside the implementation)?
> My second statement ( "C is good enough") is about practical utility. In the common cases. In the cases where nobody cares about security.
No. C is not "good enough". It's downright dangerous. At this point, merely using C for new projects should be considered bordering on criminal negligence.
> I will point out that every time it is some silly mistake. An off by one error, or not getting error handling quite right. Overall code paths that don't matter for what people are trying to do. Almost always a localized and simple fix will do.
Yeah. And that's how we get airplane crashes and nuclear reactor blowups. This attitude is just distilled nonsense and sheer arrogant ignorance.
The single greatest advance in safety science was the recognition that humans can't help but make mistakes, and that measures should be taken to make these mistakes impossible: LOTOs, automatic safety interlocks, etc. So every other engineering discipline (including ones that are much younger than Computer Science) has adopted tools that reduce the possibility and the impact of "silly mistakes". Even non-engineering disciplines are doing that.
The latest advance that drastically reduced the death rate in surgeries was not some kind of tricorder powered by AI, but simple checklists: https://en.wikipedia.org/wiki/WHO_Surgical_Safety_Checklist
Posted Feb 16, 2025 22:28 UTC (Sun)
by mathstuf (subscriber, #69389)
[Link]
Do you have any existing examples of this actually happening in any language, nevermind C? Maybe Erlang-based deployments at telcos (depending on your definition of "update")? But I can't think of any C program that has been "on the Internet for decades without updating" while also being "secure". Unless maybe you're counting `/bin/true` as "on the Internet"?
And I agree that machine validation is required for further trust in software. But Rust is not only more easily tooled (because its source just has more information than C does), but *has* better tooling in almost every dimension that matters (sure, UBSan is "better" for C than Miri is for Rust given Miri's limitations, but its need for such a tool is also *far far* lower).
Posted Feb 15, 2025 11:58 UTC (Sat)
by khim (subscriber, #9252)
[Link]
How is it different? Of course they can all be implemented in safe Rust. Just put all your data structures into an array and use indexes. That's how we studied them when I was in school, which only had access to some crazy micros with BASIC in ROM. What you can not implement in safe Rust is something that gives you the exact same beautiful and clear machine code that exists in your head. That's a different issue.

Are you sure? Why do you implement the exact same data structures again and again? What's the point? What happened to “don't repeat yourself” and code reuse?

Yes, less complex languages are desirable… but only if they are expressive enough to provide safety guarantees. C and even C++ don't provide them. Rust does. It's as simple as that. After all, BASIC is arguably simpler than C, and yet the kernel is not written in QB64.

As long as you are using Linux that's impossible, because security issues in the kernel are found more often than we may like. Only if “all other things are the same” – and in the case of C and Rust they are not the same.

Yup. That's what FORTRAN and COBOL programmers were telling themselves half a century ago, when the structured programming debates raged. And then they were fired and replaced with Pascal and C programmers, and their ability to juggle complex GOTO-filled programs didn't matter one jot. COBOL programming got vindication years later, when banks needed to do something about their decades-old systems. FORTRAN programmers mostly had to leave programming completely. I interviewed a few when they tried to return to IT, years later, who were even willing to learn “new ways of doing things”, but my boss rejected them.

Ironically enough, over time the strict restriction of “no GOTO ever” was relaxed a bit, with MISRA C 2012 allowing forward goto, and exceptions being nothing more than interprocedural forward gotos… but that happened years later. Initially, rejection of unstructured control flow was pursued with almost religious fervor.
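The array-plus-indexes approach described above can be sketched like this (a hypothetical minimal `List` type, purely for illustration): nodes live in a `Vec`, and "pointers" are indices into it, so no `unsafe` is needed.

```rust
// A singly-linked list in 100% safe Rust: next-pointers are Vec indices.
struct List {
    nodes: Vec<(i32, Option<usize>)>, // (value, index of the next node)
    head: Option<usize>,
}

impl List {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None }
    }

    fn push_front(&mut self, value: i32) {
        self.nodes.push((value, self.head));
        self.head = Some(self.nodes.len() - 1);
    }

    // Walk the list front-to-back, following indices instead of pointers.
    fn collect(&self) -> Vec<i32> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while let Some(i) = cur {
            out.push(self.nodes[i].0);
            cur = self.nodes[i].1;
        }
        out
    }
}

fn main() {
    let mut l = List::new();
    l.push_front(3);
    l.push_front(2);
    l.push_front(1);
    println!("{:?}", l.collect()); // prints [1, 2, 3]
}
```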
Just like FORTRAN and PL/I were “good enough” half a century ago. They are still with us, but no new projects are written in them. The same will happen to C. The interesting questions are not whether C will be replaced, but what it will be replaced with – and whether Linux will survive that replacement.

Because when all that heat was generated in the 1970s… no one could predict that it would be C that would win the race in the end. Not only was the reigning king, Pascal, much more popular, but even Ada was, briefly, more popular than C! Then a lawsuit happened that changed the course of history… and something like that may still happen in the future. For example, if Apple were either broken up or collapsed (which would free Swift from its shackles), then we may end up with a future where Swift, and not Rust, replaces C. Currently the tremendous advantage of Rust lies in the fact that it was developed at Mozilla, which is currently not financially capable of supporting and controlling it, and thus it's the first contender for the C/C++ replacement. But just as the AT&T breakup suddenly propelled C and C++ to stardom… something like that could still happen to Swift, or, heck, even Carbon.

Language development is a fascinating area where technical capabilities determine the losers while social dynamics determine the winners. That's how we can say that C is doomed (it's simply not technically good enough to survive), while saying that Rust will replace it… looks more and more likely. But then, in the middle of the 1980s it looked as if Pascal would replace FORTRAN and COBOL: OSes (e.g. Apple's Lisa and Classic MacOS) and programs (most early Microsoft programs for the PC) were written in Pascal – yet the ultimate winners were C and C++ from AT&T.
Posted Feb 15, 2025 17:53 UTC (Sat)
by excors (subscriber, #95769)
[Link]
I'd agree that Rust is a poor language for introductory computer science. E.g. it doesn't closely match the pseudocode of CLRS (where much of the book is dedicated to complex and subtle mathematical proofs of correctness, which is antithetical to Rust's idea of mechanically-verified correctness; and the book doesn't care about concepts like ownership because it never deallocates anything, effectively assuming you have a GC). Java or Python would be a much better fit while you're working through a textbook like that. For recursive data structures (singly-linked lists, trees), Rust is quite cumbersome - you'd be better off with ML or Haskell. For learning how CPU hardware works, you'd want an assembly language (or maybe C-as-portable-assembly, keeping away from its rough edges).
Rust isn't meant for computer science, it's meant for software engineering. It's for when you've already learned the CS and you're trying to apply it in real-world codebases, where scalability and robustness and concurrency and non-asymptotic performance are much more serious challenges than how to implement yet another chained hash table from scratch. You'll just use std::collections::HashMap. In the very rare case you need a data structure that nobody else has implemented and published as a crate, then you can make use of your CS education and write it yourself; Rust probably makes it harder to get it right, but you only need to do it once, and then you can go back to the real software engineering that Rust makes a lot easier.
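A minimal sketch of that "just use the standard collection" point (the `word_counts` helper is hypothetical, purely for illustration):

```rust
use std::collections::HashMap;

// The software-engineering answer to "yet another chained hash table
// from scratch": use the one from std, via the entry API.
fn word_counts<'a>(words: &[&'a str]) -> HashMap<&'a str, u32> {
    let mut counts = HashMap::new();
    for &word in words {
        *counts.entry(word).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let counts = word_counts(&["the", "cat", "the"]);
    println!("{}", counts["the"]); // prints 2
}
```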
If you aren't doing software engineering, that's fine - there's plenty of programming (scripting, research, scientific computing, etc) where you'd probably get little benefit from Rust, and other languages will be better. But OS kernels and internet-facing applications are exactly where Rust's benefits are needed.
Posted Feb 15, 2025 20:08 UTC (Sat)
by roc (subscriber, #30627)
[Link]
Posted Feb 15, 2025 8:55 UTC (Sat)
by wtarreau (subscriber, #51152)
[Link]
Posted Feb 15, 2025 11:29 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (1 responses)
Hmm... interesting ...
Okay, I'm not a University Trained programmer, and I was already an experienced programmer when I first met C, but I do bang on about how people think and how your first experiences shape you. I very much treat programming like a school maths problem, my first real language was FORmula TRANslation, which encourages you to think that way, and so I've found Guile/Scheme and Forth very tricky, while I've always programmed C like it's Fortran. Like I'm answering a maths question.
And Pick/DataBASIC. While they were designed in lock-step (although the language came later), they are designed to store and manipulate objects, not rows. Again, a very different way of thinking.
It's always easy to extend your worldview to include similar views. Changing your worldview is a lot harder. It's quite likely kernel programmers think differently at a fundamental level, and Rust is maybe one step too far. I wonder how I'll fare when I really start digging in to it.
Cheers,
Posted Feb 15, 2025 12:31 UTC (Sat)
by khim (subscriber, #9252)
[Link]
Not as hard as it may look from all these cries about the “steep learning curve” and “ugly syntax” (which I still perceive as ugly, ironically enough). The story typically goes like this: I couldn't say that Guile/Scheme and Forth are super tricky, but they have just never “clicked” for me. I can force myself into the mindset needed to use them, but it's hard for me to think in these languages. Maybe if I had ever used them professionally I would have learned to do that, but since I never went beyond toy programs with them… I guess I never needed to.

But Rust… it's actually a very easy and simple language – but only if you accept its desire to form all data structures into trees (with some escape hatches for more complex cases in the form of pre-made data structures). After you stop trying to bend Rust to your will (possible, but very hard, and the compiler fights you tooth and nail) it's actually a pretty simple language… with the sole exception being, ironically enough, not the famed borrow checker, but its typechecker, which is some kind of Prolog (as expected) but with the cut operator added in very strange places (half-expected, but still very jarring).

The hard part is to stop trying to write some other language in Rust, be it C++ or JavaScript. But for some reason dropping JavaScript habits is easier than dropping “we code for the hardware” habits. Low-level programming in Rust requires thinking on two levels. The hardware is still there, and the hardware's needs still need to be obeyed… but now you can not just back-translate assembler code into a high-level language; you have to express the hardware's restrictions to Rust… and this just feels wrong: I'm programming the hardware, why the heck is another agent even needed? But Rust is complex enough that trying to guess how your code will be represented in machine code… just doesn't work. Sometimes you guess right, sometimes you guess wrong, and for Rust to be usable you really need to delegate the work with the hardware to it!
But that's precisely what many experienced kernel developers and, ironically enough, especially maintainers, don't want to do! Once you accept that two-step process, learning Rust is not too hard… but as long as you are trying to sidestep Rust and convince it to produce the machine code that you envision in your head… you will have trouble.
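A small illustration of the "ownership forms a tree" point (a hypothetical `Node` type): each node owns its children outright, so the whole structure is a tree that safe Rust handles naturally; sharing or cycles would require the pre-made escape hatches (`Rc`, `RefCell`, …) instead.

```rust
// Ownership as a tree: dropping the root drops the whole structure,
// and the borrow checker can verify every access.
struct Node {
    value: i32,
    children: Vec<Node>,
}

fn sum(n: &Node) -> i32 {
    n.value + n.children.iter().map(sum).sum::<i32>()
}

fn main() {
    let tree = Node {
        value: 1,
        children: vec![
            Node { value: 2, children: vec![] },
            Node { value: 3, children: vec![] },
        ],
    };
    println!("{}", sum(&tree)); // prints 6
}
```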
Posted Feb 14, 2025 21:49 UTC (Fri)
by mb (subscriber, #50428)
[Link] (78 responses)
Please stop that nonsense.
> code is not pronounceable
How do you pronounce
int foo(void) { return 0; }
and why is this worse than the equivalent Rust syntax?
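For reference, the equivalent Rust arguably reads aloud much the same (a sketch):

```rust
// C:    int foo(void) { return 0; }
// Rust: "function foo, returning i32, whose body is zero"
fn foo() -> i32 {
    0
}

fn main() {
    println!("{}", foo()); // prints 0
}
```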
Rust syntax very much *is* pronounceable. If you don't know how to pronounce it, then it's rather that you probably didn't learn Rust.
>At an era where everything wants to be "inclusive",
Here we go...
>Maybe you need a similar C-to-Rust compiler that will allow developers to write their code using a much simpler C syntax and turn it to Rust
Really. Please read some basics about Rust.
The thing you are requesting is impossible and does not make any sense.
It's like requesting a machine code to C++ decompiler. That is impossible in general for obvious reasons.
Posted Feb 15, 2025 8:53 UTC (Sat)
by wtarreau (subscriber, #51152)
[Link] (77 responses)
I began with BASIC in 82, I was almost 8 years old. I did a lot of BASIC since it was the only language I had access to, and when you're a kid you have plenty of time available. At 10 I got a PC-mostly-compatible, then at 12 I got an update to DOS 2.11 which came with the "DEBUG" utility that allowed me to start with ASM. I wrote many ".COM" programs entirely in DEBUG, I had no particular source code, I could read it using command "u" to see the disassembled version, and "a" to start assembling new instructions. I got used to reserving space for forward jumps and later patching, and I even got used to writing self-modifying code. That was quite efficient. Then I discovered Turbo PASCAL but still used it jointly with asm (particularly for stuff that I needed to do fast using self-modifying code) and wrote my own assembler that supported symbols and named labels. Good improvement. I only learned C at 20, on UNIX. I didn't and still don't like the language much, it has a number of deficiencies and usability issues (like UB, unexpected operator precedence, an inconvenient stdlib particularly for string processing, etc). But for me C has always just been a way to write portable, high-level asm code that builds and runs (almost) similarly on all machines, with as few asm() statements as possible, and I had to stop writing self-modifying code due to non-writable code segments, while in parallel CPUs got larger decode units and efficient branch prediction making this less useful. What I'm thinking about when I write C is the code it will produce, and the portability of the types involved. I constantly look at the resulting asm code when I write C, on at least x86 and arm, just to have a glance at what it does (i.e. whether or not I sufficiently help the compiler understand what I'm doing).
When I tried Rust, I found that it had nothing in common with these other languages. It cares about concepts that are totally counter-intuitive to me. For example, why does it care about who owns a pointer to a location? In terms of C, a pointer is just an integer, an offset from NULL to a memory location, and a memory location is just a bunch of areas filled with either transistors (SRAM) or capacitors (DRAM). I have difficulty imagining why a compiler would want to prevent me from using them: they're in my address space, and if I decide that from an algorithmic point of view I want to use them this or that way, I don't see why the compiler would stop me. In addition, the language makes learning very difficult for me because it uses characters/operators/signs that have different meanings from those of the languages above. I always stop my mental parsing when I see the "'" (single quote) character (whose purpose I have already forgotten), as in other languages it was used to delimit a char or a string, for example. The fact that a variable declaration is in fact a constant if you don't write "mut" also feels very strange to me. Generally speaking I find the syntax difficult and poorly expressive. You can disagree because you know the language and managed to learn it. But that's the way I feel about it. I have tried some online howtos, like "30mn to rust" etc. I get lost very quickly: too many differences from what I'm used to, too difficult for me, and for no perceived value except maybe having more difficulty writing programs. I feel like I have to lock-pick a door to enter a jail. That feels very strange to me.
I suspect that there are different expectations from different people (maybe generations BTW). There seems to be those who don't care much about the underlying hardware and want to express their thoughts in a program so that the compiler does its best to represent them in an executable, and those who think how they will use the available hardware to best implement an idea they're having, and feel like the compiler shall not stop them from experimenting with their idea, because their ideas are often sparked by perceiving an opportunity (e.g. you read the description of a CPU instruction you had never heard of and suddenly you figure what you could do with it and a new idea is born). Neither is right or wrong, these are completely different approaches, and it's possible that some languages are more suitable for the ones with the first approach and others are more suited to those with the second approach.
And for the same reason some Rust developers can be shocked to see what C permits and the risks that come with it, some C developers might be shocked to see what Rust tries to prevent and the difficulties that come with it. The goals are just not the same.
I hope this gives you more background about *my* difficulties with the language, which may or may not match others', but I'm expressing this to try to help the two camps listen a bit more to each other without systematically considering there is bad faith on the other side, because these attitudes have only resulted in heated debates and people quitting, which is bad for everyone. Let's just accept that everyone's brain is not the same and let's not make fun of the ones different from yours.
Posted Feb 15, 2025 10:01 UTC (Sat)
by mb (subscriber, #50428)
[Link] (74 responses)
I have not said that. I agree that your problems with Rust are very real.
>For example, why does it care about who owns a pointer to a location since a pointer is just an integer, in terms of C
Because a pointer *isn't* just an integer. Even in C.
>"'" (single quote) character (which I already forgot what it's about)
In Rust it *is* used to delimit a char. I'm not sure what your point is.
>The fact that a variable declaration is in fact a constant if you don't write "mut" also feels very strange to me
It's just a sane default, because most variables are in fact not mutable.
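A minimal illustration of that default (hypothetical snippet):

```rust
fn main() {
    let x = 1;     // immutable by default
    let mut y = 1; // mutability is opt-in and visible at the declaration
    // x += 1;     // would not compile: cannot assign twice to `x`
    y += 1;
    println!("{} {}", x, y); // prints "1 2"
}
```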
>Generally speaking I find the syntax difficult and poorly expressive
Ok. But that's really strange, because a lot of things in the syntax are really equal to C.
>I have tried some online howtos, like "30mn to rust" etc
See this book:
It takes you baby step by baby step through all those new things. And you will be able to write basic Rust programs after a couple dozen pages.
>I suspect that there are different expectations from different people (maybe generations BTW)
I also did all the things you mentioned. I also started with Basic and so on.
You seem to be complaining that Rust added things to the syntax to be able to express new things.
>and those who think how they will use the available hardware to best implement an idea...
Yes. C is completely unsuitable for that.
>Rust developers can be shocked to see what C permits and the risks that come with it
C does not permit more things than Rust.
>help the two camps listen a bit more to each other without systematically considering there is bad faith on the other side
Yes. You are in one of these two camps, as am I.
Posted Feb 15, 2025 11:08 UTC (Sat)
by jem (subscriber, #24231)
[Link] (7 responses)
>In Rust it *is* used to delimit a char. I'm not sure what your point is.
I think he means the quote character that is used to declare lifetimes.
Posted Feb 15, 2025 11:18 UTC (Sat)
by zdzichu (subscriber, #17118)
[Link] (6 responses)
See the problem?
Posted Feb 15, 2025 15:24 UTC (Sat)
by mbunkus (subscriber, #87248)
[Link]
No, I do not see a problem with characters serving different purposes simultaneously.
Posted Feb 15, 2025 20:32 UTC (Sat)
by excors (subscriber, #95769)
[Link] (4 responses)
In ML it's a type variable, representing an unspecified type that will be determined via type inference when the value is instantiated (I think). Basically the same as T in C++ templates. In Rust it's a similar idea, but just for the lifetime component of a type. The Rust compiler was originally written in OCaml, so they would have been familiar with that syntax.
However, it turns out ML had nothing to do with it. Until Rust 0.6 they used "&a/foo" instead of "&'a foo", but they weren't happy with that: https://smallcultfollowing.com/babysteps/blog/2012/12/30/... . That blog post suggested several options including "&{a}". Someone on Reddit suggested "&{'a}", to make the syntax less ambiguous. Graydon Hoare thought the braces were ugly (https://web.archive.org/web/20140716163946/https://mail.m...), so they settled on "&'a".
The use of the same syntax for loop labels is not a coincidence: the original idea was that loop labels were actually lifetimes, and you could write " 'a: { let x: &'a T = ...; }" to explicitly tie a variable's lifetime to a block (https://web.archive.org/web/20140716182842/https://mail.m...). That didn't happen, so now the loop labels are only used for control flow and exist in a different namespace to lifetimes.
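A small sketch of a loop label in use (hypothetical example): it reuses the `'name` syntax but, as noted above, lives in a separate namespace from lifetimes.

```rust
// `break 'outer` exits both loops at once, something a plain
// `break` cannot do.
fn first_pair_with_product(target: i32) -> Option<(i32, i32)> {
    let mut found = None;
    'outer: for i in 0..10 {
        for j in 0..10 {
            if i * j == target {
                found = Some((i, j));
                break 'outer;
            }
        }
    }
    found
}

fn main() {
    println!("{:?}", first_pair_with_product(6)); // prints Some((1, 6))
}
```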
Posted Feb 15, 2025 20:40 UTC (Sat)
by mb (subscriber, #50428)
[Link] (3 responses)
I'd like to add (for the people who don't know Rust, yet) that lifetime names are not restricted to single characters.
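For example (a hypothetical function, purely to show a longer lifetime name than the conventional `'a`):

```rust
// A descriptive lifetime name: the returned reference borrows
// from `data` for as long as 'source lives.
fn first_byte<'source>(data: &'source [u8]) -> Option<&'source u8> {
    data.first()
}

fn main() {
    let data = [10u8, 20];
    println!("{:?}", first_byte(&data)); // prints Some(10)
}
```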
Posted Feb 15, 2025 20:57 UTC (Sat)
by farnz (subscriber, #17727)
[Link] (2 responses)
And once you've got a good grip on how lifetimes work in your code, it's then a lot easier to apply the elision rules to remove useless lifetimes and leave your code clear to future readers. It is, of course, easier to start out without lifetimes at all, and just use owned copies (via Clone::clone()) whenever the borrow checker argues with you, but it's not always possible to do so.
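A sketch of what elision buys (hypothetical `trim_*` functions): with a single reference parameter, the elision rules fill in the lifetimes, so these two signatures mean exactly the same thing.

```rust
// Fully spelled out...
fn trim_verbose<'input>(s: &'input str) -> &'input str {
    s.trim()
}

// ...and with the lifetimes elided; the compiler infers the same signature.
fn trim_elided(s: &str) -> &str {
    s.trim()
}

fn main() {
    println!("{}", trim_verbose("  hi  ")); // prints "hi"
    println!("{}", trim_elided("  hi  ")); // prints "hi"
}
```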
Posted Feb 17, 2025 12:17 UTC (Mon)
by taladar (subscriber, #68407)
[Link] (1 responses)
Over time I noticed that more abstract code is often actually easier to reason about because the more abstract some piece of code is, the fewer operations can be applied to it (e.g. you can't write a literal to produce values out of thin air, you can only pass on values you already have, can't call just any function, just the ones in the traits (or in Haskell typeclasses) specified in the constraints).
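A tiny illustration of that argument (hypothetical function): with `T` fully abstract and no trait bounds, the body can only move values around; it cannot conjure a `T` out of thin air or call arbitrary methods on one, which is exactly why such code is easier to reason about.

```rust
// The only things this body can do with T are move, drop, or return it.
fn first<T>(items: Vec<T>) -> Option<T> {
    items.into_iter().next()
}

fn main() {
    println!("{:?}", first(vec![7, 8, 9])); // prints Some(7)
}
```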
Posted Feb 17, 2025 12:37 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
I might, in that process, end up with short names, I might not - it will depend where I get to as I dig myself out of the mess. But it's similar to how I'd dig myself out of a mess with functions named "a", "b", and data items named "ud_1", "ud_2" etc - make the names excessively verbose, and shrink them to a sane size once I've understood WTF is happening here.
Posted Feb 15, 2025 11:19 UTC (Sat)
by jengelh (guest, #33263)
[Link] (40 responses)
This use:
fn from_segments(segments: &[&Segment<'_>]) -> Option<Version> {
(it also complicates colorization rules for an editor if it can no longer expect that every opening ' has a symmetric closing '.)
Posted Feb 15, 2025 12:16 UTC (Sat)
by mb (subscriber, #50428)
[Link] (29 responses)
Would it have been better to use `a instead of 'a? I don't know. Then probably the next person would complain that it's hard to distinguish ` from ' or that it's harder to type on certain keyboard layouts.
Yes, you can write complicated looking code in Rust, but the common case is rather simple due to a set of default rules in lifetime and type inference.
The same thing applies to C. Complicated, confusing and clever macros, anyone? Or the way you read types in C? Which way do you read, left to right or right to left, where do you start and where do you flip directions?
>it also complicates colorization rules for an editor if it can no longer expect that every opening ' has a symmetric closing
Sure. Rust is not trivial to parse.
Posted Feb 15, 2025 14:52 UTC (Sat)
by jengelh (guest, #33263)
[Link] (28 responses)
That is another character where a handful of other languages (sh, perl) and documentation-related formats have popularized symmetric use. So, personally, no, I would not use `.
Posted Feb 15, 2025 15:19 UTC (Sat)
by mb (subscriber, #50428)
[Link] (26 responses)
Posted Feb 15, 2025 16:16 UTC (Sat)
by wtarreau (subscriber, #51152)
[Link] (25 responses)
I guess one of the difficulties is that when you try to learn the language, you have to face both new concepts and new syntaxes at the same time. Getting a clear idea of what you're doing when blindly copy-pasting stuff that you're having a hard time developing reflexes for is hard, and that definitely does not help getting more up to speed with it.
And clearly, regarding some of the concepts, I spent two hours extending a hello world program to append a numerical argument passed on the command line, i.e. the equivalent of printf("hello world: %d\n", argc>1?atoi(argv[1]):0). I just gave up, being constantly told by the compiler that I was doing bad stuff. It did encourage me to try different things, which is great, but each thing I tried didn't work and at some point I was going in loops. It's quite discouraging, because I spend my time telling gcc to shut up when it doesn't know, and here I felt that the compiler was thousands of times more rigid and extremist. I can hardly see a use case where this could bring me anything except pain.
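For comparison, one way that C one-liner can come out in Rust (a sketch; other spellings exist, and the `arg_to_int` helper name is invented for illustration):

```rust
// Equivalent of: printf("hello world: %d\n", argc > 1 ? atoi(argv[1]) : 0);
fn arg_to_int(arg: Option<String>) -> i32 {
    arg.and_then(|s| s.parse().ok()) // like atoi, but failure is explicit
        .unwrap_or(0)                // default to 0, as in the C version
}

fn main() {
    let n = arg_to_int(std::env::args().nth(1)); // argv[1], if present
    println!("hello world: {}", n);
}
```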
Posted Feb 15, 2025 16:28 UTC (Sat)
by intelfx (subscriber, #130118)
[Link]
Yes, that is expected. You need to learn the language, which does not end at learning the syntax. This means aligning your mental model with the language, learning some new habits, and unlearning some of the old ones.
There would be no point in Rust if it was just a clone of C with a more inscrutable syntax. Rust is valuable *precisely* because it represents more than just C with an inscrutable syntax.
> I can hardly see a use case where this could bring me anything except pain.
This does not mean such a use case does not exist. This is precisely the "hubris trap" that so many high-profile Linux developers and maintainers are falling into.
Posted Feb 15, 2025 16:30 UTC (Sat)
by mb (subscriber, #50428)
[Link] (16 responses)
Rust forces you to not code the same bug as in your C code. The one where argv[1] is not a numeric string.
Posted Feb 15, 2025 16:40 UTC (Sat)
by wtarreau (subscriber, #51152)
[Link] (15 responses)
The thing is, there are plenty of cases where I *know* it's valid, e.g. because it has been validated a few lines before one way or another, or because it's guaranteed by contract in an API or anything. Instead I feel like the extra difficulty diverts me from doing the thing I was trying to do, and that constantly working around the compiler has serious chances of making me introduce bugs the same way I occasionally introduce some by trying to shut up an inappropriate gcc warning by rewriting code differently and making a mistake while the initial one was correct. I have strong doubts about the validity of all rust code in the wild in 10 years. Sure we'll see less overflows, but control bugs are very present as well and I suspect will be harder to spot (and sometimes even fix).
Posted Feb 15, 2025 16:56 UTC (Sat)
by mb (subscriber, #50428)
[Link]
Sure. And then you just have to tell the compiler about that knowledge. It's often as simple as calling unwrap(). Or sometimes even simpler by throwing in a question mark at the end.
In C, you would at least have to add a comment about where your assumption that an error can't happen comes from. In Rust, you can write that comment into the code itself, for example with expect("The caller shall handle this").
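The point about turning a "this can't fail" comment into code can be sketched in a few lines. The port-parsing example below is hypothetical, not from this thread; it only illustrates the three handling styles mentioned (`?`, unwrap(), expect()):

```rust
use std::num::ParseIntError;

// `?` propagates the error to the caller instead of handling it here.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port: u16 = s.parse()?;
    Ok(port)
}

fn main() {
    // unwrap(): "I know this can't fail" -- panics with a generic
    // message if that assumption is ever wrong.
    let a: u16 = "8080".parse().unwrap();

    // expect(): same, but the assumption is written down in the code,
    // where a C programmer would have left a comment.
    let b: u16 = "443".parse().expect("validated a few lines above");

    let c = parse_port("22").unwrap();
    println!("{} {} {}", a, b, c);
}
```

Changing any of the literals to a non-numeric string turns the hidden assumption into an immediate, attributable panic rather than silent misbehavior.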
>and that constantly working around the compiler has serious chances of making me introduce bugs
This "constantly working around the compiler" is just because you didn't learn the language.
Everybody who knows Rust knows that the compiler is extremely helpful when dealing with errors.
> I have strong doubts about the validity of all rust code
Based on what? Your nonexistent Rust experience?
Posted Feb 15, 2025 17:27 UTC (Sat)
by intelfx (subscriber, #130118)
[Link] (13 responses)
Sure. Then someday preconditions change, API contracts get violated (accidentally, or perhaps maliciously), and the CVE database grows a new entry.
If Rust forces you to code defensively, then that is a *very good thing*. That's the entire point.
Posted Feb 15, 2025 18:52 UTC (Sat)
by wtarreau (subscriber, #51152)
[Link] (12 responses)
But are you sure that it's not C that forces you to code defensively instead, given that you have no safety belt and you're on your own? If, on the contrary, I say "it compiled so it's safe", I quickly learn not to care anymore about defensive approaches.
Posted Feb 15, 2025 19:00 UTC (Sat)
by mb (subscriber, #50428)
[Link]
Posted Feb 15, 2025 19:19 UTC (Sat)
by intelfx (subscriber, #130118)
[Link] (9 responses)
I would argue that if we are honestly trying to say "C keeps us on our toes by virtue of being such a mess", it's not a good picture either way.
Posted Feb 16, 2025 22:22 UTC (Sun)
by mathstuf (subscriber, #69389)
[Link] (8 responses)
I understand the argument in the context of automating life-risky processes[1]. But the difference here is that Rust *isn't* guaranteeing "all bugs are gone". It is guaranteeing "the compiler will tell you when your code has *a class of problems*" so that you *can* focus on the logic bugs rather than having to think about "is this index calculation going to blow us up later?" Anything that has software for life-risky bits better have some level of logic bug detection (e.g., comprehensive test suite, formal verification, etc.), but this is needed *regardless* of the language unless one is actually doing their coding in Idris or something.
[1] A self-driving car that is wrong 50% of the time keeps the driver "in the loop" more effectively than a 75% accurate self-driving car, but once you hit some threshold, there's a bad overlap between human complacency with how accurate it *usually* is and hitting the gap in the AI behavior.
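The "is this index calculation going to blow us up later?" point has a concrete shape in Rust: an out-of-range access is a defined, diagnosable failure rather than silent memory corruption. A minimal sketch:

```rust
fn main() {
    let data = [10, 20, 30];

    // Checked access: an out-of-range index yields None instead of
    // reading whatever happens to sit past the end of the array.
    assert_eq!(data.get(1), Some(&20));
    assert_eq!(data.get(7), None);

    // Plain indexing is bounds-checked as well: data[7] would panic
    // with a clear message, not scribble over memory as in C. The
    // logic bug (is 1 even the index you meant?) remains yours to find.
    println!("{}", data[1]); // prints 20
}
```

That is the "class of problems" division: the language removes the undefined-behavior outcome, while tests or verification are still needed for the logic itself.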
Posted Feb 16, 2025 22:33 UTC (Sun)
by intelfx (subscriber, #130118)
[Link] (1 responses)
That seems to align with what I was trying to say? If the only thing that keeps the codebase working is "bleed-over" of attention from memory correctness to logic correctness, that's not a sustainable practice either way (== we should instead be using tests and other stuff for logic correctness).
Posted Feb 16, 2025 23:07 UTC (Sun)
by mathstuf (subscriber, #69389)
[Link]
Posted Feb 16, 2025 22:58 UTC (Sun)
by jengelh (guest, #33263)
[Link] (5 responses)
A mechanism with 50% error rate is a mechanism that quickly gets disabled by the user. ("Fine, I'll do it myself" is effectively keeping the user in the loop, I give you that.)
Posted Feb 17, 2025 8:29 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link] (4 responses)
Posted Feb 17, 2025 8:36 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link] (3 responses)
Posted Feb 17, 2025 10:42 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (2 responses)
It will (less so now) select illegal speeds, accelerate inappropriately, do all sorts of things. I tend to refer to it as a "granny racer", given that it tries to go as fast as possible at every opportunity, yet is excessively cautious at others. It will accelerate, and then when it gets itself into trouble it will scream at me to brake ...
Cheers,
Posted Feb 17, 2025 11:18 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
Posted Feb 17, 2025 12:58 UTC (Mon)
by Wol (subscriber, #4433)
[Link]
And it breaks a whole bunch of safe UI guidelines as well, such as allowing a driver to *override* the acceleration!
Cheers,
Posted Feb 16, 2025 11:41 UTC (Sun)
by khim (subscriber, #9252)
[Link]
Not possible. Not even remotely close. If you proved to the compiler that something is safe, then you have proved to yourself that it's safe as well. Precisely because the compiler is dumb and doesn't understand many "subtle" ideas, you have to "dumb down" the proof of correctness that you have in your head to a level that the compiler would understand. But the compiler is also tireless and persistent. You couldn't convince it to like your code by writing a long but incorrect explanation, as may happen with a human reviewer. Nah. The most "defensively programmed" code that I saw was in Java or Python. Often "uselessly defensively programmed". Because there you don't need to prove anything to anyone: any nonsense that you may write would be "memory safe" (by virtue of the virtual machine that decouples you from the hardware), and then you have to program defensively and check everything 10 times, because nothing (except these redundant checks) protects the integrity of your code from bugs.
Posted Feb 15, 2025 19:57 UTC (Sat)
by dralley (subscriber, #143766)
[Link] (6 responses)
Even without an existing understanding of Rust, that probably should not have taken 2 hours to do. I'm not sure what it was exactly that you were struggling with but it wasn't a "Rust problem". This was just as easy for me to write as your C code likely was for you.
> use std::env::args;
Posted Feb 15, 2025 21:19 UTC (Sat)
by dralley (subscriber, #143766)
[Link] (1 responses)
> use std::env::args;
There's nothing complex going on here, and Rust doesn't make it any more difficult than C. It's marginally more verbose, but only because Rust forces you to make potential failure points more explicit.
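The quoted code blocks did not survive the thread's formatting, so here is a hedged reconstruction of the *kind* of program under discussion (read argv[1], fail clearly if it is missing or non-numeric), not dralley's original code:

```rust
use std::env::args;

// Both failure points -- missing argument, non-numeric argument --
// are explicit in the types; this is the "marginal verbosity"
// being discussed.
fn double_arg(arg: Option<String>) -> Result<u64, String> {
    let s = arg.ok_or_else(|| "usage: prog <number>".to_string())?;
    let n: u64 = s.parse().map_err(|_| format!("not a number: {s}"))?;
    Ok(n * 2)
}

fn main() {
    match double_arg(args().nth(1)) {
        Ok(n) => println!("{n}"),
        Err(e) => eprintln!("{e}"),
    }
}
```

The equivalent C would be shorter mainly because strtol()'s failure modes can be silently ignored.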
Posted Feb 16, 2025 1:02 UTC (Sun)
by MrWim (subscriber, #47432)
[Link]
Posted Feb 16, 2025 10:38 UTC (Sun)
by adobriyan (subscriber, #30858)
[Link] (3 responses)
Had they made main() be main(arg0: &[u8], arg: &[&[u8]]) or equivalent, it would have been more obvious what to do.
Posted Feb 16, 2025 14:19 UTC (Sun)
by khim (subscriber, #9252)
[Link]
Sure, but why would they do something that doesn't work correctly? On POSIX there is no guarantee that argv[0] even exists. In essence, that's an example of what Rust does: instead of "easy" it usually picks "correct".
Posted Feb 16, 2025 14:29 UTC (Sun)
by intelfx (subscriber, #130118)
[Link]
Yes. "For every complex problem, there's a solution that is simple, neat, and wrong."
argv is a pointer to global, mutable data. Attempting to represent it as a Rust reference is completely incorrect with respect to Rust aliasing semantics. The Rust standard library goes through some contortions to wrap argc/argv in a memory-safe abstraction, and an iterator is more-or-less the best way one can do it. Google "rust why args is an iterator" for details.
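The design can be seen in miniature: each call to std::env::args() hands out freshly owned Strings copied out of the process's argument block, so no Rust reference ever aliases that global, mutable data. A small sketch:

```rust
use std::env;

fn main() {
    // Owned copies, not references into argv: the iterator can be
    // created any number of times, and later mutation of the
    // underlying argv cannot invalidate these Strings.
    let all: Vec<String> = env::args().collect();

    // Even argv[0] is not guaranteed to exist on POSIX, which is
    // why this has to be an Option rather than a plain field.
    match all.first() {
        Some(name) => println!("program: {name}"),
        None => println!("empty argv"),
    }
}
```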
Posted Feb 16, 2025 14:52 UTC (Sun)
by intelfx (subscriber, #130118)
[Link]
Indeed, if Rust is worthless, then you can just compare it piece-by-piece to C and every idiom where C is "easier" than Rust means that C is the "winner," because Rust has no added value (by postulate) and therefore the "easier" thing wins.
However, this is a fallacy. If Rust had been just a clone of C with a worse syntax, then it would indeed be worthless, but that's not the case. Rust is valuable precisely because it is *not* a clone of C with a worse syntax. It is a different language, built on different concepts and abstractions, chosen for their *value*, and those concepts and abstractions necessitate different idioms to realize that value.
Posted Feb 15, 2025 15:30 UTC (Sat)
by mbunkus (subscriber, #87248)
[Link]
No matter where a language ends up on the spectrum, you certainly cannot satisfy everyone. Just a couple of hours ago someone complained about having to write "mut" for mutable variables in Rust, as they found it too… I don't know, tedious, I guess.
Posted Feb 15, 2025 15:59 UTC (Sat)
by wtarreau (subscriber, #51152)
[Link] (5 responses)
Yeah, exactly that thing. I'm sorry, but in this line, for me, there are far too many characters that I cannot map to something I understand. That's what I meant by "coding with smileys". At some point it becomes too hard for me to figure out which ones work together or individually and what designates what. I just can't mentally parse such strings. It reminds me a bit of when I had fun with the Brainfuck language a long time ago, for those who know it. And yes, it troubles me that this character doesn't have a corresponding closing one.
Posted Feb 15, 2025 16:34 UTC (Sat)
by intelfx (subscriber, #130118)
[Link] (2 responses)
Then you need to spend effort to *understand* it, and only then return to critique.
To put it more bluntly: the fact that the language in question has concepts that you do not understand (some of which come with their own syntax) is not a language problem; it is a you problem.
C is not the pinnacle of programming languages, so it is unreasonable to assume that a person who is proficient in C is automatically proficient in everything else (from which it would follow that if "something else" gives you trouble, the problem must lie with "something else", since you are proficient by default). Once you disown this fallacious assumption, everything else falls into place.
Posted Feb 15, 2025 16:57 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (1 responses)
Sounds like you've never worked designing a fail-safe system ...
Okay, when you're dealing with complex systems you can easily find yourself in a lose-lose situation, which is almost certainly the case here, but to say it's wtarreau's problem is just plain rubbish. "Which way is your brain wired?".
This must have been the '80s or '90s, in a magazine article. The writer was working on a GUI program, and went to see - wonder of wonders - how the users were actually using it! And one user, demonstrating something, made a mistake and swore: "I always get that wrong, what's wrong with me!"
But the lightbulb moment was that that bit was NOT CONSISTENT. Nine places in ten, what the user did was the RIGHT thing. That one place was badly designed, and there it was the WRONG thing to do. And this is why saying "syntax shouldn't be a problem" is bullshit. If your brain is programmed to think that you delimit strings with ", SQL is a damn nightmare! I'm seriously NOT used to using ' as a delimiter - I think FORTRAN used ", and DataBASIC is happy with " (actually, it doesn't care: ", ', \, whatever ...) - being forced to use ' just grates on every level!
It's like forcing an emacs power user to use vi - NOTHING WORKS. And you can't (easily) reprogram them, because everywhere else is reinforcing the emacs bindings, so vi is the odd one out, and it just grates - *all* the time.
There's quite a few places like this in programming languages. They have different heritages, they use the same syntax to mean different semantics, and are easy for people from the same heritage to learn while being a nightmare for others. And that's why I said it's a lose-lose situation. If 90% of your work is with C-style languages, every time you use a language with a different style you have to learn it from scratch. AGAIN. And AGAIN. Because your normal life is DE-programming all the new stuff you've learnt.
Cheers,
Posted Feb 15, 2025 17:03 UTC (Sat)
by mb (subscriber, #50428)
[Link]
Posted Feb 18, 2025 5:11 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (1 responses)
I don't know Rust either, and am not a C developer, but I thank you for taking the time to be open and vulnerable (in that by admitting you don't understand, you open yourself to critique of the "you 'just' need to get good" variety) about the learning process. You can often get the gist when skimming a language you don't know; e.g., the other day I was curious how Oxidized (a network device backup tool) handled a few platforms, and I could follow along well enough even though I couldn't do a variable assignment in Ruby without checking the documentation. That's not really true of Rust: I've seen a number of code examples posted of various things, and I don't get the gist at all. Even though I do believe the people who are very passionate about the benefits of Rust, I'd need to spend a _lot_ more focused attention to be able to even skim-read it, let alone write in it.
Not every person is willing to learn something radically new that makes them a beginner again, putting themselves in a position to need help from, or get critiqued for beginner mistakes by, less senior people with less experience in their area of expertise, when they could instead continue to be the trusted expert that other people look up to. Someone a long time ago noted that progress happens "one funeral at a time", but many people do learn and grow with the changing world; it's *notable* when they don't.
I said in another thread that what is needed is kernel-focused, targeted training to on-board seasoned C developers who think in machine or assembly language and in terms of how the underlying hardware works, because the mental models are quite different, and the kinds of tasks/operations needed by an application developer using the standard library and by an OS kernel developer using the kernel-internal library are vastly different. Kernel developers are going to need to understand how to design and audit an unsafe block that bangs on memory and implements something other code can use, in a way that an application developer can just abstract away by using a standard library function or some common third-party crate. This training would be best delivered where everyone can see it, on the mailing list or through some other mechanism that puts it where the people are, because expecting busy people to get hyped enough for totally self-directed training is not a reasonable assumption; you have to meet them where they are if you want to effectively advocate for change. Loudly proclaiming "I'm right, you're wrong, get with the program!", *even* and *especially* when you ARE right and they ARE wrong, isn't an effective way to get them to understand why and then advocate for the change themselves, especially when you don't give people time to _understand_ and just want them to believe you.
Like, if you want to convince wtarreau that Rust is the best thing since beer (and given the number of thoughtful, competent people who are passionate about it, it probably is), then you need to *show* them, understand that they are a _beginner_ at Rust but not at computing, and make your examples accordingly. If you get feedback that something was difficult to understand, then you should *believe* them and not argue with their own experience; that is entirely foolish. Instead, interrogate why it might be difficult for someone with a different background, empathize with them, and try to learn from their experience. Just saying the same things again, louder, like they didn't hear you the first time, is insulting to both sides and not effective. "Oh, now that you implied I must be stupid if I didn't understand this right away, I totally get it, thanks Internet person!" said no one ever.
Posted Feb 18, 2025 13:58 UTC (Tue)
by kleptog (subscriber, #1183)
[Link]
But that's one of the fundamental features of teamwork: no-one understands the whole of the kernel in its entirety. Any time a kernel maintainer is interacting with some other part of the kernel they're the "beginner that needs help". This is *normal* and *expected* and if you can't do that, it's going to make working on any large project a challenge.
As you noted, other threads here did clarify the situation somewhat: it's not the syntax (which is C-like with additions), nor even the standard library (which isn't used in the kernel anyway), but the fact that programming in Rust requires the programmer to state their intent, rather than treating the language as a "high-level assembler". That's not a question of some training; that's like asking a chemist to learn biology. You're working at a completely different level of abstraction.
Not really sure if there is an easy solution here.
Posted Feb 15, 2025 17:11 UTC (Sat)
by dskoll (subscriber, #1630)
[Link] (3 responses)
fn from_segments(segments: &[&Segment<'_>]) -> Option<Version> {
Wow. And all the criticism about Perl being line noise... huh...
Posted Feb 15, 2025 17:37 UTC (Sat)
by Phantom_Hoover (subscriber, #167627)
[Link]
Posted Feb 15, 2025 18:09 UTC (Sat)
by mbunkus (subscriber, #87248)
[Link]
Posted Feb 16, 2025 0:40 UTC (Sun)
by himi (subscriber, #340)
[Link]
Compared to something like Python (sans type annotations) there's definitely a lot of syntactic complexity, but the only things that aren't in C syntax are the bits associated with the angle brackets, and the only thing that's not in C++ is the lifetime annotation. And importantly, the syntactic elements that are used are conceptually very close to what they mean in C/C++: & means a reference to something, Foo<Bar> means a generic type Foo containing a Bar, both of which match the usage from C/C++; everything else is pretty much a one-to-one match.
The structure of the function definition is obviously different, but if you can't adjust to that change ("oh, the return type is at the end of the signature instead of in front of it") then I think it's probably reasonable to say the problem isn't the programming language so much as the programmer . . .
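To make that decomposition concrete, here is the quoted signature fleshed out with hypothetical stand-in types; the real Segment and Version from the code under discussion are not shown in the thread, so everything except the signature itself is invented for illustration:

```rust
// Hypothetical stand-ins for the types in the quoted signature.
struct Segment<'a> {
    text: &'a str, // lifetime 'a ties the segment to borrowed text
}

struct Version {
    major: u64,
}

// &[&Segment<'_>] -- a slice of references to Segments, with the
//                    borrowed-text lifetime inferred ('_').
// Option<Version> -- "a Version, or nothing": fallibility expressed
//                    in the type instead of errno or NULL.
fn from_segments(segments: &[&Segment<'_>]) -> Option<Version> {
    let first = segments.first()?;        // None if the slice is empty
    let major = first.text.parse().ok()?; // None if not numeric
    Some(Version { major })
}

fn main() {
    let seg = Segment { text: "3" };
    println!("{:?}", from_segments(&[&seg]).map(|v| v.major)); // prints Some(3)
}
```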
Posted Feb 15, 2025 11:58 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (24 responses)
I notice you didn't mention machine code there ... a fundamental part of wtarreau's experience. How close is Rust to machine code? How close is your brain's wiring to wtarreau's? Which computer (or mathematical) language does your brain think in? (As opposed to speaking in: when I'm in France, I'm translating English to French; when I'm in Germany, I'm (mostly) thinking in German.)
Cheers,
Posted Feb 15, 2025 12:16 UTC (Sat)
by pizza (subscriber, #46)
[Link] (16 responses)
There's another aspect there -- how well does machine code translate _back_ into Rust?
When you're debugging hardware or trying to analyze a random (optimized-to-the-moon with no debug symbols and _definitely_ no corresponding source code) binary, it turns out that C much more closely represents what the hardware sees and acts upon.
Posted Feb 16, 2025 1:10 UTC (Sun)
by himi (subscriber, #340)
[Link] (15 responses)
For highly optimised machine code with no corresponding debugging information or source code, there's very little chance that back-translating to /any/ higher level language will give you the original human-written code - optimising compilers just don't work that way. Any non-trivial code will be transformed quite drastically from the original source - it'll be a lot simpler in some ways (because the compiler optimised away a bunch of unnecessary stuff) as well as a lot more complex (because the compiler was doing stuff like unrolling loops or doing automated vectorising and what not); it may even be significantly different structurally (reordering blocks is allowed as long as it doesn't change the behaviour within the constraints of the language spec). Occasionally it just won't even be there at all! Back-translation won't undo those changes, it'll just give you one possible higher level language implementation of whatever came out of the final optimisation pass.
If the target is C that implementation may resemble something that a human would write, though it's probably not going to be anywhere near idiomatic human written C code; if you're targeting Rust it'll probably be /very/ different to what a human would write. But in either case the result should be human /readable/ (if the back-translation is of a similar quality), which is surely what matters in this scenario?
And you'll probably get Rust code that's far simpler than what a human would write, though almost certainly a lot more verbose - it'll be missing all the bits of the language that make it more human friendly to write. I imagine the result would be a lot easier for a C programmer to read, in fact.
Posted Feb 16, 2025 10:46 UTC (Sun)
by khim (subscriber, #9252)
[Link] (10 responses)
They do, if you add enough optimization-disabling options and write your code in a fashion close enough to machine code. And now we go back to "insults". People who use C like a portable assembler are humans, too! That's why I refuse to bend to the demands of SJWs: if someone wants to be offended, said someone would find a way to be offended no matter how much effort I put into walking on eggshells… thus I don't even try: if you don't want to talk to me, you don't have to; it's always your choice. No. "We code for the hardware" folks, quite literally, learn to write C in a fashion that makes it possible to easily go from C to assembler and back. They don't just write arbitrary C code. Is it possible to write Rust in such a fashion? Surprisingly enough, yes – but that would be highly non-idiomatic Rust full of unsafe. Maybe, but that's not what the "we code for the hardware" crowd wants. They want to continue to write code in their portable assembler – and the new folks are not cooperating! They try to bring C++ or Rust, or even just switch to a different style in C, which would tear the code from the machine code on the output and bring it closer to something that a human would write (in your words)… but for a decade or two the "we code for the hardware" guys have managed to browbeat the new guys into writing the type of code that they like (remember that Linux developers insist on rewriting all the code in drivers "to match Linux standards"). This means that the new generation, en masse, doesn't even want to touch such code: it's written in a language that they, supposedly, "know"… but it doesn't look even remotely close to what they would write! And learning to write code that's more verbose and less understandable from their POV (because it's closer to their intent… and farther from machine code)… why would they want that?
That's where the source of the drama lies – and it explains why Linux maintainers are almost all "old guys" who started working with Linux decades ago… the new generations want to "throw away" precisely those things that the "old guys" find valuable and important! Then Rust arrives… and the very first thing it does… is deepen that schism: the code that the "old guys" want is now not just no longer what the "new guys" want to learn to write – it is, now, very explicitly, marked as unsafe. See where the core of that drama lies? It's much deeper than "old C guys don't want to learn Rust" or "new Rust guys don't want to respect old C guys". Rust has only just exposed it; it hasn't caused it. And Linux written in C is their "last stand", really. None of the more modern languages (be it C++, Rust, or Haskell with Python) can be turned into a form that would make their use as a "portable assembler" feasible. That's what makes the whole story so bitter: when on one side you have guys who know, with 100% certainty, that the future belongs to them (the future may not be Rust, it could be Ada or Swift… but no matter what it would be, the "we code for the hardware" approach wouldn't be used) – and on the other side are guys "defending this tiny foothold, their homeland… that has been under attack for 30 years"… can you really imagine civil talks and mutual understanding in such a situation? I am actually impressed by how well Linus and his underlings handle the situation: only two high-level resignations so far… that's far less than I had expected.
But the real drama would happen later: when people like Theodore Ts'o and Christoph Hellwig realize that, for all their resistance and open and covert sabotage… they couldn't stop the ocean from swallowing their tiny island… then we would see high-level resignations from the other side… and those are actually more threatening: when the "ocean" loses members it's not critical… it's an "ocean"… other people would come… it's cynical, but true… but when the "island" loses members… it can crumble and collapse! That's why Linus hasn't done anything to Christoph Hellwig, BTW: the loss of marcan may be unfortunate, but it isn't critical, for sure… the loss of hch wouldn't be critical, per se, but it might trigger a mass exodus of the "old guys" too early, before the "ocean guys" are ready to pick up the slack.
Posted Feb 16, 2025 13:37 UTC (Sun)
by pizza (subscriber, #46)
[Link] (6 responses)
I think you hit the nail on the head, thank you for writing all that up.
But over the course of my career, I've noticed that fewer and fewer developers (both proportionally and in absolute terms) know or even care about how the hardware actually works. Not just to make things work well/fast, but to be able to *debug* what's going on should something inevitably go wrong.
Even thirty years ago, these skills and interests were relatively rare, but today they're actively dumped on even as they're depended upon more than ever. Core infrastructure isn't sexy, but it is necessary -- it's the "I don't care about the plight of farmers, I get my food from the supermarket" attitude all over again.
Posted Feb 16, 2025 14:12 UTC (Sun)
by khim (subscriber, #9252)
[Link] (3 responses)
It's definitely true about "proportionally", but I'm not sure it's true in "absolute terms". There are enough people who want to know how the hardware actually works – or we wouldn't be getting so many emulators on /r/rust. We had no trouble finding students who wanted to poke at bytecode and machine-code generation for an internship, that's for sure! They just don't want to think and worry about "how the hardware actually works" in every line of code they write! But isn't it the same with other human endeavors? Once upon a time every car driver was a car mechanic, too. And early plane pilots were sure a plane without an open cockpit would never work, because one has to "feel" the air to successfully maneuver the plane! Today… there are still car mechanics and car designers, and people who know how to build planes still exist… but most drivers and pilots don't really care about "how all that actually works". Why should software development be any different? True. That describes the majority. But the trouble for Linux (and for the "old timers" in general) comes from the other direction: it's one thing to "lift the lid" and understand what is happening when your program suddenly becomes 20x slower for no good reason (just a very recent experience, when a bad interaction of SSE and AVX caused us precisely that… we certainly needed to poke in the generated code to see what exactly had changed and what exactly made everything so slow… and filed a bug against the compiler), and another to keep the machine code in mind in every line you write. The first one is fun and interesting and important… the second one… no one from the "new generations" wants to do that!
Every time someone like wtarreau asks "how am I supposed to translate this machine code to Rust", the answer is always "well, there's a way, but…". But, ultimately, Rust guys are right: writing code in a way where you may always tell precisely what is generated from this or that line of source code was natural when compilers were extremely dumb and computers were extremely slow… today, even if the generated code is not always the best, does it really matter, if we don't have enough people to write anything else? Even the ones who do care about the generated machine code naturally assume that the work of writing correct code and the work of looking at what is happening at the machine-code level are separate tasks: you don't do them simultaneously!
Posted Feb 16, 2025 20:27 UTC (Sun)
by wtarreau (subscriber, #51152)
[Link] (2 responses)
And it's true that as time passes and machines improve, some older optimizations are no longer relevant. For example, I took extreme care to avoid generating bsf/bsr on Atom CPUs, because those were slow as hell there while a hand-written version was much faster, to the point of being visible in the end user's code. Nowadays first-gen Atoms have disappeared, and that distinction can disappear as well. Similarly, there's a lot of legacy code around to deal with each compiler generation's misbehavior, like gcc 4.x doing stupidities with __builtin_expect(x, 1) that turned x into an int and explicitly compared it to the value 1! Some of the complexity of old code does come from the accumulation of all such bad stuff, and many of us have been happy to fix performance trouble caused by a single stupidity in a compiler or CPU that was costing 2-3% of total performance. And code cleanups over time allow us to get rid of that stuff, which sometimes can look like pieces of art, by the way, just art that is no longer relevant.
Some of us know that such problems are recurring and want to continue to be able to provide solutions to them. Nowadays, with massively multi-core CPUs, we're seeing extreme slowdowns caused by cache-line sharing, which happens very easily if you're not careful. I've become used to organizing my structs to avoid accidental sharing between areas that are supposed to be accessible from other threads and those which aren't, for example. And that's just an example. Sometimes you realize that a function call significantly degrades performance just because pushing the return pointer onto the stack forces cache writes: in parallel you're holding a lock that another CPU tries to acquire, and since that lock is in the same cache line as the data you're manipulating, every time it wants to read the lock's state it causes a cache-line flush, which due to TSO also causes the stack to be written. I'd feel terribly useless if I were unable to easily tweak all that when such problems are faced.
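The struct-organization technique described above has a direct spelling in higher-level languages too. A sketch in Rust, assuming 64-byte cache lines (the common case on current x86 and ARM cores); the types and field names are hypothetical:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Wrapping each hot counter in a 64-byte-aligned struct keeps writes
// by one thread from invalidating the line holding the other counter
// (false sharing), which is exactly the layout concern described.
#[repr(align(64))]
struct CacheLinePadded<T>(T);

struct Stats {
    produced: CacheLinePadded<AtomicU64>, // touched by producer threads
    consumed: CacheLinePadded<AtomicU64>, // touched by consumer threads
}

fn main() {
    let s = Stats {
        produced: CacheLinePadded(AtomicU64::new(0)),
        consumed: CacheLinePadded(AtomicU64::new(0)),
    };
    s.produced.0.fetch_add(1, Ordering::Relaxed);
    s.consumed.0.fetch_add(1, Ordering::Relaxed);

    // Each field starts on its own 64-byte boundary, so the two
    // counters can never share a cache line: 2 * 64 = 128 bytes.
    println!("{}", std::mem::size_of::<Stats>()); // prints 128
}
```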
And at the same time I totally understand that the vast majority of developers don't want to care about this. A lot of code doesn't need to be finely tuned, nor to perform under specially stressful conditions. But what I like in programming is precisely getting the most from the hardware and making sure my code scales as much as possible. For the same reason I wouldn't develop a web browser in C myself and would rather have someone do it in a more suitable language, there would probably be specific points where that person would prefer to rely on low-level optimizations for certain things on the critical path, and would rely on someone doing my job in my preferred language. I think there are jobs for everyone, and when all developers realize that this should be complementary instead of a competition, we'll have made great progress!
Posted Feb 17, 2025 12:58 UTC (Mon)
by taladar (subscriber, #68407)
[Link] (1 responses)
Posted Feb 17, 2025 19:40 UTC (Mon)
by wtarreau (subscriber, #51152)
[Link]
There's no single rule for this; everyone has their own methods and intuitions depending on what they see, and based on their experience. There's no magic, it's just called "development". Tools to help with this already exist and are widely used. "perf" is one of them: there's a reason it by default shows the instructions where you're spending time, and also delivers hardware performance counters to figure out whether you're waiting for data too often, etc., and users of such tools expect to retrofit changes into their code to change the machine code's behavior and the execution pattern. It's not uncommon to achieve multiple-unit gains on a whole program's performance using perf, when you're facing a scalability issue.
Another very popular tool is godbolt.org. Similarly, it shows you the resulting asm for your code, and allows you to try many compiler flavors, versions and architectures to verify if your workaround for a limitation is portable enough. It's precisely because a lot of developers care about this that such tools exist and are popular.
The C language is quite terrible for many reasons, and C compilers are particularly stubborn and will do everything they can not to help you. However, once you've figured out that you are instead the one supposed to help them produce better code, by giving them some hints about what you're trying to do, they turn out to show quite reproducible behaviors, with the worst case being optimizations that just degrade to the default case. In this situation it's often worth investing time to help them, even if only on certain platforms, when only some of them permit a given optimization. The gains are sometimes high enough to reduce the number of machines in production, and at that point it starts to count.
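For comparison, the stable-Rust counterpart of this hint-giving style is attributes rather than builtins; for instance, `#[cold]` plays roughly the role of `__builtin_expect` by marking a path as unlikely. A sketch, not code from the thread:

```rust
// #[cold] tells the optimizer this function is rarely called, so the
// hot path can stay straight-line and the cold code can be laid out
// elsewhere -- roughly the stable-Rust analogue of __builtin_expect.
#[cold]
#[inline(never)]
fn handle_rare_error(code: u32) -> u64 {
    eprintln!("rare error {}", code);
    0
}

fn process(value: u64) -> u64 {
    if value == u64::MAX {
        // Unlikely branch: its call target is marked cold.
        handle_rare_error(1)
    } else {
        value.wrapping_mul(2) // hot path
    }
}

fn main() {
    println!("{}", process(21)); // prints 42
}
```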
Posted Feb 16, 2025 15:54 UTC (Sun)
by dralley (subscriber, #143766)
[Link] (1 responses)
C represents a very simplified model of how *some* hardware works. There has also been a lot of convergent evolution such that hardware is designed to work in such a way that C works well with it, because C and C-like languages are so important.
Posted Feb 16, 2025 17:33 UTC (Sun)
by pizza (subscriber, #46)
[Link]
Please read and respond to what I actually wrote.
Posted Feb 16, 2025 15:20 UTC (Sun)
by corbet (editor, #1)
[Link] (1 responses)
You are (at great volume) doing something similar with this strawman you have created wherein kernel developers really want to be working in assembly. I don't know any kernel developers like that; this image is not really helpful to the discussion.
There are times when you have to know precisely what the hardware is doing; poking an MMIO region, configuring page tables, implementing lockless algorithms. Happily, Rust enables all of that, and everybody knows it. Beyond that, kernel developers understand the problems of premature optimization as well as anybody else.
Khim, you have posted 26 times (at last count) on this article; people are broadly tuning you out. Can I ask you, please, again, to give it a rest?
Posted Feb 16, 2025 15:27 UTC (Sun)
by khim (subscriber, #9252)
[Link]
Fine with me. It's not as if I can change the future; I can only show my vision of it and help someone better prepare for it… and I hope that those who wanted to do that saw something interesting for them. But I'm not the “great savior”; those who want to ignore the future may continue to do so until it arrives.
Posted Feb 16, 2025 19:09 UTC (Sun)
by branden (guest, #7029)
[Link]
Is anybody else reminded of "Only two remote holes in the default install, in a heck of a long time!"?
Posted Feb 16, 2025 13:18 UTC (Sun)
by pizza (subscriber, #46)
[Link] (3 responses)
I'm not talking about "give the original human-written code" -- I'm talking about *what the machine sees and operates on*.
Because that is what the hardware is _actually_ acting upon, not what humans _think_ or _intend_ for the hardware to act upon.
Do you see the difference?
Posted Feb 16, 2025 14:46 UTC (Sun)
by khim (subscriber, #9252)
[Link] (2 responses)
Well… sort of. But that's the only thing you care about 99.9% of the time! If your intent is correctly expressed… and the compiler accepted it… and correctly compiled it… then everything should “just work”. The machine code should only matter when there's a bug somewhere, shouldn't it? It could be a bug in your understanding of the language rules, or maybe a bug in your description of the hardware requirements, or even a bug in the compiler (these also exist, sure)… but that's the rare exception, not the rule! And the generated code can be drastically different depending on the exact version of the compiler, what library you used, what optimizations were enabled and so on… why would you even care about that, if things work as expected?

Basically: what the “old school” uses as the basis, as the beginning of understanding of the language… the new generation puts at the very end… something to study and understand after you already know the language “inside out” and can use it. Then, finally… it's time to “open the lid” and see how it interacts with hardware.

But the thing that's really scary and… not sure what word is best… maybe “sickening” in that whole story: the Rust guys are not the weirdos… the Linux maintainers are! Just think about it: programming languages since ALGOL 58 were developed entirely independently from machine code, and then hardware was made to better accommodate these high-level ideas. Thus “how would this high-level construct be implemented in hardware” was never the central question, but more of… a detail of implementation, I guess.
Then the microprocessor revolution happened… and the Unix revolution happened… and for two decades people had to deal with machine code and assembler – not because they liked it, but because there was no choice: their pitifully underpowered systems couldn't deal with anything like ALGOL or LISP – and that brought us a generation or two of developers who think that knowing how their C code is translated into machine code is important… but that whole thing is a quirk of history! Neither the people who developed computers and computer languages originally thought it was important (and should drive anything), nor do the people following after them think in that vein (computers are powerful enough that “opening the lid” on the machine code is a rare debugging tool and not something you deal with all the time)! For a long time the “we code for the hardware” guys were needed because tiny microcontrollers couldn't be coded in anything but assembler… but today, when your charger has a more powerful CPU than Apollo 11… that final bastion is crumbling, too!
Posted Feb 16, 2025 14:59 UTC (Sun)
by pizza (subscriber, #46)
[Link] (1 responses)
Not just bugs; it's the overwhelming norm when you *don't* have the source code and are trying to figure out WTF the binary is doing.
But even if it were just bugs, that doesn't change the need for that basic capability.
Posted Feb 16, 2025 15:08 UTC (Sun)
by khim (subscriber, #9252)
[Link]
Why would the relationship between machine code, C and/or Rust even matter in that case? You have the machine code; it works in a certain way; why involve high-level languages at all? Especially since said code could be hand-written assembly… It changes the direction: instead of trying to imagine what the source code that produced that output might look like – you can just look. That essentially turns what you perceive as a “basic capability” into a “parlor trick that can be used to amuse people, but has no relevance to anything”.

P.S. I actually need to debug programs that I have no sources for pretty often at my $DAYJOB, but since they can be written in Java, C#, Go, or maybe even in some homegrown language with a homegrown compiler… and are rarely written in C… I never had the luxury of back-translating low-level machine code into high-level code… maybe that's what made it easy for me to accept the impossibility of going back to C++ or Rust from the machine code. It's hard to mourn the loss of something you never had in the first place.
Posted Feb 15, 2025 12:23 UTC (Sat)
by mb (subscriber, #50428)
[Link]
I learnt to read and write assembly code alongside Basic.
>How close is Rust to machine code?
It's as close or as far away as C is.
C and Rust are both languages defined in terms of an abstract machine model.
>When I'm in Germany, I'm (mostly) thinking in German
There's another option: to not think in a language at all.
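The abstract-machine point above can be made concrete with a small sketch: the language's own rules, not whatever the target CPU happens to do, decide what an operation means. Integer overflow in Rust is a good example:

```rust
fn main() {
    // The abstract machine, not the hardware, defines overflow here:
    // plain `+` panics in debug builds, while these methods each pick
    // an explicit, portable semantics regardless of the target CPU.
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN); // two's-complement wrap
    assert_eq!(i32::MAX.checked_add(1), None);      // overflow detected
    assert_eq!(i32::MAX.saturating_add(1), i32::MAX); // clamp at the limit
}
```

C's abstract machine makes the same move in the opposite direction: signed overflow is undefined behavior, regardless of what the underlying hardware would do.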
Posted Feb 15, 2025 14:24 UTC (Sat)
by khim (subscriber, #9252)
[Link] (5 responses)
Why does it even matter? Machine code generated by Rust is “good enough”. That's it. It's the same as with [modern] C. Not too much different, I suspect.

I also passed all these stages (except MS-DOS 2.11; I think the oldest one I used in the XX century was MS-DOS 3.20, although I played with MS-DOS 2.11 in an emulator in the XXI century), but I haven't stopped there. When C passed the stage where it was possible to pretend you were dealing with a portable assembler, I embraced the duality: first you explain to the compiler what your program does – and then, perhaps, check what the compiler actually produced. The second step is very much optional, not critical. I wouldn't cry if the compiler misunderstood and generated suboptimal code, most of the time.

Good analogue. Essentially: the time has come to stop thinking about our programs only in terms of machine code. Or even primarily in terms of machine code.

Again, to understand the whole context it's best to reread the story of Mel, the real programmer. Please do. Then you would understand what this whole schism between Rust “believers” and C “diehards” is all about. Or at least that critical tidbit: I have no idea whether wtarreau or mb ever did tricks similar to what is described there… but I did. My first ever programmable device was a calculator left over from my sister (she was already in college when I was in primary school… a 10-year difference). And I did all the tricks that the “story of Mel” talks about. The calculator had no drum, of course, but with 98 bytes (not kilobytes, not megabytes, but bytes) of programming memory… you did what you could. And then, on a real computer… with a whopping 16KiB of memory… I could finally forget about all that silliness. Of course I can still recognize these tricks, and for some time after that somewhat similar tricks were still employed, as Jumping into the middle of an instruction is not as strange as it sounds tells us… but today we no longer care about these things.
Most of the time. And Rust proponents simply assume that people have stopped thinking about these tricks. Completely. For the tricky cases when, once in a blue moon, you may jump into the middle of an instruction or do something not expressible in a high-level language… sure. There's asm! for that. But most of the time? No. C was already unsuitable for that, so why would anyone even try it with Rust?

But the trick is that not everyone believes C is no longer suitable for that. If you add enough “disable optimizations” flags (which, to the guys who think like wtarreau, sound more like “stop breaking my code, damn it” flags) you may still pretend that you can write “portable assembler” in C… but how long would that keep working? It wouldn't work with Rust, at least: Rust doesn't offer any such optimization flags – precisely to discourage that “thinking in terms of machine code” (again: you want machine code – there's asm! for you). There are some flags, but they don't change the language and don't make illegal constructs legal.

The Rust users assume it doesn't matter: if you stop thinking about your program in terms of machine code and accept the fact that "What The Hardware Does" is not What Your Program Does… then going from C to Rust is not too hard… but that's precisely the step that people are struggling with.

Ultimately, though, the Rust people are right: first there would be a mandate to only use safe languages, then all the flags that make it possible to still pretend that pointers are just integers would go away… it would take a decade or maybe two, but eventually C would turn into Fortran, pure legacy… OS kernels would be rewritten and people would accept that… and if you accept and embrace that future, then learning to think in terms of a high-level language would be needed anyway… so why delay the inevitable? But the “we code for the hardware” developers continue to fight that future tooth and nail! They just won't accept it!
Especially the ones who are kernel maintainers: their jobs are more than secure, more-or-less guaranteed for the next 10 years or so… why should they do anything to accommodate these pesky new kids? It's very explicit, isn't it: Telling people what they have been doing for their entire career is hopelessly broken, is not a selling point especially when they know better. They know what they have been doing gets the job done.

P.S. It's also very funny how they say that Rust is not acceptable because it breaks their habits, and then turn around and celebrate an extra-strong Rust (for reference: SPARK picked up its pointer-safety story from Rust and thus, of course, inherited all the restrictions and limitations… in fact it's more strict than Rust… significantly more strict) – but I guess when someone is in a “they are coming to take my job” position, simple logic takes a backseat and anything that may delay the inevitable becomes fair game…

P.P.S. The real question about memory-safe language adoption would be the question of liability. After the usual “lack of liability” disclaimers are deemed illegal… insurance companies would kill C very quickly, simply by offering sharply different prices for things written in C and things written in other, safer, languages. Whether this would push Rust to be adopted, or whether some other languages would be deemed more suitable, remains to be seen, though. It would be funny if Ada saw a resurgence and, instead of Rust, people were forced to adopt something even more strict and even more limited. But my current bet is still on Rust, because it's better balanced between flexibility and strictness.
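For completeness, the escape hatch mentioned above (“there's asm! for that”) looks like this in today's stable Rust. A minimal sketch, with a made-up function name; the assembly is x86-64-only, with a plain-Rust fallback on other targets:

```rust
// When you really do want to dictate the exact machine code, the asm!
// macro takes over for exactly that region and nothing more; the rest
// of the program stays in the abstract-machine world.
#[cfg(target_arch = "x86_64")]
fn add_via_asm(a: u64, b: u64) -> u64 {
    let mut out = a;
    // Intel syntax by default: add <destination>, <source>
    unsafe { std::arch::asm!("add {0}, {1}", inout(reg) out, in(reg) b) };
    out
}

#[cfg(not(target_arch = "x86_64"))]
fn add_via_asm(a: u64, b: u64) -> u64 {
    a + b // elsewhere, let the compiler pick the instructions
}

fn main() {
    assert_eq!(add_via_asm(40, 2), 42);
}
```

The point of the design is visible in the cfg split: the hand-written instructions are an explicit, per-architecture island rather than an assumption spread across ordinary-looking code.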
Posted Feb 15, 2025 16:35 UTC (Sat)
by wtarreau (subscriber, #51152)
[Link] (4 responses)
It's not on purpose. I mean, the more I read the articles, the more I feel that there are different jobs, and some people don't understand why others just don't want to switch jobs. It's exactly like here in Europe, where you're considered to have failed if you do tech stuff all your life instead of becoming a manager and managing people despite having zero skills for that.
I think that there are people who are at ease with thinking at low levels, the same way other people are at ease in mechanics and a lot of similar stuff, and there are people more at ease at higher levels and relying on tools to abstract the underlying levels. Both are equally fine and needed. But just like no python developer is asked to learn to program PIC or AVR in assembly, python developers do not expect those programming the microcontroller in their alarm clock to learn python package dependencies. I don't see why there isn't the same type of cross-respect between people doing lower-level stuff in C and those doing higher-level stuff in Rust.
> The real question about memory safe languages adoption would be question of liability. After usual disclaimers about “lack of liability” would be deemed illegal… insurance companies would kill C very quickly. By simply offering sharply different prices for things written in C and things written in other, safer, languages.
Most of the work being done in C these days is only extending stuff that already exists, it's not about writing entirely new stuff. That makes dropping the language quite hard, as is currently seen in the kernel by the way.
Posted Feb 15, 2025 16:45 UTC (Sat)
by mb (subscriber, #50428)
[Link] (1 responses)
Yes.
The common approach is that new code shall be written in a safe language.
Posted Feb 15, 2025 17:06 UTC (Sat)
by khim (subscriber, #9252)
[Link]
This only works with code that's not supposed to change. But the Linux kernel does change. And quite actively, at that. So the only viable alternative is to rewrite it in some other, safer, language – or replace it. Replacement would happen if the Rust rewrite doesn't take hold: there would be fewer and fewer maintainers, doing anything would become harder and harder and, at some point, one of the numerous projects aiming to replace Linux would be able to catch up. Of course, for that to happen, Rust for Linux has to fail first… and despite an occasional drama we don't see it failing, yet.
Posted Feb 15, 2025 17:02 UTC (Sat)
by khim (subscriber, #9252)
[Link] (1 responses)
Are you sure? I rather suspect that a significant percentage of CircuitPython users go on to “program PIC or AVR in assembly”. They just start with Python because it's easier to learn. Why not? That's a valuable skill if you are a Python user.

It's easy to see why: we are, quite literally, talking about the ones who will retire and the ones who will replace them. They don't “work on different levels” like users of MicroPython and users of full-blown CPython. Not really.

“Dropping the language” is only hard when you are not willing to drop the whole package. But give the companies enough incentive – and they would be ready to drop the whole thing and rewrite from scratch. The interesting question is not whether programs written in C will be replaced with programs written in memory-safe languages (I rather suspect that a few years down the road the pressure to stop using non-memory-safe languages will be too hard to ignore) but how exactly that will happen: will the Linux kernel be rewritten in Rust, or will it be replaced by something else? And if it isn't rewritten in Rust but replaced – will the replacement use Rust or something else (Ada? Swift?). Those are the real questions.

The big issue that hurts people like marcan is that technical changes destined to happen in one-funeral-at-a-time fashion are very hard to expedite. You cannot pressure the majority, which means that the initial stages of replacement happen very slowly. The mass exodus and resignation of the “old timers” has to happen when their replacements are numerous enough; otherwise you risk destroying the whole thing, if they leave before someone is ready to replace them.
Posted Feb 15, 2025 21:12 UTC (Sat)
by pizza (subscriber, #46)
[Link]
CircuitPython is a tiny minority of the overall Python ecosystem.
In fact, I'd wager the $7 I have in my pocket right now that proportionally far more C developers are likely to write PIC or AVR asm than CircuitPython users.
Insofar as directly writing PIC or AVR asm is necessary to begin with.
Posted Feb 15, 2025 20:02 UTC (Sat)
by roc (subscriber, #30627)
[Link] (1 responses)
When I write C++ or Rust I often think about the code that will be produced. This still matters at times, and I have side projects (rr, Pernosco) that require deep understanding of binary code. Last week I wrote code to decode some Aarch64 instructions! But when working with large systems you have to think about a lot of things and "producing the machine code I want" is usually not the most important thing.
> why does it care about who owns a pointer to a location
Every C programmer working on non-toy software has to care about who owns the memory each pointer points to. If you don't, you drown in memory leaks, crashes, and security vulnerabilities. Rust's premise is, given the importance of this and that you have to think about it all the time, the compiler should care about it too so it can make sure you get it right.
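That premise can be sketched minimally (the function names here are mine, purely illustrative): the compiler tracks which function owns an allocation, and a use-after-move is rejected at compile time rather than becoming a leak or a crash.

```rust
// Takes ownership: the Box (and its heap allocation) is freed when
// `consume` returns, so no other code can still be holding it.
fn consume(b: Box<i32>) -> i32 {
    *b
}

// Merely borrows: the caller keeps ownership and the allocation lives on.
fn inspect(b: &i32) -> i32 {
    *b
}

fn main() {
    let value = Box::new(41);
    let seen = inspect(&value); // fine: only borrowed, `value` still usable
    let taken = consume(value); // ownership moves into `consume`
    // println!("{}", value);   // would not compile: `value` was moved
    assert_eq!(seen, 41);
    assert_eq!(taken, 41);
}
```

In C the same distinction exists only as a convention in the function's documentation, which is exactly the point being made above.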
Posted Feb 15, 2025 21:21 UTC (Sat)
by pizza (subscriber, #46)
[Link]
The destination of a pointer isn't always memory. Heck, I'd go so far as to say it rarely is.
Posted Feb 14, 2025 21:11 UTC (Fri)
by ferringb (subscriber, #20752)
[Link]
Well phrased, and exactly my experience.
Posted Feb 14, 2025 2:53 UTC (Fri)
by jmalcolm (subscriber, #8876)
[Link] (2 responses)
All we need is to build a vibrant community, including talented contributors, around something new.
Anecdotally, it seems that about 70% of all new kernel initiatives are written in Rust these days. Statistically, it feels like the next "Linux" is likely to be written in Rust. Perhaps it is Redox. Perhaps not.
Microsoft apparently wrote an entire OS in C# and that was before they added a bunch of features that would make that easier. I love C# but it is not an obvious choice to write a kernel in. It is maybe a better choice than a lot of people think though. It can compile to completely native code. It gives you low-level memory control and precise bit layout control if you want it. The garbage collector can be completely avoided if needed. And anywhere you have the luxury of forgoing such low-level control, it is an excellent, safe, and productive language with a crazy comprehensive standard library. It would be interesting to see a C# OS where the VES/CLR (.NET version of the JVM) was a feature of the kernel or micro-kernel.
Posted Feb 14, 2025 15:22 UTC (Fri)
by excors (subscriber, #95769)
[Link]
Their original attempt wasn't quite C#:
> Singularity is written in Sing#, which is an extension to the Spec# language developed in Microsoft Research. Spec# itself is an extension to Microsoft’s C# language that provides constructs (pre- and post-conditions and object invariants) for specifying program behavior. ... Sing# extends this language with support for channels and low-level constructs necessary for system code.
and it wasn't the entire OS:
> Counting lines of code, over 90% of the Singularity kernel is written in Sing#. While most of the kernel is type-safe Sing#, a significant portion of the kernel code is written in the unsafe variant of the language. The most significant unsafe code is the garbage collector, which accounts for 48% of the unsafe code in Singularity. Other major sources of unsafe Sing# code include the memory management and I/O access subsystems. Singularity includes small pockets of assembly language code in the same places it would be used in a kernel written in C or C++, for example, the thread context switch, interrupt vectors, etc. Approximately 6% of the Singularity kernel is written in C++, consisting primarily of the kernel debugger and low-level system initialization code.
(I think unsafe C# is basically normal C# plus raw pointers, and the compiler won't try to verify correctness. So it's quite similar to unsafe Rust.)
Singularity was succeeded by Midori which reportedly used "vanilla C#" instead of Sing#, but I guess it would still have a similar proportion of native code. There's an interesting series of posts about Midori at https://joeduffyblog.com/2015/11/03/blogging-about-midori/
The key idea of Singularity was that process isolation could be enforced via the high-level language's type safety and memory safety rules, instead of using MMU hardware, and that let them achieve the low-overhead IPC needed for a microkernel. But I think that idea is seriously undermined by Spectre; it turns out you do need support from hardware to provide security boundaries between processes, and that removes one of the main benefits of writing the whole system in a managed language.
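The parenthetical comparison above (unsafe C# as "normal C# plus raw pointers, unverified") maps closely onto Rust; a minimal sketch of the Rust side of that bargain:

```rust
fn main() {
    let mut buf = [10u8, 20, 30];
    let p: *mut u8 = buf.as_mut_ptr();
    unsafe {
        // Inside `unsafe`, raw-pointer arithmetic is allowed and the
        // compiler no longer proves bounds or aliasing for you --
        // roughly the same trade unsafe C# / Sing# made for the
        // garbage collector and I/O subsystems.
        *p.add(1) = 99;
    }
    assert_eq!(buf, [10, 99, 30]);
}
```

As in Singularity, the value is that the unverified portion is a small, explicitly marked fraction of the code rather than the default everywhere.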
Posted Feb 15, 2025 0:25 UTC (Sat)
by khim (subscriber, #9252)
[Link]
Yes, in 20 years OS kernels will all be in memory-safe languages. But I'm not 100% sure they will all be in Rust and, more importantly for the current discussion, the question is whether Linux would be among them.
Posted Feb 14, 2025 12:48 UTC (Fri)
by smurf (subscriber, #17840)
[Link] (3 responses)
Can I please have some of the drugs you're on?
Seriously. The whole point of Rust is that it enforces guarantees which you can't even express in C (memory safety, lifetimes, locking rules, no NULL pointers, …), so where would your autotranslator get those from? Thin air?
… well you could wrap the whole kernel in a big "unsafe" block and then rewrite the resulting mess to safe Rust piece by piece, but if you have to do that anyway you might as well start with the original C version and save everybody the mountain of bugs that such autotranslation would invariably introduce.
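One of the guarantees listed above (locking rules) can be sketched concretely: in C, "take the lock before touching this field" is a comment and a convention, while in Rust the data lives inside the lock, so unlocked access simply doesn't compile. A minimal example:

```rust
use std::sync::Mutex;

fn main() {
    // The protected data is *inside* the Mutex: there is no way to
    // reach the integer without going through lock(), which is the
    // locking rule a C header can state only in a comment.
    let counter = Mutex::new(0u32);
    {
        let mut guard = counter.lock().unwrap();
        *guard += 41;
    } // lock released here, when the guard goes out of scope
    *counter.lock().unwrap() += 1;
    assert_eq!(*counter.lock().unwrap(), 42);
}
```

This is the kind of property an autotranslator would have to conjure from thin air: nothing in the C source says which lock guards which data.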
Posted Feb 14, 2025 13:52 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
As Rust code goes, the output of C2Rust is pretty awful today. The open question is whether it's going to be easier to manually rewrite all the C that benefits from rewriting into good Rust, or to put together Rust → Rust rewrite rules that fix the faults in C2Rust's output; if fixing C2Rust's output is easier than rewriting C into Rust, then people will prefer automatic translation.
Note, too, that there's the possibility of semi-automated rewriting of C2Rust's output - have the human identify useful properties (e.g. "this function's return value is an OwnedFd, not a libc::c_int") and have the rewrite tool do conversions of code based on what you've just told it (in this case, changing the return type to OwnedFd, then adjusting all function parameters that "obviously" borrow to BorrowedFd and those that look like they consume it to OwnedFd, leaving unclear cases as RawFd for the human to fix).
Of course, none of this will happen unless interested parties make it happen.
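The annotation-driven rewrite described above can be sketched minimally (the function names here are hypothetical, and the example is Unix-only since it opens /dev/null):

```rust
use std::fs::File;
use std::os::fd::{AsFd, AsRawFd, BorrowedFd, OwnedFd, RawFd};

// What C2Rust-style output looks like today: an untyped integer that
// says nothing about who is responsible for closing the descriptor.
fn raw_style(fd: RawFd) -> RawFd {
    fd
}

// After a human marks "the caller keeps ownership":
fn borrows(fd: BorrowedFd<'_>) -> RawFd {
    fd.as_raw_fd()
}

// After a human marks "this call consumes the descriptor":
fn consumes(fd: OwnedFd) -> RawFd {
    fd.as_raw_fd() // the fd is closed when `fd` drops at the end of this call
}

fn main() -> std::io::Result<()> {
    let file = File::open("/dev/null")?;
    let n = raw_style(file.as_raw_fd());
    assert_eq!(borrows(file.as_fd()), n); // `file` still usable afterwards
    let owned = OwnedFd::from(file); // ownership leaves `file`
    assert_eq!(consumes(owned), n); // descriptor closed inside
    Ok(())
}
```

The semi-automated step would then propagate each such annotation outward, leaving only the genuinely ambiguous `RawFd` call sites for a human to resolve.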
Posted Feb 14, 2025 13:57 UTC (Fri)
by butlerm (subscriber, #13312)
[Link] (1 responses)
Second, you do not seem to have a great deal of confidence in the potential of semantic analysis of source code (code that necessarily follows strict correctness constraints, like the ones used in the Linux kernel) to identify the properties that you express explicitly in Rust, and which in my opinion should be expressed better in any modern systems programming language.
The constraints required for the Linux kernel to function correctly could themselves be expressed in a model or meta-language designed for expressing such constraints, and that model could be used to read between the lines, so to speak, about what is actually going on with locking and a considerable number of other things in any given piece of source code.
Computer science is a rather new field compared to something like physics, chemistry, or materials science, and in some ways is still in the stone age. Computer-aided software engineering isn't used nearly as much as it could be, and for some reason people do a lot less of it than they did as recently as three decades ago. Not sure why that is, but solving problems like the ones the Linux kernel is facing seems like it would benefit from a lot more of it.
Posted Feb 14, 2025 17:33 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Real computer science is now more than 70 years old. That's about the age modern electrodynamics had reached by 1940 (Maxwell's equations were formalized in 1861). There are really no excuses about it being "a new field" anymore.
By now, applied programming is a well-established area. We have a set of best practices, things to avoid, and we will likely even be getting government regulations to establish consumer safety.
Posted Feb 16, 2025 20:20 UTC (Sun)
by timrichardson (subscriber, #72836)
[Link] (2 responses)
Posted Feb 17, 2025 0:17 UTC (Mon)
by butlerm (subscriber, #13312)
[Link]
Posted Feb 17, 2025 13:38 UTC (Mon)
by taladar (subscriber, #68407)
[Link]
Posted Feb 14, 2025 2:21 UTC (Fri)
by jmalcolm (subscriber, #8876)
[Link]
From the Redox web page, they are aiming to "be a complete alternative to Linux and BSD" that offers "source compatibility with Linux/BSD programs".
"complete alternative" sounds like a thrown gauntlet to me.
That said, while Linux source code compatibility is a goal for Redox, it actually seems more like a rewrite of Minix. If Redox succeeds, it will be the ultimate vindication for Andrew Tanenbaum and his famous "Linux is obsolete" flame-war with Linus. For years we have all concluded that Linus won that one. Who knows, maybe a microkernel will win in the end.
Posted Feb 14, 2025 1:36 UTC (Fri)
by jmalcolm (subscriber, #8876)
[Link]
Posted Feb 14, 2025 12:32 UTC (Fri)
by pizza (subscriber, #46)
[Link]
Those performing the yelling or blocking usually have a _very_ different view of what constitutes a "good reason" versus the recipient.
(That doesn't make either party necessarily correct, but IME the recipient is _usually_ "wrong")
Posted Feb 13, 2025 17:03 UTC (Thu)
by dobbelj (guest, #112849)
[Link] (9 responses)
Posted Feb 13, 2025 18:10 UTC (Thu)
by dralley (subscriber, #143766)
[Link] (7 responses)
There has been and continues to be a lot of bad behavior in kernel-land, and it was IMO the wrong move for leaders to step in and chastise him without actually addressing the issue at the root of this kerfuffle.
Posted Feb 13, 2025 20:41 UTC (Thu)
by ferringb (subscriber, #20752)
[Link] (6 responses)
As to the patch that kicked it off- https://lwn.net/ml/all/20250108122825.136021-3-abdiel.jan... . Even if you don't know rust, you should be able to read this.
Either introducing a centralized abstraction that drivers use, or (Hellwig's directive) having every driver try to muddle this out themselves… one of those is good engineering. The other I lack an explanation for, especially when the person arguing for it doesn't have to fricking maintain it and the R4L contract is "Rust can be broken basically at will".
I really wish people would read the patches or pull stuff like above; it's easy to say "dudes pulling drama" until you start pulling the technical side and going "wait, wtf?".
Posted Feb 14, 2025 8:49 UTC (Fri)
by sima (subscriber, #160698)
[Link] (5 responses)
Posted Feb 14, 2025 9:14 UTC (Fri)
by dottedmag (subscriber, #18590)
[Link] (2 responses)
I'm sorry, but calling it «religiously motivated» is applying optics from a different country, epoch or society, like calling someone out for slavery if they mention Slavic people.
Posted Feb 14, 2025 14:55 UTC (Fri)
by Wol (subscriber, #4433)
[Link]
Don't confuse religion with Religion. Don't confuse "a way that works" with "the one true way". And kernel maintainers who believe in "The One True Way of Kernighan and Ritchie" are definitely Religious with a very big R.
On the other hand, there's probably a lot of them who believe in the Standard of C (not sure if it's C89, C11, C18, Gnu or Brian) "because it works for me". And they're religious with a small "r". If a Rustacean moved in next door, they wouldn't care.
I'm somewhat Religious about databases as other people will very definitely attest - Relational/SQL is the work of the devil :-) - not really, it does have SOME good points. The *maths* is great ... not much else, though.
Cheers,
Posted Feb 14, 2025 15:23 UTC (Fri)
by marcan (guest, #103032)
[Link]
That is exactly what I implied. I started collecting a (private, as I stated in the post, "not for public consumption") list of frustrating kernel maintainers, and was amused by the fact that the first 3 were names like that. Then I made an (admittedly tasteless) joke about it. I thought it was clear it was a joke (triple question marks...), and then edited in the /s when some people didn't get the intent.
Posted Feb 14, 2025 10:09 UTC (Fri)
by intelfx (subscriber, #130118)
[Link]
Really? You may or may not consider the general act of calling someone out on social-adjacent media as "conduct unbecoming" (that's not a question that I'm going to opine on in this message), but seriously considering this kind of thick sarcasm as some sort of religiously motivated hate speech is probably the most disingenuously uncharitable interpretation of someone's words I have *ever* seen.
Posted Feb 14, 2025 13:45 UTC (Fri)
by valderman (subscriber, #56479)
[Link]
I don't think Hector's Mastodon posts were terribly constructive, but it's pretty clear that this is not the "toxic outsider baselessly attacks innocent kernel maintainers" situation you and some others are trying to make it out to be.
Posted Feb 14, 2025 22:59 UTC (Fri)
by raof (subscriber, #57409)
[Link]
A core subsystem maintainer made it clear he was unilaterally vetoing Rust in the kernel¹ for non-technical² reasons. That *has* to be resolved, but I don't think the kernel community really has a way to resolve it within the normal processes.
Absent this sort of external pressure, it seems what people expected to happen was that Linus would eventually ignore the NAK and merge the patches, and everyone would try to ignore the fact that there's a core maintainer working directly against an agreed project goal. That's a terrible state for the community to be in!
¹: as most non-trivial uses of Rust would need to go through his subsystem
²: or, at least, no technical arguments that weren't already raised and considered before merging the initial Rust support
Posted Feb 13, 2025 17:01 UTC (Thu)
by Phantom_Hoover (subscriber, #167627)
[Link] (33 responses)
Posted Feb 13, 2025 18:44 UTC (Thu)
by proski (subscriber, #104)
[Link] (29 responses)
Asahi runs on expensive hardware that is marketed for its features. It's no surprise that some users feel entitled to have all features work for the money they paid for the hardware. (In practice, the limitations can be mitigated by using different cables or by adding inexpensive hardware such as a USB headset.)
Rust in Linux is also a known pain point. We know about it from other LWN stories.
Posted Feb 13, 2025 20:39 UTC (Thu)
by Phantom_Hoover (subscriber, #167627)
[Link] (28 responses)
Posted Feb 13, 2025 20:47 UTC (Thu)
by koverstreet (✭ supporter ✭, #4296)
[Link] (27 responses)
I think we can show those people some understanding and appreciation for the work they've done, instead of just dismissing their grievances. If things get to the point where they quit, we all lose.
And the kernel community is not always a particularly welcoming place, some of his grievances do sound quite real.
A substantial fraction of the comments I see any time something like this comes up are just wanting to sort things into good vs. bad, winners vs. losers; "oh he screwed up/acted out therefore it's fine for him to leave or be shown the door", and I find those particularly unhelpful.
Posted Feb 14, 2025 10:26 UTC (Fri)
by Phantom_Hoover (subscriber, #167627)
[Link] (26 responses)
Posted Feb 14, 2025 12:40 UTC (Fri)
by dralley (subscriber, #143766)
[Link] (25 responses)
The thing is, on a broader level marcan is still entirely correct that remaining insular and refusing to cooperate with new people is just going to result in a death spiral. The kernel needs a pipeline of new dedicated long term contributors to survive, but the culture and the process seems to do a good job of scaring them away or burning them out.
Posted Feb 14, 2025 13:19 UTC (Fri)
by pizza (subscriber, #46)
[Link] (20 responses)
I've brought this up before here.
In nearly every profession or field of study, the expectation is that the new folks must learn and understand the whats, hows, and (most importantly) whys of the way things are being done.
Some professions require literally a decade (or more) of study and apprenticeship. Others may have harsh and gruelling training regimes -- and these traits (and the emergent cultures) are known in advance by those considering these professions.
"But it's too haaaaaard" simply does not fly. Do you want excellence, or not?
It turns out that folks are not actually interchangeable; some lack the physicality, skills, and/or temperament to succeed. Not everyone has what it takes to be a doctor or pilot. Not everyone is going to keep up with one of the most successful (and important, and performance-critical) distributed software engineering efforts of all time.
And that's okay.
(BTW, this isn't to say that improvements aren't possible -- just that they happen slowly and incrementally, unless forced by a major externality. But "bend over backwards to accommodate the new folks" is rarely a good reason.)
Posted Feb 14, 2025 13:57 UTC (Fri)
by kleptog (subscriber, #1183)
[Link] (1 responses)
We used to also regularly beat children with the idea it would help them grow up into better adults/build character/that kind of thing. That idea is considered ridiculous these days, but I'm sure many people still believe it.
Sure, training for a profession doesn't have to be easy, but it also doesn't have to be harder than necessary.
Posted Feb 14, 2025 15:20 UTC (Fri)
by Wol (subscriber, #4433)
[Link]
And as someone with medical qualifications (no I am not medically qualified, nor trained), it also should not involve filling trainees with propaganda that bears little relationship to reality. All this training can seriously hinder the spread of good practice.
Take it from someone who has been at the wrong end of arrogant doctors, doctors who are well meaning but ignorant, doctors who can't do a good job because they can't communicate ... and all of whom are people who would almost certainly be horrified if they realised the harm they'd done.
And most of it is down to the Religious Dogma (or Politics - equally as bad) instilled in professions with long apprenticeships. But we see that everywhere in life: the powerful want to hang on to power.
Cheers,
Posted Feb 14, 2025 14:08 UTC (Fri)
by dralley (subscriber, #143766)
[Link] (14 responses)
> Some professions require literally a decade (or more) of study and apprenticeship. Others may have harsh and gruelling training regimes -- and these traits (and the emergent cultures) are known in advance by those considering these professions.
In nearly every profession or field of study, there is a reasonably large body of "documentation" about how and why things work the way they work; it's not all kept as oral history dictated by the elites. Or else, those elites at least regularly undertake some form of mentorship or "advisor" relationship to help the next generation.
As was discovered last summer, even being asked to document or explain subtle details of the existing behavior was apparently too much for some maintainers, even though some of those maintainers couldn't even agree with each other about those same subtle details. Some people seem like they just want to be left alone in their sandbox and never asked to explain anything by anyone "beneath" them.
Those kinds of people certainly exist in all fields, but nobody likes working with them very much.
By the way, none of this should be "bending over backwards". The lack of documentation on API semantics is an actual problem with practical consequences.
Posted Feb 14, 2025 14:40 UTC (Fri)
by koverstreet (✭ supporter ✭, #4296)
[Link] (10 responses)
And they document more than anyone.
These problems are not unique to our profession.
Posted Feb 14, 2025 14:48 UTC (Fri)
by dralley (subscriber, #143766)
[Link] (2 responses)
Just that having low-bus-factor elites that don't document things *and* won't help mentor new developers / maintainers is kind of a problem for the long-term health of any project. Responding with "RTFM" (while having a manual that actually answered the most basic questions one could have) would be an improvement on the status quo in some cases.
Posted Feb 14, 2025 14:52 UTC (Fri)
by koverstreet (✭ supporter ✭, #4296)
[Link] (1 responses)
Oh, that I'd agree with.
I spend most of my time available on IRC and I actively tell new people working on the code "ask me if you're blocked on something, it's my job to get you unblocked". There does have to be a way for new people to get involved and learn.
And maintainers need mentorship, too...
Posted Feb 14, 2025 16:50 UTC (Fri)
by branden (guest, #7029)
[Link]
That is the essence of good engineering management. I laud you for recognizing it and practicing it.
Posted Feb 14, 2025 16:02 UTC (Fri)
by excors (subscriber, #95769)
[Link] (1 responses)
Probably because it doesn't provide the capabilities that anyone has wanted since the 1970s (it's no good for launching satellites or reaching the ISS or Mars), and it cost >10x more to build than a modern rocket using modern tools and materials, and its safety was much lower than would be acceptable nowadays.
Posted Feb 14, 2025 16:52 UTC (Fri)
by branden (guest, #7029)
[Link]
Vegas is accepting wagers on how well this comment ages.
Posted Feb 14, 2025 20:39 UTC (Fri)
by ggreen199 (subscriber, #53396)
[Link] (4 responses)
Would I be surprised, if after they used the documents for whatever reason they wanted them, they threw them out again? No, I would not be surprised. One constant I have learned from a long career is that long term planning or retention does not seem to be a concern of any organization I have been associated with.
Posted Feb 14, 2025 22:23 UTC (Fri)
by excors (subscriber, #95769)
[Link] (1 responses)
> The longstanding story that NASA lost or destroyed the Saturn 5 plans quickly falls to pieces when one learns about the F-1 Production Knowledge Retention Program. This was a project at Rocketdyne, the company that built the F-1 engine, to preserve as much technical documentation and knowledge about the engine as was possible. According to an inventory of records, this produced twenty volumes of material on topics such as the engine’s injector ring set, valves, engine assembly, and checkout and thermal insulation and electrical cables, among others.
Engines are probably the most complex part of a rocket, and they can often be reused in new rocket designs, so they're worth preserving. A lot of the rest of a rocket probably isn't worth it; the original blueprints depend on 1960s components that are no longer available, they use 1960s materials that are inferior to modern ones, the tooling is too bulky to store unused for decades, the launch pads and assembly buildings have been repurposed, etc, and most parts aren't that hard to design anyway (compared to engines), so even if you had perfect documentation it would probably be cheaper to redesign the rocket from scratch.
So I think the reason they didn't keep perfect documentation is because they knew it wasn't going to be needed, not because their engineers just didn't want to bother writing it down.
Posted Feb 15, 2025 22:14 UTC (Sat)
by ggreen199 (subscriber, #53396)
[Link]
While materials and tooling do change, loading, dynamic response to flight regimes, etc. are of course relevant. There are many more points to a design than those two I cited, as well.
Posted Feb 14, 2025 22:24 UTC (Fri)
by pizza (subscriber, #46)
[Link] (1 responses)
In my experience, the opposite of "retention" is deliberately practiced, with documentation being automatically shredded as a matter of routine, solely because doing so means you can't ever be accused of willfully destroying evidence in hypothetical future lawsuits.
This pathology is combined with a propensity to cull experienced technical staff, outsource work, and never promote from within (the excuses invariably distill down to "too expensive") to produce what is effectively a sort of institutional anti-memory.
(...I've seen this occur while a product is still being actively produced and sold, in businesses with product support cycles that are measured in decades.)
tl;dr: "Retention" has guaranteed costs, with completely unquantifiable benefits at some point in the distant future (where "distant" is anything beyond the current fiscal year).
Posted Feb 16, 2025 22:13 UTC (Sun)
by mathstuf (subscriber, #69389)
[Link]
Pssh. We're reliant on the *quarterly* reports these days.
Posted Feb 15, 2025 16:02 UTC (Sat)
by hallyn (subscriber, #22558)
[Link] (2 responses)
Posted Feb 15, 2025 16:59 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
Cheers,
Posted Feb 16, 2025 22:32 UTC (Sun)
by mathstuf (subscriber, #69389)
[Link]
Posted Feb 14, 2025 21:35 UTC (Fri)
by roc (subscriber, #30627)
[Link] (2 responses)
You make a good point.
But in nearly every profession or field of study, there is also an expectation of continuing education --- that old folks must continue to learn and understand new ways of doing things. This is more true the more tech-adjacent the field.
Yet we see many examples of kernel maintainers who explicitly deny being subject to that expectation. For them, "but it's too haaaaaard" DOES fly.
Posted Feb 14, 2025 22:40 UTC (Fri)
by pizza (subscriber, #46)
[Link] (1 responses)
Sure. But it's a more general requirement ("X hours of continuing education a year") rather than "you will all learn and immediately adopt/incorporate THIS thing, or else you're out." Even in highly tech-adjacent (and/or highly regulated) fields.
> Yet we see many examples of kernel maintainers who explicitly deny being subject to that expectation. For them, "but it's too haaaaaard" DOES fly.
Is R4L an officially-blessed mainline feature? Or is it still considered an experiment?
"Let other folks who care undertake and maintain this experiment, I'm already working more than full time" is a perfectly rational attitude to take. (and how most Linux features have been, and still are, developed) Given the amount of technical churn that's already taken place, it's hard to argue that it's not a reasonably justifiable position.
Is Linux a C project? Is Linux a Rust project only if you have certain hardware or want specific features? Or is Linux a Rust project that consists mostly of (purely legacy) C? One way or another, it's well past the point where Torvalds needs to make a decision.
Posted Feb 15, 2025 3:35 UTC (Sat)
by dralley (subscriber, #143766)
[Link]
It's an officially blessed experiment, although at this point it's not much of an "experiment" anymore and it would be pretty surprising to see it completely rolled back.
The bigger question is whether it remains allowed for drivers only as it is currently, or if it is eventually allowed into the core kernel.
> "Let other folks who care undertake and maintain this experiment, I'm already working more than full time" is a perfectly rational attitude to take. (and how most Linux features have been, and still are, developed) Given the amount of technical churn that's already taken place, it's hard to argue that it's not a reasonably justifiable position.
That would be a very justifiable position, but it's not Christoph's position. Christoph's position is substantially less justifiable.
Posted Feb 14, 2025 13:50 UTC (Fri)
by koverstreet (✭ supporter ✭, #4296)
[Link] (3 responses)
Yeah, this is a really important point.
We've got a real problem with overprofessionalism (c.f. elite overproduction in society at large); this is where some of my beef with the CoC and the committee's approach comes from.
It seems they want the kernel to be an emotionally safer place for maintainers, but there's a cost. Where do new engineers come from, the ones who really drive things decades down the road?
They start out as engaged, hot-headed young people who are interested in technology, of course. Seeing comments on Phoronix and elsewhere gives me fond memories of where I was 30 years ago, and I find more and more that there's a lot to be had in those interactions if you show a bit of patience and empathy.
Along with Ted's "thin blue line" mentality, I see a lot of ways in which the kernel community is becoming more insular and closed off when we need to be engaging with the outside world.
Posted Feb 14, 2025 16:25 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link] (1 responses)
Superheroes can afford to stick to my way or the highway. Normal human beings can not.
Posted Feb 14, 2025 17:01 UTC (Fri)
by koverstreet (✭ supporter ✭, #4296)
[Link]
Posted Feb 14, 2025 17:53 UTC (Fri)
by branden (guest, #7029)
[Link]
"We've got a real problem with overprofessionalism (c.f. elite overproduction in society at large); this where some of my beef with the CoC and the committee's approach comes from."
I don't think you're wrong, but I think your statement is easily misread. Let me attempt my own interpretation of it, with which you will not necessarily agree.
Our society (I'll speak here mainly of the U.S.--problems are similar though less extreme elsewhere) overproduces people credentialed for management. Unlike many, I don't claim that our ratios of engineering to social science to management to liberal arts graduates are out of whack, for the simple reason that in much U.S. employment, a bachelor's degree in _any discipline of study_ is regarded as a qualifying criterion for a management role--and often does most of the lifting of "sufficiency" for such a position.
And the reason people seek out these management roles after graduating college is that in many or most sectors, they're the only ones that pay a true living wage or offer a plausible path to one. Everybody else is tied to the federal minimum wage (or compensated by some meager increment above it), which hasn't approached a living wage in the memory of most of the workforce.
Understandably, every kid's parents push hard to get as many of their offspring as possible into the college prep/future management track, even if they have no temperament for or interest in knowledge production via scholarly methods (the reason universities exist). We end up with more "managers" than we need, but there's a tacit agreement in business leadership not to proletarianize the bulk of them (say, by sectors of the economy proclaiming, "okay, that's enough, no first-line managers without master's degrees"), because that risks upsetting the political equilibrium upon which the existing systems of rent extraction depend. The end result is a sloshy mass of managers without much real managing to do, so they become mandarins or commissars, the latter being a feature of the Soviet system but, being too good an idea to let die with communism, now constituting a means of achieving government "efficiency".
The result is that we have a lot of superfluous people applying their management training--or what passed for it--in places and to situations where worker self-management was adequate, or should have been permitted to develop organically, from the bottom up rather than top down. We see repeated instances of two kinds of problem. (A) Heavy-handed CoC enforcers, often drafted from outside the communities they serve--because they're "professionals"--decreeing expulsions of significant (but not top-tier) contributors and drawing backlash, not because there wasn't a problem, but because the instincts of that community respond appropriately to sledgehammer tactics applied by people in mallet-shaped roles who have no other function and can contribute nothing else. And (B) an elite class of special contributors against whom the sledgehammer will never be swung no matter what. Sure, we can talk them into going to "sensitivity training" once, maybe. After that they'll rediscover their indispensability, knowing just as well as their employers do that they can find greener pa$ture$ elsewhere. Given the choice to retain between one of the marquee people and a faceless functionary, the outcome is obvious. That dynamic creates intense competition for one of those coveted untouchable spots, which in turn promotes bunker mentalities, rivalries, territoriality, and personal attachment to work product (like kernel subsystems) with which one is publicly associated in the minds of the community.
In summary, many projects seem to have drifted into a place where all of an immature developer's worst inclinations are seen to be indulged--if you're one of the "right" developers. A project survives and develops successfully in spite of these perverse incentives, not because of them. It's good that we have so many basically honorable and decent people attached to FLOSS projects, and a deep shame that the conventional wisdom is that they need to be managed "better" with approaches that will actively harm them.
Management is often a necessary function. But as with many products and services, buying from the seller employing the highest-pressure tactics or who pushes the least rational arguments ("everybody else is doing it!"), often leaves one underwhelmed and experiencing remorse. (But we can talk you out of saying so. C'mon, you can't just be disparaging managers like that. Do you want people to think you're a Marxist?)
I'll leave you with a lengthy quote from a historian friend of mine that helps show how we got here.
"Whereas in previous models of corporate governance, large shareholders (often from founding families like Ford) appointed fellow members of the owner class, the old school bourgeoisie/owner class, to internal office within these companies. In other words, one amassed stock (by inheritance or profit in another company) and then ascended to leadership. But the commissar class offered a concurrent competing model whereby one was appointed to leadership by other members of this class who managed funds and held voting proxies and, from that position, compensated oneself with shares and stock options.
"This ascendant class based these appointments on an expanding academic discipline, “business administration,” the skill at and understanding of the management of people based on the ability to analyze and manipulate mass psychology through statistical analysis. Whether this actually made any company more efficient is entirely debatable. The point is that the commissar class could justify its amplification of its own power by using a meritocratic discourse of expertise, not in what the company made or did but in psychological manipulation. People who ran companies, according to the logic of the commissars, didn’t run them because they were rich or because they understood the industry and had risen through its ranks but because they were masters of an arcane science McNamara and his ilk had helped to create.
"It was therefore perfectly logical that the commissars would endow business schools with funds to make more commissars. And as the austerity programs the commissars championed went into effect, these schools came to exercise an outside influence on university cultures as they expanded financially while the rest of the universities contracted. Logically, of course, the way to save other parts of the universities was to make them more closely resemble the business schools that produced the commissars or, conversely, to reassure the commissars by withdrawing into various forms of immaterialism so as not to produce graduates who might make competing meritocratic claims on the basis of specific, disciplinary knowledge as opposed to the meta-science of management.
"In other words, the postmodern turn and the rise of the business school were of a piece with one another, both driven by austerity, the ascendance of the commissars, Soviet subversion and immaterialism. By immaterialism, I mean that management was increasingly a science of psychological analysis and manipulation while humanities and social science scholarship relocated from describing the physical world to describing people's thoughts about that world. The idea that reality is a social construction is one equally championed by the business schools and postmodernists who seized control of humanities and social science scholarship.
"These processes were already underway when the Eastern European dictatorships that had helped create them collapsed one after another. But without the worry of the Soviet Bloc as competition and with the removal of any serious political alternative, commissar-driven austerity could accelerate, as it rapidly did in the 1990s, raising tuition fees, ensuring that those working class people who did rise through the university system would be heavily indebted and thereby more controllable should they attempt to join this ascendant class."
I concede that this sort of analysis wanders far from LWN's editorial concerns. I realize that we're all here to hack, not to understand why firms like Intel or trade associations like the Linux Foundation operate the way they do.
Posted Feb 14, 2025 18:56 UTC (Fri)
by branden (guest, #7029)
[Link] (2 responses)
> Marcan’s one of those people whose grievances all look individually reasonable,
Okay, so we have a non-trivial set of grievances each of which has merit in isolation. 1+1+1+1+1...
> but when you add them all up it suddenly [suggests an implausible, or at least deeply distressing, result].
equals -1.
This process resembles nothing so much as shooting the messenger, or fallaciously rejecting a valid deductive conclusion because you don't like it. That's the opposite of a reasoning process.
When diagnosing this sort of surprising result, you should work harder to understand what's going on. Maybe you're right to reject the conclusion you reach (even once deflated of the hot gas you pumped into it when writing). A couple of approaches come to mind.
1. You noted a suddenness to the process. Okay. Delete marcan's grievances individually, one by one, until you reach a different conclusion that follows from them and yet doesn't surprise you. What is that conclusion?
2. It is sometimes the case that a person makes some decision premised on grounds that they don't disclose, then retrospectively cobbles together a "case" for it that bears little resemblance to the reasoning process they actually used. This comes up frequently in employment law cases, where a finding of a tortious or unlawful firing of an employee is made because the employer offered "shifting rationales" for termination over time (with the biggest differences typically showing up once the matter gets litigated and the employer retains professional legal advice). If you think that is the case with marcan, then the burden is on you to establish that he had some other basis, one the community would be less likely to accept as legitimate, for his decision. (Like, say, "too many Linux kernel people casually inspect their own names and reach the conclusion they're without sin".) Such a claim will demand evidence. If that exists only locked up inside marcan's scheming mind, then you have no objective evidence to offer.
In sum, the onus is on you to explain why 1+1+1+1+1 doesn't equal 5 here.
Posted Feb 14, 2025 22:01 UTC (Fri)
by Phantom_Hoover (subscriber, #167627)
[Link]
If you want the harsher aphorism version: "If you run into an asshole in the morning, you ran into an asshole. If you run into assholes all day, you're the asshole."
Posted Feb 15, 2025 6:16 UTC (Sat)
by milesrout (subscriber, #126894)
[Link]
Hector unfortunately seems to come into conflict with just about everyone. He always seems to be able to come up with a reason for having these conflicts but at the end of the day you have to sit back, look at the big picture, and ask if that isn't just "system 2" trying to make up for some serious "system 1" issues.
Posted Feb 13, 2025 18:23 UTC (Thu)
by Lennie (subscriber, #49641)
[Link]
Would it be a good idea to have a flag in the documentation for devices which don't have hardware manufacturer co-operation ?
To make it clear: this device will not be supported at release time, it will take longer before it's supported, etc. because the manufacturer gives us no information to work with/does not help fund the development, etc.
Posted Feb 15, 2025 14:42 UTC (Sat)
by rsidd (subscriber, #2582)
[Link] (37 responses)
Nouveau is planned to be superseded by Nova, which is Rust-based.
Rust is not going anywhere, and real-world hardware already depends on Rust in the kernel. (That's Apple M* so far, but there will be more in the wider world, not just Nova.) I'm just astonished that Linus has let things get to this pass, after initially offering his support to R4L. If it is not resolved, a Rust-friendly kernel fork seems inevitable.
Posted Feb 15, 2025 16:40 UTC (Sat)
by Phantom_Hoover (subscriber, #167627)
[Link] (13 responses)
Posted Feb 15, 2025 17:04 UTC (Sat)
by rsidd (subscriber, #2582)
[Link] (12 responses)
The bigger point is that these particular, supposedly indispensable, maintainers are growing old too. If they really are indispensable, the kernel is in trouble. As many have noted, young programmers just aren't interested in programming in C. And there is no good reason they should be.
So if this continues, I see only two possibilities: (1) A fork emerges that is explicitly rust-friendly and, over time, takes over the mindshare of linux developers so that it becomes the new non-Linus upstream. (2) An alternative like redox takes over.
Ok, there's a third (3): the dream of a general-purpose free/libre OS is dead. But I don't think that will happen.
Posted Feb 15, 2025 17:28 UTC (Sat)
by Phantom_Hoover (subscriber, #167627)
[Link] (11 responses)
Posted Feb 15, 2025 18:12 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (10 responses)
Don't explain if you don't want to, but just remember that cultural references don't always translate very well ...
Cheers,
Posted Feb 15, 2025 18:27 UTC (Sat)
by farnz (subscriber, #17727)
[Link] (7 responses)
As a consequence of the history, whether you see it as problematic or not depends very strongly on whether you would have been a person or a slave on the "lawful" side of the line.
Posted Feb 15, 2025 18:39 UTC (Sat)
by Phantom_Hoover (subscriber, #167627)
[Link] (6 responses)
Politically incendiary disinformation like this is a lot more socially and institutionally toxic than grumpy maintainers.
Posted Feb 15, 2025 18:46 UTC (Sat)
by farnz (subscriber, #17727)
[Link] (5 responses)
Whether you like it or not, historically, the US police forces have had problems with racism, and terms like this were popular within US policing to justify that racist behaviour. Ignoring that history means omitting a lot of significant context, just as ignoring a Hitler supporting using a swastika on the basis that it's a symbol of peace (as it is in many Asian cultures) is ignoring significant context.
Posted Feb 15, 2025 18:54 UTC (Sat)
by Phantom_Hoover (subscriber, #167627)
[Link] (4 responses)
You're going to find it very hard to find citations supporting your claim that the phrase "was meant to represent the US's police as the thin blue line between the lawlessness caused by allowing slaves to escape and act like people", because the article clearly says the first recorded use of the term to refer to police was in 1922, a full 57 years after the abolition of slavery in the United States. Please do not keep spreading this falsehood.
Posted Feb 15, 2025 20:49 UTC (Sat)
by farnz (subscriber, #17727)
[Link] (3 responses)
Just as someone spraying a swastika on a synagogue in Germany cannot justify it by reference to its older meaning, so too is it unreasonable to say that this phrase can't have evolved over time.
Posted Feb 15, 2025 21:34 UTC (Sat)
by Phantom_Hoover (subscriber, #167627)
[Link] (2 responses)
Posted Feb 16, 2025 19:02 UTC (Sun)
by branden (guest, #7029)
[Link] (1 responses)
No, he didn't. He said:
"Historically, it was meant to represent the US's police as the thin blue line between the lawlessness caused by allowing slaves to escape and act like people, and civilization where people acted lawfully."
He said "historically". Usage that was current when Richard Nixon was President is "historical", as was usage prior to the passage of the 13th Amendment. If you don't believe me, ask someone born after 1990. That Jefferson Davis may not have been familiar with the phrase is not an argument against the imprecise claim he made, even if it would eviscerate the much more specific one you're putting into farnz's mouth.
The reframing of history to contrive a prelapsarian golden age is a frequent practice of reactionary movements. See, for example, _The Way We Never Were_, Stephanie Coontz, Basic Books, 1993.
Posted Feb 16, 2025 19:50 UTC (Sun)
by Phantom_Hoover (subscriber, #167627)
[Link]
Posted Feb 15, 2025 18:46 UTC (Sat)
by wtarreau (subscriber, #51152)
[Link]
Posted Feb 15, 2025 18:47 UTC (Sat)
by Phantom_Hoover (subscriber, #167627)
[Link]
Posted Feb 17, 2025 13:09 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link] (22 responses)
What’s more disturbing in the message is a maintainer who justifies his own pushing back by pointing to the fact that the people being pushed back then leave. A completely self-destructive, circular Logic-101 argument.
Posted Feb 17, 2025 15:12 UTC (Mon)
by corbet (editor, #1)
[Link] (2 responses)
Posted Feb 17, 2025 16:12 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link]
Posted Feb 17, 2025 18:31 UTC (Mon)
by Phantom_Hoover (subscriber, #167627)
[Link]
Posted Feb 17, 2025 18:08 UTC (Mon)
by Phantom_Hoover (subscriber, #167627)
[Link] (14 responses)
I first encountered the phrase ‘thin blue line’ as a schoolchild in Scotland reading the biography of a South African playwright. It’s been a widely understood metaphor for around a century.
Posted Feb 17, 2025 22:37 UTC (Mon)
by micka (subscriber, #38720)
[Link] (13 responses)
Posted Feb 18, 2025 8:04 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (12 responses)
Applying *this* meaning to kernel maintenance is certainly quite extreme. IIUC, tytso is in the US and unless he lives even further under a rock than I do, is at least tangentially aware of this meaning.
Posted Feb 18, 2025 10:24 UTC (Tue)
by paulj (subscriber, #341)
[Link] (4 responses)
Projecting all kinds of political views onto people, on the basis of a short phrase that means many things to different people... may not be entirely reasonable, to put it politely.
Posted Feb 18, 2025 11:20 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (3 responses)
> Projecting all kinds of political views onto people, on the basis of a short phrase that means many things to different people... may not be entirely reasonable, to put it politely.
I agree and I don't think I was? I can see how my last paragraph could be read that way, but they're under conditionals. I don't know tytso enough to ascribe any specific views to him. Maybe he's unaware of these things, but given that they've reached even to me (who hasn't had TV service since 2007), I tend to assume there's at least some awareness of such things among those (and maybe I'm even completely wrong about him being US-based in any recent timeframe).
Posted Feb 18, 2025 11:27 UTC (Tue)
by paulj (subscriber, #341)
[Link]
Ah, OK. Sure, I accept that. Your comment was in the wider context of where at least /some/ _others_ are projecting their own political takes of "thin blue line" onto T'so - so, let me clarify that my own comment on reasonableness was about those others, not yourself. ;)
Thanks!
Posted Feb 18, 2025 11:35 UTC (Tue)
by Phantom_Hoover (subscriber, #167627)
[Link] (1 responses)
Posted Feb 18, 2025 11:52 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
As for such phenomena in general, sometimes things just get "taken" like this overall. The Hindu religious symbol is still used for its purposes in its homelands and by those who live elsewhere. But outside of that context, it is definitely poisoned to a large extent (regardless of its chirality). Is "thin blue line" poisoned that much? I don't think so, but there's a continuum between "pure good" and "pure evil" upon which things can lie. As another example, see the 4chan adoption of a certain frog character which the original author has tried to "take back" but has largely (AFAIK) failed to do so.
Such things can move across the spectrum over time and it can differ between different cultures and people. For example, see curse word usage acceptability in Australia and the US. Just because one c-word is OK to use in Australia doesn't mean they'd get away with it in the US. One can make mistakes and accept being wrong about when and where to use such terms, but continued usage after being informed of the connotations without disclaimers of some kind is the kind of gap dog whistles fit nicely into.
Posted Feb 18, 2025 10:31 UTC (Tue)
by Phantom_Hoover (subscriber, #167627)
[Link] (6 responses)
This obsession with language games, this belief that you not only have to have the correct politics but you have to express it in exactly the right way in all avenues of your life lest you be called out by someone purity policing, is an absolutely toxic dead end. You will lose more people in the long run to the alienation and dysfunction that comes with trying to enforce it than you will keep by pandering to those who demand it.
Posted Feb 18, 2025 11:24 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (4 responses)
I would agree if it were not for the existence of "dog whistles". Everyone can say "but it wasn't meant that way!", but when, e.g., US militia movements see some "innocent phrase" as enablement, you might want to consider the additional ramifications of using such words carelessly. I don't know if tytso means it in that way, but it is not, IMO, a phrase to be tossed around carelessly.
Posted Feb 18, 2025 11:57 UTC (Tue)
by Phantom_Hoover (subscriber, #167627)
[Link] (3 responses)
Posted Feb 18, 2025 12:15 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (2 responses)
Anyways, we're far afield of LWN topics. I'll stop here at least.
Posted Feb 18, 2025 12:25 UTC (Tue)
by Phantom_Hoover (subscriber, #167627)
[Link] (1 responses)
Yes — for instance, if you exercise your freedom of speech to create a purity culture where everyone needs to constantly signal their adherence to the Right Politics, the consequence might be alienating everyone outside of a small elite constituency, and a bunch of absolute maniacs might win power and start trashing your country. Just hypothetically, of course. I’m probably worrying about nothing.
Posted Feb 18, 2025 13:27 UTC (Tue)
by jzb (editor, #7867)
[Link]
Posted Feb 18, 2025 12:54 UTC (Tue)
by sdalley (subscriber, #18550)
[Link]
The fact is, one simply cannot pick apart the expressions people (e.g. kernel maintainers) use, as if one were deciphering some odd C preprocessor macro, and pronounce "that clearly means they're anti-Rust" or something. English isn't like that; it has context and "global state" all over the place!
English (and other) language comprehension requires taking that context into account.
Re-reading (for example) the whole of Ted's email https://lore.kernel.org/lkml/20250208204416.GL1130956@mit... in that light, one realizes there's no reason to be contentious.
Posted Feb 18, 2025 10:18 UTC (Tue)
by paulj (subscriber, #341)
[Link] (3 responses)
Not all the world is America.
Posted Feb 20, 2025 11:02 UTC (Thu)
by dvdeug (guest, #10998)
[Link] (2 responses)
Posted Feb 20, 2025 11:37 UTC (Thu)
by paulj (subscriber, #341)
[Link]
I'm not from the north myself, but I have had British army soldiers point rifles at me at checkpoints. I grew up with people whose families fled the north because of the Troubles, and who also lost close family in the Troubles.
You should also note the examples I gave are *not* at all widely considered "positive" examples of policing in the UK!
Posted Feb 20, 2025 12:04 UTC (Thu)
by Phantom_Hoover (subscriber, #167627)
[Link]
I did get a great laugh out of my relatives when I told them about the middle class student activists at my university calling for police abolition, who were pointing to the IRA’s role in West Belfast as a model of non-coercive community driven policing.
Some maintainers still behave like Linus used to
I am not a well-known individual in the FOSS world, but speaking out against someone else's favourite project got me to the point where I had to roll over my online identity more than once to shake the trolls.
I would caution anyone who gets into Open Source against using their real name or photo in association with their work. FOSS has caught up with the rest of the internet, and it's not a safe space.
Some maintainers still behave like Linus used to
I didn't do that either. "Brigading" has a very specific meaning, and both Linus and David Airlie used the term incorrectly. Ranting on social media is not automatically brigading (and Linus and other people on that thread have done their share of ranting on social media too).
Some maintainers still behave like Linus used to
A lot of good stuff in there
Yet, the kernel community is full of people yelling at other people and blocking other people.
A lot of good stuff in there
How?
I have never said or even implied that.
Maybe this is a good point - if not past it - to stop this particular sub-discussion.
A lot of good stuff in there
>
> Pardon?
A lot of good stuff in there
I just wanted to let you know. Give some feedback.
I think letting you know is useful in both cases whether you did copy&paste something from an AI or whether you didn't.
That's all. Really.
But I would never call him names or shout at him, because that is the behavior that I'm actually criticizing.
I have been yelled at by the same people who yelled at you. It might be acceptable to you, but it's not acceptable to me.
A lot of good stuff in there
>
> I'm very sorry that this has apparently upset you a lot.
Alex
A lot of good stuff in there
Think about x.org or LibreOffice for just two examples of big project split-offs.
I think a huge part of that is due to the programming language that some of the maintainers love so much.
Lots of once big projects have completely been forgotten and replaced by now.
I think it's foolish to think that the current Linux is the only viable option to go forward.
A lot of good stuff in there
> And this is especially the case because it is far from certain that C compilers, optimizers, and linkers will not be developed that remove the vast majority if not all of C's and C++'s undefined behaviors without a performance penalty
A lot of good stuff in there
> it's still pretty tied to Apple.
A lot of good stuff in there
As someone who has worked on both kernels and garbage-collected language implementations, they are two great tastes that do not go great together. It's not theoretically impossible, but it's Hard Mode on several axes at once. And you generally need to make compromises around the kernel's abilities, or bypass the GC for long stretches of code.
A lot of good stuff in there
Thanks for making me read (was A lot of good stuff in there)
I recently stumbled about this blog post. Can highly recommend it, as it goes into further detail on this! (If you haven't come across it)
Thanks for making me read (was A lot of good stuff in there)
A lot of good stuff in there
Wol
A lot of good stuff in there
> Not sure what happened to two or three replies I posted here, but generally speaking insulting others is not the best way to win friends and influence people.
A lot of good stuff in there
Intel 432
Another iAPX 432? It would work exactly as “efficiently” (that is: very inefficiently) and would achieve exactly the same level of success as the original.
Bob Colwell, later to be senior architect on the P6 (= Pentium Pro) and Pentium 4, published postgrad research on why the 432 failed. He was insistent that a lot of people had taken the wrong lessons:
Bob said that nearly all the performance problems were due to the space limitations of the existing manufacturing processes (leading to no memory caches and no general-purpose registers, for example), "part of the problem with the 432 was these guys just weren't paying attention to performance", and this anecdote:
Eventually, I said "Okay, who are you? You know way too much about this technology." He tells me "I'm the leader of the compiler team." And I said "In that case I probably just fatally offended you." He said "No, not at all because I know we generate bad code and I don't care." He said "We don't like the 432 hardware team." And I thought "Oh my God, there is no hope that this project is going to work when you have the two main casts killing each other." He said "That hardware team never listened to us compiler folks. At some point we decided that we'd live up to the letter of the contract but beyond that? No."
Bob thought that 90+% of the performance issues were down to Intel mis-steps, and with a newer manufacturing process, more attention to performance, a redesigned ISA, and a competent compiler, there could be a competitive part.
And the answer came back, "We don't care what it is, doesn't have to go faster. We're satisfied with the 486." And I thought okay, you're doomed.
Intel 432
> I gather that much of what the last demanded of compilers, drawing furious opposition that labeled the machine dead on arrival, is now done by compilers as a matter of routine
Intel 432
> Fortunately it's one that applies only to other people (or maybe just me)--not to you.
Intel 432
To bro or not to bro
As you can imagine, we will indeed ask that the folks involved in this subthread put an end to it.
Countdown reached
> (2) he reached conclusions that seemed consistent with my own <mumble> years of experience in software engineering; and that, bluntly, I was inclined (or biased) to believe already.
To bro or not to bro
Intel 432
Itanium and compiler changes
the notorious IA64 (Itanium)--I gather that much of what the last demanded of compilers, drawing furious opposition that labeled the machine dead on arrival, is now done by compilers as a matter of routine.
What made the Itanium dead on arrival is not what it demanded of compilers, but that to succeed it required the compilers to do things for Itanium but not for other architectures. Itanium could not perform adequately unless the compiler scheduled code - but to perform better than Pentium Pro, POWER3 and other out-of-order machines, you needed to ensure that the compiler did not use the scheduling information to do a better job of codegen for those systems.
Itanium and compiler changes
No, it doesn't, because the compiler changes were made before Itanium needed them, precisely because they helped other architectures, too.
Itanium and compiler changes
It's notable that Intel bet twice in a row on technology futures that didn't happen.
Itanium and compiler changes
> It's notable that Intel bet twice in a row on technology futures that didn't happen.
Itanium and compiler changes
> > But compiler developers aren't drooling idiots, and used the analyses they did that were required to make Itanium perform adequately to help non-Itanium CPUs.
Itanium and compiler changes
I'd suggest looking at the things that came out of IBM research - Static Single Assignment and so on - along with looking deep into how EPIC was supposed to work.
Itanium and compiler changes
A lot of good stuff in there
... because it is far from certain that C compilers, optimizers, and linkers will not be developed that remove the vast majority if not all of C's and C++'s undefined behaviors without a performance penalty - not only without a performance penalty but with a performance increase. There are people working on that right now that have been mentioned here on LWN.net and from what I can see they have been making pretty good progress.
A lot of good stuff in there
Wol
A lot of good stuff in there
> As someone who has learned many programming languages as a hobby in the past I find the view that syntax is even close to the most difficult thing to learn about a new language outright bizarre.
A lot of good stuff in there
> Some of this thread sounds very similar to the comments by one of the creators of Go, that they were hoping to attract people that wrote C but ended up attracting way more people from more managed languages.
> It's like there's something about being an ingrained C programmer that makes it harder to switch to something higher level.
A lot of good stuff in there
-- Me. I was "Rust-adjacent" from the beginning of Rust (in fact I was one of the people who first reviewed Graydon's proposal, when it was a very different language). But from first trying to write Rust code to being reasonably productive was a few days max. But OK, I had a CS PhD, lots of experience with C++, some experience with other weird languages.
-- My Pernosco co-founder. No PhD, lots of C++ experience but not other languages. With Rust, had a similar experience to me.
-- My son, an undergrad CS student. A quite small amount of experience with JS, Python, Java and C. Picked up Rust pretty easily.
-- Various people I worked with at Mozilla and Google who ended up on a project where they needed to use Rust. C++ experience, some like Rust more than others, but none "failed to learn Rust" or anything close to that. AFAICT they all picked it up pretty easily.
(This is all "can read Rust code and write code to solve problems", not "have mastered everything".)
> Maybe for some kernel maintainers who write nothing but C for decades, their minds get locked into certain ways of reading and writing code, and it's really hard to break out of that? Sounds weird, but this corresponds most closely to what you described in your comment.
A lot of good stuff in there
> That is C really is good enough to get the job done.
A lot of good stuff in there
> The kind where we can stand a server up on the Internet and start having to worry what happens when the uptime counter wraps because there hasn't been any security issues found in the software for years. Completely matters.
> C has shown again and again and again and again and again and again that it's NOT possible to write secure non-trivial software in it. Pretty much every large network-facing product in C has seen its share of critical security issues.
A lot of good stuff in there
If you can accept that then yes, C gets the job done.
>the rare mistakes conscientious people make.
A lot of good stuff in there
> There is a different issue for me.
A lot of good stuff in there
Wol
> I wonder how I'll fare when I really start digging in to it.
A lot of good stuff in there
> And so I've found Guile/Scheme and Forth very tricky, while I've always programmed C like it's Fortran
unsafe and pointless. HashMap or BTreeSet)

A lot of good stuff in there
Even those fancy lifetime annotations and generics can be pronounced.
Yes, in C you don't have to learn that, because C lacks these features.
You didn't even read the beginner's book. That's obvious.
And that's perfectly fine, unless you shoot against Rust.
It is not possible to express the majority of Rust safety details in C. Therefore, it's impossible in general to translate C to safe Rust.
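As a minimal, compilable sketch of that point (all names here are made up for illustration): in Rust the ownership rules live in the function signatures themselves, where C can only rely on comments and convention.

```rust
// Takes ownership: after the call, the caller can no longer use the vector.
// In C, a callee that frees its argument can only document that in a comment.
fn consume(v: Vec<u32>) -> usize {
    v.len()
}

// Borrows immutably: the caller keeps ownership; no mutation is possible here.
fn peek(v: &[u32]) -> u32 {
    *v.first().unwrap_or(&0)
}

fn main() {
    let v = vec![1, 2, 3];
    let first = peek(&v); // shared borrow ends here
    let n = consume(v);   // ownership moves into `consume`
    // v.len();           // would not compile: `v` has been moved
    println!("{first} {n}");
}
```

A mechanical C-to-Rust translator sees none of this intent in the C source, which is why it cannot invent the safe signatures on its own.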
A lot of good stuff in there
https://doc.rust-lang.org/std/ptr/index.html#provenance
> as in other languages it was used to delimit either a char or a string for example
I guess you are locked-in to C then.
https://doc.rust-lang.org/book/
I also have learnt many languages with very different syntax and look and feel. That's why I really don't see why Rust would be so different here. In fact it's very similar in look and feel to C. It's much closer to C than say Perl or Python or even C++.
Well. That's not fixable then and you won't ever be able to learn new languages.
C lacks the ability to express certain fundamental things. The only way forward is to add syntax for these things. Just like C did, when they changed their code from ASM to C. All those fancy for-loops there. I want my JMP back!
>and it's possible that some languages are more suitable for the ones
That is just not true.
It *does* "permit" more UB all over the place, though. But that is not a useful feature. It's a trap.
I am not considering bad faith in most people.
I am considering the lack of knowledge most of the time.
A lot of good stuff in there
>> as in other languages it was used to delimit either a char or a string for example
A lot of good stuff in there
They are single characters starting at 'a by convention, but sometimes it is useful to use more descriptive names such as 'ctx or even longer names.
FWIW, I encourage people who are "fighting the borrow checker" and can't just stick to owned copies of data to use long and explicit lifetime names, and then to work out how to elide them. While you can write fn foo(&mut self, name: &str) -> &str, it's often easier to reason about what's going on when you're hazy on the elision rules if you write that as fn foo<'this, 'name>(&'this mut self, name: &'name str) -> &'this str, because it makes it much clearer where the surprising behaviour is coming from.
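A compilable sketch of that advice; `Namer` and `remember` are hypothetical names invented for this example:

```rust
struct Namer {
    last: String,
}

impl Namer {
    // Fully explicit form: the returned &str borrows from `self`
    // (lifetime 'this), not from `name`. This is exactly what elision
    // infers for `fn remember(&mut self, name: &str) -> &str`.
    fn remember<'this, 'name>(&'this mut self, name: &'name str) -> &'this str {
        self.last = name.to_string();
        &self.last
    }
}

fn main() {
    let mut n = Namer { last: String::new() };
    assert_eq!(n.remember("alice"), "alice");
}
```

Once the explicit version is understood, the lifetimes can be deleted one at a time until only the elided signature remains.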
Lifetime names
I also find similar - but I find that it's often easier to dig myself out of a mess if I start by thinking about things in deeply concrete terms (with long clear names), and move to the abstract once I understand the concrete sphere. So starting out with lifetimes like 'closure_changing_id and 'id_source, put in very explicit 'id_source: 'closure_changing_id style of annotations (rather than relying on lifetime variance), and then gradually trim back down to a minimal setup once I've understood what was going on.
Concrete versus abstract
A lot of good stuff in there
>>as in other languages it was used to delimit either a char or a string for example
>
>In Rust it *is* used to delimit a char. I'm not sure what your point is.
A lot of good stuff in there
I really don't see how one could mistake 'a for 'a', because they come in completely different places of the code (type position vs. value position).
And also in the vast majority of cases you don't have to write lifetimes ('a) down at all. Your example is in between having to write it down and not having to write it down, because it already elides the name, but not the whole '_ syntax. The advantage of '_ is that it lets you avoid writing down the lifetime declaration. So (at least) one less ' character in the code.
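For instance (a small illustrative sketch; `Wrapper` is a made-up type), '_ acknowledges that a lifetime exists without ever naming it:

```rust
use std::fmt;

struct Wrapper<'a> {
    inner: &'a str,
}

// '_ says "there is a borrow here" without declaring or naming it;
// compare the fully spelled-out impl<'a> fmt::Display for Wrapper<'a>.
impl fmt::Display for Wrapper<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "[{}]", self.inner)
    }
}

fn main() {
    let w = Wrapper { inner: "hi" };
    assert_eq!(w.to_string(), "[hi]");
}
```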
http://unixwiz.net/techtips/reading-cdecl.html
A lot of good stuff in there
The goal is not to make use of every symbol that's on the keyboard (hey, remember Perl?) and looks like a mathematician's scribblings, but to make a program that's amenable to others, as well as to oneself 10 years down the line. C#'s and Python's use of keywords appears like a nice idea, e.g. "f(ref int x)" and "for i in stuff", respectively.
A lot of good stuff in there
So..., how would you spell out lifetime annotations?
A lot of good stuff in there
> So..., how would you spell out lifetime annotations?
A lot of good stuff in there
To me that is a good thing.
A lot of good stuff in there
And if you were wrong with that assumption, it won't UB on you like C does.
It often suggests what exactly to change.
It explains exactly what is wrong and often even provides a link to an article about the error with examples of the coding error and examples how to fix it.
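A tiny sketch of the kind of diagnostic being described (the code is illustrative; the error text in the comment paraphrases rustc's E0502 message rather than quoting it verbatim):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow of `v` begins here

    // Uncommenting the next line makes rustc reject the program with
    // error[E0502]: cannot borrow `v` as mutable because it is also
    // borrowed as immutable, with spans pointing at both borrows and a
    // pointer to `rustc --explain E0502` for a worked article.
    // v.push(4);

    assert_eq!(*first, 1);
}
```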
A lot of good stuff in there
Wol
A lot of good stuff in there
Wol
> If on the opposite I say "it compiled so it's safe", I quickly learn not to care anymore about defensive approaches.
A lot of good stuff in there
>
> fn main() {
> let arg = args().nth(1).expect("must provide an argument");
> let number: u32 = arg.parse().expect("argument must be a number");
>
> println!("hello world: {}", if number > 1 { number } else { 0 });
> }
A lot of good stuff in there
>
> fn main() {
> let number: u32 = if args().len() > 1 {
> args().nth(1).unwrap().parse().expect("argument must be a number")
> } else {
> 0
> };
>
> println!("hello world: {}", number);
> }
Using match to avoid one unwrap:
A lot of good stuff in there
use std::env::args;

fn main() {
    let number: u32 = match args().nth(1) {
        None => 0,
        Some(x) => x.parse().expect("nan"),
    };
    println!("Hello, world! {number}");
}
A lot of good stuff in there
> Had they made main() be main(arg0: &[u8], arg: &[&[u8]]) or equivalent, it would have been more obvious what to do

arg0 exists, and a Windows program doesn't even receive a list of command-line arguments, but one, single, array of UCS-2 characters (no, not UTF-16 as people often think)!
A lot of good stuff in there
Wol
A lot of good stuff in there
I also have learnt many languages with very different syntax and look and feel. That's why I really don't see why Rust would be so different here. In fact it's very similar in look and feel to C. It's much closer to C than say Perl or Python or even C++.
Wol
A lot of good stuff in there
> For highly optimised machine code with no corresponding debugging information or source code, there's very little chance that back-translating to /any/ higher level language will give you the original human-written code - optimising compilers just don't work that way.
A lot of good stuff in there
unsafe. Going from C to this kind of Rust wouldn't help anyone.

unsafe… something that you have to avoid, not something that you may want to embrace!

A lot of good stuff in there
> I've noticed that fewer and fewer (both proportionally and in absolute terms) know or even care about how the hardware actually works.
A lot of good stuff in there
clang, too… it should be included in version 21 – and that was done by former students) – and a completely different thing to write every line of code with full understanding of what it would produce at the machine code level.

unsafe and intrinsics and asm and other such tools for these exotic cases where that's needed”… they entirely miss the point that wtarreau brings up about how someone may want to know that in every piece of code that one writes!
A lot of good stuff in there
You are using "SJW" like a slur. Do not do that here.
Perhaps this is enough
> Can I ask you, please, again, to give it a rest?
Perhaps this is enough
A lot of good stuff in there
> Do you see the difference?
A lot of good stuff in there
> Not just bugs; it's the overwhelming norm when you *don't* have the source code and are trying to figure out WTF the binary is doing.
A lot of good stuff in there
Programming in C with machine code in mind is a myth.
> I notice you didn't mention machine code there ... a fundamental part of wtarreau's experience. How close is Rust to machine code?
A lot of good stuff in there
Mel loved the RPC-4000 because he could optimize his code: that is, locate instructions on the drum so that just as one finished its job, the next would be just arriving at the "read head" and available for immediate execution. There was a program to do that job, an "optimizing assembler", but Mel refused to use it.
"You never know where it's going to put things", he explained, "so you'd have to use separate constants".

It was a long time before I understood that remark. Since Mel knew the numerical value of every operation code, and assigned his own drum addresses, every instruction he wrote could also be considered a numerical constant. He could pick up an earlier "add" instruction, say, and multiply by it, if it had the right numeric value.
A lot of good stuff in there
But that's not what is happening in most projects.
The old C code is well tested and there are not many bugs. It can stay as-is for the next 3 decades.
Bugs and security issues primarily happen in new code.
> The old C code is well tested and there are not many bugs. It can stay as-is for the next 3 decades.
A lot of good stuff in there
> But just like no python developer is asked to learn to program PIC or AVR in assembly
A lot of good stuff in there
> Are you sure? I rather suspect that significant percentage of CircuitPython users go on to “program PIC or AVR in assembly”.
A lot of good stuff in there
("An Overview of the Singularity Project" - https://www.microsoft.com/en-us/research/wp-content/uploa...)
("Singularity: Rethinking the Software Stack" - https://www.microsoft.com/en-us/research/wp-content/uploa...)
> Statistically, it feels like the next "Linux" is likely to be written in Rust.
A lot of good stuff in there
Automated large-scale code translation from C to Rust is likely to look a lot like C2Rust with a set of rewrite rules that target obvious deficiencies in the output Rust (in the same way that cargo clippy --fix has rewrite rules that target obvious deficiencies in human-written Rust).
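A sketch of the shape of such a rewrite rule's input and output (both functions are invented for illustration, not actual C2Rust output):

```rust
// Literal, C-flavoured translation: an index-based while loop,
// roughly what a mechanical C-to-Rust translator tends to emit.
fn sum_mechanical(xs: &[i64]) -> i64 {
    let mut total: i64 = 0;
    let mut i: usize = 0;
    while i < xs.len() {
        total += xs[i];
        i += 1;
    }
    total
}

// The target of a "replace index loop with iterator" rewrite rule,
// i.e. what `cargo clippy --fix`-style cleanup aims for.
fn sum_idiomatic(xs: &[i64]) -> i64 {
    xs.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    assert_eq!(sum_mechanical(&data), sum_idiomatic(&data));
}
```

The two functions compute the same thing; the rewrite only changes which one a reviewer would want to maintain.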
Automated translation from C to Rust
A lot of good stuff in there
There's plenty of blame to go around here
Wol
There's plenty of blame to go around here
A lot of good stuff in there
Big picture
Wol
Big picture
"...and its safety was much lower than would be acceptable nowadays."
Big picture
>
> But the project went beyond simply preserving documentation. Rocketdyne actually sought to preserve the knowledge inside the heads of the people who designed and manufactured the engines. They conducted tape-recorded interviews with them, asking about parts that were difficult to produce and manufacturing tricks that they had learned in the process of building multiple engines.
(https://www.thespacereview.com/article/588/1)
Big picture
Wol
Big picture
Permit me to attempt to rise to the challenge of the "Big picture" subject. The reader may want to brew a coffee for this one. (Or skip it.)
Big picture
Not followed it closely... but was just wondering.
Karol Herbst resigned from nouveau citing, specifically, Ted Ts'o's "thin blue line" comment. Earlier Ts'o's behaviour at a conference caused a R4L maintainer, Wedson Almeida Filho, to resign.
Slightly OT, one more resignation from one more project caused by that LKML thread
Wol
Historically, it was meant to represent the US's police as the thin blue line between the lawlessness caused by allowing slaves to escape and act like people, and civilization where people acted lawfully.
Slightly OT, one more resignation from one more project caused by that LKML thread
Given that the Wikipedia link you provide has citations about it being used to refer to exactly that, including police silence about racist attacks on non-white US citizens in the 1970s, calling it "entirely untrue" is stretching the words "untrue" and "entirely" to their limits.
Slightly OT, one more resignation from one more project caused by that LKML thread
It was used in that sense in the 1970s and 1990s, and that sense has overtaken the original meaning by a significant margin. Further, it was referencing back to the history of the US, before slavery was abolished - and there were still plenty of people in the 1970s who wanted to go back to the era of slavery, where people knew their place.
Slightly OT, one more resignation from one more project caused by that LKML thread
I believe your second paragraph is a complete misreading of what the original message said. It is all too common for contributors to disappear once they get their code upstream, leaving maintainers with the responsibility for it. It's not surprising that maintainers respond by at least wanting that code to be in good shape.
Reading comprehension is a biggie:
* reading the full context,
* putting yourself in the shoes of the writer. What might their general circumstances be? What do they probably know that you don't? They clearly have a reason for saying what they do.
* not making emotional assumptions about what they *might* mean,
* asking what their overall point is,
* when in doubt, assuming good faith.
