LWN: Comments on "Zig 2024 roadmap" https://lwn.net/Articles/959915/ This is a special feed containing comments posted to the individual LWN article titled "Zig 2024 roadmap". en-us Thu, 02 Oct 2025 15:34:14 +0000 Thu, 02 Oct 2025 15:34:14 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Zig 2024 roadmap https://lwn.net/Articles/963613/ https://lwn.net/Articles/963613/ kelvin <div class="FormattedComment"> <span class="QuotedText">&gt; Firefox originates from the Mozilla organization</span><br> <p> that was supposed to be "Rust originates from the Mozilla organization"<br> </div> Mon, 26 Feb 2024 10:33:21 +0000 Zig 2024 roadmap https://lwn.net/Articles/963611/ https://lwn.net/Articles/963611/ kelvin <div class="FormattedComment"> <span class="QuotedText">&gt; I didn't know Firefox is written in Rust. I thought it's mainly C++.</span><br> <p> Firefox originates from the Mozilla organization, and while C++ is still the most used language in Firefox as measured by lines of code, significant parts of Firefox are now written in Rust.<br> <p> Here's a project which tracks the language stats of Firefox: <a rel="nofollow" href="https://4e6.github.io/firefox-lang-stats/">https://4e6.github.io/firefox-lang-stats/</a><br> <p> C++ ~9.9 MLOC<br> JavaScript ~9.5 MLOC<br> C ~5.3 MLOC<br> Rust ~4.3 MLOC<br> <p> </div> Mon, 26 Feb 2024 10:31:32 +0000 Zig 2024 roadmap https://lwn.net/Articles/963408/ https://lwn.net/Articles/963408/ pawel44 <div class="FormattedComment"> <span class="QuotedText">&gt; There are several large Rust projects (firefox 2.3MLOC[...]).</span><br> <p> I didn't know Firefox is written in Rust. I thought it's mainly C++.<br> </div> Fri, 23 Feb 2024 18:05:30 +0000 Zig 2024 roadmap https://lwn.net/Articles/961164/ https://lwn.net/Articles/961164/ rghetta Multithreaded yes. Fuzzed only on import interfaces, not on the complete codebase. 
Wed, 07 Feb 2024 12:59:13 +0000 Zig 2024 roadmap https://lwn.net/Articles/961087/ https://lwn.net/Articles/961087/ pbonzini <div class="FormattedComment"> GCC and clang have promised for over 10 years to not use that latitude (not sure if left shift overflow is undefined, but anyway negative numbers can be shifted left and right with no other worries). So yeah it should be a no brainer.<br> </div> Tue, 06 Feb 2024 15:25:37 +0000 Zig 2024 roadmap https://lwn.net/Articles/961058/ https://lwn.net/Articles/961058/ kentonv <div class="FormattedComment"> From: <a href="https://en.wikipedia.org/wiki/Memory_safety">https://en.wikipedia.org/wiki/Memory_safety</a><br> <p> <span class="QuotedText">&gt; Memory safety is the state of being protected from various software bugs and security vulnerabilities when dealing with memory access, such as buffer overflows and dangling pointers.[1] For example, Java is said to be memory-safe because its runtime error detection checks array bounds and pointer dereferences.[1] In contrast, C and C++ allow arbitrary pointer arithmetic with pointers implemented as direct memory addresses with no provision for bounds checking,[2] and thus are potentially memory-unsafe.[3]</span><br> </div> Tue, 06 Feb 2024 14:30:24 +0000 Zig 2024 roadmap https://lwn.net/Articles/961047/ https://lwn.net/Articles/961047/ khim <font class="QuotedText">&gt; Nope:</font> <p>Ugh. I think someone should propose to fix it, then. 
C++ has finally removed “undefined behavior” from there (crazy values still trigger undefined behavior when <b>E2</b>, the <b>right</b> operand, is “strange”, but anything is accepted as <b>E1</b>, the left operand — starting from C++20), thus in practice compilers already have code to handle everything properly.</p> Tue, 06 Feb 2024 10:13:31 +0000 Zig 2024 roadmap https://lwn.net/Articles/961041/ https://lwn.net/Articles/961041/ pbonzini <div class="FormattedComment"> Nope:<br> <p> <span class="QuotedText">&gt; 6.5.7 Bitwise shift operators</span><br> <span class="QuotedText">&gt;</span><br> <span class="QuotedText">&gt; [...] 4 The result of E1 &lt;&lt; E2 is E1 left-shifted E2 bit positions [...] if E1 has a signed type and nonnegative value, and E1 * 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined</span><br> <span class="QuotedText">&gt;</span><br> <span class="QuotedText">&gt; 5 The result of E1 &gt;&gt; E2 is E1 right-shifted E2 bit positions [...] If E1 has a signed type and a negative value, the resulting value is implementation-defined</span><br> </div> Tue, 06 Feb 2024 09:02:03 +0000 Zig 2024 roadmap https://lwn.net/Articles/961040/ https://lwn.net/Articles/961040/ LtWorf <div class="FormattedComment"> Most people believe that their project (however simple it might be) is the apex of difficulty.<br> </div> Tue, 06 Feb 2024 08:54:38 +0000 Zig 2024 roadmap https://lwn.net/Articles/961039/ https://lwn.net/Articles/961039/ LtWorf <div class="FormattedComment"> Changing the definition to have the definition apply… classic!<br> </div> Tue, 06 Feb 2024 08:50:57 +0000 Zig 2024 roadmap https://lwn.net/Articles/961007/ https://lwn.net/Articles/961007/ atnot <div class="FormattedComment"> C++ templates are probably actually a good example in multiple ways here.
Anyone really into their types would sneer at someone calling templates generics: they aren't really generics, for reasons similar to why comptime isn't dependent typing. But they're still extraordinarily popular and helpful, and they make it very easy to sell people on proper generics by pointing at them and saying "they're like that, but better".<br> </div> Mon, 05 Feb 2024 20:35:19 +0000 Zig 2024 roadmap https://lwn.net/Articles/961004/ https://lwn.net/Articles/961004/ atnot <div class="FormattedComment"> Yes, for anyone curious, one reason is that comptime isn't statically typed; it's dynamically typed, but at compile time. For example, there is no way to write things like "comptime function that returns a function that returns either int or float", you can only write "comptime function that returns a function that returns some mystery surprise type". Like C++ templates, there's no type checking going on until after things have already been evaluated.<br> <p> That said, you can do a lot of similar constructions and I think it faces a lot of similar issues regarding ergonomics when used in a non-fp language.
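To make the contrast concrete from the Rust side, here is a minimal sketch of generics being checked at the definition site rather than at instantiation (the `largest` function is illustrative, not from any project discussed here):

```rust
// A generic function is type-checked at its definition site: the `PartialOrd`
// bound is what licenses the use of `>` below. Remove the bound and this
// definition fails to compile on its own, before any caller exists. A C++
// template or a Zig comptime function is only checked per instantiation.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &x in &items[1..] {
        if x > max {
            max = x;
        }
    }
    max
}

fn main() {
    assert_eq!(largest(&[1, 5, 3]), 5);
    assert_eq!(largest(&["a", "c", "b"]), "c");
    println!("ok");
}
```

Removing the `PartialOrd` bound makes the definition itself fail to compile, which is exactly the check that templates and comptime defer until use.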
Plus I think it also nicely demonstrates some of the benefits of having a single unified type system without having to teach someone to read Haskell-like syntax.<br> </div> Mon, 05 Feb 2024 20:16:33 +0000 Zig 2024 roadmap https://lwn.net/Articles/961001/ https://lwn.net/Articles/961001/ roc <div class="FormattedComment"> What Zig does is not "dependent types" as in academia.<br> </div> Mon, 05 Feb 2024 19:02:17 +0000 Zig 2024 roadmap https://lwn.net/Articles/961000/ https://lwn.net/Articles/961000/ roc <div class="FormattedComment"> Is it multithreaded and being fuzzed by experts?<br> </div> Mon, 05 Feb 2024 19:00:23 +0000 Zig 2024 roadmap https://lwn.net/Articles/960976/ https://lwn.net/Articles/960976/ atnot <div class="FormattedComment"> <span class="QuotedText">&gt; Nothing would have been avoided because Rust would have been just ignored.</span><br> <p> You're just making my own arguments back at me now, but snarkier. I said this two messages ago.<br> <p> Look, I like Rust too; it's my personal language of choice. There's no need for this level of aggressive defensiveness over someone on the internet thinking it would be useful to make some pretty marginal tradeoff differently. With the data we have, I'm convinced it makes sense today and I've explained why.
You're welcome to disagree.<br> </div> Mon, 05 Feb 2024 16:26:59 +0000 Zig 2024 roadmap https://lwn.net/Articles/960977/ https://lwn.net/Articles/960977/ mb <div class="FormattedComment"> <span class="QuotedText">&gt;10x or more slowdown is not that uncommon if you enable these checks and then process</span><br> <span class="QuotedText">&gt;large arrays of integers, which is definitely easily measurable in real world programs.</span><br> <p> You do not have to decide globally whether you want overflow checks or not.<br> <p> You can enable overflow checks in release builds, and for the performance-critical code you can use <a href="https://doc.rust-lang.org/std/num/struct.Wrapping.html">https://doc.rust-lang.org/std/num/struct.Wrapping.html</a><br> With that you get fast code, and safe code where it matters.<br> </div> Mon, 05 Feb 2024 16:14:45 +0000 Zig 2024 roadmap https://lwn.net/Articles/960969/ https://lwn.net/Articles/960969/ khim <font class="QuotedText">&gt; In a hypothetical world where overflow checking was enabled by default</font> <p>…Rust would have played the role of “new Haskell”: something which people talk about but don't use, except for a few eggheads, and then rarely.</p> <font class="QuotedText">&gt; And millions of mysterious production bugs and a hundred CVEs would have been avoided, at barely any cost to most programmers and one extra profiling iteration of a thousand for a few people writing heavily integer math code.</font> <p>Nothing would have been avoided because Rust would have been just ignored.
Rust, quite consciously, <a href="https://steveklabnik.com/writing/the-language-strangeness-budget">used up its weirdness budget</a> for other, more important, things.</p> <p>Perhaps Rust with slow-integers-by-default would have saved someone from themselves, but chances are high that it would have hindered adoption of Rust too much: people are notoriously finicky about simple things, and seeing dozens of checks in a program which should be, by their understanding, two or three machine instructions long would have given Rust a bad reputation for sure.</p> <font class="QuotedText">&gt; This is just based on just testing my own programs.</font> <p>If you are happy with that mode then why couldn't you just enable it in your code? <code>-Z force-overflow-checks</code> exists precisely because some people like these overflow checks.</p> <p>I'm not a big fan of them because, in my experience, for hundreds of bugs where some kind of buffer is too small and range checks are catching the issue, there exist maybe one or two cases where a simple integer overflow check is capable of catching an issue which is not <b>also</b> caught by these range checks.
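For reference, the per-site opt-in discussed in this thread can be sketched like this; the methods shown are standard-library Rust and the values are illustrative:

```rust
use std::num::Wrapping;

fn main() {
    let a: u8 = 250;

    // checked_add makes overflow an explicit, testable outcome:
    assert_eq!(a.checked_add(5), Some(255));
    assert_eq!(a.checked_add(10), None);

    // wrapping_add / Wrapping<T> opt in to modular arithmetic even in a
    // build where overflow checks are globally enabled:
    assert_eq!(a.wrapping_add(10), 4); // 260 mod 256
    assert_eq!((Wrapping(250u8) + Wrapping(10u8)).0, 4);

    // saturating_add clamps at the numeric bounds instead:
    assert_eq!(a.saturating_add(10), 255);

    println!("ok");
}
```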
Certainly not enough to warrant these tales about <i>millions of mysterious production bugs</i> (why not trillions, if you go for imaginary unjustified numbers, BTW?)</p> Mon, 05 Feb 2024 15:36:03 +0000 Zig 2024 roadmap https://lwn.net/Articles/960965/ https://lwn.net/Articles/960965/ pbonzini <div class="FormattedComment"> Oh, I would be happy to be wrong!<br> </div> Mon, 05 Feb 2024 15:24:24 +0000 Zig 2024 roadmap https://lwn.net/Articles/960956/ https://lwn.net/Articles/960956/ khim <p><a href="https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1236r1.html"> P1236R1</a> changed the definition to precisely what you say in C++20.</p> <p>Are you sure C23 hasn't picked up that change?</p> Mon, 05 Feb 2024 15:18:59 +0000 Zig 2024 roadmap https://lwn.net/Articles/960954/ https://lwn.net/Articles/960954/ khim You are talking about the right argument; pbonzini is talking about the left one. Mon, 05 Feb 2024 15:17:38 +0000 Zig 2024 roadmap https://lwn.net/Articles/960907/ https://lwn.net/Articles/960907/ atnot <div class="FormattedComment"> <span class="QuotedText">&gt; 10x slowdown is “barely measurable”? What world do you live in?</span><br> <p> This is just based on just testing my own programs. Which are generally bottlenecked on memory, not integer math, as most programs are. Even then, 10x is a ridiculous number; I can't find anyone reporting anything even close to 2x in real world programs.<br> <p> But here, let's look at some actual data, someone running SPECint:<br> <p> <span class="QuotedText">&gt; On the other hand, signed integer overflow checking slows down SPEC CINT 2006 by 11.8% overall, with slowdown ranging from negligible (GCC, Perl, OMNeT++) to about 20% (Sjeng, H264Ref) to about 40% (HMMER). [...]</span><br> <span class="QuotedText">&gt; Looking at HMMER, for example, we see that it spends &gt;95% of its execution time in a function called P7Viterbi().
This function can be partially vectorized, but the version with integer overflow checks doesn’t get vectorized at all. [...]</span><br> <span class="QuotedText">&gt; Sanjoy Das has a couple of patches that, together, solve [some missed optimizations]. Their overall effect on SPEC is to reduce the overhead of signed integer overflow checking to 8.7%.</span><br> <a href="https://blog.regehr.org/archives/1384">https://blog.regehr.org/archives/1384</a><br> <p> SPECint is a bit biased towards HPC, but we see that even there most normal business-logic-style code doesn't lose out at all. The losses are dominated by a few very hot functions that are presumably heavily optimized already.<br> <p> As you note, overflow checks interfere with vectorization, but so do millions of other things; it's notoriously finicky. Rust regularly misses autovectorization because of bounds checks too. It's very hard to write non-trivial code that vectorizes perfectly and reliably across platforms by accident.<br> <p> Which gets me to the actual point I was getting at, which you ignored: In a hypothetical world where overflow checking was enabled by default, here's how this would have gone in a profiling session:<br> <p> "why is hmmer::P7Viterbi() so slow now?"<br> "oh, it's not vectorizing because of overflow"<br> "let me replace it with a wrapping add, or trap outside of the loop body, or use iterators since I'm using Rust"<br> "that's better"<br> <p> And millions of mysterious production bugs and a hundred CVEs would have been avoided, at barely any cost to most programmers and one extra profiling iteration of a thousand for a few people writing heavily integer math code.<br> </div> Mon, 05 Feb 2024 15:09:00 +0000 Zig 2024 roadmap https://lwn.net/Articles/960915/ https://lwn.net/Articles/960915/ farnz <p>That then makes <tt>x &gt;&gt; y</tt> and <tt>x &lt;&lt; y</tt> a multiple instruction sequence, not a single instruction sequence.
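As a concrete reference point for the shift discussion in this thread, Rust pins down all of these cases with defined behaviour; a small sketch with illustrative values:

```rust
fn main() {
    // On signed types, >> is an arithmetic shift: the sign bit is replicated,
    // so the result rounds toward negative infinity...
    assert_eq!(-7i32 >> 1, -4);
    // ...while integer division truncates toward zero:
    assert_eq!(-7i32 / 2, -3);

    // On unsigned types, >> is a logical shift:
    assert_eq!(0x80u8 >> 1, 0x40);

    // Shift amounts >= the bit width are a checked error, not undefined:
    assert_eq!(1i32.checked_shl(40), None);

    println!("ok");
}
```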
On AArch64, for example, <tt>LSL x1, x2, x3</tt> is defined as "take the bottom 6 bits of x3, shift x2 left by that amount"; this ignores the sign bit completely, and to implement the behaviour you're suggesting, I'd have to check the sign bit of x3, then choose whether to do an LSL, LSR, or ASR instruction based on the signedness of x2's current contents and the sign bit of x3. Mon, 05 Feb 2024 15:01:15 +0000 Zig 2024 roadmap https://lwn.net/Articles/960914/ https://lwn.net/Articles/960914/ pbonzini <div class="FormattedComment"> I never quite understood why shifts of negative values are still an issue now that C23 mandates two's complement representation. They should be changed to be equal to x*2^n for left shift, and x/2^n rounded towards negative infinity for right shift.<br> </div> Mon, 05 Feb 2024 14:53:57 +0000 Zig 2024 roadmap https://lwn.net/Articles/960899/ https://lwn.net/Articles/960899/ khim <font class="QuotedText">&gt; But it's usually barely measurable in real world programs.</font> <p>10x slowdown is “barely measurable”? What world do you live in?</p> <font class="QuotedText">&gt; I should say Rust has this issue a bit too, thanks to the regrettable decision to disable overflow checks in release mode by default.</font> <p>If you are talking about arithmetic (which is wrapping in Rust when not in debug mode) then they really had no choice: while trying to vectorize code with these checks is not impossible, the infrastructure is just not there.
And they really had no resources to add custom passes to LLVM which would make the whole thing usable from Rust with non-wrapping arithmetic.</p> <p>10x or more slowdown is not that uncommon if you enable these checks and then process large arrays of integers, which is definitely easily measurable in <i>real world programs</i>.</p> <p>And having different rules for integers in arrays and standalone integers would be just too weird (although it may be an interesting optional mode of compilation, now that I think about it).</p> Mon, 05 Feb 2024 13:49:25 +0000 Zig 2024 roadmap https://lwn.net/Articles/960896/ https://lwn.net/Articles/960896/ atnot <div class="FormattedComment"> <span class="QuotedText">&gt; comptime instead of generics is a non-starter, I suspect, since generics in Rust lay out data differently, not just change the code</span><br> <p> You totally can do that! Zig and other languages with dependent-ish types generally have a unified type system for all types, including types themselves. In practical Zig terms, this means that you can return not only an integer from a comptime function, but also the integer type itself, or a different type _depending_ on the arguments (that's where the term comes from), or an arbitrary struct type you just made, or a function, or anything really.<br> <p> So in mathematical terms it's actually far more powerful than anything Rust has. Rust can do a few of these things as bespoke features, but it's not a generalized system in the same way it is in dependently typed languages.
This has its advantages and disadvantages, with dedicated syntax generally being more compact, readable and debuggable, but also leaving weird incongruities between various parts of the language that are hard to solve, as Rust is experiencing.<br> <p> It's been a pretty hip thing to experiment with somewhat recently (at least before effect systems really hit the scene) so I'm looking forward to seeing how Zig fares with it in a non-academic setting.<br> </div> Mon, 05 Feb 2024 13:44:36 +0000 Zig 2024 roadmap https://lwn.net/Articles/960887/ https://lwn.net/Articles/960887/ farnz <p><tt>comptime</tt> instead of generics is a non-starter, I suspect, since generics in Rust lay out data differently, not just change the code. But <tt>comptime</tt> instead of macros and some uses of traits would be extremely interesting to see; I suspect that there are a lot of cases where people currently have to write procmacros in Rust where <tt>comptime</tt> would be a good fit. Mon, 05 Feb 2024 10:22:54 +0000 Zig 2024 roadmap https://lwn.net/Articles/960884/ https://lwn.net/Articles/960884/ ballombe <div class="FormattedComment"> <span class="QuotedText">&gt; The program might not do what you want, but this is not what memory safety means.</span><br> <p> This is my point. rust defines memory safety, not the other way round.<br> </div> Mon, 05 Feb 2024 10:13:17 +0000 Zig 2024 roadmap https://lwn.net/Articles/960877/ https://lwn.net/Articles/960877/ rghetta I don't know about C, but I work on a very active 5MLOC C++ project, and in my experience memory errors are not so frequent, especially after C++11. Each year we have *some* memory bugs (and none in production) but hundreds of logic errors. RAII, smart pointers, references, even templates can make a huge difference in preventing resource bugs, imho.
Mon, 05 Feb 2024 07:02:55 +0000 Zig 2024 roadmap https://lwn.net/Articles/960861/ https://lwn.net/Articles/960861/ roc <div class="FormattedComment"> I think the question is misplaced because we're not actually pouring resources into Zig. Even advocates agree that it's years away from stabilization, and it's not going to get used much until after that happens. In the meantime, there are some interesting ideas like comptime that will get some testing.<br> <p> We might even discover at some point in the future that "Rust, but comptime instead of generics and macros" would be an improvement on Rust.<br> </div> Sun, 04 Feb 2024 20:28:19 +0000 Zig 2024 roadmap https://lwn.net/Articles/960860/ https://lwn.net/Articles/960860/ Wol <div class="FormattedComment"> Yup. I got the impression the LLVM people were quite happy to fix the problems, but if it's pervasive right through the IR and implementation, I can understand it being a long and painful process.<br> <p> Certainly I've seen a fair few complaints that "C/C++ isn't strict, so the LLVM isn't strict. Rust is strict and it breaks".<br> <p> Cheers,<br> Wol<br> </div> Sun, 04 Feb 2024 20:18:11 +0000 Zig 2024 roadmap https://lwn.net/Articles/960859/ https://lwn.net/Articles/960859/ roc <div class="FormattedComment"> FWIW I wrote about half of a 250K-ish line system (Pernosco) in Rust, using async for parts of it. It was a great experience and I'll use Rust for new production systems whenever I possibly can. It's not just about reliability and safety but also easy parallelism and maintainability. And FWIW I've been coding in C++ for over 30 years including stints at Mozilla and Google.<br> <p> *My* wish is that more advocates of less safe languages such as C, C++ and Zig had experience working on multi-million line multithreaded programs that are constantly scrutinized by malicious experts.
Then we'd hear less of "have you tried just not writing memory safety bugs?"<br> </div> Sun, 04 Feb 2024 19:51:14 +0000 Zig 2024 roadmap https://lwn.net/Articles/960858/ https://lwn.net/Articles/960858/ ojeda <p>So, you are saying we can avoid temporal memory safety mistakes by "thinking carefully". But somehow, the same argument would not apply to spatial memory safety mistakes, and in fact, we would "constantly" make them.</p> <p>Well, according to the <a href="https://www.chromium.org/Home/chromium-security/memory-safety/">Chromium project</a> (and other projects), some of those issues you say we would not "run into in practice" are, in fact, quite prevalent sources of vulnerabilities: "Around 70% of our high severity security bugs are memory unsafety problems (that is, mistakes with C/C++ pointers). Half of those are use-after-free bugs."</p> <blockquote>&gt; If that was Zig code, using a slice for that buffer would have given me a panic</blockquote> <p>...unless the safety checks are disabled, e.g. via <code>-OReleaseFast</code>.</p> <blockquote>&gt; Rust would not let code with those bugs compile, it would also make it a lot harder to write anything in the first place</blockquote> <p>I mean, you state this as a fact, but you also recognize you have almost no experience with Rust. If you already think in terms of lifetimes anyway as you said, then writing safe Rust should be a very nice experience.</p> <blockquote>&gt; I hope I have at least demonstrated that the dismissive, rhetorical question of "Why are we, as an industry, pouring resources into non-memory-safe languages [like Zig]?" is equally misplaced :)</blockquote> <p>It is not misplaced. The point is that nowadays we know how to do better. The industry (and other entities) is interested in getting away from memory unsafety as much as possible. 
Thus introducing a new language that essentially works like C (especially if you consider existing tooling for C) is not a good proposition.</p> Sun, 04 Feb 2024 19:39:30 +0000 Zig 2024 roadmap https://lwn.net/Articles/960857/ https://lwn.net/Articles/960857/ roc <div class="FormattedComment"> I don't think LLVM is a lost cause there --- John Regehr and others are doing heroic work in this area --- but it sure is a problem.<br> </div> Sun, 04 Feb 2024 19:19:29 +0000 Zig 2024 roadmap https://lwn.net/Articles/960855/ https://lwn.net/Articles/960855/ khim <font class="QuotedText">&gt; My thinking for those "Portable assembler" people is a revision of C which eliminates all the Undefined Behaviour and in the process also removes any semblance of modern convenience or decent performance.</font> <p>It wouldn't work. Because they are after efficiency, they just feel that the compiler has to provide it if possible and leave their low-level tricks alone if it doesn't understand something.</p> <p>Usually they even realize that <b>some</b> unlimited undefined behaviors are needed (look at their <a href="https://www.complang.tuwien.ac.at/papers/ertl17kps.pdf">semi-serious proposals</a>… <i>Loosening the basic model</i> part is precisely about that), they just couldn't accept the fact that it's either <b>fully defined</b> behavior, or <b>fully undefined</b> behavior; their dreams of, somehow, pulling <s>the rabbit out of the hat</s> what-the-hardware-does behavior into high-level languages just wouldn't work.</p> <font class="QuotedText">&gt; This hypothetical language would have wrapping overflow for all its integer types, it would define all bounds misses to result in a zero value, and its memory model would be an idealized vast array of raw bytes, thus enabling Provenance Via Integers.</font> <p>And even if you invented a way to define how right shift is supposed to work… they would immediately turn around and ask why the compiler <i>doesn't do the sensible thing</i> and doesn't
put local variables in registers (which is impossible in such a language).</p> <font class="QuotedText">&gt; I think such a language could be made to deliver software that's maybe a thousand times slower than Python</font> <p>No, it wouldn't even be that much slower if you used it as “portable assembler” and wrote code as if you were writing assembler, and the compiler (which may usually do some optimizations, but not if you “code for the hardware”) just didn't do anything. The problem is that they wouldn't be satisfied, anyway.</p> <font class="QuotedText">&gt; They wouldn't necessarily _write_ software in this revised C, but they could spend their days arguing with each other about the definitions, which keeps them off the streets.</font> <p>They don't really spend that much time arguing about things. They actually produce code, and their rants are usually limited to complaints about how this or that compiler misunderstands their genius ideas and instead of producing optimized code breaks their valuable programs.</p> <p>When asked what the compiler should do instead, they usually either provide <a href="https://blog.regehr.org/archives/1287">random incompatible answers</a> or just say that “compiler should preserve hardware semantics” without elaborating what that phrase even means.</p> Sun, 04 Feb 2024 18:10:42 +0000 Zig 2024 roadmap https://lwn.net/Articles/960854/ https://lwn.net/Articles/960854/ kentonv <div class="FormattedComment"> What do you mean by "bypass refcounting"? JavaScript doesn't use refcounting.<br> <p> Also "memory safety" isn't really about collecting garbage / leaks, it's about preventing memory access errors like out-of-bounds access or use-after-free. A garbage collector which never actually collects anything would technically be "memory safe".<br> <p> Are you saying that e.g. JavaScript in Node.js doesn't sufficiently prevent such memory errors?
Can you give a specific example?<br> </div> Sun, 04 Feb 2024 17:56:18 +0000 Zig 2024 roadmap https://lwn.net/Articles/960853/ https://lwn.net/Articles/960853/ IntrusionCM <div class="FormattedComment"> In theory, thanks to garbage collection, yes.<br> <p> In practice, especially given its (ab-)use in backends, no.<br> <p> Pure JavaScript as a browser-specific, client-side-only language has become niche. <br> <p> Most complex frameworks need to bypass refcounting and thus make the GC useless.<br> <p> The same goes for Java, for example. <br> <p> Memory safety isn't perfect in Rust, either, just far better, as it was integrated into the core design of the language.<br> </div> Sun, 04 Feb 2024 17:18:34 +0000 Zig 2024 roadmap https://lwn.net/Articles/960851/ https://lwn.net/Articles/960851/ mb <div class="FormattedComment"> <span class="QuotedText">&gt;you could build this type of allocator using only safe code</span><br> <p> Certainly.<br> <p> But note that the allocator traits are unsafe. That means if an allocator implements these traits, it promises to not violate memory safety rules. Therefore, an allocator can't reduce Rust's 100%-safe guarantee, unless it is unsound. An unsound allocator is a bug.<br> <p> And if you don't register the allocator to Rust by implementing the unsafe trait and your allocator is 100% safe Rust, then it's just a normal piece of code that is also 100% memory safe. Doesn't reduce Rust's guarantees either.<br> <p> Rust's safety guarantees assume and depend on all unsafe code and the operating environment (all linked libraries, the operating system and the hardware) to be sound and not violate Rust's memory model.<br> <p> Of course that means in practice one will probably find bugs in unsafe or foreign language code that breaks Rust's safety guarantees. A CVE in libc, for example (<a href="https://lwn.net/Articles/960289/">https://lwn.net/Articles/960289/</a>).
But that is an argument for pushing Rust code even further down the chain of dependencies, into the operating system.<br> </div> Sun, 04 Feb 2024 17:18:22 +0000 Zig 2024 roadmap https://lwn.net/Articles/960850/ https://lwn.net/Articles/960850/ mpr22 <div class="FormattedComment"> There kind of is "compile-time checking" for a hammer; the manufacturer calls it "quality control", and it's there to make sure that the carpenter, if they do their job properly, won't get injured in a way that attracts an unacceptable financial liability to the manufacturer or tool supplier.<br> </div> Sun, 04 Feb 2024 16:40:16 +0000 Zig 2024 roadmap https://lwn.net/Articles/960848/ https://lwn.net/Articles/960848/ matthias <div class="FormattedComment"> <span class="QuotedText">&gt; Sound Rust does not provide 100% memory safety, only the illusion of 100% memory safety.</span><br> <p> Rust is quite precise in what memory safety means. First and foremost it means that it is impossible to invoke undefined behavior, e.g. data races. It certainly does not mean that the owner of a portion of memory cannot write data to the memory.<br> <p> <span class="QuotedText">&gt; I work on a C software that use a custom memory manager. It works this way:</span><br> <span class="QuotedText">&gt; Use mmap to allocate a large ( 1GB to 1TB) array of virtual memory.</span><br> <span class="QuotedText">&gt; The custom memory allocator will give you a range of indices that you can use in this array.</span><br> <p> The question is: Who owns the array? If it is owned by the allocator, then all reads and writes must go through the allocator. Only the function/struct that owns this array is allowed to access it. 
Of course you can give away mutable access to the array, but again, there can be only a single owner of the mutable reference.<br> <p> <span class="QuotedText">&gt; From the point of view of rust (and valgrind!), all the memory is owned by main(), there is no pointers, all accesses are checked to be within the bound of the allocation, all memory is initialized etc.</span><br> <p> Why should the memory be owned by main()? It should be owned by your custom allocator (which is transitively owned by main()).<br> <p> <span class="QuotedText">&gt; But really, there is nothing that prevent the code to write outside the allocated range of indices as long as it is inside the array. So buffer overflow are still possible, even though the memory manager is safer than malloc.</span><br> <p> It depends on how you give access to the memory. If you really only give indices and have functions in the allocator for reads and writes, then yes, you have a function that can write to arbitrary locations in the array. It is perfectly safe to do so and this will not invoke undefined behavior. The program might not do what you want, but this is not what memory safety means.<br> <p> If you give slices of the array, then each receiver of a slice can only write to that slice and you have bounds checking etc. This is the only way different parts of the program can modify parts of the array independently of each other. However, Rust will make sure that they only write to the parts of the array the custom allocator has reserved for them. Writing such an allocator certainly requires some unsafe code to split up the one array into several smaller pieces that have independent lifetimes.
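A toy sketch of that slice-handout approach, using `split_at_mut()` to carve disjoint mutable slices from one backing array; the `Bump` type and its method names are illustrative, not a real allocator API:

```rust
use std::mem;

// Toy bump allocator: owns the free tail of one backing array and hands out
// disjoint &mut slices. The borrow checker guarantees no two returned slices
// overlap, with no unsafe code needed.
struct Bump<'a> {
    free: &'a mut [u8],
}

impl<'a> Bump<'a> {
    fn new(backing: &'a mut [u8]) -> Self {
        Bump { free: backing }
    }

    // Carve `n` bytes off the front of the remaining free space.
    fn alloc(&mut self, n: usize) -> Option<&'a mut [u8]> {
        if n > self.free.len() {
            return None;
        }
        // Temporarily take the free slice so we can split it with lifetime 'a.
        let free = mem::take(&mut self.free);
        let (chunk, rest) = free.split_at_mut(n);
        self.free = rest;
        Some(chunk)
    }
}

fn main() {
    let mut backing = [0u8; 16];
    let mut bump = Bump::new(&mut backing);

    let a = bump.alloc(4).unwrap();
    let b = bump.alloc(8).unwrap();
    a.fill(1);
    b.fill(2);

    // Only 4 bytes remain, so a larger request fails cleanly:
    assert!(bump.alloc(8).is_none());
    assert_eq!((a[0], b[0]), (1, 2));
    println!("ok");
}
```

As the comment says, a real allocator needs finer-grained control than this, but the safe core of the idea is exactly this handout of non-overlapping slices.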
There is at least split_at_mut() as a safe abstraction, so you could build this type of allocator using only safe code, but you certainly will need a bit more fine-grained control if you really write an allocator yourself.<br> <p> Valgrind will only see individual allocations, but Rust's ownership tracking can be much more fine-grained than a single allocation from the system allocator. And clearly it will ensure that you cannot overwrite the stack and that there are no data races and much more.<br> </div> Sun, 04 Feb 2024 16:39:09 +0000 Zig 2024 roadmap https://lwn.net/Articles/960843/ https://lwn.net/Articles/960843/ kentonv <div class="FormattedComment"> What do you mean? JavaScript is memory-safe.<br> </div> Sun, 04 Feb 2024 15:55:53 +0000