
Rust Keyword Generics Progress Report: February 2023

Rust Keyword Generics Progress Report: February 2023

Posted Feb 25, 2023 17:50 UTC (Sat) by rrolls (subscriber, #151126)
In reply to: Rust Keyword Generics Progress Report: February 2023 by bartoc
Parent article: Rust Keyword Generics Progress Report: February 2023

I'm not a Rust programmer, but I like to keep an eye on how a number of different languages develop just out of interest, whether I use them or not.

From that perspective:

I've noticed two main ways of doing async code: "the Node.js way", which started out as callback functions, then turned into Promises, and finally became what we now call "colored functions" - an approach since adopted by Python and Rust; and "the Ruby way", aka Fibers, where any function can potentially suspend, which has been adopted by PHP and (IIUC) Zig. Personally, despite Python being the only one of these languages I actually use on a regular basis, I'm massively in favor of "the Ruby way", for the very reason you point out: it lets you use, say, a third-party library from both sync and async code, and the library doesn't need to care. I do wonder if the only real reason any language still does it "the Node.js way" is that it would be a massive backward-compatibility break to change it.
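To make the "colored functions" problem concrete, here is a minimal Rust sketch (the function names are mine, purely illustrative): the same logic has to be written twice, once for each "color", because an async fn can only be awaited from another async context.

```rust
// The same trivial logic, duplicated across the two "colors".
// A library that wants to serve both sync and async callers must
// maintain both versions today.
fn parse_len_sync(input: &str) -> usize {
    input.trim().len()
}

async fn parse_len_async(input: &str) -> usize {
    // Identical body; only the color of the function differs.
    input.trim().len()
}

fn main() {
    assert_eq!(parse_len_sync("  hello "), 5);
    // parse_len_async("  hello ") can only be driven from an async
    // context (an executor), which is exactly the coloring problem.
}
```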

It seems the Rust team has come up with their own ingenious solution that should allow de-duplicating most code that suffers from the "is it async or not" problem, though perhaps not as cleanly as languages doing things "the Ruby way", which don't have to mark calls which could potentially be async at all: in Rust, even with the proposal being discussed here, you'll still have to write .await? or .do on every potentially-async function call.
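For readers who haven't seen the proposal, the "maybe-async" syntax sketched in the progress report looks roughly like this (illustrative only - `?async` is a proposal under discussion, not valid Rust today):

```rust
// Hypothetical "maybe-async" syntax from the keyword generics proposal.
// One definition is meant to compile to both a sync and an async version.
trait ?async Read {
    ?async fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize>;
}

?async fn read_to_string(reader: &mut impl ?async Read) -> std::io::Result<String> {
    let mut buf = [0u8; 1024];
    // In an async caller this is a suspension point; when compiled as
    // sync, the `.await` is effectively a no-op - but it must still be
    // written at the call site, unlike in fiber-based languages.
    let n = reader.read(&mut buf).await?;
    Ok(String::from_utf8_lossy(&buf[..n]).into_owned())
}
```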



Rust Keyword Generics Progress Report: February 2023

Posted Feb 26, 2023 8:34 UTC (Sun) by burki99 (subscriber, #17149) [Link] (1 responses)

Thanks for bringing this up - I found https://journal.stuffwithstuff.com/2015/02/01/what-color-... which explains the details.

Rust Keyword Generics Progress Report: February 2023

Posted Feb 27, 2023 8:11 UTC (Mon) by rrolls (subscriber, #151126) [Link]

Good read. I remember coming across that post myself some years ago!

Rust Keyword Generics Progress Report: February 2023

Posted Feb 27, 2023 15:46 UTC (Mon) by jaymell (guest, #106443) [Link] (2 responses)

I have not used it, but I understand there are some attempts underway to introduce a coroutine-based concurrency implementation to Rust, e.g., May -- https://github.com/Xudong-Huang/may -- similar to "goroutines" in Go and (I presume) the Ruby implementation you describe.

I enjoyed using Go for the reasons you describe: generally, any code from any lib can be put into a goroutine and interacted with via channels. It does force you to structure your code very differently than async/await syntax does, however. From what I understand, Kotlin also has a pretty mature coroutine implementation at this point, though it too requires a certain amount of "coloring" of functions.

I'm not sure how this will ultimately play out in Rust, but it will be interesting if we end up with multiple options for approaching concurrency.

Rust Keyword Generics Progress Report: February 2023

Posted Mar 6, 2023 13:12 UTC (Mon) by ssokolow (guest, #94568) [Link] (1 responses)

The big problem is that the fibers/stackful-coroutines approach Go uses plays poorly with FFI, and FFI is Rust's bread and butter.

Give "Fibers under the magnifying glass" by Gor Nishanov a look.

Rust Keyword Generics Progress Report: February 2023

Posted Mar 8, 2023 21:46 UTC (Wed) by bartoc (guest, #124262) [Link]

The other problem is that it's motivated by performance considerations that no longer apply to modern operating systems (especially if we get io_uring clone/exec).

Rust Keyword Generics Progress Report: February 2023

Posted Mar 8, 2023 21:45 UTC (Wed) by bartoc (guest, #124262) [Link] (1 responses)

The problem with "the Ruby way" (fibres) is that you still need to rewrite the whole runtime to support them (since IO routines need to be taught how to switch tasks), and you don't really save any resources over just making a normal thread. At best you can stop allocating stacks (both the kernel stack and the user stack) for each task, but usually you just save the kernel stack. And if you _can_ eliminate both stacks that means your language / runtime heap allocates basically everything. The only other option is to get very, very, very clever, often at the expense of some safety or adding limitations on the depth of coroutine invocations (I think Zig takes this approach).

These sorts of runtimes also tend to be bug-prone because tasks can call out to libraries that are unenlightened and use things like thread local storage and get surprised when the values change out from under them as a task gets resumed on another "real" thread. This isn't a problem if you only have one "real" thread, but these sorts of systems usually want to use one thread per CPU.

Also, the performance advantages of fibre-like schemes over "just using a real thread" are not that pronounced anymore. They became popular in the days when most operating systems had "one big lock" around the whole scheduler; that's no longer true, and normal OS schedulers now scale much better with large numbers of threads and cores, making these sorts of N:M fibre schemes a little pointless.

Rust Keyword Generics Progress Report: February 2023

Posted Mar 10, 2023 2:05 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

> At best you can stop allocating stacks (both the kernel stack and the user stack) for each task, but usually you just save the kernel stack. And if you _can_ eliminate both stacks that means your language / runtime heap allocates basically everything.

Rust's async (or JavaScript's, or Python's) is basically isomorphic to segmented stacks. You save your true stack in a linked list of heap-allocated objects, and the system/kernel stack is only borrowed to run coroutines. You only need a handful of real threads, and there is no problem with having millions of coroutines.
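That "saved stack as heap objects, kernel stack only borrowed" structure can be made concrete by hand-writing what rustc generates for an async fn. This is a sketch - `AddLater` and `block_on` are my own names, and the executor is the bare minimum needed to drive one future:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hand-written equivalent of the state machine rustc generates for
//     async fn add_later(a: u32, b: u32) -> u32 { yield_once().await; a + b }
// The "stack frame" (a, b, and which state we're in) lives inside the
// future object itself -- the heap-resident "segment" described above --
// while the real thread stack is only borrowed during each poll.
enum AddLater {
    Start { a: u32, b: u32 },
    Yielded { a: u32, b: u32 },
    Done,
}

impl Future for AddLater {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        let this = self.get_mut();
        match std::mem::replace(this, AddLater::Done) {
            AddLater::Start { a, b } => {
                // Suspension point: stash the locals, hand the thread back.
                *this = AddLater::Yielded { a, b };
                Poll::Pending
            }
            AddLater::Yielded { a, b } => Poll::Ready(a + b),
            AddLater::Done => panic!("polled after completion"),
        }
    }
}

// Minimal single-future executor with a no-op waker: it just polls in a
// loop until the state machine reports Ready.
fn block_on<F: Future + Unpin>(mut fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = Pin::new(&mut fut).poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(AddLater::Start { a: 2, b: 3 }), 5);
}
```

Millions of such objects are cheap because each one is just an enum a few words wide, not a full thread stack.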

The problem is the speed. Go tried essentially this approach earlier in its life, and segmented stacks failed because they can cause unpredictable and horrible slowdowns when a tight loop crosses over the segmentation threshold.

Instead, Go now uses movable and resizable stacks, which provide the best of both worlds. This is possible because Go can maintain an invariant that no pointer on the heap can point to an object on the stack. So the runtime can just use contiguous stacks, without any penalty for normal functions. At the same time, the minimum stack size can be very small (2 KB for Go; it could go lower, but apparently this is the best compromise).

This kind of design is probably the best overall, but it's very hard to do without a rather intrusive runtime support.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds