
I see what you mean but...

Posted Dec 2, 2025 21:40 UTC (Tue) by dcoutts (subscriber, #5387)
Parent article: Zig's new plan for asynchronous programs

> Languages that don't make a syntactical distinction (such as Haskell) essentially solve the problem by making everything asynchronous, which typically requires the language's runtime to bake in ideas about how programs are allowed to execute.

Yes there's an analogy in there somewhere but no. Asynchronous code and threaded code have some similarities but are different. Async code is about trying to do cooperative execution in a single thread (and often with little to no runtime support). Threaded code (with language support) typically means a runtime system with a thread scheduler, and some compiler support to implement proper thread pre-emption.

In Haskell in particular (which is what I'm familiar with) the compiler doesn't need to make everything async. It compiles to very traditional-looking low-level sequential code. The only magic is that the compiler inserts yield points (where it has to do stack or heap checks anyway), and yes, there is a thread scheduler in the runtime system (and thread synchronisation primitives interact with the scheduler too, of course).
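To illustrate the point about threads rather than async, here is a minimal sketch of Haskell's lightweight concurrency, using the standard `forkIO` and `MVar` primitives from `Control.Concurrent`. The thread body is ordinary sequential code with no explicit await; the runtime scheduler handles pre-emption at the compiler-inserted yield points.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkIO $ do
    -- Ordinary sequential code: no async/await syntax. The GHC
    -- scheduler can pre-empt this thread at allocation checks.
    putMVar done (sum [1 .. 100 :: Int])
  -- takeMVar blocks this thread; the scheduler runs others meanwhile.
  result <- takeMVar done
  print result
```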

Turning everything async is a rather different compiler transformation.


I see what you mean but...

Posted Dec 2, 2025 21:58 UTC (Tue) by daroc (editor, #160859)

I agree that Haskell code compiles down to code that looks more like threads than like Rust's async, for example. But that's at least partly because Haskell's design, as a language, has pervasive support for suspending computations as part of being a non-strict language. The unit is just the thunk, not the asynchronous function.

Compare a Rust async function that does some work, switches to another async function at an .await, and then finishes its work. That is conceptually quite similar to a Haskell function that does some work, demands another thunk, and then finishes its work. They're really quite similar in that neither usually involves the runtime, unless there's some multithreading going on or it's time for a context switch. In both languages, the operation (.await or forcing a thunk) is theoretically implemented with a call instruction, but in practice the compiler can inline parts of it or perform it ahead of time if it can prove the result will be used. In both languages, the in-progress state is partly stored in registers and mostly stored in a specific object in memory.

I accept that it's not a perfect analogy. There are serious differences between the languages, and in particular the GHC runtime's "everything is a function, even data" approach is pretty different from Rust's "everything is data, even async functions" approach. But I also think that it's not a misleading comparison when the language mechanisms are solving similar problems (letting computation occur in an order that doesn't strictly match the order that a traditional strict, imperative language would demand) in a similar way (by using specialized objects in memory that a runtime helps to manage, but that can do basic interactions between objects just by calling through the appropriate function pointer).
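The thunk side of the analogy can be seen in a tiny sketch: binding `expensive` below allocates a suspended computation but does no work, much as constructing a Rust future does nothing until it is awaited; the work happens only when the value is demanded. (`expensive` is an illustrative name, not from the discussion.)

```haskell
main :: IO ()
main = do
  -- 'expensive' is a thunk: allocated here, but not yet evaluated.
  let expensive = sum [1 .. 1000000 :: Int]
  putStrLn "thunk allocated, not yet forced"
  -- Forcing the thunk (demanding its value) performs the work here,
  -- analogous to an .await driving an async body to completion.
  print expensive
```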

I see what you mean but...

Posted Dec 4, 2025 14:55 UTC (Thu) by dcoutts (subscriber, #5387)

I think an analogy between Rust's async and Haskell's laziness is a much better one. Yes, bouncing around between bits of code due to thunk forcing could look a lot like async code.

Whereas Haskell's normal I/O lightweight threading/concurrency code really is all about threads (schedulers, sync primitives, pre-emption etc.) and is not related to Haskell's laziness at all.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.