
Rust 1.39.0 released

Version 1.39.0 of the Rust language is available. The biggest new feature appears to be the async/await mechanism, which is described in this blog post: "So, what is async await? Async-await is a way to write functions that can 'pause', return control to the runtime, and then pick up from where they left off. Typically those pauses are to wait for I/O, but there can be any number of uses."
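The feature is easiest to see in a small example. Here is a minimal sketch of the stabilized syntax; the standard library provides async/await and the Future trait but no executor, so this sketch assumes the futures crate's block_on to drive the future:

    use futures::executor::block_on; // executor: not in the standard library

    async fn fetch_number() -> u32 {
        // In real code this would be an I/O call; every `.await` is a point
        // where the function can pause and hand control back to the executor.
        41
    }

    async fn compute() -> u32 {
        fetch_number().await + 1
    }

    fn main() {
        // Drive the future to completion on the current thread.
        println!("{}", block_on(compute()));
    }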


Rust 1.39.0 released

Posted Nov 7, 2019 16:08 UTC (Thu) by ms-tg (subscriber, #89231) [Link] (3 responses)

Cannot find words to sufficiently express my excitement about this!

Rust 1.39.0 released

Posted Nov 7, 2019 19:21 UTC (Thu) by rahulsundaram (subscriber, #21946) [Link] (2 responses)

Please do, tell us why you are excited!

Rust 1.39.0 released

Posted Nov 8, 2019 1:26 UTC (Fri) by ms-tg (subscriber, #89231) [Link] (1 response)

Given that this is LWN, rather than advocating for the powerful ergonomics of the async/await programming model at last reaching low-level use cases for perhaps the first time, I imagine it may be of more interest to link to a blog post about the road to optimizing the underlying state machine representations that the Rust compiler generates for the Futures returned by async functions:
https://tmandry.gitlab.io/blog/posts/optimizing-await-1/

From the post:

> Seeing code written like this that compiled down to one state machine, with full code and data inlining, and no extra allocations, was captivating. You may as well have dropped out of the sky on a flying motorcycle and told me that magic exists, and I was a wizard. [2]

Rust 1.39.0 released

Posted Nov 8, 2019 1:36 UTC (Fri) by ms-tg (subscriber, #89231) [Link]

And from footnote [2] of the above quote, on why the potential excited the author so much that he was inspired to write the optimized implementation:

> At the time, I’d been writing some asynchronous object-oriented state machines by hand in C++11. This experience had been so difficult and error-prone that once I read Aaron’s post, it was inception: I couldn’t get the idea out of my head, and more than anything, I wanted to start using Rust at my job. Eventually, this led me to make a fateful decision, and find a new job where I could invest more of my time in Rust. But that’s another story for another day.

Rust 1.39.0 released

Posted Nov 7, 2019 20:53 UTC (Thu) by zorro (subscriber, #45643) [Link] (23 responses)

Can anyone explain what is so great about async/await?

To my untrained eye it looks like user mode scheduling, except that the runtime is not sophisticated enough to provide a simple fiber abstraction so instead forces the programmer to pollute their code with async/awaits everywhere.

Rust 1.39.0 released

Posted Nov 7, 2019 21:33 UTC (Thu) by NAR (subscriber, #1313) [Link] (1 responses)

To my eye it looks like a "function object". As far as I understand, it is not even executed in the background, so there's nothing asynchronous about it. I don't really see anything more than an "apply" function LISP has had for half a century...

Rust 1.39.0 released

Posted Nov 8, 2019 3:24 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

All of the await points in the function are suspension points. Rust is doing explicit cooperation here, so some kind of executor is needed to actually use the async function. Without it, you basically just have a suspended execution thunk. You can execute it on the current thread or throw it at a thread pool or any of a number of possible implementations. Those define the "background"-ness of the execution.
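A sketch of that point, assuming the futures crate for the executors: the same async fn can be driven on the current thread, handed to a thread pool, or simply never polled, in which case nothing runs.

    use futures::executor::{block_on, ThreadPool};

    async fn work(id: u32) {
        println!("task {} ran", id);
    }

    fn main() {
        // Run on the current thread, blocking until completion.
        block_on(work(1));

        // Or hand it to a thread pool; a real program would wait for the
        // pool's tasks to finish (e.g. via a channel) before exiting.
        let pool = ThreadPool::new().expect("failed to build thread pool");
        pool.spawn_ok(work(2));

        // Never polled: this is the "suspended execution thunk" above.
        let _inert = work(3);
    }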

I suggest reading the blogs in the last few This Week in Rust summaries. There's lots of good discussion and explanation in them.

Rust 1.39.0 released

Posted Nov 7, 2019 22:28 UTC (Thu) by pkolloch (subscriber, #21709) [Link] (8 responses)

It is more like a generator in Python. I.e. a function with yield points that can be continued later. This allows cooperative multitasking.

Async functions themselves don't execute the bulk of the code when called but return a Future which doesn't do much unless it is passed to an executor. All this ceremony allows IO heavy functions to compile to state machines that don't need heap allocations under the hood.

It is very cool for this use case because of its efficiency, especially in combination with Rust's guarantees that go beyond memory safety without garbage collection: no data races.

Rust 1.39.0 released

Posted Nov 7, 2019 22:33 UTC (Thu) by atai (subscriber, #10977) [Link] (7 responses)

obvious question: how to approximate this in standard C++ as closely as possible?

Rust 1.39.0 released

Posted Nov 7, 2019 22:58 UTC (Thu) by roc (subscriber, #30627) [Link] (1 responses)

C++ coroutines are being standardized. http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p...

Rust 1.39.0 released

Posted Nov 7, 2019 22:59 UTC (Thu) by roc (subscriber, #30627) [Link]

Rust 1.39.0 released

Posted Nov 7, 2019 23:24 UTC (Thu) by alsuren (subscriber, #62141) [Link] (4 responses)

The closest thing I know of in C++ would be stackless coroutines. I'm not sure how "standard" they are though, and I've never used them, so I don't know how ergonomic they are.

Async functions are super-common in managed languages like C#, JS, and Python. Rust manages to get similar ergonomics with an execution model that allows pluggable schedulers and all sorts of interesting performance properties. Naturally everyone is quite excited.

I suspect that you could build something similar on top of stackless coroutines, but writing a multithreaded work-stealing scheduler in C++ doesn't sound fun. There is also a perception that async/await is a pattern that is mostly useful for writing web services, and that C++ is not a language that you want web developers to use. We web devs write enough security vulnerabilities as it is in memory-managed languages. Can you imagine what would happen if you got a bunch of rushed web devs to write multithreaded C++ and then exposed it to the internet?

Rust 1.39.0 released

Posted Nov 8, 2019 17:04 UTC (Fri) by kleptog (subscriber, #1183) [Link] (3 responses)

The advantage of async/await is that you get asynchronous execution in many cases where threads are clear overkill. Suppose you want to listen on a bunch of sockets (like an IRC server); starting a thread for each socket is a sledgehammer to hit a nail. Now you can write your code in a straightforward way as if you're processing each socket separately, and underneath the compiler reorganises your code so it has a loop running select() driving a large collection of state machines. Look ma, no race conditions!
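A rough sketch of that shape; the runtime here is an assumption (this uses the tokio crate's listener and task-spawning API, one of several executors you could plug in):

    use tokio::io::AsyncReadExt;
    use tokio::net::TcpListener;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:6667").await?;
        loop {
            let (mut socket, addr) = listener.accept().await?;
            // Each connection becomes a cheap task in the runtime's pile of
            // state machines, not a dedicated OS thread.
            tokio::spawn(async move {
                let mut buf = [0u8; 1024];
                while let Ok(n) = socket.read(&mut buf).await {
                    if n == 0 {
                        break;
                    }
                    println!("{} sent {} bytes", addr, n);
                }
            });
        }
    }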

The real awesomeness is combining this with generators. Not only can you easily loop over data structures in memory, you can also have iterators that iterate over data that has to be retrieved over the network.
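Rust did not stabilize async generators with this release, but the futures crate's Stream trait plays that role. A hand-rolled sketch of iterating over paged data that has to be fetched asynchronously (the fetch_page function is hypothetical):

    use futures::stream::{self, StreamExt};

    // Hypothetical stand-in for a network call returning one page of results.
    async fn fetch_page(page: u32) -> Vec<u32> {
        if page < 3 { vec![page * 10, page * 10 + 1] } else { Vec::new() }
    }

    async fn consume() {
        // Build a stream that pulls pages lazily, one await at a time.
        let pages = stream::unfold(0u32, |page| async move {
            let items = fetch_page(page).await;
            if items.is_empty() { None } else { Some((items, page + 1)) }
        });
        futures::pin_mut!(pages);
        while let Some(items) = pages.next().await {
            println!("got {:?}", items);
        }
    }

    fn main() {
        futures::executor::block_on(consume());
    }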

But you're right that it's mostly useful for network stuff, since that's the most common thing to wait on. Async I/O for disks just isn't as well supported or useful. And it requires compiler support, because it requires significant transformations of the structure of the code.

Rust 1.39.0 released

Posted Nov 8, 2019 22:23 UTC (Fri) by joib (subscriber, #8541) [Link] (2 responses)

Seems we're finally getting useful async io for disks on Linux with io_uring.

Speaking of which, any project wiring up Rust async/await with io_uring?

Rust 1.39.0 released

Posted Nov 9, 2019 4:16 UTC (Sat) by mathstuf (subscriber, #69389) [Link] (1 response)

These links seem like a good place to start (haven't investigated too much myself; there aren't async functions here, but something could probably be built from them if you can make a poll() function for a Future):

https://www.reddit.com/r/rust/comments/dtfgsw/iou_rust_bi...
https://github.com/withoutboats/iou
https://docs.rs/iou/0.1.0/iou/

Rust 1.39.0 released

Posted Nov 20, 2019 11:01 UTC (Wed) by ms-tg (subscriber, #89231) [Link]

This blog post explains the current state of iou and the likely path forward:

https://boats.gitlab.io/blog/post/iou/

Rust 1.39.0 released

Posted Nov 7, 2019 22:57 UTC (Thu) by roc (subscriber, #30627) [Link] (2 responses)

Here's a good writeup explaining some of the downsides of fibers: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p...

Rust 1.39.0 released

Posted Nov 8, 2019 20:56 UTC (Fri) by ms-tg (subscriber, #89231) [Link] (1 response)

> Here's a good writeup explaining some of the downsides of fibers: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p...

Interesting! The position of this paper is to argue *against* stackful coroutines (fibers), and explicitly *for* stackless coroutines (async/await) as a superior alternative. And therefore, that there are not...

> ...enough motivating reasons for C++ language to adopt and maintain a highly-platform dependent facility

Based on the lack of context in the initial post, I had assumed incorrectly this paper would be arguing against async/await, rather than for it - did you read the paper differently?

Rust 1.39.0 released

Posted Nov 8, 2019 21:37 UTC (Fri) by roc (subscriber, #30627) [Link]

The paper is very clear, I read it the same as you ... which is why I said "explaining some of the downsides of fibers".

Rust 1.39.0 released

Posted Nov 8, 2019 1:12 UTC (Fri) by flussence (guest, #85566) [Link] (7 responses)

It's a de-facto standard across multiple languages for doing {native,green} threads in a non-lethal and fairly readable way. Much better than polluting code with callback hell or manual juggling of synchronisation and spawning primitives. The high level syntax also means the language has more room to optimise.

Rust 1.39.0 released

Posted Nov 8, 2019 7:12 UTC (Fri) by zorro (subscriber, #45643) [Link] (6 responses)

I fail to understand why async/await is considered "high level syntax". To me it is low level syntax. Instead of "n = socket.Receive()" I have to write "n = await socket.ReceiveAsync()", then recursively write all my code as async methods and call await on them as well. Async/await has the effect of turning me, the programmer, into a low-level task scheduler, forcing me to pollute my code with low-level task scheduling logic that has nothing to do with what my code is trying to achieve. Why can the compiler/runtime not keep the async stuff under the hood?

Rust 1.39.0 released

Posted Nov 8, 2019 8:08 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Coroutines can have zero overhead. In many cases they can be optimized into simple loops; a classic example is a tree iterator. This allows them to be used for fast collection processing, for example.

But async/await can be used for higher-level scheduling, yes.

Rust 1.39.0 released

Posted Nov 8, 2019 13:49 UTC (Fri) by zwol (guest, #126152) [Link]

In my personal bitter experience, I don't want the async stuff hidden away under the hood. You know how extremely threaded code is a nightmare to debug because it's so easy to miss a race condition? Making the suspension points explicit in every subroutine is a huge help with that. It cuts the state space of possible execution traces down to something a human can reasonably keep in their head.

Rust 1.39.0 released

Posted Nov 8, 2019 19:58 UTC (Fri) by farnz (subscriber, #17727) [Link]

Because async/await isn't really about task scheduling logic (at least in Rust) - it uses that name because then it's familiar to programmers coming from languages like Python.

Instead, it's syntax sugar for constructing a state machine - you write code as-if you were not writing a state machine, putting in `.await` at points where you want to delay changing state until another state machine has also changed state, and the compiler generates your new state machine for you.

At the very bottom of the stack of state machines you're creating, you usually have small state machines that change state in response to external events - for example, a socket might change state when there is new data to read - and that's how you end up with a top-level state machine that's useful. But it's state machines all the way down, just written nicely.

If you actually want task scheduling, there's thread APIs for that, but the state machine representation tends to perform better on real systems than the tasks representation of a solution, because the size of the state machine's saved state at points where it waits for another state machine to change state tends to be much smaller than the size of a task stack that can be switched out at any time.
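To make the "sugar for a state machine" point concrete, here is a hand-written Future that is morally what the compiler generates for an async fn with a single await point. The names are made up, and the real generated code handles pinning without the Unpin shortcut used here; it is a sketch of the shape, not the actual expansion:

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // State machine for something like:
    //     async fn download_len(inner: impl Future<Output = Vec<u8>>) -> usize {
    //         let bytes = inner.await;
    //         bytes.len()
    //     }
    enum DownloadLen<F> {
        Waiting(F), // parked at the await point, holding the inner machine
        Done,
    }

    impl<F> Future for DownloadLen<F>
    where
        F: Future<Output = Vec<u8>> + Unpin,
    {
        type Output = usize;

        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<usize> {
            let this = self.get_mut();
            match this {
                DownloadLen::Waiting(inner) => match Pin::new(inner).poll(cx) {
                    // The inner state machine changed state: resume here.
                    Poll::Ready(bytes) => {
                        let len = bytes.len();
                        *this = DownloadLen::Done;
                        Poll::Ready(len)
                    }
                    // Still waiting: suspend until the waker fires again.
                    Poll::Pending => Poll::Pending,
                },
                DownloadLen::Done => panic!("polled after completion"),
            }
        }
    }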

Rust 1.39.0 released

Posted Nov 8, 2019 22:48 UTC (Fri) by roc (subscriber, #30627) [Link] (1 responses)

Async/await forces the programmer to separate normal "run to completion" functions from async functions which can be suspended, which means you can compile async functions differently from normal functions: the former as suspendable state machines, the latter as regular code. If the programmer doesn't do that, i.e. it's kept "under the hood", the obvious implementation options are:
* Compile all functions as suspendable state machines: slow code because you can't leave values on the stack across a function call, you effectively have to allocate stack frames on the heap.
* Compile all functions as normal code: every suspendable computation has to have its own stack, i.e. fibers. See http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p... for some of the problems there.
* Try to make the compiler detect which functions definitely can't (indirectly) suspend, compile those functions as normal code, use state machines for the rest: totally fails due to separate compilation, FFI, the difficulties of interprocedural control-flow analysis with function pointers, etc.

If you have a different solution no-one has thought of, please let the world know.

Also, as zwol noted, syntactic separation of async from run-to-completion code helps reasoning about program behaviour. For languages like Rust where the compiler effectively forces you to prove safety, this means in general the code you are allowed to write in a function is different depending on the await-points it contains. (This is better than in C++, where the code you are allowed to write is the same, but the wrong code will exploitably crash.)

Rust 1.39.0 released

Posted Nov 8, 2019 22:55 UTC (Fri) by roc (subscriber, #30627) [Link]

There is actually a fourth option, I guess: if you have a JIT compiler, you could dynamically generate either the state-machine version or the normal version of a function. It would be super hairy though, because you'd have to handle the case where the normal version of a function runs to some point then unexpectedly (perhaps indirectly) calls a function that suspends. You would then have to unwind the stack, copying on-stack function activation state into equivalent off-stack state. Have fun dealing with pointers to existing stack data. Anyway, requiring a JIT is a non-starter for C++, Rust and Go at least.

Rust 1.39.0 released

Posted Nov 10, 2019 0:15 UTC (Sun) by ofranja (guest, #11084) [Link]

Unlike Go or Python, Rust does not have a runtime: the compiler only helps with type inference and declarations, while the implementation of futures and schedulers is provided by libraries.

The async declarations in Rust are mostly there to ease writing such functions and to improve error-message ergonomics, since you could already explicitly return a Future type from a function before this support landed on stable Rust.
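For illustration, a minimal sketch of that equivalence (the function names are made up):

    use std::future::Future;

    // The new syntax: the compiler writes the anonymous Future type for you.
    async fn double(x: u32) -> u32 {
        x * 2
    }

    // Roughly the equivalent signature spelled out by hand; the body still
    // needs some Future value (here an async block), but the point is that
    // callers see a plain function returning a Future either way.
    fn double_explicit(x: u32) -> impl Future<Output = u32> {
        async move { x * 2 }
    }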

Rust 1.39.0 released

Posted Nov 16, 2019 9:00 UTC (Sat) by iq-0 (subscriber, #36655) [Link]

One angle the other commenters haven’t highlighted:

Rust already had a whole ecosystem based around futures and combinators for doing async logic. But using it turns out to be quite complex in combination with the ownership and borrowing model of Rust. This leads to a lot of small allocations and unergonomic code.

What `async`/`await` add to this is a way for the compiler to see where the suspension and resumption of logic happen, and thus to “prove” that the ownership and borrowing rules aren't being violated.

Thus, apart from bringing the same ease of writing async code as you'd write normal code, you can now write this logic without having to jump through the hoops that were needed to satisfy the borrow checker.
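A small sketch of that difference (the helper is hypothetical): with an async fn the compiler can verify a borrow that lives across an await point, something the combinator style typically forced you to avoid by cloning or boxing.

    // Hypothetical async helper; imagine it does some I/O per line.
    async fn handle(line: &str) -> usize {
        line.len()
    }

    // The borrow of `data` lives across each `.await`; the generated state
    // machine keeps the reference, and the borrow checker can prove it is
    // still valid when the function resumes.
    async fn total_len(data: &str) -> usize {
        let mut total = 0;
        for line in data.lines() {
            total += handle(line).await;
        }
        total
    }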

Rust 1.39.0 released

Posted Dec 4, 2019 12:19 UTC (Wed) by kitanatahu (guest, #44605) [Link] (1 responses)

Don't be fooled by the naming; this is just a misnamed lambda function... it does nothing asynchronously, it's just synchronous execution with a delay until you actually need the result.

From the article on this:
"In contrast, in Rust, calling an async function does not do any scheduling in and of itself, which means that we can compose a complex nest of futures without incurring a per-future cost. As an end-user, though, the main thing you'll notice is that futures feel "lazy": they don't do anything until you await them."

Rust 1.39.0 released

Posted Dec 4, 2019 12:39 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

While it is that on the surface, you also get suspend points from your internal await usages. That's not something you can do ergonomically with lambdas (and it was the main problem with futures 0.1).

