Heuristics for software-interrupt processing
Posted Mar 19, 2023 15:08 UTC (Sun) by mtaht (subscriber, #11087)
In reply to: Heuristics for software-interrupt processing by Paf
Parent article: Heuristics for software-interrupt processing
Living with some new aspect of Spectre every year, seeing capabilities make it into an Arm processor, seeing message passing make a comeback with io_uring, and watching eBPF take off all make me wish for something, even a virtual processor, that was more secure by design.
The Mill instruction set continues to evolve, mostly in directions that I think are excessive, and I keep wishing, as always, that with all the great compiler development happening, someone will manage to create a VM and compiler for it.
Another thing I would like to see is dedicated cache for irq routines so as to always have consistent latency while servicing them.
A new thing (I am getting seriously off-topic) now out there (not Mill-related) is a possible new floating-point standard, called posits: http://loyc.net/2019/unum-posits.html which I find sort of comforting. I would love it if we could recompile so many models against a different floating-point standard and cross-check the results. Those of you who don't distrust IEEE floating point worry me.
Posted Mar 21, 2023 5:45 UTC (Tue)
by pabs (subscriber, #43278)
[Link] (1 responses)
Posted Mar 21, 2023 6:23 UTC (Tue)
by joib (subscriber, #8541)
[Link]
I think some of the concepts they present could be interesting topics for academic research, but it seems their being super protective around it has caused academics to decide not to touch that stuff with a 10-foot pole. Or perhaps academics with much more architecture experience than me have looked at the Mill and concluded it's uninteresting, IDK.
Posted Mar 21, 2023 13:24 UTC (Tue)
by excors (subscriber, #95769)
[Link] (1 responses)
That sounds quite interesting. Seems the main difference is that posits are less uniformly distributed than IEEE floats: they have better precision for numbers around +/- 1.0, and also a wider range (smaller minimum positive value, larger maximum finite value), with the tradeoff of worse precision for numbers closer to the extremes. They implement that by effectively having a variable-length exponent and mantissa, so numbers further from 1.0 will allocate more bits to range at the expense of precision.
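That variable-length layout can be sketched in a few lines. The decoder below assumes the older es=0 convention for 8-bit posits (the 2022 posit standard fixes es=2); it is an illustrative sketch, not code from any posit library:

```python
def decode_posit(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit posit with es exponent bits into a float (sketch)."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")              # NaR ("Not a Real")
    sign = bits >> (n - 1)
    if sign:
        bits = (-bits) & mask            # negative posits: two's complement first
    rest = bits & ((1 << (n - 1)) - 1)
    # Regime: a run of identical bits after the sign; its length sets the
    # coarse scale, stealing bits from the exponent/fraction as it grows.
    first = (rest >> (n - 2)) & 1
    k, i = 0, n - 2
    while i >= 0 and ((rest >> i) & 1) == first:
        k += 1
        i -= 1
    regime = (k - 1) if first else -k
    i -= 1                               # skip the regime's terminating bit
    # Exponent bits (may be truncated at the word boundary; pad with zeros).
    exp, e_bits = 0, 0
    while e_bits < es and i >= 0:
        exp = (exp << 1) | ((rest >> i) & 1)
        e_bits += 1
        i -= 1
    exp <<= es - e_bits
    # Whatever bits remain form the fraction.
    frac_bits = i + 1
    frac = rest & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    value = (1 + frac / (1 << frac_bits)) if frac_bits > 0 else 1.0
    value *= 2.0 ** (regime * (1 << es) + exp)
    return -value if sign else value
```

Decoding a few 8-bit patterns shows the taper: 0x40 is exactly 1.0 and 0x50 is 1.5 (lots of fraction bits near 1.0), while the extreme 0x7F is 64.0 with no fraction bits left at all.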
There may be many use cases where it's very useful to have more precision around +/- 1.0 - in particular there seems to be some interest among HPC people - but I'm not convinced it'd be a good idea for general-purpose computation where many people use floats nowadays. I'm thinking of stuff like 3D games, which are a significant driver of PC hardware performance, where the graphics and physics engines do a lot of work on coordinates that are within a reasonably limited range (the size of a game world which is small enough to walk around) and are rarely close to +/-1, and they need reasonably consistent precision across that range (because players get sad when they walk to the far corners of the world and everything goes wobbly), which is the exact opposite of what posits provide.
Fixed-point integers provide even more consistent precision, but stuff like sqrt(a^2+b^2+c^2) is relatively expensive and awkward to do without overflow in fixed-point, so people don't want to do that either. IEEE floats seem like a useful compromise - you can be sloppy and ignore both precision and overflow, and you can largely ignore scale (it doesn't matter if you define 1.0 as representing 1mm or 1km, you'll get similar precision either way, unlike posits or fixed-point), and if you're working at a normal human range (from mm up to km) then you'll probably get away with it.
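The overflow hazard in that sqrt is easy to see in a sketch: squaring a 16.16 fixed-point value already produces a 32.32 intermediate, so the squares alone can exceed a 32-bit accumulator. The format and helper names here are made up for illustration:

```python
import math

SCALE = 1 << 16  # hypothetical 16.16 fixed-point format

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def to_float(q: int) -> float:
    return q / SCALE

def fixed_norm3(a: int, b: int, c: int) -> int:
    # a*a is a 32.32 intermediate: it alone can need 64 bits, which is the
    # overflow trap described above (Python ints are unbounded, so this
    # sketch just sidesteps it with a wide accumulator).
    s = a * a + b * b + c * c
    return math.isqrt(s)       # isqrt of a 32.32 value is 16.16 again

# 3-4-12 box diagonal: sqrt(9 + 16 + 144) = 13
assert to_float(fixed_norm3(to_fixed(3.0), to_fixed(4.0), to_fixed(12.0))) == 13.0
```

Note the pleasant accident that the square root of a 32.32 value lands back in 16.16, so no rescaling is needed; in float the same computation needs none of this bookkeeping.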
Posits sound like they're probably better than IEEE floats when you're interested in numerical analysis and you can scale your data and design your algorithms to work around +/-1, but possibly worse in the default case where people don't want to think about all that.
Posted Mar 21, 2023 13:54 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
Why is fixed point a problem? Yes, it causes a bit more grief in that the programmer has to keep track of the decimal, but if I define 1 as mm to 0 decimal places, then I simply redefine the number to be 6dp and I've converted it to km.
You can't just use fixed point without knowing what you're doing, but provided you know the typical range of the numbers you are dealing with, it can make life much simpler.
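A minimal sketch of that bookkeeping (the numbers are made up for illustration): the stored integer never changes, only the implied number of decimal places.

```python
count = 1_500_000   # raw integer as stored
dp_as_mm = 0        # read as millimetres to 0 decimal places: 1,500,000 mm
dp_as_km = 6        # same integer read to 6 decimal places: 1.5 km

assert count / 10**dp_as_mm == 1_500_000.0   # millimetres
assert count / 10**dp_as_km == 1.5           # kilometres, no conversion arithmetic
```

The unit change is pure relabelling; the only cost is that every operation on `count` must agree on which scale is in effect.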
Cheers,
Wol