An introduction to lockless algorithms
Posted Feb 22, 2021 12:14 UTC (Mon)
by Wol (subscriber, #4433)
In reply to: An introduction to lockless algorithms by jezuch
Parent article: An introduction to lockless algorithms
Cheers,
Wol
Posted Feb 24, 2021 23:49 UTC (Wed)
by NYKevin (subscriber, #129325)
[Link] (7 responses)
Of course, if you *really* want to motivate memoization, you probably want to talk about something more interesting, like Hashlife. I remember reading a lengthy series of articles on Eric Lippert's blog where he spends much of the series painstakingly squeezing out every ounce of performance from the iterative approach (vectorization, bit-twiddling, lookup tables, etc.), and then he switches to Hashlife ("Gosper's algorithm"), with hardly any weird optimizations at all beyond basic algorithm design and memoization, using *garbage collected objects* to represent everything, and suddenly the whole thing is orders of magnitude faster* for large-enough N. My conclusion: Using the right algorithm is far, far more effective than heavily optimizing the wrong algorithm.
* Strictly speaking, it is faster when solving the problem "Figure out what the board will look like at generation N." If the problem is instead "Figure out what the board will look like at every generation from 1 through N," then Hashlife is much less effective. So using it for an interactive UI that sequentially displays one generation at a time is probably not a great idea. But if you want to advance directly to generation 2^27 or something like that, Hashlife is basically the only game in town.
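A minimal sketch of the memoization idea, in Python; this is my own illustration rather than anything from Lippert's series:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed once and cached, so the call tree
    # collapses from exponential to linear in n.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))  # returns immediately; the uncached recursion would not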
Posted Mar 2, 2021 20:07 UTC (Tue)
by jezuch (subscriber, #52988)
[Link] (6 responses)
Fibonacci numbers are defined by a recurrence which has an exact (closed-form) solution, so you can calculate every number in O(1) time (assuming no bignums are needed). So it's doubly (or triply) a bad example to introduce recursion to new programmers ;)
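(The exact solution alluded to here is presumably Binet's closed form, F(n) = (φ^n − ψ^n)/√5 with φ = (1 + √5)/2 and ψ = 1 − φ, which can be evaluated with a fixed number of floating-point operations.)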
Posted Mar 5, 2021 3:16 UTC (Fri)
by dtlin (subscriber, #36537)
[Link] (5 responses)
It has a closed-form solution but that does not make it O(1).
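A hypothetical Python sketch of one reason why, not dtlin's own argument: evaluating the closed form in double-precision floats is a constant number of operations, but it stops matching the exact Fibonacci number once the result outgrows the 53-bit mantissa, and doing the exponentiation exactly instead means multiplying ever-larger numbers, which is not constant time.

import math

def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    # Closed form in IEEE doubles: constant work, limited precision.
    sqrt5 = math.sqrt(5)
    return round(((1 + sqrt5) / 2) ** n / sqrt5)

n = 0
while fib_binet(n) == fib_exact(n):
    n += 1
print("closed form first disagrees at n =", n)  # well below n = 100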
Posted Mar 11, 2021 9:56 UTC (Thu)
by anselm (subscriber, #2796)
[Link] (4 responses)
If your platform only supports integers up to a certain maximum (say 2^64-1), it is O(1) because it is straightforward to create a lookup table for all n where F(n) < 2^64 and use that rather than the recursive definition.
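A minimal sketch of that lookup table, as a Python illustration rather than anything platform-specific:

LIMIT = 2 ** 64                    # assumed platform maximum

_fib_table = [0, 1]
while _fib_table[-1] + _fib_table[-2] < LIMIT:
    _fib_table.append(_fib_table[-1] + _fib_table[-2])

def fib(n):
    # O(1) lookup; raises IndexError once F(n) would exceed the maximum.
    return _fib_table[n]

print(len(_fib_table), _fib_table[-1])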
Posted Mar 11, 2021 19:28 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (3 responses)
If you restrict permissible inputs to a predetermined finite range then every algorithm becomes O(1). That's not a very useful definition, though. And while a 93-entry table suffices to represent all Fibonacci numbers less than 2^64, many languages have built-in arbitrary-precision integer types limited only by available memory.
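For instance, a small sketch using Python's built-in arbitrary-precision int (my example, not nybble41's):

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(500))   # far past anything that fits in 64 bits, still exact

Of course, once the values outgrow a machine word, each addition costs time proportional to their length in digits, which is one more reason the O(1) claim does not survive the move to bignums.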
Posted Mar 12, 2021 10:49 UTC (Fri)
by anselm (subscriber, #2796)
[Link] (1 responses)
Obviously. I think the point here is that Fibonacci numbers are often used as a prime example for the use of recursion, but in practice, calculating Fibonacci numbers through a 1:1 implementation of their recursive definition is not what one would actually ever do. As such it gives people silly ideas. It would be better to come up with examples where there are more compelling arguments for using recursion in the first place, or at the very least put a big health warning on the Fibonacci example.
Posted Mar 13, 2021 8:54 UTC (Sat)
by neilbrown (subscriber, #359)
[Link]
Posted Mar 16, 2021 20:49 UTC (Tue)
by nix (subscriber, #2304)
[Link]