
Python sets, frozensets, and literals

Posted Jan 20, 2022 12:12 UTC (Thu) by excors (subscriber, #95769)
In reply to: Python sets, frozensets, and literals by NYKevin
Parent article: Python sets, frozensets, and literals

> n has nothing to do with the sizes of individual elements. n is the number of elements.

But if your elements are e.g. 32-bit integers then you can't have more than 2^32 distinct elements, so the behaviour of your algorithm as n tends to infinity is undefined. For the big-O notation (which is based on n tending to infinity) to be valid, you have to pretend your elements are integers with infinite range (but can still be processed in constant time) so there's no upper bound on n. That's usually a reasonable thing to pretend if you want the theory to closely match a practical implementation, but not always, so O(1) and O(log(n)) are both valid answers for set membership depending on what assumptions you make.


Python sets, frozensets, and literals

Posted Jan 21, 2022 15:53 UTC (Fri) by jwarnica (subscriber, #27492)

If we start from the first principle that computers are finite state automata, then your argument about 2^32 applies to everything in reality.

O(n) might be mathematically defined in terms of "tending to infinity", but unless the analysis determines that weird things happen at n > 2^32+1, it's the same thing. Offhand, I can't think of any of the handful of standard complexity curves in use where things get weird past some useful point off to the right.

As others have mentioned, one should consider not n -> infinity but n -> expectations, since expectations will always be less than infinity. And the memory size of any computer you will ever run something on will also be less than infinity.

Python sets, frozensets, and literals

Posted Jan 21, 2022 17:29 UTC (Fri) by nybble41 (subscriber, #55106)

> And memory size of any computer you will ever run something on will also be less than infinity.

The general approximation for treating finite computers as equivalent to Turing machines is that, while any given computer may have finite memory, you can always add more if an algorithm requires it for some particular input, even if that requires multiple levels of indirection due to e.g. the address space limits of the CPU. Of course, for physical computers, at some point you run into the physical limits of computation (information density limited by thermodynamics; Bremermann's limit on computational speed), but this is with respect to an idealized model of computation where e.g. your computer architecture (ALU and currently installed memory) may be finite but there is no predetermined restriction on how long you're willing to wait for the result.

Also, it doesn't really matter that your memory is limited when analyzing *time* complexity, provided the algorithm has constant *space* complexity and can stream the input and output.
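As a hypothetical illustration of that last point: a one-pass computation that keeps only a fixed-size accumulator has constant space complexity, so its O(n) time bound holds no matter how little memory is installed, as long as the input can be streamed.

```python
def running_sum(numbers):
    """Sum a stream of numbers in O(n) time and O(1) space."""
    total = 0
    for x in numbers:   # one pass over the input
        total += x      # a single accumulator, regardless of input length
    return total

# Works on a lazy generator far larger than what memory could hold at once.
assert running_sum(range(10)) == 45
```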


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds