LWN: Comments on "Advanced computing with IPython"
https://lwn.net/Articles/756192/

This is a special feed containing comments posted to the individual LWN article titled "Advanced computing with IPython".

Comment by excors, Wed, 20 Jun 2018 18:39:04 +0000
https://lwn.net/Articles/757891/

Oh, and the Python version also returns the value for n = 1e8 - 1, while the Fortran version goes up to n = 1e8, because Python's 'range' takes an exclusive upper bound and Fortran's 'do' is inclusive. But even after fixing that, it seems there is still a (slightly smaller) difference in their outputs.

Comment by excors, Wed, 20 Jun 2018 18:22:45 +0000
https://lwn.net/Articles/757887/

They should both be using double-precision floats, and should calculate 1 + 1/1e8 identically. But the ** operator is much more complicated and can be implemented in many different ways, with different levels of accuracy and performance. I guess Python probably uses pow() from the C standard library, so it might not even give the same answer if you run it on Windows, and the Fortran compiler presumably has its own implementation. So it's more a consequence of their maths libraries than of the languages themselves, exacerbated by doing a numerically silly computation.

Comment by yroyon, Wed, 20 Jun 2018 17:03:04 +0000
https://lwn.net/Articles/757879/

Thanks for the article. A great, gentle introduction.

A couple of things were not obvious to me.

1) First you time your snippets using %%time, then %%timeit. It's not clear why. From the docs, I gather that %%timeit accepts extra options, though you don't use them. It's still not clear to me why two separate magics exist instead of one. Couldn't the options just be added to %%time?

2) The approximation of e using Python vs. Fortran gives noticeably different output. Not really a desirable outcome. Is that solely because of the type precision?

Comment by njs, Thu, 14 Jun 2018 05:38:33 +0000
https://lwn.net/Articles/757470/

Yes, that's a mistake in the article. There's no parallel code inside the NumPy project itself; the only way it uses multiple cores is if it's configured to use a BLAS library that uses multiple cores, like OpenBLAS, MKL, or Accelerate. In practice this is almost always the case, and it's a pretty fine distinction, so it's an easy mistake to make...

What this means for end users is that matrix multiplication and the functions in the numpy.linalg module often use multiple cores automatically, but the rest of NumPy does not.
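[A small sketch of the distinction njs draws above; whether the matrix product really runs on several cores depends on which BLAS library the NumPy build links against:]

    import numpy as np

    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)

    c = a @ b             # dispatched to the BLAS library (OpenBLAS, MKL, ...),
                          # which may spread the work across several cores
    d = np.sqrt(a) + b    # ordinary elementwise ufuncs: computed in one thread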
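[On yroyon's first question: the two magics serve different purposes. %%time runs the cell body once and reports elapsed wall-clock and CPU time; %%timeit runs it repeatedly and reports statistics across the runs, which is why it carries its own options and its own magic. A quick illustration, as two separate cells with an arbitrary body:]

    %%time
    sum(i * i for i in range(10**6))

    %%timeit -n 10 -r 3
    sum(i * i for i in range(10**6))

[Here -n sets the number of loops per repeat and -r the number of repeats, so the second cell's body is executed 30 times in all. %%time remains the right tool for a cell that is too slow to run repeatedly.]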
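[To make excors's off-by-one point above concrete, here is a minimal sketch; it assumes the article's benchmark approximates e by evaluating (1 + 1/n)**n in a loop, and the loop body is illustrative rather than the article's exact code:]

    N = 10**8

    # Fortran's "do n = 1, N" includes the upper bound; Python's range()
    # excludes it, so the bound must be extended by one to match:
    for n in range(1, N + 1):
        e_approx = (1 + 1/n) ** n

    print(e_approx)  # the final iteration now uses n = N, as in Fortran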
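[As for the discrepancy that remains after that fix, the pow() point can be seen within Python alone by evaluating the same power through two different routes; the math.exp/math.log1p route below is just an alternative evaluation path chosen for illustration, not what either language actually does internally:]

    import math

    n = 1e8
    direct = (1 + 1/n) ** n                  # goes through the platform's pow()
    via_log = math.exp(n * math.log1p(1/n))  # a mathematically equivalent route

    # The two results will typically agree to only about eight or nine
    # significant digits: rounding the base 1 + 1/n to a double introduces
    # an error of order 1e-16, which the exponent of 1e8 amplifies.
    print(direct, via_log)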
Comment by leephillips, Fri, 08 Jun 2018 02:51:37 +0000
https://lwn.net/Articles/756908/

I took a closer look at my hardware. Your suspicions are correct: there are two physical cores, with two threads per core. I mistook the four threads for four cores.

Comment by excors, Thu, 07 Jun 2018 14:46:23 +0000
https://lwn.net/Articles/756857/

> However, the second version actually took a bit more than twice as long, which means that we only achieved a speedup of about two. This is due to the overhead of interprocess communication and setup, and should decrease as the calculations become more lengthy.

1.7 seconds is not lengthy enough? That seems surprising. It's also a bit suspicious that 3.44 ± 0.016 is almost exactly double 1.7 ± 0.02. Are you sure you don't actually have a dual-core CPU? :-)

Comment by leephillips, Tue, 05 Jun 2018 15:59:39 +0000
https://lwn.net/Articles/756657/

I think you're correct. The underlying BLAS routines for many array operations are multithreaded, if you have a modern version, but elementary operations seem not to be.

Comment by fghorow, Tue, 05 Jun 2018 15:31:22 +0000
https://lwn.net/Articles/756655/

I've been using numpy and its predecessors since the late '90s.

Numpy using multiple threads for typical arithmetic operations (+ - * / sqrt etc.) must be new. There are hooks to underlying parallel algorithms in certain cases, but for common operations, things are, or at least used to be, single-threaded.

Comment by osma, Tue, 05 Jun 2018 15:14:53 +0000
https://lwn.net/Articles/756654/

Thank you for the informative article! It's great to hear about different ways of parallelizing Python code.

I've personally used the multiprocessing.Pool class to parallelize code execution. Combined with the "with" statement, it's a really easy way of executing code in parallel on multiple cores. For example:

    import multiprocessing

    def slow_function(n):
        return n * n  # stand-in for some expensive computation

    inputs = [1, 2, 3, 4]

    with multiprocessing.Pool(processes=4) as pool:
        outputs = pool.map(slow_function, inputs)

This runs slow_function in four parallel processes, using a different value as input for each, and places the results in the outputs list.
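[For anyone else caught out by the logical-versus-physical distinction in the exchange above: os.cpu_count() reports logical CPUs (hardware threads) only; the third-party psutil package can report physical cores too, and on Linux lscpu gives the same breakdown. A quick check:]

    import os
    import psutil  # third-party: pip install psutil

    print(os.cpu_count())                   # logical CPUs, e.g. 4
    print(psutil.cpu_count(logical=False))  # physical cores, e.g. 2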