Making Python 3 more attractive
Larry Hastings was up next at the summit with a discussion of what it would take to attract more developers to use Python 3. He reminded attendees of Matt Mackall's talk at last year's summit, where the creator and project lead for the Mercurial source code management tool said that Python 3 had nothing in it that the project cares about. That talk "hit home for me", Hastings said, because it may explain part of the problem with adoption of the new Python version.
The Unicode support that comes with Python 3 is "kind of like eating your vegetables", he said. It is good for you, but it doesn't really excite developers (perhaps because most of them use Western languages, like English, someone suggested). Hastings is looking for changes that would make people want to upgrade.
He wants to investigate features that might require major architectural changes. The core Python developers may be hungry enough to get people to switch that they would be willing to consider those kinds of changes. But there will obviously be costs associated with changes of that sort; he wanted people to keep in mind the price in terms of readability, maintainability, and backward compatibility.
![Larry Hastings](https://static.lwn.net/images/2015/pls-hastings-sm.jpg)
The world has changed a great deal since Python was first developed in 1990. One of the biggest changes is the move to multi-threading on multicore machines. It wasn't until 2005 or so that he started seeing multicore servers, desktops, and game consoles, and then, shortly thereafter, laptops. Since then, tablets and phones have gotten multicore processors; now even eyeglasses and wristwatches are multicore, which is sort of amazing when you stop to think about it.
The perception is that Python is not ready for a multicore world because of the global interpreter lock (GIL). He said that he would eventually get to the possibility of removing the GIL, but he had some other ideas he wanted to talk about first.
For example, what would it take to have multiple, simultaneous Python interpreters running in the same process? It would be a weaker form of a multicore Python that would keep the GIL. Objects could not be shared between the interpreter instances.
In fact, you can do that today, though it is a bit of a "party trick", he said. You can use dlmopen() to open multiple shared libraries, each in its own namespace, so that each interpreter "runs in its own tiny little world". It would allow a process to have access to multiple versions of Python at once, though he is a bit dubious about running it in production.
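A minimal sketch of the trick on Linux (my illustration, not from the talk; the shared-object names are assumptions, and the program must be linked with -ldl): dlmopen() with LM_ID_NEWLM loads each libpython into its own link-map namespace, so each interpreter gets private copies of all of its C-level globals.

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

/* Minimal sketch: two isolated CPython runtimes in one process.
 * The shared-object names are assumptions; link with -ldl. */
int main(void)
{
    void *py2 = dlmopen(LM_ID_NEWLM, "libpython2.7.so.1.0",
                        RTLD_NOW | RTLD_LOCAL);
    void *py3 = dlmopen(LM_ID_NEWLM, "libpython3.4m.so.1.0",
                        RTLD_NOW | RTLD_LOCAL);
    if (!py2 || !py3) {
        fprintf(stderr, "dlmopen: %s\n", dlerror());
        return 1;
    }

    /* Each namespace gets its own copy of every interpreter global,
     * including the GIL and the allocator state. */
    void (*init2)(void) = (void (*)(void))dlsym(py2, "Py_Initialize");
    int (*run2)(const char *) =
        (int (*)(const char *))dlsym(py2, "PyRun_SimpleString");
    void (*init3)(void) = (void (*)(void))dlsym(py3, "Py_Initialize");
    int (*run3)(const char *) =
        (int (*)(const char *))dlsym(py3, "PyRun_SimpleString");

    init2();
    init3();
    run2("import sys; print sys.version");
    run3("import sys; print(sys.version)");
    return 0;
}
```

Nothing can cross between the two worlds: each namespace has its own allocator, type objects, and GIL, so even handing a bare object pointer from one interpreter to the other would crash.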
Another possibility might be to move global interpreter state (e.g. the GIL and the small-block allocator) into thread-local storage. It wouldn't break the API for C extensions, though it would break extensions that are non-reentrant. There is some overhead to access thread-local storage because it requires indirection. It is "not as bad as some other things" that he would propose, he said with a chuckle.
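A sketch of the idea (all names here are invented; CPython's actual interpreter state is far larger):

```c
/* Hypothetical, simplified stand-ins for interpreter-wide state. */
struct gil_state  { int locked; };
struct block_pool { void *free_list; };

struct interp_state {
    struct gil_state  gil;
    struct block_pool small_blocks;
};

/* Today (conceptually): process-wide globals shared by every thread.
 *     static struct gil_state  gil;
 *     static struct block_pool small_blocks;
 *
 * Sketch: each thread instead reaches its own copy through
 * thread-local storage. Correct code is unchanged, but every access
 * now pays for one extra indirection. */
static _Thread_local struct interp_state *interp;

static void *allocate_small_block(void)
{
    /* was: small_blocks.free_list ... */
    return interp->small_blocks.free_list;   /* indirect access */
}
```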
A slightly cleaner way forward would be to add an interpreter parameter to the functions in the C API. That would break the API, but do so in a mechanical way. It would, however, use more stack space and would still have the overhead of indirect access.
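The mechanical transformation is easy to picture (hypothetical names, standing in for the real C API):

```c
typedef struct PyInterpreterStateLike PyInterpreterStateLike;
typedef struct PyObjectLike PyObjectLike;

/* Today: the function locates interpreter state via process globals. */
PyObjectLike *MyLong_FromLong(long value);

/* Hypothetical API break: the interpreter is threaded through every
 * call explicitly. Callers can be rewritten mechanically, but every
 * call now passes (and every frame stores) one more pointer. */
PyObjectLike *MyLong_FromLong_Ex(PyInterpreterStateLike *interp, long value);
```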
What would it take to have multiple threads running in the same Python interpreter? That question is also known as "remove the GIL", Hastings said. In looking at that, he considered what it is that the GIL protects. It protects global variables, but those could be moved to a heap. It also enables non-reentrant code as a side effect. There is lots of code that would fail if the assumption that it won't be called simultaneously in multiple threads is broken, which could be fixed but would take a fair amount of work.
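A contrived example (not from CPython) of the kind of code that quietly depends on the GIL:

```c
#include <stdio.h>

/* Returns a pointer into a single static buffer. With a GIL this is
 * safe, because only one thread runs interpreter code at a time;
 * with free threading, two simultaneous callers corrupt each
 * other's results. */
static const char *format_id(int id)
{
    static char buf[64];
    snprintf(buf, sizeof buf, "object-%d", id);
    return buf;
}

int main(void)
{
    puts(format_id(42));   /* single-threaded: always safe */
    return 0;
}
```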
The GIL also provides the atomicity guarantees that Messier brought up. A lock on dicts and lists (and other data structures that need atomic access) could preserve atomicity. Perhaps the most important thing the GIL does, though, is to protect access to the reference counts that are used to do garbage collection. It is really important not to have races on those counts.
The interpreter could switch to using the atomic increment and decrement instructions provided by many of today's processors. That doesn't explicitly break the C API as the change could be hidden behind macros. But, Hastings said, Antoine Pitrou's experiments with using those instructions resulted in 30% slower performance.
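A sketch of what that might look like with C11 atomics (simplified; the names are stand-ins for the real macros and object header):

```c
#include <stdatomic.h>

/* Simplified object header with an atomic reference count. */
typedef struct obj {
    atomic_size_t refcnt;
    void (*dealloc)(struct obj *);
} obj_t;

/* The increment can be fully relaxed: no ordering is needed just to
 * keep an object alive. */
#define OBJ_INCREF(o) \
    atomic_fetch_add_explicit(&(o)->refcnt, 1, memory_order_relaxed)

/* The decrement must order earlier accesses before a possible free. */
#define OBJ_DECREF(o)                                                 \
    do {                                                              \
        if (atomic_fetch_sub_explicit(&(o)->refcnt, 1,                \
                                      memory_order_acq_rel) == 1)     \
            (o)->dealloc(o);                                          \
    } while (0)
```

Even the relaxed increment is a locked read-modify-write instruction on x86, and CPython touches reference counts constantly, which is consistent with the slowdown Pitrou measured.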
Switching to a mark-and-sweep garbage collection scheme would remove the problem with maintaining the reference counts, but it would be "an immense change". It would break every C extension in existence, for one thing. For another, conventional wisdom holds that reference counting and "pure garbage collection" (his term for mark and sweep) are roughly equivalent performance-wise, but the performance impact wouldn't be known until after the change was made, which might make it a hard sell.
PyPy developer Armin Rigo has been working on software transactional memory (STM) and has a library that could be used to add STM to the interpreter. But Rigo wrote a toy interpreter called "duhton" and, based on that, said that STM would not be usable for CPython.
Hastings compared some of the alternative Python implementations in terms of their garbage-collection algorithm. Only CPython uses reference counting, while Jython, IronPython, and PyPy all use pure garbage collection. It would seem that the GIL and reference counting go hand in hand, he said. He also noted that few other scripting languages use reference counting, so the future of scripting may be with pure garbage collection.
Yet another possibility is to turn the C API into a private API, so extensions could not call it. They would use the C Foreign Function Interface (CFFI) for Python instead. Extensions written using Cython might be another possible approach to hide the C extension API.
What about going "stackless" (à la Stackless Python)? Guido van Rossum famously said that Python would never merge Stackless, so that wasn't Hastings's suggestion. Instead, he looked at the features offered by Stackless: coroutines, channels, and pickling the interpreter state for later resumption of execution. Of the three, only the first two are needed for multicore support.
The major platforms already have support for native coroutines, though some are better than others. Windows has the CreateFiber() API that creates "fibers", which act like threads but use "cooperative multitasking".
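A minimal sketch of the fiber API (Windows-only; my illustration, assuming the usual Win32 headers):

```c
#include <windows.h>
#include <stdio.h>

static LPVOID main_fiber, worker_fiber;

/* A fiber runs until it explicitly switches away; it must never
 * simply return from its start routine. */
static VOID CALLBACK worker(LPVOID param)
{
    (void)param;
    puts("fiber: started");
    SwitchToFiber(main_fiber);   /* cooperative yield */
    puts("fiber: resumed");
    SwitchToFiber(main_fiber);
}

int main(void)
{
    main_fiber   = ConvertThreadToFiber(NULL);
    worker_fiber = CreateFiber(0, worker, NULL);  /* 0 = default stack */

    SwitchToFiber(worker_fiber);
    puts("main: fiber yielded");
    SwitchToFiber(worker_fiber);
    puts("main: fiber yielded again");

    DeleteFiber(worker_fiber);
    return 0;
}
```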
Under POSIX, things are a little trickier. There is the makecontext() API that does what is needed. Unfortunately, it was specified by POSIX in 2001, obsoleted in 2004, and dropped in 2008, though it is still mostly available. It may not work on OS X, however. When makecontext() was obsoleted, POSIX recommended that developers use threads instead, but that doesn't solve the same set of problems, Hastings said.
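Despite its deprecation, the API is pleasantly direct; a minimal sketch:

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, coro_ctx;
static char coro_stack[64 * 1024];

static void coro_body(void)
{
    puts("coroutine: first run");
    swapcontext(&coro_ctx, &main_ctx);   /* yield back to main */
    puts("coroutine: resumed");
}                                        /* falls through to uc_link */

int main(void)
{
    getcontext(&coro_ctx);
    coro_ctx.uc_stack.ss_sp   = coro_stack;
    coro_ctx.uc_stack.ss_size = sizeof coro_stack;
    coro_ctx.uc_link          = &main_ctx;  /* resumed when body returns */
    makecontext(&coro_ctx, coro_body, 0);

    swapcontext(&main_ctx, &coro_ctx);   /* run until the first yield */
    puts("main: coroutine yielded");
    swapcontext(&main_ctx, &coro_ctx);   /* resume to completion */
    puts("main: coroutine finished");
    return 0;
}
```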
For POSIX, using a combination of setjmp(), longjmp(), sigaltstack(), and some signal (e.g. SIGUSR2) will provide coroutine support, though it is "pretty awful". While it is "horrible", it does actually work.
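For the curious, here is a heavily simplified, single-yield sketch of the trick (in the spirit of the classic signal-stack technique; a real implementation needs far more care):

```c
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static sigjmp_buf main_env, coro_env;
static char coro_stack[64 * 1024];

/* The handler runs on the alternate stack. It only captures a jump
 * target there and returns; jumping back into a returned signal
 * frame is formally undefined, which is why this is "pretty awful". */
static void trampoline(int sig)
{
    (void)sig;
    if (sigsetjmp(coro_env, 0) == 0)
        return;                      /* just recording the context */

    /* siglongjmp lands here later, now acting as the coroutine body */
    puts("coroutine: running on its own stack");
    siglongjmp(main_env, 1);         /* yield back to main */
}

int main(void)
{
    stack_t ss = { .ss_sp = coro_stack, .ss_size = sizeof coro_stack };
    struct sigaction sa;

    sigaltstack(&ss, NULL);
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = trampoline;
    sa.sa_flags   = SA_ONSTACK;      /* run the handler on coro_stack */
    sigaction(SIGUSR2, &sa, NULL);
    raise(SIGUSR2);                  /* capture a context on the new stack */

    if (sigsetjmp(main_env, 0) == 0)
        siglongjmp(coro_env, 1);     /* "start" the coroutine */
    puts("main: coroutine yielded");
    return 0;
}
```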
He concluded his presentation by saying that he was mostly interested in getting the assembled developers to start thinking about these kinds of things.

One attendee suggested looking at the GCC split-stack support that has been added for the Go language, but another noted that it is x86-64-only. Trent Nelson pointed to PyParallel (which would be the subject of the next slot) as a possible model. It is an approach that identifies the thread-sensitive parts of the interpreter and puts guards in place to stop multiple threads from running in them.
But another attendee wondered if removing the GIL was really the change that the Mercurial developers needed in order to switch. Hastings said that he didn't think GIL removal was at all interesting to the Mercurial developers, as they are just happy with what Python 2.x provides for their project.
Though there may be solutions to the multi-threading problem that are architecture specific, it may still be worth investigating them, Nick Coghlan said. If "works on all architectures" is a requirement to experiment with ways to better support multi-threading, it is likely to hold back progress in that area. If a particular technique works well, that may provide some impetus for other CPU vendors to start providing similar functionality.
Jim Baker mentioned that he is in favor of adding coroutines. Jython has supported multiple interpreters for a while now. Java 10 will have support for fibers as well. He would like to see some sort of keyword tied to coroutines, which will make it easier for Jython (and others) to recognize and handle them. Dino Viehland thought that IronPython could use fibers to implement coroutines, but would also like to see a new language construct to identify that code.
The main reason that Van Rossum is not willing to merge Stackless is because it would complicate life for Jython, IronPython, PyPy, and others, Hastings said (with Van Rossum nodding vigorously in agreement). So having other ways to get some of those features in the alternative Python implementations would make it possible to pursue that path.
Viehland also noted that there is another scripting language that uses reference counting and is, in fact, "totally single threaded": JavaScript. People love JavaScript, he said, and wondered if just-in-time (JIT) compiling should be considered as the feature to bring developers to Python 3. That led Thomas Wouters to suggest, perhaps jokingly, that folks could be told to use PyPy (which does JIT).
Hastings said that he has been told that removing the GIL would be quite popular, even if it required rewriting all the C extensions. Essentially, if the core developers find a way to get rid of the GIL, they will be forgiven for the extra work required for C extensions. But Coghlan was not so sure, saying that the big barrier to getting people to use PyPy has generally been because C extensions did not work in that environment. Someone else noted that the scientific community (e.g. NumPy and SciPy users) has a lot of C extensions.
| Index entries for this article | |
| --- | --- |
| Conference | Python Language Summit/2015 |
Posted Apr 15, 2015 12:48 UTC (Wed)
by spanezz (guest, #56653)
[Link] (6 responses)
Something compelling in python3.4 is asyncio and coroutine-based asynchronous programming: they let you do what node.js does, while avoiding the callback hell. I see Tornado as the node.js alternative that works and is actually pleasant to use, and 3.4 brings so much more to it.
Something compelling in python 2.7, however, is stability. I can write code in 2.7 and I am certain that I won't need to change it until 2020. Can the same thing be said for 3.4? Maybe not: https://docs.python.org/dev/whatsnew/3.5.html#deprecated
So, the way I see it, what we have now is already very cool indeed. So: first, /tell everyone about it/, and second, pretty please, /stop breaking it/.
Posted Apr 15, 2015 15:22 UTC (Wed)
by jtaylor (subscriber, #91739)
[Link]
If people think performance is the way to get people to use python3, one should look at improving serial performance. There are almost no existing programs that would profit significantly from it, because they don't use threads in the first place; the existing GIL makes that pointless for CPU-bound tasks.

Maybe via JIT, or maybe just by applying some of the stuff the astoptimizer package does to the core interpreter. There was also once a variant of cpython using wider bytecode which claimed to improve performance by some decent (but not amazing) amount; I wonder what happened with that.
Posted Apr 15, 2015 17:06 UTC (Wed)
by kjp (guest, #39639)
[Link]
"await/yield from" and the static type annotation checking (not sure if either are done) is the only thing I've seen interesting about 3.x. We have 50K LOC in python and it's a bitch to change - python has to be able to handle "success" better (where success is your codebase goes from prototype to much, much, larger and interonnected).
Posted Apr 15, 2015 18:46 UTC (Wed)
by njs (subscriber, #40338)
[Link] (3 responses)
So, uh, what on that list are you bothered by? It looks like a tiny list of tiny obscure cleanups to me.
Posted Apr 16, 2015 8:44 UTC (Thu)
by spanezz (guest, #56653)
[Link] (1 responses)
I do not see "Obscure" as an objective definition: what is obscure for you may be everyday work for me. A change on something that is obscure for me may not affect the code that I wrote, but can still break some module written by someone else that is part of my dependency chain.
There are ways of addressing this: one would be to mark some stdlib features as "stable" in the documentation, and make a guarantee that they will not be broken in any new 3.x release. Another would be to consider everything released in a 3.x release as stable, and make a commitment not to break it regardless of how obscure it is, scheduling API-breaking cleanups for 4.0. Both things would make me happier.
I suffered greatly back in the days when, at every new 2.x release, I started getting bug reports of DeprecationWarnings on something or other, and I feel a great sense of relief now that I can release 2.7 code that does not rot that easily. I really do not like the idea of going back: http://www.enricozini.org/2015/python-api-stability/
Posted Apr 16, 2015 20:14 UTC (Thu)
by iabervon (subscriber, #722)
[Link]
On the other hand, if they said that 3.4.x would be maintained for at least as long as 2.7.x (under the same policy), that might tempt people, although making similar promises about more and more versions would be a big maintenance burden.
Posted Apr 16, 2015 11:15 UTC (Thu)
by federico3 (guest, #101963)
[Link]
Writing a library that can run natively under Python 2.7, 3.3, and 3.4 is painful and requires a number of hacks. Not to mention Python 2.6.

Then, it has to run reliably when deployed with the latest version of every dependency as well as older ones, maybe up to 2 years old or more - and your code will break in every possible way.
Posted Apr 16, 2015 3:24 UTC (Thu)
by samlh (subscriber, #56788)
[Link] (7 responses)
Only a high performance reference counting implementation can come close to a mark-sweep garbage collector, and cpython isn't one. For an interesting read: http://researcher.watson.ibm.com/researcher/files/us-baco...
> Viehland also noted that there is another scripting language that uses reference counting and is, in fact, "totally single threaded": JavaScript.
Every JS implementation I know of uses Mark&Sweep garbage collection.
Posted Apr 16, 2015 5:44 UTC (Thu)
by alankila (guest, #47141)
[Link]
IMHO threads that are designed to run in isolation are a reasonable compromise for the inherent problems of sharing complex datastructures between threads, and this design still allows you to get serious work done without paying for any GIL-like penalty, or having to lock every single thing. In this world, the threads can't communicate except by specific APIs designed for that purpose, but that is in fact good enough to get serious work done, and as a byproduct avoids data races.
Some languages such as Perl tried this already, but Perl made the fatal mistake of building its threads implementation on top of the code written to emulate fork() on Windows, which cloned the entire state of the process. This, of course, doubles every object in the heap and then starts a thread using those cloned objects. Using a thread was therefore a vastly more expensive operation, in terms of CPU and memory usage, than starting a separate process would have been.
Posted Apr 16, 2015 23:23 UTC (Thu)
by wahern (subscriber, #37304)
[Link] (5 responses)
That paper is problematic for multiple reasons. For one thing, as we all know, algorithmic complexity is hardly the sole determinant of real-world performance. O(1) can be slower than O(2^N). More concretely, RC can be more cache friendly than tracing, because you update the count only when you're actually using the object, whereas you have to go through comparatively complex machinations to approach that locality of reference with tracing.

More importantly, the paper shows algorithm equivalence between RC and tracing only when RC includes automated cycle collection:

"Reference counting must perform extra graph iterations in order to complete collection of cyclic data."

and

"As with tracing collection, the heart of the [RC] algorithm is the scanning phase, performed by the function scan-by-counting() at collection time. The algorithm scans forward from each element of the work-list, decrementing the reference counts of vertices that it encounters. When it discovers a garbage vertex w (ρ(w) = 0), it recurses through all of the edges of that vertex by adding them to the work-list W. Finally, the sweep-for-counting() function is invoked to return the unused vertices to free storage."

But which RC systems include generalized, automated cycle collection? Most usually require explicit use of weak references. (Even languages with tracing collectors still provide weak references, ephemerons, etc., for relationships which cannot be fully automated.)

Finally, as Java and the JVM have proven time and time again, simply because something is theoretically possible doesn't magically make it solved or, if solved, practical in real, general-purpose software. To be sure, the academic work is worthwhile, but it can be difficult or impossible to bridge the two domains, at least in the manner originally envisioned.

I use a language with a tracing collector, Lua, every day. I love tracing collectors. But I'm tired of hearing people say that they can be magically made to be as performant, as a general matter, as other methods. Especially when other methods, including real-world implementations of automated RC, rely on the programmer and their knowledge of program behavior to solve or help solve some of the stickier problems, such as cycles. In many scenarios such tradeoffs result in no-to-minimal costs with a tremendous gain in performance and implementation simplicity. Rust goes in the opposite direction entirely, though I suppose it remains to be seen how onerous its approach is in practice.

It's also worth mentioning that Lua 5.2 was given a generational collector. But the generational collector was removed in 5.3 because nobody could show that it actually improved performance in actual usage, despite theoretically and intuitively being a huge performance win. Simplicity of the VM, combined with the peculiarities of modern hardware, won out. (It's telling that Lua is significantly faster than Python, despite both being implemented in pure C, and even though Lua has arguably more complex language features: fully symmetric coroutines yieldable across nested function calls without relying on C stack tricks, full lexically scoped closures, a tracing collector, etc. It does this by having a small, clean VM. And LuaJIT can be even faster--notably not when calling out to C functions bound using the regular Lua API instead of the LuaJIT-specific FFI interface--while preserving 100% API and ABI compatibility.)

Plus, many GC optimizations rely on being able to move and copy objects to new locations. That poses all sorts of problems when interfacing with code outside the virtual machine, such as external C code. And those problems are compounded when you introduce shared-memory multi-threading. In scenarios where this is very common, such as Python, many or most optimizations might be negatively impacted or excluded entirely, or a huge shift might need to occur in the library ecosystem. Obviously there are solutions and impact-minimizing measures (please don't bother recounting them here), but they impact performance. Return to point #1 for why such basic issues easily make all the difference.
Posted Apr 16, 2015 23:32 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)

Errr.... Python?
Posted Apr 17, 2015 0:30 UTC (Fri)
by wahern (subscriber, #37304)
[Link] (3 responses)
Fair enough. I don't use Python :)
So I guess the paper is slightly more relevant. But the comment that only a super-optimized RC framework could match a mark & sweep collector simply doesn't square with what the paper actually says. The paper only shows algorithmic equivalency between a) automated RC with cycle collection and b) tracing collectors. In fact, it tends to show that such systems should provide comparable performance when similarly optimized:

"This in turn allowed us to demonstrate that all high-performance garbage collectors are in fact hybrids of tracing and reference counting techniques. This explains why highly optimized tracing and reference counting collectors have surprisingly similar performance characteristics."
This is the Nth time I've seen that paper posted to try to argue that tracing collectors are faster than RC. It's some kind of horrible meme. A couple of other times it's been posted to try to argue that tracing collectors are faster than manual (and minimal, in that the entire graph of objects is rarely subject to independent lifetime management) reference counting in C and similar languages with non-automated memory management. In all cases I don't even think the original posters bothered reading the paper, because the paper 1) only concludes that performance is algorithmically equivalent, and alludes to benchmarks which show equivalent performance in real-world implementations, rather than showing that tracing collector implementations tend to be faster; and 2) is irrelevant to collection frameworks that don't automate as much as tracing collectors.
Posted Apr 17, 2015 0:44 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
There's a catch, though. In real Python programs cycles are fairly rare and most of the objects are deallocated by the reference-counting destructors. So in practice quite often non-atomic reference counters might be faster than a simple mark&sweep GC.
Posted Apr 17, 2015 1:18 UTC (Fri)
by samlh (subscriber, #56788)
[Link] (1 responses)
I agree that mark/sweep collectors are not universally the best option, for reasons of memory overhead, non-determinism, complexity, and so forth.
However, in the case of python, I am more willing to make the claim that mark/sweep collection would be beneficial performance-wise. I am basing this on PyPy's previous experimentation with different collection algorithms, including reference counting (with cycle collection, as required by Python). Their experience has led them, like the JS folk, to use a single-generational mark/sweep collector.
Regardless of the performance, though, I doubt that cPython should ever switch, simply due to the benefits of a stable C API. The answer to the lack of Python 3 adoption is unlikely to be further breaking changes... though I admit I have no idea what the answer is.
Posted Apr 17, 2015 1:26 UTC (Fri)
by samlh (subscriber, #56788)
[Link]

PyPy uses a single nursery generation and a tenured generation. Spidermonkey (Firefox) has some generational scheme (I'm fairly sure nursery+tenured). I'm unsure of how V8 and WebKit have their GC structured.
Posted Apr 24, 2015 23:07 UTC (Fri)
by axgk (guest, #102185)
[Link] (4 responses)
This 'unicode sandwich' paradigm, which you can't avoid without going crazy in Py3, is why the industry can't migrate - and this is where the money is, in the end. Money, which pays developers who, at least in their leisure time, can contribute to the language they have to use between 9 and 5.

Really, Py3 is a toy language as long as it is blatantly denying the fact that the professional IT world is held together by globally accepted _standards_, with data models and all that.

Check e.g. here http://www.ietf.org/download/rfc-index.txt and try to find *any* standard which is not positional-bytes based (e.g. TCP), using escape seqs, or with identifier/value pairs. Which are, when to be _processed_ by software and not just treated as opaque, always, really ALWAYS ASCII (by definition of being a standard, because those are created to work globally).

Where is the unicode indirection in, e.g., the OSI model?

In other words: value and payload encodings are _always_ known, in any standard - the codepage fubar created by the MS hack back then was really the only exception of significance I know of, where information went into bytes w/o a meaning - and it's long overcome, thank god. And it was anyway relevant only for human data, not for systems.
Python, with its clean and powerful map and list operations, is ideal to process (itself, or at least to control the processing of) standardised data - but who exactly in the professional world needs or wants a unicode indirection to do that?

Python3, it seems to me, was designed for novices, with the idea of giving them I/O libs which spit out and accept unicode objects only, so that those people don't need to comprehend why len('café') can't be 4.
We are just at the beginning of the Internet of Things era and those things don't care about unicode, really.
Unicode is NOT good for you - if you want to earn *real* money using Python.
Posted Apr 25, 2015 8:26 UTC (Sat)
by peter-b (subscriber, #66996)
[Link] (3 responses)
What the hell are you talking about? Have you ever even used Python 3?
From <https://docs.python.org/3/library/functions.html#open>:

> Python distinguishes between binary and text I/O. Files opened in binary mode (including 'b' in the mode argument) return contents as bytes objects without any decoding.
The documentation for the "io" module <https://docs.python.org/3/library/io.html#module-io> explains very clearly that Python 3 provides raw binary IO, buffered binary IO, and text IO and that you can use whichever is appropriate.
I've written several binary network protocol implementations in Python 3 at a previous job. It's really not difficult in the slightest.
Posted Apr 25, 2015 16:50 UTC (Sat)
by axgk (guest, #102185)
[Link] (2 responses)
Peter, you involuntarily confirm my every point.
I explicitly stated that Python's list and maps in fact are perfect to process standardised industrial data. Which is pretty much *either*
1. value only, positionally standardised, like IP, and/or
2. using escape seqs, or
3. identifier/value based,

all with potential payload, treated opaque, forwarded to upper layers.
Right?
You worked with one or both of the first two. And, btw, you did not need to use any unicode-specific API func at all to get that job done, I just know it.
But what about the third? Identifier/value based Protocols?
Pretty much ALL of them work with human-readable ASCII identifiers these days. We do have bandwidth: identifiers standardised 30 years ago like .1.3.6.1.2.1.2.... are now like this (random pick): https://www.broadband-forum.org/technical/download/TR-135... I could point to HTTP, MIME, whatever, anything on layer 5 upwards. I mean real work, processing identifiers - and acting on and consolidating values.
*I* was ranting about Py3's absolute inability to work with type 3, due to this crazy idea that **everything** which is not binary is to be represented by a unicode API.

Check your IO module link again: everything which is not raw data is unicode in this strange world view of localised pizza-delivery REST interfaces with non-ASCII identifier keys - provided you smoked good stuff while creating that API.

No - they broke that and they know it (http://python-notes.curiousefficiency.org/en/latest/pytho...):
"What we broke is a very specific thing: many of the previously idiomatic techniques for transparently accepting both Unicode text and text in an ASCII compatible binary encoding no longer work in Python 3."
To be translated to: no more idiomatic techniques to work with standards based on ASCII text identifiers - in favour of human-text-only processing, where, agreed, sometimes a unicode function might be needed, like upper() or counting symbols on a display. The "very specific thing" is just the Information Processing Technology "thing": the software for the systems of this world.
And we learn the reason:
"WHY did we break it? (...) ASSUMING that binary data uses an ASCII compatible encoding and manipulating it accordingly can lead to silent data corruption if the assumption is incorrect." (uppercased by myself)
Sorry, Nick: there is nothing to be "assumed". It's not happening. There ARE no random smileys we need decoded by Py3 libs from TCP headers, and there is no Kanji in HTTP identifiers. Yes, one could send that stuff over - but it won't hit a python server, at most the loadbalancer error log.
---
It's just one big misconception.
PS: I would really like to learn why in the world a commercial company would want you to write binary protocol parsers in Py3. I mean - why in the world do I need a binary protocol parser in the same process where I profit from a unicode API?
Posted Apr 26, 2015 18:09 UTC (Sun)
by peter-b (subscriber, #66996)
[Link] (1 responses)
> PS: I would really like to learn why in the world a commercial company would want you to write binary protocol parsers in Py3? I mean - why in the world do I need a binary protocol parser in the same process where I profit from a unicode API ?
R&D context. Small embedded systems were performing automated data collection and sending each datum as a UDP packet. We needed to get something up and running quickly to (1) collect and archive the data into HDF5 files for later analysis and (2) piggy-back onto the protocol for on-the-fly analysis and debugging. Python 3 was the quickest and most maintainable solution. Unicode-awareness had nothing to do with the choice.
Posted Apr 26, 2015 21:33 UTC (Sun)
by axgk (guest, #102185)
[Link]
sure, so you could have done just as well with Py2.
We also do data collection from - and management of - embedded devices, e.g. your home router or, if you are hip, your fridge - but centrally, for nearly 200 telcos. Some 30 million devices hit our Python(2) servers, and that pretty frequently. We currently have around 60 Python and C developers, some of them real experts - e.g. working around the GIL (discussed here) is no problem for us; we have Python2 processes running at 800% CPU for consolidation and all that. But that unicode-only string paradigm is sth. which really nobody of us can understand.
Maybe, to illustrate the use case better, just as an example from my domain: devices we manage do most of the time not send their stuff as raw data like in your case (i.e. parseable by position and/or maybe with escape seqs), which you worked with as byte arrays, I guess; they speak standards which are more like HTTP headers, i.e. key/value. Your home router has around 2000 of those KVs on board, should you have got it from your ISP. Like SNMP - just verbose, with human-readable string keys and values, often followed by opaque payload, like a firmware. Many standards nowadays use this paradigm; every REST/JSON interface as well, if not sending lists only.
My point is that in Py3 they just differentiate TWO kinds of data: Either raw/binary OR text/unicode but NOT byte strings.
We can work with neither of them, and I wonder how any company working with standards could. We require byte strings, just as they come in (the keys are always ASCII), and we don't want/need any library to decode ALL that stuff into unicode using some encoding before we can start consolidating the data. Because unicode is something which is ONLY needed for human text - but not for inter-systems communication.
That does in no way mean we hate unicode or so; we also get stuff like usernames with funny characters, but we handle those non-systems-relevant strings just opaquely, back and forth between database, JSON API and so on. We just don't need / want / can't decode the WHOLE payload only because of those. Example: when doing len() on such a name value, we want the number of bytes it takes to store it - and not the number of characters a human user would perceive on a display. Apropos: the RAM consumption alone is unbearably high if you create unicode objects out of every string.
If that still is not coherent to you, maybe Armin describes it more understandably: http://lucumr.pocoo.org/2014/1/5/unicode-in-2-and-3/ - that guy is pretty well known, author of Flask beyond much other stuff.
---
In Py2 you can do all three perfectly, even within one process. In Py3 you can only work with raw bytes OR with text from and for humans.
The Internet of Things requires that other string type.