Adding Requests to the standard library
The Python Requests module is a popular alternative to the standard library's HTTP handling (e.g. urllib2). Kenneth Reitz, who is the "benevolent dictator for life" (BDFL) for Requests, came to the Python Language Summit to discuss the possibility of adding the module to the standard library. It is oft-requested, but is a bit controversial, he said. In addition, he wanted to use his slot as a forum to discuss the criteria for adding things to the standard library.
Requests has security as its top priority. It focuses on using industry best practices for things like SSL/TLS, connection pooling, encodings, headers, and so on. It removes a significant amount of the complexity for writing programs that interact with the web.
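As a rough illustration of that complexity gap (the code below is illustrative, not from the talk), the Requests call is shown only as a comment since it is a third-party package, while the stdlib equivalent actually runs; a data: URL stands in for a real server so the sketch works offline:

```python
# With Requests (shown as a comment only):
#
#     r = requests.get(url)      # one call
#     r.raise_for_status()       # errors become exceptions
#     text = r.text              # encoding handled for you
#
# The stdlib equivalent with urllib.request:
import urllib.request

url = "data:text/plain;charset=utf-8,hello"
with urllib.request.urlopen(url) as resp:
    raw = resp.read()            # the stdlib hands back bytes...
    text = raw.decode("utf-8")   # ...and the caller picks the encoding

print(text)  # hello
```

The stdlib version leaves connection pooling, redirects-with-cookies, and encoding detection to the caller; Requests bundles all of that behind one call.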
![Kenneth Reitz](https://static.lwn.net/images/2015/pls-reitz-sm.jpg)
It is also the most popular package on the Python Package Index (PyPI, though Reitz used its other name: The CheeseShop), having been downloaded some 42 million times. That is more downloads than either setuptools or virtualenv. That popularity leads to frequent suggestions that Requests be added to the standard library.
The development of Requests is mostly done by Reitz and two core contributors. It has been feature-frozen for the last two years. It has a stable API and a stable development community, he said.
There are two major dependencies for Requests that would also have to be added if Requests moved into the standard library. One is chardet, which is an encoding detector that is based on Mozilla's character-recognition algorithm. It is quite useful when dealing with Unicode because servers can't always be trusted to specify the encoding correctly. It would be a good addition to the standard library regardless of what happens with Requests, he said. The other dependency is urllib3, which provides thread-safe connection pooling, file posting, and more. It is under active development and receives updates frequently.
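The encoding problem chardet solves can be seen with a toy fallback decoder. Real chardet uses statistical models derived from Mozilla's algorithm; the naive try/except below is only a sketch of the failure mode it guards against:

```python
def decode_with_fallback(raw: bytes, declared: str = "utf-8") -> str:
    """Try the encoding the server declared; if that fails, fall back
    to Latin-1, which maps every byte value to some character.
    (chardet instead guesses the real encoding statistically.)"""
    try:
        return raw.decode(declared)
    except UnicodeDecodeError:
        return raw.decode("latin-1")

# A server that claims "utf-8" but actually sends Latin-1 bytes:
body = "café".encode("latin-1")
print(decode_with_fallback(body))  # café
```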
The Requests project sees the module as "critical infrastructure for the Python community", Reitz said. For example, it is used by the pip Python package installation tool.
There are a number of arguments for inclusion into the standard library. If the Python community wants to provide libraries that embody best practices, then adding Requests would be "the right thing to do". From a sustainability standpoint, having Requests in the standard library would make it easier to get funding for the core contributors. Also, chardet would be good from a "batteries included" standpoint.
Reitz also relayed some observations that might make it difficult for Requests to be included. To start with, it comes with its own "carefully curated" (and frequently updated) bundle of certificates from certificate authorities (CAs) for SSL/TLS verification, while Python relies on the system CA bundle. Relying on the system certificate bundle would likely reduce the security of the library, he said.
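The difference Reitz describes can be sketched with the stdlib ssl module: by default, Python builds its trust store from the platform's certificates, whereas Requests points verification at its own bundled PEM file (the path below is a placeholder, not a real file):

```python
import ssl

# Stdlib default: build the trust store from the platform's CA bundle.
system_ctx = ssl.create_default_context()
assert system_ctx.verify_mode == ssl.CERT_REQUIRED  # verification is on
assert system_ctx.check_hostname                    # hostname checks too

# Requests-style: verify against an explicit, curated PEM bundle.
# "curated-bundle.pem" is a hypothetical path, so the call is left
# commented out; with a real bundle it replaces the system store.
curated_ctx = ssl.create_default_context()
# curated_ctx.load_verify_locations(cafile="curated-bundle.pem")
```

Shipping the bundle lets the project push CA updates on its own schedule, which is exactly the property that is hard to preserve inside the standard library's release cycle.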
Beyond that, the HTTP specifications and recommended usage change significantly over time; the Requests module keeps up with those changes. There are also situations where the project has turned around a release for a security fix in 12 hours, which would be difficult or impossible to do if it were included in the standard library.
The "biggest pitch" for Requests is that it is better than the parts of the standard library that cover the same ground. Pulling Requests into the standard library properly would require a bunch of work to integrate it and replace pieces of the standard library with parts of Requests. The project would also lose the ability for fast turnaround changes based on security problems or specification/usage changes.
He also wondered about the goals of the standard library. Now that Python ships with pip, is inclusion in the standard library really needed for modules like Requests? The "official stance" of the Requests project is that it is critical infrastructure, as he said earlier, but also that it is too critical to be included in the standard library.
That conclusion seemed largely agreeable to those present. One attendee said they would not be happy if they had to update Python to get a new Requests. Nick Coghlan noted that network security needs make the update cycle for Requests quite different from that of Python as a whole. It is also impossible to maintain network security for a release that has a "no new features" policy (e.g. Python 2.7).
There needs to be a way for newcomers to learn the recommendation to use "external" modules like Requests, Łukasz Langa said. There is a problem for some users (and companies), though, because they don't want to install additional, third-party dependencies, Thomas Wouters said. Requests is in a somewhat different category, since it is already installed for pip, Coghlan said.
Someone suggested that there be a set of modules that are vetted and endorsed by the core developers or the Python Software Foundation. Langa wondered what would happen if the maintenance stopped for an endorsed package, however. He suggested that the documentation get changed so that there is a section on deprecated modules and their suggested replacements.
There were some other concerns expressed. Glyph Lefkowitz argued that one of the big hurdles for users adopting Requests and other PyPI modules is the command line. Adding some sort of user interface to pip would help with that, he said.
Alex Gaynor noted that Python 3.4 added the asyncio module, but that Requests does not support it. He wondered how Requests could even be considered for inclusion in the standard library without asynchronous support using asyncio. Brett Cannon pointed out that there is a need for an informational PEP that describes the goals of the standard library.
Larry Hastings kind of summed up the session when he said that "batteries included may not make so much sense for everything anymore". Certainly the feeling in the room seemed to indicate that there is a mismatch between the frequency of Python releases and the needs of some modules, which means their users are better served by remaining separate.
| Index entries for this article | |
|---|---|
| Conference | Python Language Summit/2015 |
Posted Apr 23, 2015 2:13 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link] (4 responses)
I can only expect that if Python development doesn't accelerate and requests goes in, we're going to get stuck with requests2, and the standard library version will just stagnate over time but never get removed.
Posted Apr 23, 2015 15:37 UTC (Thu)
by synacktiv (subscriber, #86420)
[Link]
Furthermore, how do they decide which requests add-ons should be included: Kerberos auth, NTLM auth, SOCKS relay, SDCH compression, etc.?
Posted Apr 30, 2015 9:38 UTC (Thu)
by MKesper (subscriber, #38539)
[Link] (2 responses)
Posted Apr 30, 2015 10:15 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
Posted May 1, 2015 12:51 UTC (Fri)
by mgedmin (subscriber, #34497)
[Link]
Easy_install supports eggs but not wheels. Pip supports wheels but not eggs. It's rather sad.
Posted Apr 23, 2015 8:15 UTC (Thu)
by bartavelle (subscriber, #56596)
[Link] (4 responses)
Looks to me like it will be the other way around (it seems unlikely developers will update their dependencies faster than my distribution), and it introduces an extra thing to take care of when you are not using public CAs, or using a mix of public and private ones. At first glance it looks like a terrible idea.
Posted Apr 23, 2015 22:08 UTC (Thu)
by debacle (subscriber, #7114)
[Link] (3 responses)
Posted Apr 27, 2015 6:18 UTC (Mon)
by bernat (subscriber, #51658)
[Link] (2 responses)
Posted Apr 27, 2015 8:55 UTC (Mon)
by debacle (subscriber, #7114)
[Link] (1 responses)
Posted May 1, 2015 12:53 UTC (Fri)
by mgedmin (subscriber, #34497)
[Link]
Posted Apr 23, 2015 18:33 UTC (Thu)
by pj (subscriber, #4506)
[Link] (32 responses)
If all the current batteries were turned into PyPI packages classified as 'standard', then they could all be installed with a single (perhaps new) pip command. Also, the python CLI, if invoked dynamically, could warn about old packages and describe how to update ("The python libraries on this machine are XYZ days old. Consider updating them via 'pip install --upgrade'").
And I'm sure it's not just Requests that could benefit from getting security updates - other security-oriented and crypto libraries likely could as well. Or imagine if there's a bug found in the default https server implementation - if it were bad enough, it should be backported to previous versions, etc.
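Today's pip already supports a rough version of the workflow sketched above; the commands below are standard pip usage, while a single "upgrade all standard packages" command remains the hypothetical part:

```shell
# List installed packages that have newer releases on PyPI.
pip list --outdated

# Upgrade a single package, e.g. Requests, to its latest release.
pip install --upgrade requests
```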
Posted Apr 30, 2015 18:40 UTC (Thu)
by Pc5Y9sbv (guest, #41328)
[Link] (31 responses)
I think it is contrary to the public good to try to bypass the OS distribution package maintainers while not taking over the responsibility they have shouldered for all these years. Someone needs to do the less sexy work of back-porting security fixes and otherwise tending to the legacy software stacks out there, even when most upstream developers do not have the time nor interest to do that for each different user community with their different needs. You cannot simply state that everyone should chase a shared repository, updating bits of software every time a developer pushes out some changes. These are policy decisions, which do not have one globally correct answer.
Outside of a few high budget {S,I}aaS operations who can really curate their whole stack properly using internal engineering resources, there are really only a few choices: freeze an entire stack except for the small part you actively work on; face an endless churn where you try to track your third-party upstreams while they may abandon the versions of code you depend on at any moment when a critical problem arises; or rely on the OS distribution intermediary who can provide some happy medium where your third-party dependencies are kept patched in a backward-compatible manner.
It seems the community is groping around in the dark recently, rediscovering old black-box proprietary software delivery methods while not recognizing their drawbacks. Things like containers, statically linked applications, and other horribly stovepiped software modules are largely a regression for open source development. Unless you are one of those highly funded operations who can really curate a whole stack, these methods simply amount to abandonment-in-place of all your integrated third party elements.
Posted May 1, 2015 16:01 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (30 responses)
That's the thing which would be great to change though, to help the upstream developers maintain a set of backported releases rather than punting to the distros to duplicate all of that integration work, especially when the patches come from the upstream themselves. Cut out the need for a middleman by fixing the problems which created the need.
Then the application developers can depend on one runtime maintained by the people who know most about it rather than one fork per distro.
> Things like containers, statically linked applications, and other horribly stovepiped software modules are largely a regression for open source development.
I think this is a reaction to inadequacies in application distribution in the Linux OS model, where each version of each distro provides a unique ABI, which leads to 10+ common variations that an app developer is expected to support. It doesn't matter if the different Linux OS distros are 99.999% compatible with one another; if you use some reasonable number of shared libraries, it is almost certain that some two common distros will have mutually incompatible ABIs. It only takes one library difference to screw it up for your application.
> Unless you are one of those highly funded operations who can really curate a whole stack, these methods simply amount to abandonment-in-place of all your integrated third party elements.
If you are highly funded enough you can have a dedicated release team for each OS to paper over all the tiny differences which are fatal to binary compatibility and take advantage of the libraries and infrastructure that each unique OS provides, or you can pick one stack to integrate with and make it a requirement.
This comes down to The Mythical Man-Month: effectiveness goes down as the amount of coordination and communication needed to do work goes up. Making every application developer coordinate with every Linux OS team and try to understand all of their quirks is a huge amount of overhead that shouldn't be discounted.
Posted May 1, 2015 18:36 UTC (Fri)
by dlang (guest, #313)
[Link]
you are blaming the distros for this, but it's really the developers that are driving this.
if you are developing a library, you should think about all your users, and not assume that everyone is always going to be upgrading to the latest version the minute you release it. you need to maintain backwards compatibility so that programs compiled against a prior version will keep working with a new version.
If you are developing an application, you need to evaluate the libraries that you choose to use, not just from the point of view of what features do they offer, but from the point of view of what their attitude towards stability is. If the library developers routinely break backwards compatibility, then they had better offer a _really_ great feature set to make up for this big drawback. And if you pick a library with a bad track record, you need to do so with the acceptance that this will cause you problems, and those problems are worth it.
Posted May 1, 2015 19:05 UTC (Fri)
by dlang (guest, #313)
[Link] (28 responses)
The problem is that most of the upstreams don't have any interest in maintaining old versions with backported fixes. It's a lot of work and they've already released the fixed version. Why should they have to backport fixes across 10 years of development just because RedHat promised their paying users that fixes would be available that long? In the case of rsyslog, "supported versions" range from v3.22 to v8.9 depending on where you are getting it from.
The people who want the backports should be the ones who pay for them (either in money or in time). the software developers should avoid making changes that break existing users when they release new versions. Nobody is perfect, and occasionally there are really good reasons for breaking this compatibility, but if it's rare, it makes it far easier for users to just run the latest version and not have to worry about backports.
Remember when Firefox announced that they were moving to single release numbers, and the cry that it would break all the distros because they couldn't stick with only shipping minor updates any longer? Firefox has done a good job of not breaking things with new releases, so it's basically a non-issue today and everyone just runs a current version (many of the distros don't even try to backport things, they just ship the new version). Users are happy, things work.
More software should be treated this way, both from a "don't break users" development point of view and a "ship the latest" distribution point of view. Shipping the latest version of something that doesn't have a good "don't break users" development viewpoint is irresponsible on the part of the distros.
Posted May 1, 2015 19:51 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (27 responses)
Making backports on Windows is extremely easy if you simply compile for the oldest supported OS version. For most libraries, even Win2000-level functionality is perfectly sufficient.
On Linux this is quite a bit more complicated.
Posted May 1, 2015 20:00 UTC (Fri)
by dlang (guest, #313)
[Link] (26 responses)
you say
similar logic applies to most libraries on Linux, but if you insist on using the ones that aren't this way, who's fault is it?
Posted May 1, 2015 20:05 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (25 responses)
For example, if you want to support RHEL5 (it's still not uncommon) then you have to compile with Ye Olde Glibc.
Then you want to use libjson which is happily packaged in RHEL7 and some Debians. How would you do that?
> similar logic applies to most libraries on Linux, but if you insist on using the ones that aren't this way, who's fault is it?
Posted May 1, 2015 20:23 UTC (Fri)
by dlang (guest, #313)
[Link] (24 responses)
Posted May 1, 2015 20:26 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (23 responses)
RHEL5 was released in 2007.
Posted May 1, 2015 20:40 UTC (Fri)
by dlang (guest, #313)
[Link] (22 responses)
So which is it? Does the fact that software works over such a time span show that things are good, or does the fact that some software breaks with the latest release show that everything is horrific?
you can find examples of both on any OS you pick.
You keep claiming that the situation on Linux is horrific, yet we had someone post on one of these threads that they had far less trouble supporting the variations on Linux than supporting the variations on Windows.
Everyone knows you have problems on Linux and not on Windows, but can you accept that other people find the opposite? And can you stop screaming about this on a monthly basis?
Posted May 1, 2015 20:51 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (21 responses)
Posted May 1, 2015 21:22 UTC (Fri)
by dlang (guest, #313)
[Link] (20 responses)
it will also work if it uses standard libraries that maintain good backwards compatibility (glibc, X11, etc)
Posted May 1, 2015 21:38 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted May 3, 2015 21:15 UTC (Sun)
by paulj (subscriber, #341)
[Link] (18 responses)
Newer stuff, GNOME-era on, you're sunk. You need virtualisation/containers to run your old apps on the specific distro they were compiled for.
Posted May 3, 2015 23:03 UTC (Sun)
by lsl (subscriber, #86508)
[Link]
Quite a few solid libraries in there. Of course, if you want your program to run on RHEL5, you can't use library additions introduced yesterday. That seems to be the actual problem in some cases, rather than missing backwards compatibility. It's the same on Windows, though: if you use features introduced with Windows 8, your program won't run on XP.
Posted May 4, 2015 3:34 UTC (Mon)
by dlang (guest, #313)
[Link] (16 responses)
I believe that I have seen just a wee bit of displeasure at Gnome for the changes that they make from release to release ;-)
But in any case, is this the fault of the distro or the Gnome developers/maintainers? They are the ones who choose to make it so that it's hard or impossible to have both old and new versions of their libraries on the same system.
Posted May 4, 2015 3:37 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link] (15 responses)
Posted May 4, 2015 3:48 UTC (Mon)
by dlang (guest, #313)
[Link] (3 responses)
I don't use Gnome, but I remember people complaining about this when Gnome 3 came out.
Posted May 4, 2015 3:50 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link] (2 responses)
Do you have any specific examples?
Posted May 4, 2015 3:58 UTC (Mon)
by dlang (guest, #313)
[Link] (1 responses)
Posted May 4, 2015 4:00 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link]
Posted May 4, 2015 8:17 UTC (Mon)
by paulj (subscriber, #341)
[Link] (10 responses)
A container/chroot with the old libs is the easiest way to make that run (I tried building the old gnomemm myself from SRPMs - impossible due to build deps).
Posted May 4, 2015 10:55 UTC (Mon)
by paulj (subscriber, #341)
[Link] (9 responses)
ABI compatibility on desktop Linux - at least in the GNOME world - isn't even a joke.
Posted May 4, 2015 11:05 UTC (Mon)
by paulj (subscriber, #341)
[Link] (8 responses)
We need to get to a point where distros can guarantee forward compatibility. If a package installs on distro version N, it should install on all N+M, M>0.
Or, distros need to shrink to a core, and we look at higher-level ways of installing packages. As many of the programming languages already do for their library ecosystems - each differently though.
Posted May 4, 2015 14:33 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link] (7 responses)
That's backwards compatibility. Forwards compatibility is "compile on N+M and run on N". *That* is not possible without versioned symbols and something to tell the linker to blacklist certain versions of symbols so that only specific ones are used. The solutions here[1] are way too verbose and finicky for sane maintenance (so you just end up building on Debian Etch'n'half and hoping for the best).
Posted May 4, 2015 14:34 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted May 4, 2015 16:09 UTC (Mon)
by paulj (subscriber, #341)
[Link] (5 responses)
Point being, if you can't take (say) a package for SpiffyDistro 16 and install it on SpiffyDistro 20, you don't have a system that is going to be generally useful for computer users - because it isn't stable enough for vendors to target it. It might be useful for some niche subset of users.
It doesn't matter how it's done, through containers or what not, but that needs to work, if todays distro model is to survive.
Posted May 4, 2015 18:01 UTC (Mon)
by dlang (guest, #313)
[Link] (4 responses)
It's when things start tying in to the Desktop Environments that things get more "interesting". Unfortunately this does include a large percentage of GUI apps, but not all of them; some GUI toolkits are more stable than others.
Posted May 4, 2015 18:16 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link] (3 responses)
Posted May 4, 2015 19:04 UTC (Mon)
by paulj (subscriber, #341)
[Link] (2 responses)
The end-user experience is that a perfectly good application disappeared.
Posted May 4, 2015 19:09 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link] (1 responses)
Posted May 4, 2015 19:36 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link]
Posted Apr 24, 2015 14:25 UTC (Fri)
by ber (subscriber, #2142)
[Link]
It's a lot of work ON LINUX.
> For most libraries, even Win2000-level functionality is perfectly sufficient.
You don't only need to compile against the older versions. You have to compile against everything old.
Let's put it this way - how do I get libffi5 on Debian 8?
It will work. If it is completely statically linked or bundles everything above syscalls.