
The State of Python 2025

The JetBrains blog presents the results of the eighth annual Python Developers Survey, carried out in partnership with the Python Software Foundation.

This year, 51% of all surveyed Python developers are involved in data exploration and processing, with pandas and NumPy being the tools most commonly used for this.

Many of us in the Python pundit space have talked about Python as being divided into thirds: One-third web development, one-third data science and pure science, and one-third as a catch-all bin.

We need to rethink that positioning now that one of those thirds is overwhelmingly the most significant portion of Python.




Is this actually reflective?

Posted Aug 19, 2025 9:31 UTC (Tue) by aragilar (subscriber, #122569) [Link] (3 responses)

One thing I notice about these kinds of surveys is that there seems to be no attempt at putting error bars on the reported values (or at using alternative means to actually get a representative sample). Things that really stood out for me: the fact that 32% have made open source contributions (which seems like a massive number to me), the relatively high usage of PyCharm (compared with other options, and notably owned by those running the survey), and the large mismatch between conda usage and data science usage. I suspect there is a major sampling issue (e.g. those who use conda are relatively unlikely to go to PyPI and see the banner for the survey).

Sadly, JetBrains seems to be using this much more as an ad than in previous years (and their choice of "talking heads" does nothing to dissuade that impression), which does not reflect well on the PSF's involvement.

Is this actually reflective?

Posted Aug 19, 2025 20:03 UTC (Tue) by NYKevin (subscriber, #129325) [Link]

In a less formal survey like this, error bars only capture sampling error (i.e. if we make the extremely optimistic assumption that every potential respondent has an equal probability of being selected, the sample may nevertheless be biased to some degree purely by random chance, and error bars are derived by calculating the probable range of this bias and adjusting the observed value to account for it). They do not account for systematic bias at all (i.e. cases where we did not select a proper random sample in the first place, for example because some potential respondents were less likely to participate).
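Those sampling-error bars are straightforward to compute under that optimistic equal-probability assumption; a minimal sketch (the respondent count below is hypothetical, since the survey's n isn't given here):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p observed among n
    respondents, assuming a simple random sample. This captures
    sampling error only, not systematic bias."""
    return z * math.sqrt(p * (1 - p) / n)

# E.g. the 32% open-source figure, with a hypothetical n of 30,000:
# sampling error alone is roughly +/- 0.5 percentage points.
print(margin_of_error(0.32, 30000))
```

Note how small that number is for any plausible respondent count, which is exactly why the interesting uncertainty in a self-selected survey is the systematic bias the formula ignores.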

In "real" polling, systematic bias is somewhat corrected for by a series of processes broadly known as "weighting." The general idea is that you look at high-quality demographic data that you have reason to believe is accurate for your intended sample space (e.g. because it came from a census, hospital records, or other reliable sources of aggregate information), compare that data against the data from your own survey, and adjust the weight given to each response until your surveyed demographics roughly agree with reality. There are numerous problems with this, and it is quite far from a silver bullet, but it is likely better than doing nothing. Weighting has error bars of its own, and for the data nerds, you probably should break those out separately in the crosstabs or raw results, but the error bars on the headline number (i.e. "candidate X leads with Y% of the vote" or whatnot) usually will account for all potential sources of bias that have been considered (or at least, all the bias that is reasonably possible to quantify, anyway).
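The core of that weighting step fits in a few lines; this is a sketch of simple post-stratification, with the group names, shares, and answers all made up for illustration:

```python
from collections import Counter

def poststratify(responses, population_shares):
    """Re-weight survey responses so that demographic groups match
    known population shares, then return the weighted mean answer.

    responses: list of (group, answer) pairs, answer in {0, 1}.
    population_shares: dict mapping group -> share of the population.
    """
    counts = Counter(group for group, _ in responses)
    n = len(responses)
    # Each respondent's weight: (population share) / (sample share).
    weights = {g: population_shares[g] / (counts[g] / n) for g in counts}
    total = sum(weights[g] for g, _ in responses)
    return sum(weights[g] * answer for g, answer in responses) / total

# A sample that over-represents group "a" (80% of the sample, but 50%
# of the population): the raw "yes" rate is 0.8, the weighted one 0.5.
sample = [("a", 1)] * 8 + [("b", 0)] * 2
print(poststratify(sample, {"a": 0.5, "b": 0.5}))
```

The sketch also shows why the technique needs trustworthy population shares: the correction is only as good as the `population_shares` input.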

One of the problems with weighting is that it only works if you have good demographic data to begin with. That's a somewhat believable assumption when your sample space is "the population of country X," but not when it's "everyone who writes code in Python." So it's really difficult to apply weighting to surveys like this, and my default assumption is that it has not been done.

Is this actually reflective?

Posted Aug 20, 2025 3:41 UTC (Wed) by sarahn (subscriber, #154471) [Link]

I'm also a little skeptical that this survey is representative of everyone who uses Python. I sincerely doubt that 86% of Python users use it as their primary language.

Looking at the venues where this survey was promoted, according to https://lp.jetbrains.com/python-developers-survey-2024/#m... , my guess is that a lot of people who use Python for tooling were missed (including me).

Is this actually reflective?

Posted Aug 20, 2025 9:04 UTC (Wed) by LtWorf (subscriber, #124958) [Link]

Well, they don't let you pick the IDE I use, for example… I don't know if that's on purpose or just to keep the list short.

Is this a survey?

Posted Aug 19, 2025 11:15 UTC (Tue) by rweikusat2 (subscriber, #117920) [Link] (13 responses)

It tries to sell "AI coding agents" for Python already on the first page. I have a general policy of never reading anything written by someone who tries to plug this, because I remember the times when AI was supposed to "replace all professional drivers". Driving is vastly simpler than programming, as evidenced by the fact that many more people can do the former than the latter. Hence, the chances that the 'technology' which couldn't do the former can certainly do the latter are exceedingly slim, despite the truckloads of thoroughly clueless people who believe "programming" is about memorizing some 20-30-odd syntax rules, a terrible burden they hope to be relieved of by "AI" so that they can henceforth just yell gibberish at a computer and it'll magically do what they couldn't express.

Noise filtering

Posted Aug 19, 2025 11:47 UTC (Tue) by tux3 (subscriber, #101245) [Link] (6 responses)

> Driving is vastly simpler than programming, as evidenced by the fact that many more people can do the former than the latter

Everything else aside, I think you could find much better evidence to question AI agents. For instance, the recent METR randomized controlled trial (RCT), in which people thought AI made them 30% faster when they were in fact 20% slower than the control group.

Is driving vastly simpler than playing chess at grandmaster level, as evidenced by the fact that many more people can drive? For humans, certainly. But my mid-range phone from a decade ago still plays far stronger chess than any grandmaster. The point here is that human abilities don't correlate with computer abilities. You can't use a computer's performance on one human task to guess the performance it will have on other tasks.

What you have here is good evidence that you should *ignore* hype. Merely reacting in the opposite direction means you are still being influenced by hype. What we should do is judge new things on their own merits, regardless of the noise.

Noise filtering

Posted Aug 19, 2025 11:59 UTC (Tue) by Wol (subscriber, #4433) [Link]

And look for obvious falsehoods. I carefully didn't say lying, because most of these people are your typical advertisers - clueless about the underlying science.

But how can a statistical analysis of word proximity (which is basically what your LLM is) achieve similar results to a whole bunch of dedicated wetware that has been subjected to a whole bunch of integrity and fact-checkers like your neighbourhood lion?

If an LLM doesn't recognise the signs of a hungry lion nearby, what are the consequences to it? If a human doesn't recognise the signs, it's not going to have the option of trying again ...

Cheers,
Wol

Whence this argument?

Posted Aug 19, 2025 12:33 UTC (Tue) by fishface60 (subscriber, #88700) [Link] (2 responses)

> What you have here is good evidence that you should *ignore* hype. Merely reacting in the opposite direction means you are still being influenced by hype. What we should do is judge new things on their own merits, regardless of the noise.

What you have here is good evidence that you should *ignore* fascism. Merely reacting in the opposite direction means you are still being influenced by fascism. What we should do is judge fascism on its own merits, regardless of the noise.

Apologies for the extreme analogy, I don't believe that using the various tools labelled as AI makes you a fascist despite the correlation that fascists seem to love them.

I have seen this argument twice today after having never encountered it before and I do not follow the conclusion.

Ignoring propaganda is surrendering to it, because you are not its only target, and denouncing it does more to combat it than calling to judge it on its own merits.

Whence this argument?

Posted Aug 19, 2025 13:05 UTC (Tue) by tux3 (subscriber, #101245) [Link]

Well, it's fine to criticize propaganda for being propaganda, and to criticize hype for being hype. But it doesn't tell you anything about the underlying thing.

Say a pharma company is trying to sell some pill; the ads they run don't automatically mean the drug is bad. And they certainly don't mean it's good. Sometimes they make reasonable drugs that work on 40% of people with acceptable side effects; sometimes they inexplicably push aducanumab through the FDA and you start hearing Latin chanting to the tune of "delenda est".

The takeaway isn't that we can't push back against marketing. Just that it's devoid of information. If you look at clinical studies and Cochrane reviews, you will learn something about the drug on its own merits. If you react negatively to the hype, you're following a rule of thumb that overhyped things tend to suck. That happens to be true most of the time, but at the end of the day you're still reacting to the random fluctuations of the advertising department, instead of the truth on its own merits.

Whence this argument?

Posted Aug 19, 2025 14:44 UTC (Tue) by intelfx (subscriber, #130118) [Link]

> What you have here is good evidence that you should *ignore* fascism. Merely reacting in the opposite direction means you are still being influenced by fascism. What we should do is judge fascism on its own merits, regardless of the noise.

I believe you have just Godwin’d yourself.

Noise filtering

Posted Aug 19, 2025 14:05 UTC (Tue) by rweikusat2 (subscriber, #117920) [Link] (1 responses)

> Is driving vastly simpler than playing chess at grandmaster level, as evidenced by the fact that many more people can
> drive? For humans, certainly. But my mid-range phone from a decade ago still plays far stronger chess than any
> grandmaster. The point here is that human abilities don't correlate with computer abilities. You can't use a computer's
> performance on one human task to guess the performance it will have on other tasks.

I was pointing at the performance of people, not computers: they failed to solve a relatively simple task that computers aren't particularly suited to. Hence, why would they succeed at a much more complicated task that computers aren't particularly suited to, either?

Driving is obviously also vastly simpler than approximating Pi to an accuracy of 1,073,741,824 digits. But that's a task computers are suited to.

As the German saying goes: „Nicht alles, was hinkt, ist ein Vergleich.“

Noise filtering

Posted Aug 19, 2025 14:21 UTC (Tue) by pizza (subscriber, #46) [Link]

> Driving is obviously also vastly simpler than approximating Pi to an accuracy of 1,073,741,824 digits. But that's a task computers are suited to.

Uh, no. The algorithms commonly used to derive Pi are *vastly* simpler than the multitude of parallel processes and tasks that factor into the activity of "driving". Not to mention that driving imposes hard real-time constraints, whereas calculating Pi does not.
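For what it's worth, a digit-exact Pi computation really does fit in a dozen lines; here is a sketch using Machin's formula (pi = 16·arctan(1/5) - 4·arctan(1/239)) with plain integer arithmetic:

```python
def arctan_inv(x, scale):
    """arctan(1/x) * scale, via the alternating Taylor series
    1/x - 1/(3x^3) + 1/(5x^5) - ..., using only integer math."""
    term = scale // x
    total = term
    n, sign = 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(digits):
    """Pi truncated to `digits` decimal places, returned as an
    integer (e.g. pi_digits(4) == 31415)."""
    guard = 10  # extra digits to absorb truncation error
    scale = 10 ** (digits + guard)
    pi = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return pi // 10 ** guard

print(pi_digits(20))  # -> 314159265358979323846
```

This is nowhere near the state of the art (billion-digit runs use Chudnovsky-style series), but it makes the point: the hard part of computing Pi is patience, not perception, prediction, or real-time control.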

Is this a survey?

Posted Aug 19, 2025 13:23 UTC (Tue) by Otus (subscriber, #67685) [Link] (1 responses)

> Driving is vastly simpler than programming, as evidenced by the fact that many more people can do the former than the latter. Hence, the chances that the 'technology' which couldn't do the former can certainly do the latter are exceedingly slim, despite the truckloads of thoroughly clueless people who believe "programming" is about memorizing some 20-30-odd syntax rules, a terrible burden they hope to be relieved of by "AI" so that they can henceforth just yell gibberish at a computer and it'll magically do what they couldn't express.

The two domains are completely different. You can't fail at driving 10%, or even 1%, of the time without huge problems. But if you get good code 90% of the time, you can work with that. (Sounds better than some developers I've worked with.)

That's not to say that you should use AI coding agents; I'm sceptical they offer much of a productivity benefit for production code yet. But getting five nines (or whatever) of reliability is crucial for driving in traffic, not for writing code. One is an online task that has to happen in real time; to the other you can apply any number of checks before actually using it.

Is this a survey?

Posted Aug 20, 2025 9:10 UTC (Wed) by LtWorf (subscriber, #124958) [Link]

I think most people have failed at driving by taking a wrong turn, failing to park in a narrow spot, occasionally exceeding speed limits, and all sorts of things.

Is this a survey?

Posted Aug 19, 2025 14:57 UTC (Tue) by jmalcolm (subscriber, #8876) [Link] (2 responses)

I have no immediate opinion on which is more difficult, coding or driving, but popularity cannot tell us the answer.

Driving is a more common skill because driving is much more obviously desirable as a skill. We do not have every young teenager counting the days until they will be allowed to code.

I believe there is a generational shift away from driving. If fewer people get drivers licenses, it will not be because driving got more difficult.

In some ways, coding may be easier for an LLM as it is just language and there are many patterns in code. However, that does not necessarily make LLMs good at design. And even being able to reflect the average is fairly bad in this case.

Is this a survey?

Posted Aug 19, 2025 16:08 UTC (Tue) by Wol (subscriber, #4433) [Link] (1 responses)

Practice makes perfect, or as someone (I think it was my Russian teacher) put it rather better, practicing perfection makes perfect.

Both coding and driving benefit massively from practice, both suffer badly from beginner confidence - the proportion of youngsters who crash cars, and screw up coding, is high for both.

Both suffer badly from most people believing "practice makes perfect", and only a few believing "practicing perfection makes perfect".

The trouble is AI falls extremely firmly into the "practice makes perfect" camp, ignoring the saying "only an idiot expects that repeating the same actions time after time will eventually bring about a different result".

Cheers,
Wol

Is this a survey?

Posted Aug 22, 2025 10:19 UTC (Fri) by anselm (subscriber, #2796) [Link]

Both coding and driving benefit massively from practice, both suffer badly from beginner confidence - the proportion of youngsters who crash cars, and screw up coding, is high for both.

OTOH, the universe has drastic methods of getting rid of the really bad drivers that don't work on (most) bad programmers in the same way.

As the saying goes, “there are old pilots, and there are bold pilots, but there are no old, bold pilots”. But the world is teeming with old incompetent programmers.

Is this a survey?

Posted Aug 21, 2025 6:30 UTC (Thu) by LtWorf (subscriber, #124958) [Link]

> However, vibe coding obscures the fact that agentic AI tools are remarkably productive when used alongside a talented engineer or data scientist.

I think this sentence would need a huge [citation necessary] next to it :)


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds