
FOSS in times of war, scarcity, and AI

By Joe Brockmeier
February 10, 2026

FOSDEM

Michiel Leenaars, director of strategy at the NLnet Foundation, used his keynote at FOSDEM to sound warnings for the free and open-source software (FOSS) community; in particular, he talked about the threats posed by geopolitics, dangerous allies, and large language models (LLMs). His talk was a mix of observations and suggestions that pertain to FOSS in general and to Europe in particular as geopolitical tensions have mounted in recent months.

Leenaars began by saying that there is a lot of good open source out there, but it is not being used for good. The irony is that in trying to empower people to take control of their own computing destiny, the FOSS community has empowered the wrong people—those who would like to use software to control others. The ideals of global cooperation and reuse have enabled abuse as well.

[Michiel Leenaars]

So how did we get here? Leenaars referred back to the birth of the World Wide Web at CERN in Switzerland. The thinking was, "we should do things for the world, we should not have boundaries; let's see if we can share". Economies were booming, technology was advancing, money was being made, and parliamentary democracies were taking over. Everybody was in a positive, constructive mood. It was the "end of history", a political philosophy put forward by Francis Fukuyama in his book The End of History and the Last Man. The thesis of the book was that, with liberal democracy, humanity had reached its final form of government.

Leenaars's talk description had been shared on Hacker News well before FOSDEM; he noted that one of the comments said that it sounded like "the official obituary for the 90s techno-optimism many of us grew up on". He said that it is, in a sense.

As FOSS evolved, the community chose "dangerous allies" in the tech companies and future public cloud "hyperscalers". "We thought we could control that; it was not a realistic assumption." There was a darker narrative going on instead, he said; the US National Security Agency (NSA) was carrying out mass surveillance and spying on politicians in other countries, which came to light when Edward Snowden leaked documents that revealed the existence of those programs.

SCRAPS

Despite "this dark layer underneath", though, people, organizations, and governments in Europe were not upset enough to stop working with and trusting businesses in the US. Instead, Europe continued to depend on US tech companies, and to host its data in the public clouds anchored there. He said that Europeans felt like equals with the US, and that it was safe to trust "our friends and long-time allies" in building public clouds that it could rely on. "We can focus on our core business, and look at the total cost of ownership" instead of infrastructure.

That dependence, he said, "makes you incompetent, a victim of potential abuse". It's fine in the short term, but the pain comes afterward. If the entire European Union depends on external providers, and it does, it draws the short straw. "We don't have capacity. We are literally incompetent". CTOs were proud of "cloud-first" strategies; he proposed a different term, "strategic computer rental and anchoring to proprietary services" (SCRAPS).

Even SCRAPS are not guaranteed. Providers of cloud services can refuse to do business with an organization, or be compelled to refuse. He referred to sanctions against the International Criminal Court that caused Microsoft to block the email account of the court's chief prosecutor. "We're now at the mercy of the same people who profit off of us, and they still hold the kill switch."

European people, Leenaars said, are now in panic mode and looking to government to keep society afloat. "We shouldn't have become so dependent, but that's about three decades too late". Still, many people inside governments are running toward the fire instead of away from it. He mentioned the Netherlands Ministry of Finance, which has been working on a migration to Microsoft 365. The ministry has seen the whole situation, but it has put so much effort into the migration and has been "locked in to the same company for 50 years". A sort of Stockholm syndrome has evolved, he said, though the ministry clearly has a problem with its current tools: "I filed a freedom of information request with them three months ago, and they have not been able to produce a single document". He thought it would be nice if the ministry had gained some situational awareness and would stop putting people in danger.

History did not end

The government's answer is, "let's get more European startups, lots of competitors", he said, but that is the wrong approach. "We don't need to breed more predators; we need mission-driven organizations, we need companies that are public stewards." He called for a pipeline from academia to engineering, to nonprofits and service companies that do not seek to be captive platforms. Simply having a public cloud that is owned by European businesses is not the answer if those businesses follow the same models as the US ones.

The world, Leenaars said, is in the worst shape that it's been in for decades. It turned out that history did not end after all. He talked about social media and described it as "95% FOSS and the rest is cognitive warfare". He had complaints not only about disinformation being spread online, but also about the short-form content that is popular today. Kids, he worried, were becoming dependent on short content that did not deal with complexity. "I don't fear World War III as much as I fear de-enlightenment and a subsequent second dark ages."

His next worry for FOSS was as a target for state actors in warfare. Countries are now targeting the enemy's software and devices as well as waging traditional warfare. He referenced the Lebanon electronic device attacks (dubbed "Operation Grim Beeper") carried out by Israel in September 2024; those attacks made use of pagers and two-way radios carried by Hezbollah members that had been compromised at some point in the supply chain. That had enabled Israel to eavesdrop on its targets' communications until it then detonated the devices on September 17 and 18.

He also discussed the backdooring of XZ in 2024: an attack that was conducted by "Jia Tan" after gaining the trust of the original XZ maintainer over a long period of time. The average company has 25,000 software dependencies, he said, and any of them could be used to break in. There are millions of packages, and millions of people maintaining them; all of those maintainers and packages are potential weak spots. But if the new people coming in to help cannot be trusted, or if maintainers are too paranoid and chase contributors away, "we're also screwed".

Cavalry or Trojan Horse?

At this point, Leenaars said, we see horses on the horizon in the form of LLMs; is that the cavalry coming to the aid of FOSS or an army of next-generation Trojan Horses galloping through the gates of the village? The promise of LLMs is that they can take responsibility off of developers' hands, and allow organizations to focus on the core business. "That's a thing we've heard before. The product framing is super-good. Sounds so legit." He reminded the audience of the saying that there is no cloud, only other people's computers. In this context, though, he suggested: "there is no Claude, only other people's code."

Leenaars said that LLMs do a good job of some things, but claimed it was fundamentally impossible for them to do all the things they are expected to do. It is possible, he allowed, that LLM tools could do "a lot of the janitoring" that humans are really weary of doing. There are, after all, many boring tasks in software development that humans might like to offload. He recommended that the audience be cautious about what machines are allowed to do. Keep security in mind, and keep LLMs contained; but even then, he said he was not convinced that there was a problem that needed solving by LLMs.

Instead, if FOSS has such a large attack surface in the form of so many libraries and dependencies, trying to reduce the attack surface makes more sense than adding LLMs into the mix. It also makes sense to try to reduce maintainer burnout. He called on "people in the military who are seeing huge budgets" to spend some of that money on talented programmers who could improve FOSS and reduce its attack surface. There are billions and billions of euros that will be invested in Europe's defenses; some of that money should be spent on FOSS. "The FOSS ecosystem should not build stuff for weapons, but should get money from people who need to defend us. We are their defense, we are their infrastructure." Europeans should be telling politicians that they do not just need to support FOSS to enable digital sovereignty, but also for defense. With that, Leenaars wrapped up the talk, without any time for questions.

Overall the talk was a bit disjointed, and Leenaars presented few concrete suggestions for the audience. But the talk seemed to resonate with the packed main room, and he touched on topics that were prevalent at FOSDEM all weekend: wariness of the changing political picture in the US, distrust of AI/LLMs, as well as a desire to reduce dependence on US companies and services.

[Thanks to the Linux Foundation, LWN's travel sponsor, for funding my travel to Brussels to attend FOSDEM.]


Index entries for this article
Conference: FOSDEM/2026



Europe needs hyperscalers (lots of them)

Posted Feb 10, 2026 21:56 UTC (Tue) by rbranco (subscriber, #129813) [Link] (34 responses)

Europe needs startups (lots of them) and hyperscalers too. Also lots of them because competition is good. Otherwise it won't be a match for the US & China. This hipster mentality - buy indie & drink a homebrew afterwards - is detached from reality and it'll be discarded soon anyway. Big brands can coexist alongside craft beer.

And this "there is no Claude, only other people's code" is just another empty phrase that some people will print on t-shirts and that's it. But is it verifiably true? Otherwise it's just copium. We create new works by recombining words according to some rules, and LLMs are large language models.

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 1:05 UTC (Wed) by mirabilos (subscriber, #84359) [Link] (24 responses)

Perhaps we do *not* need “hyperscalers”, after all. Or to be “a match”. Just to do our own thing.

And no, we don’t need the eso-fascist planet-burning theft machine, either.

We “create new works” by acts of human creativity. This is what needs encouragement. I’ll add a call for UBI to that; after all, pilots were successful, and partial models are now permanent (as was just announced this week).

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 10:04 UTC (Wed) by rbranco (subscriber, #129813) [Link] (23 responses)

You need hyperscalers if you want European companies to have a presence in other countries.

And WTH is "eso-fascist". I tried a Google search and you seem to repeat this everywhere. Europe does have a strain of "eco-fascism" though, seen in the likes of Extinction Rebellion with a circled hourglass that resembles a swastika. An almond plantation in California consumes 4 times more water than all US datacenters combined. Avocado farming is drying up Spain. Thanks to "AI" nuclear energy is on the table again so enough with the secular doomerism. The whole thing is overblown.

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 11:56 UTC (Wed) by nim-nim (subscriber, #34454) [Link] (16 responses)

But do we need European companies to have a presence in other countries? What we need is European companies serving European interests and values in Europe. Once *that* works, maybe exporting the result to other places where they have the same needs is valuable. Or maybe it will be taken as a way to assert a form of dominance in places that resent and have not asked for this dominance.

The things that work in Europe and that have been exported successfully somewhere else (for example good technical and sanitary standards) were done by Europeans for Europeans first. Because Europe has the money to serve its own needs when it wants to. Doing garbage because “we want a presence in other countries and those countries are more permissive than the European market” does not work for the European market and usually does not work for the export market either (who wants to import someone else’s garbage).

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 12:17 UTC (Wed) by rbranco (subscriber, #129813) [Link] (2 responses)

> But do we need European companies to have a presence in other countries?

Yes. Otherwise other companies will fill that vacuum. Autarky doesn't work. Companies need to grow.

Europe lost a sugar daddy (the US) and got a delayed adolescence as a result. Switching to another (China) is not the solution. Europe needs to grow up.

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 14:39 UTC (Wed) by nim-nim (subscriber, #34454) [Link] (1 responses)

Growing up does not mean starting with a presence in other countries. It means forgetting about other country wishes for a time and concentrating on its own needs first to get working products and services out of the door.

Maintaining a presence for presence sake gave us spectacular turds like MicroNokia. Extrinsic is no substitute for intrinsic growth, you need a robust intrinsic backbone to survive extrinsic expansion.

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 15:37 UTC (Wed) by rbranco (subscriber, #129813) [Link]

Europe wouldn't start from scratch.

And MicroNokia wasn't the end of the world. We can't predict these outcomes and we don't need to micromanage them either.

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 15:29 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

That sounds an awful lot like an echo of the current US administration's "America first" rhetoric.

> Doing garbage because “we want a presence in other countries and those countries are more permissive than the European market”

Corner cutters will always exist. Alas, it has economic knock-on effects on the more upstanding entities…but the EU is "good" at regulation (certainly compared to others), no?

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 20:40 UTC (Wed) by khim (subscriber, #9252) [Link] (11 responses)

> But do we need European companies to have a presence in other countries?

There are, ultimately, only two choices:

  1. Ensue that European companies have something worth selling for other countries in exchange for raw resources (e.g. uran that's needed to drive electricity generation in France).
  2. Accept return to technologies of XIX or maybe even XVIII century that can be supported using resources that Europe have indigenously.

And if you rejecting choice #1 then you are getting choice #2 by default.

> Because Europe has the money to serve its own needs when it wants to.

Europe have papers that it calls “money”, but we are fast approaching point where the only way to ensure that something you call “money” is worth something would be raw military might and/or the things that you may exchange with others.

Europe doesn't have former (and don't have time to restore it) thus it's imperative to have latter.

> who wants to import someone else’s garbage

Well… Turkey was willing to do that, apparently — but, again, you need some money that are perceived by others as money for that.

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 9:56 UTC (Thu) by kleptog (subscriber, #1183) [Link] (10 responses)

Europe's Achilles heel is that it imports ~50% of its energy needs. While we've made great strides in energy efficiency, that's not enough by itself.

> 1. Ensu[r]e that European companies have something worth selling for other countries in exchange for raw resources (e.g. uran[ium] that's needed to drive electricity generation in France).

That's not a problem. We spend ~375 billion on energy imports, while exporting ~4 trillion in goods & services, so a bit less than 10%. The problem is not being able to import energy at any price.

We don't need to open our data-centres to people from other countries.

> 2. Accept return to technologies of XIX or maybe even XVIII century that can be supported using resources that Europe have indigenously.

Or go straight to XXI century technology. Anything non-renewable is dead-end anyway, it's just a matter of time. Nuclear would be great, but right now we're installing renewables at a rate of about 1 nuclear reactor every 5 days. It's going to be some time before nuclear installations ramp up to meet that, if ever.

And of course, fusion power is only 20 years away /s

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 10:13 UTC (Thu) by taladar (subscriber, #68407) [Link] (8 responses)

Nuclear power isn't actually great at all, it needs fuel that needs to be imported from a small number of countries again.

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 12:16 UTC (Thu) by anselm (subscriber, #2796) [Link] (7 responses)

Current nuclear reactors are very inefficient in the way they use their (imported) fuel. In principle, we know how to build nuclear reactors that would make more efficient use of their fuel, and/or would be able to use existing “nuclear waste” as fuel and, as a side benefit, make it less nasty in the process. The remaining challenge is to make these technologies competitive with renewables on a euros-per-megawatt-hour basis.

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 13:37 UTC (Thu) by Wol (subscriber, #4433) [Link] (3 responses)

> and/or would be able to use existing “nuclear waste” as fuel and, as a side benefit, make it less nasty in the process.

Aka "fast breeders". I've also heard something about thorium burners that apparently burn up pretty much everything.

> The remaining challenge is to make these technologies competitive with renewables on a euros-per-megawatt-hour basis.

So using a fast breeder to turn U-238 into Pu-239 would be one obvious way. Processing (separating) waste into short- and long-lived radio-nuclides, and using the nasty stuff to bulk up new rods from old also seems an obvious way to me to reduce costs - store the nasty stuff inside the reactor, and break it up at the same time!

Of course, there's the political problems, and geo-stability ... the German nuclear industry as an example of the former, the tsunami in Japan as an example of the latter. Then of course, we've got Dungeness (now shut down?), but the site is built on an area that was 5 miles out to sea in Roman times, and if we have global warming could very easily become 5 miles out to sea again in a pretty short time ...

Cheers,
Wol


Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 14:59 UTC (Thu) by malmedal (subscriber, #56172) [Link] (2 responses)

> seems an obvious way to me to reduce costs

These things have been tried, they are not cheap. The reason they tried (I think France got the furthest) was to have the technology ready for when the uranium mines ran dry, not because it would immediately save money.

To run a reactor efficiently and safely you need to know exactly what the fuel is made of down to at least cubic decimeter resolution.

For instance Xenon-135 is a neutron absorber produced by nuclear reactions, when this builds up you need to increase neutron flux to compensate, however being a gas it can escape quickly if it gets the chance, so if the operators make a mistake the reactor can overheat and melt down.
(This is likely a major reason for Chernobyl)

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 15:38 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

Don't remember the technology - was it AGR? I remember our AGR reactor technology got overwhelmed by the American PWR reactors, despite being much safer.

Iirc it used deuterium (heavy water) as the moderator, so if it got hot, the moderator boiled away and the fast neutrons escaped without being captured and driving fission.

(I thought the main reason for Chernobyl was the operators disabling the safety features "to see what will happen"!)

Cheers,
Wol

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 17:38 UTC (Thu) by malmedal (subscriber, #56172) [Link]

> I remember our AGR reactor technology got overwhelmed by the American PWR reactors, despite being much safer.

Don't know if they were safer, but it's clear they were more expensive.

> Iirc it used deuterium (heavy water) as the moderator

No, graphite for moderator, CO2 as coolant. They used control rods to control the reactivity.

> "to see what will happen"

There are very conflicting stories about the details, but it is clear that it was a pre-planned test, deemed necessary to verify correct operation. It was supposed to be done by the day-shift, but it was delayed, so the unprepared night-shift got the job.

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 17:46 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link] (2 responses)

The biggest problem with nuclear is that it's still a heat engine, and just building the heat engine parts of the power plant (i.e. the external steam loop, turbines, and cooling system) is more expensive than equivalent renewable energy. Even if the actual reactor were free (and they're obviously far from it), it still wouldn't be able to compete on cost with renewable energy. That's today. Renewable energy (including battery storage) is still getting cheaper, so its competitive advantage over nuclear is only going to grow.

Of course renewable energy isn't perfect. It still has problems, and there are probably specific applications where other technologies have non-cost advantages that outweigh renewables' cost advantage. There's also no good reason to stop using nuclear plants that are already paid for and are running well. But we're now at the point that plans for new and replacement, non-renewable power plants need to include a justification for not using renewables instead.

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 18:04 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

> But we're now at the point that plans for new and replacement, non-renewable power plants need to include a justification for not using renewables instead.

The justification for nuclear is "it's not carbon". So much of this carbon-neutral stuff is exactly that - crap. When you burn anything the question should be "how long ago was the sunlight I'm releasing locked up?". Anything more than 100 years or so should be a red flag (nuclear doesn't count here - that's releasing ancient starlight :-)

Renewables, it's possibly measured in hours, which is great. Wood, of course, while not a particularly good fuel in many ways, is measured mostly in a century or two (or less). Coal and oil are adding to the CO2 burden, and from my knowledge of what's going on, I think we passed the point of no return quite a while back. I suspect Khim may be right in saying Europe will be going back to the 17th century, but on current performance I suspect the rest of the world will be joining us!

Cheers,
Wol

Maybe that's enough

Posted Feb 12, 2026 19:41 UTC (Thu) by corbet (editor, #1) [Link]

Can we agree that this has wandered pretty far off-topic, even for an article like this one? Energy issues are certainly of great interest, but we'll not solve them on LWN.

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 15:39 UTC (Thu) by khim (subscriber, #9252) [Link]

It's really funny how you may say two sentences that directly contradict each other in one, single, paragraph.

> That's not a problem.

Interesting… and why is that?

> We spend ~375 billion on energy imports, while exporting ~4 trillion in goods & services, so a bit less than 10%.

So that's not a problem because currently Europe does have things to sell… okay.

> The problem is not being able to import energy at any price.

IOW: it is a problem. And a pretty big one.

We are entering times where money are almost entirely detached from the actual things that are needed: there are mountains of money that are exchanged in the virtual world of “services” and there are real things that one couldn't buy at any price except if you have the right connections.

Things like “rare earth”. There was lot of hoopla about these in the press, lately… what is the total market of these? Go google that and compare to market capitalization of AI companies (which are using these same rare earths)… you would be surprised.

And, again, Europe either have to ensure that Europe does have “good and services” to sell — or it would find a way to live without things Europe needs.

Chances are extremely high that “XXI century technologies”, in Europe, would be identical to XVIII century technology and not anything new and exciting.

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 12:16 UTC (Wed) by ballombe (subscriber, #9523) [Link] (5 responses)

There will not be any European commercial hyperscaler, because if someone wants to start a new ethically-challenged business, there are better places than the EU.

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 19:55 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

What exactly is unethical in "hyperscalers"? If anything, they're about as ethical as any business is. They provide a service and actually bill you for your usage.

No ads or data mining. You put in money, you get services in exchange.

And just with all other businesses, there are economies of scale. Hyperscalers have lower fixed overhead per "unit of compute". This is a _good_ thing, we want society to be more efficient and wasting less resources.

Europe needs hyperscalers (lots of them)

Posted Feb 13, 2026 21:56 UTC (Fri) by leromarinvit (subscriber, #56850) [Link] (3 responses)

There's no doubt that hyperscalers are efficient on their end of the business, which is of course why they're making lots of money per unit of whatever they sell. But even though the prevailing wisdom these days is to go with them and just pay the bill (which can sometimes be a reasonable business decision), quite a few who have done the math have ended up with the conclusion that it's cheaper to run your own infrastructure, as long as it can be utilized fully and load isn't hugely variable (and unpredictably so).

I think the complaints about hyperscalers are twofold:

  1. For one, they're (pretty much by definition) huge companies, so almost inevitably they end up doing huge company things like lobbying, tax "optimization", and so on. Exhibit A: "Do no evil."
  2. They structure their pricing in a way that it's easy and cheap to get hitched, but very expensive compared to their actual costs once you scale up later. As you say, that's about as ethical (or not) as any other business trying to increase its profits - but it can be a valid argument against using their services.

Europe needs hyperscalers (lots of them)

Posted Feb 13, 2026 22:31 UTC (Fri) by malmedal (subscriber, #56172) [Link] (1 responses)

> cheaper to run your own infrastructure,

I think the real problem is that the people controlling the money don't understand the language of the technical people they need to run the infrastructure. In particular, they have no real way of knowing if they even are competent at their job.

That said, there are a number of objective advantages to using something like a fully managed database where you just write the SQL and it just works the same whether the query needs a single worker for a second or ten thousand workers for an hour.

One thing the EU could do would be do mandate interoperability, force the vendors to agree on common SQL, common orchestration etc.

I wouldn't hold my breath waiting for this though, just mandating USB for phones took about fifteen years...

Europe needs hyperscalers (lots of them)

Posted Feb 16, 2026 9:46 UTC (Mon) by taladar (subscriber, #68407) [Link]

There are pretty much zero advantages to being able to scale up to tens of thousands of workers for an hour for 99% of all applications since those run on a single small-ish cloud server just fine.

Also, I used to think the problem was that the people controlling the money couldn't determine the competence of the technical people but I think the evidence becomes more and more clear that the people controlling the money can't even tell if they themselves or their peers doing their job are competent.

Europe needs hyperscalers (lots of them)

Posted Feb 13, 2026 23:25 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

> quite a few who have done the math have ended up with the conclusion that it's cheaper to run your own infrastructure, as long as it can be utilized fully and load isn't hugely variable (and unpredictably so).

Sure, and that's fine. This is just normal commercial activity, you check available options and decide which one is the best for you. This also happens all the time with "classic" businesses that need to do lease-vs-own calculations or find suppliers for some components.

> For one, they're (pretty much by definition) huge companies, so almost inevitably they end up doing huge company things like lobbying, tax "optimization", and so on. Exhibit A: "Do no evil."

I actually like that. Business interests of companies should influence the lawmakers. I just don't necessarily like the _level_ of their influence.

> They structure their pricing in a way that it's easy and cheap to get hitched, but very expensive compared to their actual costs once you scale up later. As you say, that's about as ethical (or not) as any other business trying to increase its profits - but it can be a valid argument against using their services.

It really depends. I know AWS from inside out, so I can easily keep my costs way below what I'd pay for a similar service if I were to build it myself. But I agree that AWS makes it very easy to build infrastructures that just burn through money. But it's also not necessarily a sign of "evilness", especially since AWS provides tons of tools to control the costs.

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 9:00 UTC (Wed) by kleptog (subscriber, #1183) [Link] (8 responses)

We don't need hyperscalers, but we do need competition. The biggest issue currently is that if you have your infrastructure set up in (for example) AWS using Terraform, it's a non-trivial task to convert that to any other cloud provider. It's basically vendor lock-in, cloud-style.

Until it's easy to transport your infra between cloud providers there can never be true competition. FOSS can help there.

The most portable setup I've come across so far is having your application run under K8s, and then using the cloud-providers to supply a K8s environment. Because it's all standard you can run the same application anywhere. But of course this ignores 90%+ of the services cloud providers provide.

Europe needs hyperscalers (lots of them)

Posted Feb 11, 2026 19:33 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

With K8s it's not that hard to do the transfer. In my experience, most of the complexity happens if you want to keep your service available during the migration. If you can afford ~8 hours of downtime, a lot of things become so much easier.

Other than that, it's always the case of complex systems growing implicit dependencies where you don't expect them.

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 9:52 UTC (Thu) by taladar (subscriber, #68407) [Link] (6 responses)

I haven't worked with K8s that much, but my impression is that it is literally the antithesis of standardization: every little subsystem has multiple options that are mutually completely incompatible and often proprietary and undocumented, and there isn't even any sort of default.

Even the few subsystems like storage and ingress where some vague resemblance to standardization exists are often only maybe 90% compatible. And good luck getting any kind of tooling to replicate a setup that uses the same configuration format on another provider or locally.

Europe needs hyperscalers (lots of them)

Posted Feb 12, 2026 19:03 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

The problem with K8s is that it's extremely flexible. It can be used for anything from a single-node "cluster" to deployments with millions of nodes and multiple concurrent rolling deployments.

Europe needs hyperscalers (lots of them)

Posted Feb 16, 2026 9:42 UTC (Mon) by taladar (subscriber, #68407) [Link] (4 responses)

To be honest, it feels like one of those technologies where those who need millions of nodes use hype to push some of their costs onto those who do not even need HA in the first place.

Europe needs hyperscalers (lots of them)

Posted Feb 16, 2026 20:32 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

To an extent. K8s is a bit over-engineered in that regard. For example, it uses etcd to make its control plane resilient against database failures. Hardly anybody really needs this.

But it's not _all_ over-engineering, a lot of K8s complexity is justified once you get to a ~10-20 node cluster. Things like resource-based scheduling, gradual rollouts, and automatic rollbacks become important.

And just like with Linux itself, the set of features that is important to you is often very different from the set that is important to me. So it's hard to create one definitive, minimalistic "standard" set of features that would satisfy everyone.

Europe needs hyperscalers (lots of them)

Posted Feb 17, 2026 6:42 UTC (Tue) by zdzichu (subscriber, #17118) [Link] (2 responses)

etcd is not mandatory. On my home cluster I'm using k3s, with a single PostgreSQL instance as the data store. It works perfectly on the old cast-off laptops I've used as nodes.

Europe needs hyperscalers (lots of them)

Posted Feb 17, 2026 18:55 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (1 response)

Yes, but K3s is not K8s. And the design of K8s is severely affected by etcd limitations, even though all the large players have versions of K8s that use different storage backends.

Europe needs hyperscalers (lots of them)

Posted Feb 17, 2026 20:19 UTC (Tue) by zdzichu (subscriber, #17118) [Link]

K3s _is_ k8s – it passes the conformance test suite and has CNCF's certification. But I recognize this is nitpicking that does not contribute to the discussion, so I'll yield here.

Sick of "many dependencies" framing

Posted Feb 11, 2026 7:17 UTC (Wed) by taladar (subscriber, #68407) [Link] (15 responses)

> if FOSS has such a large attack surface in the form of so many libraries and dependencies

I am so sick of the dependency problem always being framed in terms of the number of dependencies rather than the amount of code involved and how well maintained, documented, tested,... it is.

Having one million lines of code in your dependencies does not get any better if it is in two dependencies of 500k lines each, where 495k of those lines are essentially never looked at, nobody knows how they work, and no tests exist for them, yet nominally they are "maintained" because they are part of another project that has some other lines that are properly maintained.

All that having a few large dependencies instead of many small ones achieves is that you have less information about the actual state of the code you depend on.

Sure, you have to deal with fewer organizations if you have a small number of dependencies, but that only really matters if you are more concerned with the bureaucracy of pretending everything is in a great state than with the actual state of the code.

Sick of "many dependencies" framing

Posted Feb 11, 2026 7:29 UTC (Wed) by mjg59 (subscriber, #23239) [Link] (6 responses)

Not really - the more maintainers I need to trust, the higher the probability that one of them is either malicious or is compromised in some way. Personally I think this is a reasonable tradeoff, but there's still some degree of additional attack surface as a result.
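The scaling argument here can be made concrete with a toy independence model (the per-maintainer probability used below is purely illustrative, not a figure from the comment):

```python
def p_any_compromised(n_maintainers: int, p_each: float = 0.001) -> float:
    """Probability that at least one of n independent maintainers is
    malicious or compromised, assuming each carries probability p_each."""
    return 1 - (1 - p_each) ** n_maintainers

# With an illustrative 0.1% risk per maintainer:
print(round(p_any_compromised(5), 4))    # trusting 5 maintainers   -> 0.005
print(round(p_any_compromised(200), 2))  # trusting 200 maintainers -> 0.18
```

Under these toy assumptions the risk grows roughly linearly at first and then saturates, which is why the number of people trusted (rather than the number of packages) is the quantity that matters.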

Sick of "many dependencies" framing

Posted Feb 11, 2026 10:43 UTC (Wed) by farnz (subscriber, #17727) [Link] (3 responses)

The challenge is that maintainer count and dependency count are not correlated; large dependencies often have many maintainers, while it's possible for one person to maintain many small dependencies to a high standard.

Using dependency count as a proxy for something you really care about (like "how many maintainers am I trusting") increases the risk that you will consider yourself "safe" because you're trusting a small number of dependencies, while actually being at risk because one of those dependencies has a large number of trusted maintainers. In the worst case, it has a large number of transitively trusted maintainers hidden behind a small number of known ones, where you can't see directly that someone maintaining the printer support in GTK4 is not on the list of people you've identified as the "GTK Development Team".

Dependency count works within a single build ecosystem as a proxy, because everyone using (say) Bazel, or Cargo, or CMake, faces the same challenges integrating one more dependency - so a CMake project with 15 dependencies probably trusts more people than a CMake project with 10 dependencies. But once you start comparing across build ecosystems, you run into the issue that some ecosystems have a single maintainer comfortably maintaining 15 dependencies in usable form (split up so that you only pull in code you care about), while other ecosystems push maintainers to group together, so that each dependency represents multiple trusted maintainers.

Sick of "many dependencies" framing

Posted Feb 11, 2026 11:42 UTC (Wed) by nim-nim (subscriber, #34454) [Link] (2 responses)

Dependency counts work because they reduce the relationship graph to something a human can understand and act on.

Software is a human creation where human relationships matter. Infinite delegation chains do not work for humans and are ripe for abuse. That's why no one is surprised when a non-software project goes overboard and the inquest shows deep levels of subcontracting, and it's why the automotive industry co-locates its suppliers as close to its own factories as possible (sometimes on the other side of the road, with a bridge over the road to reduce the effective distance further).

Delegation abuse breeds lack of accountability. Software may be special, but the humans writing software are not.

Sick of "many dependencies" framing

Posted Feb 11, 2026 12:17 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

Dependency counts reduce the relationship graph by hiding significant portions of it behind invisible delegation chains, with the very problems you cite with deep levels of subcontracting.

In the extreme case, you replace 200 visible maintainers (maintaining 500 visible dependencies - one person maintaining several related dependencies, but splitting them up for your benefit in choosing which bits of their code you use), with 5 visible maintainers maintaining 5 visible dependencies, and a total trust set on the order of 20,000 people, of whom only 200 are actively working on the parts of your dependencies that you use (but where any of them could introduce malicious code and break your trust in that dependency).

And that's why I consider dependency count a bad metric - by saying "I want a small dependency count, with a small number of top-level maintainers", you don't reduce the amount of code you depend upon, but you do incentivise those top-level maintainers to abuse delegation so that they can offer you a single large dependency that does everything you want, rather than offering 10 smaller dependencies, of which 2 are helpful, and requiring you to pick up the other 90 functions they could put in their dependency via delegation abuse from other places.

Sick of "many dependencies" framing

Posted Feb 11, 2026 13:18 UTC (Wed) by Wol (subscriber, #4433) [Link]

And this is also an argument against dynamic linking. If I dynamic-link a library, ALL that code is available to an attacker, and needs to be vetted.

Okay, it's not true of all static linking, but with the linker I used many moons ago, you would link against the library, and it would search the library for the modules that the (partially) linked program needed and pull in JUST those modules. It had the downside that you might need to link the same library two or three times, if the modules you pulled in referenced further modules in the same library, but it had the upside that if you only wanted a couple of modules from the library, you only GOT a couple of modules.

And by seeing which modules were linked, you knew which modules to vet and which ones to ignore.

That's the other problem with "thousands of LOC" dependencies - do you even depend on them? Would you be better off without them? Does your project even call them?

Cheers,
Wol

Sick of "many dependencies" framing

Posted Feb 11, 2026 13:09 UTC (Wed) by NAR (subscriber, #1313) [Link] (1 responses)

The larger dependencies might have more maintainers, each maintaining a specific part of the large dependency, so even though it's one library, you might need to interact with multiple maintainers. On the other hand some smaller dependencies might have the same maintainer. The number of maintainers should scale with the amount of code, not the way it's organized.

Sick of "many dependencies" framing

Posted Feb 12, 2026 10:07 UTC (Thu) by taladar (subscriber, #68407) [Link]

But the way it is organized can make it a lot more transparent or obscure who actually maintains what, which bits are actually maintained,...

Sick of "many dependencies" framing

Posted Feb 11, 2026 10:43 UTC (Wed) by Karellen (subscriber, #67644) [Link] (7 responses)

With five dependencies of 100kLOC each, you can ask "How trustworthy do these projects and their maintainers seem?"

You can look at the histories of the projects. You can see how long the projects have been going, how frequently releases are made. You can see if they just claim to follow semver, or actually do so. And, when they do make a brown paper bag release, what do they do next?

To answer your concerns, you can examine how well-maintained 5 dependencies are. You can check the documentation of 5 dependencies: is it well-written, and updated with every release? You can look at the testing infrastructure, see how the project talks about it in the forums, and see how seriously it's taken.

How do they handle security issues? Do they have a history of responding in a timely manner? Do they fix the issue, rather than trying to hand-wave it away or attacking the reporter? Do they apply security updates to LTS branches? Do they have LTS branches?

You can check the licensing. Are the licenses of these five dependencies compatible with each other, and with what you want to do?

With 5 dependencies, you can look at all of them. You can tell if they're 95% untested cruft that no-one dares touch. It's possible to find answers to all these questions.

Of course, being able to find those answers doesn't guarantee you'll like them. But you *can* know. You can assess the amounts of risk you're exposing yourself to. You can make trade-offs. You can make an informed decision about whether to use one dependency over another.

If you have 100 dependencies of 5kLOC each, I don't see how you can answer those questions in any meaningful way. I think it's more likely you stop really asking them in the first place. Or even stop considering that they are questions it's even possible to answer.

I do not understand how you might think you could have *more* information about the state of 100 codebases, than you could about 5. That just doesn't track for me.

Sick of "many dependencies" framing

Posted Feb 11, 2026 11:55 UTC (Wed) by pizza (subscriber, #46) [Link]

>With 5 dependencies, you can look at all of them. You can tell if they're 95% untested cruft that no-one dares touch. It's possible to find answers to all these questions.
>If you have 100 dependencies of 5kLOC each, I don't see how you can answer those questions in any meaningful way.

I think it's important to reiterate that "number of dependencies" matters far more than "kLOC of code"; for example, the paperwork (not to mention the _actual_ work) that you need for CRA compliance scales linearly with the former, but not the latter.

Sick of "many dependencies" framing

Posted Feb 11, 2026 15:23 UTC (Wed) by farnz (subscriber, #17727) [Link]

My lived experience is that you're wrong. You can't realistically examine how well-maintained the parts of a big dependency that you care about are, because the project as a whole is well-maintained, and the parts you care about may well "look" maintained because someone is doing build fixes and the like. See the HIPPI support in the Linux kernel as an example: it "looked" maintained because it was getting some fixes, but was in fact unmaintained for all practical purposes.

Remember that practically, it's not 5 dependencies of 100 kLOC each versus 100 dependencies of 5 kLOC each, but 5 dependencies of 1,000 kLOC, where you rely on 100 kLOC of the dependency, versus 100 dependencies of 5 kLOC each, where you rely on 4 kLOC of each dependency. It sure is nice that the 90% you don't use is well-documented and well-maintained, but you need to answer the question not for the dependency as a whole, but for the part you use, to have an answer that's meaningful.

Sure, it's great that all the parts shared across platforms are well-maintained, and the Apple iOS using teams make build fixes to the Android build, but if you're using it on Android, you don't want to discover that the Android build is effectively unmaintained, and has critical vulnerabilities that they're going to respond to with "eh, we don't actually care about Android that much - switch to Apple products" when you hit them.
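The "used versus shipped" arithmetic in the scenario above can be spelled out; the numbers are the illustrative ones from the comment itself:

```python
# farnz's illustrative scenario: few big dependencies versus many small ones.
# In both cases you *rely on* a similar amount of code, but the amount you
# pull in (and must trust) differs by an order of magnitude.
def totals(n_deps: int, kloc_each: int, kloc_used_each: int):
    shipped = n_deps * kloc_each    # code you pull in and have to trust
    used = n_deps * kloc_used_each  # code your project actually exercises
    return shipped, used, used / shipped

print(totals(5, 1000, 100))  # big deps:   (5000, 500, 0.1) -> 10% of it used
print(totals(100, 5, 4))     # small deps: (500, 400, 0.8)  -> 80% of it used
```

So the question "how well-maintained is this code?" has to be asked about 500 kLOC versus 400 kLOC of relied-upon code either way, but the big-dependency case buries it inside ten times as much shipped code.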

Sick of "many dependencies" framing

Posted Feb 11, 2026 15:25 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Hmm. I think a metric that may matter more (but is far harder to measure) is how widely *used* a given dependency is ("impact"?). GTK2/3 may be *large*, but the ecosystem has largely moved on, so any problems can lie dormant for longer. Large projects are more likely to have a broad base of users. However, small dependencies can also be "as large as needed" and solve their problem well. And if the ecosystem as a whole uses one widely, any problems are *far* more likely to be noticed in a timely manner.

With tools like `crev`[1] and `cargo-vet` (and similar for other ecosystems), this feels like it'd be easier to get a grip on in a measurable way. "Just" need to find even more review time in everyone's schedules…

[1] https://github.com/crev-dev/crev/

Sick of "many dependencies" framing

Posted Feb 11, 2026 15:31 UTC (Wed) by mb (subscriber, #50428) [Link] (3 responses)

>If you have 100 dependencies of 5kLOC each, I don't see how you can answer those questions in any meaningful way.

https://crates.io/crates/cargo-vet

Sick of "many dependencies" framing

Posted Feb 11, 2026 23:07 UTC (Wed) by Karellen (subscriber, #67644) [Link] (2 responses)

Wouldn't that work just as well for a few large dependencies if, for some reason, you didn't want to do the vetting yourself though?

I still don't see how many small dependencies is an improvement.

Sick of "many dependencies" framing

Posted Feb 12, 2026 2:44 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

I think there's a higher chance that more people will review a small library than anyone will review "Qt" and give it a stamp of approval like this (beyond "it's got a lot of development behind it and Qt Company's track record is pretty good"). At which point, you're back to `cargo-vibes` instead.

Sick of "many dependencies" framing

Posted Feb 12, 2026 9:53 UTC (Thu) by farnz (subscriber, #17727) [Link]

You run into human nature again with the big dependency.

If you're using (say) Qt 7 for a Wayland application running on the Linux kernel, you don't benefit from Qt 7 having thousands of reviews scoped tightly to the Win32 code in Qt; you care about reviews of the Linux/Wayland code, not the Windows code. But the reviewers may well not bother to tell you that their review is scoped to the Win32 code - after all, they're reviewing Qt 7 as they use it, and they're assuming that you know that everyone uses Win32, because that's their life experience.

This puts you at high risk of the "thousands of irrelevant reviews" problem; you see lots of published reviews, and assume that the codebase is well-reviewed. But, in fact, those reviews cover the 80% of the codebase that everyone else uses, and not the 20% that's critical to your project.


Copyright © 2026, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds