
Debian decides not to decide on AI-generated contributions

By Joe Brockmeier
March 10, 2026

Debian is the latest in an ever-growing list of projects to wrestle (again) with the question of LLM-generated contributions; the latest debate started in mid-February, after Lucas Nussbaum opened a discussion with a draft general resolution (GR) on whether Debian should accept AI-assisted contributions. It seems to have mostly subsided without a GR being put forward or any decisions being made, but the conversation was illuminating nonetheless.

Nussbaum said that Debian probably needed to have a discussion "to understand where we stand regarding AI-assisted contributions to Debian" based on some recent discussions, though it was not clear what discussions he was referring to. Whatever the spark was, Nussbaum put forward the draft GR to clarify Debian's stance on allowing AI-assisted contributions. He said that he would wait a couple of days to collect feedback before formally submitting the GR.

His proposal would allow "AI-assisted contributions (partially or fully generated by an LLM)" if a number of conditions were met. For example, it would require explicit disclosure if "a significant portion of the contribution is taken from a tool without manual modification", and labeling of such contributions with "a clear disclaimer or a machine-readable tag like '[AI-Generated]'." It also spells out that contributors should "fully understand" their submissions and would be accountable for the contributions, "including vouching for the technical merit, security, license compliance, and utility of their submissions". The GR would also prohibit using generative-AI tools with non-public or sensitive project information, including private mailing lists or embargoed security reports.
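The draft did not specify what the machine-readable labeling should look like. As a purely hypothetical illustration, such a disclosure might take the form of a trailer in a commit message or changelog entry:

    fix: correct off-by-one in parser loop

    [AI-Generated] Initial patch drafted with an LLM coding assistant;
    reviewed, tested, and modified by the submitter.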

AI is a marketing term

It is fair to say that it is difficult to have an effective conversation about a technology when pinning down accurate terminology is like trying to nail Jell-O to a tree. AI is the catch-all term, but much (not all) of the technology in question is actually tooling around large language models (LLMs). When participants have differing ideas of what is being discussed, deciding whether the thing should be allowed may pose something of a problem.

Russ Allbery asked for people to be more precise in their descriptions of the technologies that their proposals might affect. He asserted that it has become common for AI, as a term, "to be so amorphously and sloppily defined that it could encompass every physical object in the universe". If the project is going to make policy, he said, it needed to be very specific about what it was making policy about:

An LLM has some level of defined meaning, although even there it would be nice if people were specific. Reinforcement learning is a specific technique with some interesting implications, such as the existence of labeled test data used to train the algorithm. "AI" just means whatever the person writing a given message wants it to mean and often changes meaning from one message to the next, which makes it not useful for writing any sort of durable policy.

Gunnar Wolf agreed with Allbery, but Nussbaum claimed that the specific technology did not matter. The proposal boiled down to the use of automated tools for code analysis and generation:

I see the problem we face as similar to the historical questions surrounding the use of BitKeeper by Linux (except that the choice of BitKeeper imposed its use by other contributors). It is also similar to the discussions about proprietary security analysis tools: since those tools are proprietary, should we ignore the vulnerability reports they issue?

If we were to adopt a hard-line "anti-tools" stance, I would find it very hard to draw a clear line.

Drawing clear lines, however, is something that a number of Debian developers felt was important. Sean Whitton proposed that the GR should not only say "LLM" rather than "AI", but should also distinguish between the uses of LLMs, such as code review, generating prototypes, or generating production code. He envisioned ballot options that could allow some, but not all, of those uses; distinguishing between the various so-called AI technologies would help in that regard. He urged Nussbaum "not to argue too hard for something that is more general than LLMs because that might alienate the people you want to agree to disagree with."

Andrea Pappacoda said that the specific technology mattered a lot; he wanted the proposal to have clear boundaries and avoid broad terms like AI. He was uncomfortable with the idea of banning LLMs and unsure where to draw the line. "What I can confidently say, though, is that a project like Claude's C Compiler should not have a place in Debian."

Beyond terminology

The conversation did not focus solely on the terminology, of course. Simon Richter had questions about the implications of allowing AI-driven contributions from the standpoint of onboarding new contributors to Debian. An AI agent, he said, could take the place of a junior developer. Both could perform basic tasks under guidance, but the AI agent would not learn anything from the exchange; the project resources spent in guiding such a tool do not result in long-lasting knowledge transfer.

AI use presents us (and the commercial software world as well) with a similar problem: there is a massive skill gap between "gets some results" and "consistently and sustainably delivers results", bridging that gap essentially requires starting from scratch, but is required to achieve independence from the operators of the AI service, and this gap is disrupting the pipeline of new entrants.

He called that the onboarding problem, and said that an AI policy needed to solve it; he did not want to discourage people by rejecting contributions or to expend resources on mentoring people who did not want to be mentored. Accepting AI-assisted drive-by contributions is harmful because it is a missed opportunity to onboard a new contributor. "The best-case outcome is that a trivial problem got solved without actually onboarding a new contributor, and the worst-case outcome is that the new contributor is just proxying between an AI and the maintainer". He also expressed concerns about the costs associated with such tools, and speculated that they might discourage contributions from users who could not afford to pay for them.

Nussbaum agreed that the cost could be a problem in the future. For now, he said, it is not an issue because there are vendors providing access for free, but that could change. He disagreed that Debian was likely to run out of tasks suitable for new contributors, even if it does accept AI-driven contributions, and suggested that AI may make harder tasks more accessible. He pointed to a study, written by an Anthropic employee and a participant in the company's fellows program, about how the use of AI impacts skill formation: "A takeaway is that there are very different ways to interact with AI, that produce very different results both in terms of speed and of understanding". He did not seem to be persuaded that use of AI tools would be a net negative in onboarding new contributors.

Ted Ts'o argued against the idea that AI would have a negative impact:

Some anti-AI voices are concerned that use of AI will decrease the ability to gain seasoned contributors, with the implied concern that this is self-defeating because it restricts the ability to gain new members in the future. And you are now saying we should gate keep contributors that might be using AI as being unworthy of contributing to Debian? I'd say that is even more self-defeating.

Matthew Vernon said that the proposed GR minimized the ethical dimension of using generative AI. The organizations that are developing and marketing tools like ChatGPT and Claude are behaving unethically, he said, by systematically damaging the wider commons in the form of automated scraping and doing as they like with others' intellectual property. "They hoover up content as hard as they possibly can, with scant if any regard to its copyright or licensing". He also cited environmental concerns and other harms that are attributed to generative AI tools, "from non-consensual nudification to the flooding of free software projects with bogus security reports". He felt that Debian should take a clear stand against those tools and encourage other projects to do the same:

At its best, Debian is a group of people who come together to make the world a better place through free software. I think we should be centering the appalling behaviour of the organisations who are pushing genAI on everyone, and the real harms they are causing; and we should be pushing back on the idea that genAI is either a social good or inevitable.

There was also debate around the question of copyright, both in terms of the licenses of the material used to train models and in terms of the output of LLM tools. Jonathan Dowland thought that it might be better to forbid some contributions now, since some see risks in accepting such contributions, and then relax the project's position later on when the legal situation is clearer.

Thorsten Glaser took a particularly harsh stance against LLM-driven contributions, going so far as to suggest that some upstream projects should be forced out of Debian's main archive into non-free unless "the maintainers revert known slop commits". Ansgar Burchardt pointed out that would have the effect of banning the Linux kernel, Python, LLVM, and others. Glaser's proposal did not seem particularly popular. He had taken a similar stance in 2025, when the project discussed a GR about AI models and the Debian Free Software Guidelines (DFSG); he argued then that most models should stay outside the main archive. That GR never came to a vote, in part because it was unclear whether its language would forbid anti-spam technologies, since one could not include the corpus of spam used as training data along with the filters.

Allbery did not want to touch on copyright issues but had a few words to say about the quality of AI-assisted code. It is common for people to object to code generated by LLMs on quality grounds, but he said that argument does not make sense. Humans are capable of producing better code than LLMs, but they are also capable of producing worse code. "Writing meaningless slop requires no creativity; writing really bad code requires human ingenuity."

Bdale Garbee seconded that notion, and said that he was reluctant to take a hard stance one way or the other. "I see it as just another evolutionary stage we don't really understand the longer term positive and negative impacts of yet." He wanted to focus on long-term implications and questions such as "what is the preferred form of modification for code written by issuing chat prompts?" Nussbaum answered that would be "the input to the tool, not the generated source code".

That may not be an entirely satisfying answer, however, given that LLM output is not deterministic and the various providers of LLM tools retire models with some frequency. A user may have the prompt and other materials fed to an LLM to generate a result at a specific point in time, but it might generate a much different result later on, even if one has access to the same vendor's tools or models to run locally.

Debian isn't ready

It is clear from the discussion that Debian developers are not of one mind on the question of accepting AI-generated contributions; the developers have not yet even converged on a shared definition of what constitutes an AI-generated contribution.

What many do seem to agree on is that Debian is not quite ready to vote on a GR about AI-generated contributions. On March 3, Nussbaum said that he had proposed the GR "in response to various attacks against people using AI in the context of Debian"; he felt at the time that it was something that needed to be dealt with urgently. However, he said, the GR discussion had been civil and interesting; as long as the discussions around AI remained calm and productive, the project could simply continue exploring the topic in mailing-list discussions. He guessed that, if there were a GR, "the winning option would probably be very nuanced, allowing AI but with a set of safeguards".

The questions of what to do about AI models in the archive, how to handle upstream code generated with LLMs, and LLM-generated contributions written specifically for Debian remain unanswered. For now, it seems, they will continue to be handled on a case-by-case basis by applying Debian's existing policies. Given the complexity of the questions, diverse opinions, and rapid rate of change of technologies lumped in under the "AI" umbrella, that may be the best possible, and least disruptive, outcome for now.




Preferred form of modification

Posted Mar 10, 2026 14:55 UTC (Tue) by kleptog (subscriber, #1183) [Link] (18 responses)

> ""what is the preferred form of modification for code written by issuing chat prompts?""
> Nussbaum answered that would be ""the input to the tool, not the generated source code"".

I thought: that can't be right. Then I looked at what he actually said:

> Assuming we are making the hypothesis of an LLM that is packaged in Debian and used as part of the build process of the package (so it's a build-dependency, and does not require internet access during build), how is that different from the 'bc' source package's use of flex/bison to generate C source files[0], or the 'swiglpk' source package's use of swig?

There is absolutely no way LLMs are useful for such a scenario. That's like sending a task out to Mechanical Turk on every build and blindly using the response. LLMs are by their nature non-deterministic (though I guess you could turn down the temperature). That makes them not comparable to flex/bison.

Preferred form of modification

Posted Mar 10, 2026 15:42 UTC (Tue) by geofft (subscriber, #59789) [Link] (8 responses)

In the sense of bit-for-bit reproducibility, yes. But in the sense of human understanding, I think it is actually like flex/bison: if you want to understand what a parser is doing, you're going to have a better time looking at the highest-level inputs instead of the C code. Or for autoconf, which I'm more familiar with: it's always a better time editing configure.ac and rerunning autoconf than editing ./configure directly, and also this remains true even if you're on a different version of autoconf and your regenerated ./configure has a whole bunch of uninteresting changes, because the intent of the two generated ./configure files is the same.

The term "preferred form of modification" is from the GPL, and is intended to protect the four software freedoms, specifically, the freedom to study and improve the software, and I think it should be interpreted in that context. By the word "modification" it implies not trying to regenerate anything exactly. I think it's a natural extension to reproducible builds to desire that a small change to the sources produces a correspondingly small change in the binary, but that is not a requirement for the sort of reproducibility you want for automated builds, and it's quite common (especially with compiler optimizations, etc.) for this not to be true already.

For the goal of bit-for-bit reproducibility, I wonder if you can do something like check in both the input and output of the LLM, as well as a proof that the output was generated from the given neural network and given inputs, which probably just takes the form of the RNG bitstream and the specific order of evaluation (even if you use a DRBG to deal with the randomness, my understanding is that operating on differently-shaped hardware with different parallelism is going to trigger some chaos theory in the outputs of a neural network). Apparently it is also more efficient to verify matrix multiplication than to actually perform it (Freivalds' algorithm). This might be both too much data and too much computation to be practical at the moment, but maybe it's what we do many years in the future.
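For the curious, Freivalds' check itself is tiny; here is a minimal sketch in Python with NumPy (hypothetical shapes, and using a tolerance because we are in floating point rather than the exact arithmetic the algorithm classically assumes):

    import numpy as np

    def freivalds_check(A, B, C, rounds=20):
        # Probabilistically verify that A @ B == C. Each round costs two
        # matrix-vector products (O(n^2)) instead of the O(n^3) full
        # multiplication; a wrong C survives a round with probability
        # <= 1/2, so the overall error probability is <= 2**-rounds.
        rng = np.random.default_rng()
        n = B.shape[1]
        for _ in range(rounds):
            r = rng.integers(0, 2, size=n)    # random 0/1 vector
            if not np.allclose(A @ (B @ r), C @ r):
                return False                  # definitely not equal
        return True                           # equal with high probability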

Preferred form of modification

Posted Mar 10, 2026 16:30 UTC (Tue) by neggles (subscriber, #153254) [Link] (3 responses)

Running the same LLM with the same prompts and the same RNG seed on the same device type will always produce the same output, so there's that.

Preferred form of modification

Posted Mar 10, 2026 16:35 UTC (Tue) by koverstreet (✭ supporter ✭, #4296) [Link] (2 responses)

No, it won't. You can run an LLM that way - temperature = 0 - but you generally don't want to. Like in many other algorithms, introducing some stochastic noise often produces better results.

Preferred form of modification

Posted Mar 10, 2026 16:54 UTC (Tue) by geofft (subscriber, #59789) [Link]

There's a difference between setting the temperature to zero, i.e. deterministically taking the most-likely token (basically changing softmax to regular-old deterministic max), and using a PRNG with a deterministic seed with a non-zero temperature, which will sometimes take less-likely tokens but will make that decision in the same way for every re-execution of the same network (with the same hardware, resources, etc.) with the same seed. I agree that setting the temperature to zero is probably not what you want.
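The difference is easy to see in a toy sketch (Python with made-up logits, not a real model):

    import numpy as np

    logits = np.array([2.0, 1.0, 0.5, -1.0])   # toy next-token scores

    def pick_token(temperature, seed=None):
        if temperature == 0:
            return int(np.argmax(logits))      # always the most-likely token
        probs = np.exp(logits / temperature)
        probs /= probs.sum()                   # softmax at this temperature
        rng = np.random.default_rng(seed)      # deterministic PRNG seed
        return int(rng.choice(len(logits), p=probs))

    pick_token(0)               # token 0, every time
    pick_token(0.8, seed=42)    # may pick a less-likely token, but the
                                # same one on every run with this seed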

Preferred form of modification

Posted Mar 10, 2026 17:34 UTC (Tue) by phm (subscriber, #168918) [Link]

> No, it won't. You can run an LLM that way - temperature = 0 - but you generally don't want to. Like in many other algorithms, introducing some stochastic noise often produces better results.
Given the same settings (temperature, model, RNG seed) an LLM will produce the same results. Here are some example sessions run with llama.cpp (LLM is mdradermacher's quantization i1 of Apertus 8B, abliterated) on a Thinkpad.

t420:~/llama.cpp/build/bin$ ./llama-cli --temp 20 --seed 12345 -m $LLM

> Hello!

Hello! It's great to connect with you. If you have a question or a [^C]

# Running the same command again:

t420:~/llama.cpp/build/bin$ ./llama-cli --temp 20 --seed 12345 -m $LLM

> Hello!

Hello! It's great to connect with you [^C]

t420:~/llama.cpp/build/bin$ ./llama-cli --temp 20000000 --seed 12345 -m $LLM

> Hello!

Hello! Welcome to the SwissAI assistant service. What can I help [^C]

Preferred form of modification

Posted Mar 10, 2026 19:50 UTC (Tue) by ptime (subscriber, #168171) [Link] (3 responses)

Flex/bison involve a deterministic mapping between the high level abstraction and generated code. LLMs do not.

Preferred form of modification

Posted Mar 10, 2026 20:17 UTC (Tue) by geofft (subscriber, #59789) [Link] (2 responses)

I don't think this comment responds to any of the things I said about determinism, nor does it take into account any of the existing comments about how deterministic execution of LLMs is entirely possible.

Preferred form of modification

Posted Mar 11, 2026 0:15 UTC (Wed) by ptime (subscriber, #168171) [Link]

It might be possible, just like it might be possible to take the tires off a bike and put furniture casters on instead, but the nondeterminism is why people want to use the LLMs in the first place.

Preferred form of modification

Posted Mar 12, 2026 3:46 UTC (Thu) by gf2p8affineqb (subscriber, #124723) [Link]

But determinism isn't the only point. The point is that other tools have formal semantics, and that changes have a predictable local effect. Compare that to an LLM, where the semantics are "whatever it outputs" and no one can predict how the output changes in response to a change in input.

Preferred form of modification

Posted Mar 10, 2026 16:56 UTC (Tue) by excors (subscriber, #95769) [Link] (6 responses)

> LLMs are by their nature non-deterministic (though I guess you could turn down the temperature).

I think that's not really true: the LLM basically does a deterministic computation of next-token probabilities, biases the probabilities based on the temperature parameter (low temperature exaggerates the high-probability tokens), then uses a PRNG to make a weighted selection of a single token to add to the prompt, and repeats. Some APIs (though not all) let you select the PRNG seed, in which case the output is usually reproducible with the same initial prompt and seed and other parameters. It seems they often have optimisations that can break the deterministic part, but that's just a quality-of-implementation issue; it's not part of the nature of LLMs.

But (as I understand it) coding assistants are not just an LLM with an input string and an output string. They mix multiple LLM sessions (including LLMs to generate prompts for other LLMs) with external tools (web search, filesystem access, etc) and with user input in a complex feedback loop. "The input to the tool" is not meaningful or useful for modification - that'd be like distributing an image as a list of Photoshop commands and brush strokes applied to a blank canvas.

Preferred form of modification

Posted Mar 10, 2026 19:31 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (1 responses)

Unfortunately, while this is *mostly* true, the probabilities may suffer from numerical stability issues. Each probability falls out of a lengthy sequence of matrix multiply etc. operations. Any of the following changes may invalidate existing seeds:

* Changing whether or not -ffast-math is enabled.
* Compiling with -ffast-math under a different compiler (or different version of the same compiler)
* Linking against a different BLAS library (or different version of the same BLAS).
* Compiling on any platform that fails to uphold IEEE 754 (other than as a result of -ffast-math). Cursory Googling suggests that "modern" GPUs "generally" uphold IEEE 754.
* Changing whether or not the hardware can do fused multiply-add (FMA), and/or whether or not the BLAS is smart enough to take advantage of it.

Probably there are others as well, these are just the obvious ones.

TL;DR: Seeds are reproducible assuming we're talking about a specific binary running on specific hardware. In all other cases, you have to audit a lot of miscellaneous stuff to ensure reproducibility.
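(The root cause, non-associative floating-point addition, is easy to demonstrate even in pure Python, no GPU required:

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0: the 1.0 is absorbed into -1e16 first

Any change that reorders the arithmetic can therefore change the result.)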

Preferred form of modification

Posted Mar 10, 2026 21:53 UTC (Tue) by excors (subscriber, #95769) [Link]

That seems no different to any other reproducible builds - you have to use largely the same compiler and compiler flags and hardware architecture etc. (And don't use -ffast-math in any case, because it's explicitly documented as producing incorrect output.)

I think the tricky part of non-determinism in LLMs is there's insufficient synchronisation in their parallel GPU code, so the order of some non-associative FP arithmetic depends on their dynamic load balancing and the GPU's non-deterministic thread scheduling. That's a deliberate tradeoff of performance against reproducibility, and they could choose to implement it the other way if they cared. (https://thinkingmachines.ai/blog/defeating-nondeterminism... has some plausible-sounding discussion of the changes needed to make it deterministic.)

Preferred form of modification

Posted Mar 10, 2026 21:21 UTC (Tue) by kleptog (subscriber, #1183) [Link]

> But (as I understand it) coding assistants are not just an LLM with an input string and an output string.

Right, because directly using the output of an LLM is like recording someone talking after asking them a question. You get the requests for clarification, side-tangents, ums and ahs, etc. You don't record someone talking; you ask them to write down their answer and let them refine it a few times, and you record that.

There are frameworks that do this, where you have the output stream of "talking" from the LLM and it also has an editor where it can write code, and rewrite it as required. But that's just making the LLM one cog in a larger machine, which makes the whole discussion about only LLMs kind of pointless.

For me LLMs feel like what happens in my mind between forming a thought and opening my mouth to produce grammatically correct sentences to convey a thought. They are a State -> Words transformer, that's all.

Preferred form of modification

Posted Mar 11, 2026 14:06 UTC (Wed) by udorsch (subscriber, #169676) [Link]

> "The input to the tool" is not meaningful or useful for modification - that'd be like distributing an image as a list of Photoshop commands and brush strokes applied to a blank canvas.

That's pretty much exactly how vector graphic formats work and vector graphics are highly efficient, widely used, and in many aspects superior to raster graphics (which would correspond to distributing the result not the input).

Preferred form of modification

Posted Mar 12, 2026 3:48 UTC (Thu) by gf2p8affineqb (subscriber, #124723) [Link]

> list of Photoshop commands and brush strokes applied to a blank canvas.

It is not! That would be a formally defined system like SVG. There is no formal definition for the semantics of an LLM, which makes the input impossible to reason about or change in a predictable fashion.

Preferred form of modification

Posted Mar 13, 2026 5:34 UTC (Fri) by sramkrishna (subscriber, #72628) [Link]

It's non-deterministic because the training dictates the choices. If the same LLM gets updated, then its behavior can change; if you switch to a different LLM, that will also change things. Almost always, you will need extra prompting to add more guide rails. This is one of the weaknesses, because you never know what a change will do without very extensive testing. You're going to need a lot of expertise to do that, and I reckon it is not going to be cheap.

Preferred form of modification

Posted Mar 12, 2026 6:51 UTC (Thu) by AdamW (subscriber, #48457) [Link] (1 responses)

I've seen various instances of people fiddling with stuff along these lines, and I think in every case, they fairly rapidly reached the conclusion "it's much better to use an LLM to generate deterministic code to generate the output than it is to call an LLM to do each instance of generation".

Preferred form of modification

Posted Mar 13, 2026 14:10 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

Yes. I was using an LLM to help categorize a trove of email backlogs and having it write scripts to do the evaluation which it (and I!) can then use really saves on tokens.

"massive skill gap"

Posted Mar 10, 2026 14:59 UTC (Tue) by pizza (subscriber, #46) [Link]

> AI use presents us (and the commercial software world as well) with a similar problem: there is a massive skill gap between "gets some results" and "consistently and sustainably delivers results", bridging that gap essentially requires starting from scratch, but is required to achieve independence from the operators of the AI service, and this gap is disrupting the pipeline of new entrants.

This unfortunately matches my experience, both professionally and as an F/OSS maintainer. Except "massive gap" is arguably better expressed as an "impassable abyss".

Reasonable policy, a lot of people are still gaining experience with this stuff

Posted Mar 10, 2026 16:10 UTC (Tue) by koverstreet (✭ supporter ✭, #4296) [Link] (3 responses)

There's been a lot of people freaking out about "AI slop", but it's important to remember that we had problems with bad, low effort, driveby commits before LLMs came along :)

LLMs will (try to) do exactly what you ask them; if you rush for the first thing that builds and runs, don't be surprised if you're left with a mess afterwards :)

But, if you work with them iteratively - same as you would with a human contributor - making sure you've got a solid plan and design, cleaning up as you go, you can get very high quality work done with an LLM writing all the code.

You do have to spend some time getting to know their strengths and weaknesses. LLMs are still pretty weak at the design aspect - reasoning that requires thinking about an ambiguous problem from multiple angles and layers of abstraction. But once the problem is understood, they're quite fast and accurate at the implementation.

Not that much has really changed about programming. Someone's still got to be putting the effort in on all the boring code hygiene and engineering standards stuff - making sure stuff stays organized, documented, tested :)

Reasonable policy, a lot of people are still gaining experience with this stuff

Posted Mar 10, 2026 19:44 UTC (Tue) by LtWorf (subscriber, #124958) [Link] (2 responses)

Humans submitting realistic-looking garbage aren't that many and will get bored. Machines doing that with budgets of several thousand dollars might be similar in principle, but certainly not in the amount of damage they can do.

It's like comparing bow and arrow with a machine gun.

Reasonable policy, a lot of people are still gaining experience with this stuff

Posted Mar 10, 2026 21:23 UTC (Tue) by Wol (subscriber, #4433) [Link]

What's worse, they compared humans working together to achieve an aim, and it only took a couple of saboteurs to completely ruin things.

Iirc they were reconstructing shredded Stasi documents, and found that a bunch of dedicated people could do a pretty good job, treating it like a jigsaw. But it only took one or two people deliberately muddling things up to make the others give up.

So if we've got big ABNIs muddling things up, the web is going to be pretty much destroyed in fairly short order. If we're lucky, they're going to make a big enough mess, quick enough, such that they'll go bust and disappear off the scene. The worry is they'll just become false prophets and conspiracy theorists, on a large enough scale to truly mess things up right royally.

Cheers,
Wol

Reasonable policy, a lot of people are still gaining experience with this stuff

Posted Mar 11, 2026 2:18 UTC (Wed) by koverstreet (✭ supporter ✭, #4296) [Link]

Honestly, I think people can use a good wake up call and stress test every now and then. Remember University of Minnesota?

People have a real tendency to get lax on engineering standards - "it's fiiiine, I know how everything works" - and get sloppy over time. *cough Linux kernel automated testing* - people like to go on autopilot, and that works until it doesn't.

But if you're using your head, LLMs are a godsend for all that tedious janitorial work that everyone likes to let slide.

And if you're doing all the work keeping the codebase documented, tested, organized, and you're migrating to Rust and maybe even thinking about formal verification, you'll find that the idiots with chainsaws have a hard time doing real damage.

Yup

Posted Mar 10, 2026 17:47 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

> "Writing meaningless slop requires no creativity; writing really bad code requires human ingenuity."

In other words: "Artificial intelligence is no match for natural stupidity".

Refusing to decide is still a decision

Posted Mar 13, 2026 14:37 UTC (Fri) by LionsPhil (subscriber, #121073) [Link]

This, unfortunately, means LLM commits will continue to land, and when the question finally does get asked, the answer will be "well, it's too late now, we'd have to remove this and this and that and the other if we prohibited it".

...which arguably already happened this time, with the comment about the kernel, Python, et al.

