A malicious Pidgin plugin
It went unnoticed at the time that the plugin was not providing any source code and was only providing binaries for download. Going forward, we will be requiring that all plugins that we link to have an OSI Approved Open Source License and that some level of due diligence has been done to verify that the plugin is safe for users.
Posted Aug 26, 2024 22:27 UTC (Mon)
by python (guest, #171317)
[Link] (29 responses)
Posted Aug 26, 2024 23:01 UTC (Mon)
by pizza (subscriber, #46)
[Link]
The only way to guarantee that is for the publisher to generate the binaries themselves, ideally in an environment that lacks external network access.
No source means no binaries can get created. Which is a problem for non-F/OSS stuff but that's probably not a use case that Pidgin cares about.
(This is the approach F-Droid takes, for example.)
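As a sketch of what "an environment that lacks external network access" can look like on Linux (an illustration only, not anything Pidgin or F-Droid actually ships): launch the build in fresh user and network namespaces, so nothing the build runs can reach the outside world.

    /* netless-build.c: hypothetical sketch of a network-less build runner.
     * CLONE_NEWUSER lets an unprivileged user create CLONE_NEWNET; the new
     * network namespace contains only a downed loopback interface. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (unshare(CLONE_NEWUSER | CLONE_NEWNET) != 0) {
            perror("unshare");
            return 1;
        }
        /* any attempt to reach the outside world from here on fails */
        execlp("make", "make", (char *)NULL);
        perror("execlp");
        return 1;
    }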
Posted Aug 26, 2024 23:28 UTC (Mon)
by KJ7RRV (subscriber, #153595)
[Link] (6 responses)
Posted Aug 26, 2024 23:43 UTC (Mon)
by shironeko (subscriber, #159952)
[Link]
Posted Aug 27, 2024 1:23 UTC (Tue)
by kazer (subscriber, #134462)
[Link] (4 responses)
Posted Aug 27, 2024 8:05 UTC (Tue)
by aviallon (subscriber, #157205)
[Link] (3 responses)
Posted Aug 27, 2024 9:50 UTC (Tue)
by intelfx (subscriber, #130118)
[Link] (1 responses)
Posted Aug 27, 2024 11:02 UTC (Tue)
by jhe (subscriber, #164815)
[Link]
Posted Aug 28, 2024 21:16 UTC (Wed)
by Heretic_Blacksheep (guest, #169992)
[Link]
For reproducible builds you need the original source code to be reviewable, all subsequent patches, verification that nothing was altered in transit, and someone to actually verify that there are no shenanigans in all that source code (they have to read it AND understand it, without making mistakes due to tiredness or too much caffeine!). Then you compile it, verify that the binary did indeed result from that source code blob and that the compiler itself didn't alter it, and verify that included external libraries didn't alter the function or object meanings.
This is why "just use Y" recommendations aren't useful here. AFAIK, no single distribution does all of these steps, because it's too (skilled-)labor intensive. Otherwise you have a problem where a binary object file is "reproducible", but that term is completely meaningless because no one knows if there's malware in the original source code or the compiler's output.
Isn't constructive paranoia grand? That doesn't even get into this problem - what if the reproducibility tech has been suborned?
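To make the verification step concrete: once an auditor has independently rebuilt the plugin from the reviewed source in a pinned environment, the final check is mechanically trivial, because a reproducible build must be byte-identical to the published binary. A minimal sketch (the file names are placeholders; in practice a checksum tool does the same job):

    /* repro-check.c: compare a vendor binary against a local rebuild.
     * Byte-identical output is the entire point of reproducible builds. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s vendor.so rebuilt.so\n", argv[0]);
            return 2;
        }
        FILE *a = fopen(argv[1], "rb");
        FILE *b = fopen(argv[2], "rb");
        if (!a || !b) {
            perror("fopen");
            return 2;
        }
        int ca, cb;
        do {
            ca = fgetc(a);
            cb = fgetc(b);
            if (ca != cb) {
                puts("MISMATCH: binary does not reproduce from source");
                return 1;
            }
        } while (ca != EOF);
        puts("OK: byte-identical");
        return 0;
    }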
Posted Aug 27, 2024 10:13 UTC (Tue)
by atnot (subscriber, #124910)
[Link] (20 responses)
The fact that a Pidgin plugin you can just install off the official site is even capable of taking screenshots and logging keys is what's actually alarming and irresponsible to me; that this actually happened is just an inevitability.
Posted Aug 27, 2024 10:45 UTC (Tue)
by ibukanov (subscriber, #3942)
[Link] (7 responses)
Posted Aug 27, 2024 15:59 UTC (Tue)
by gnoutchd (guest, #121472)
[Link] (6 responses)
That sort of sandboxing is a long-term goal of Flatpak, and IIUC the file-dialog APIs of the major toolkits already support their "portal" mechanism for controlled file access. I believe there are similar mechanisms to control screenshot/screencast access on Wayland desktops.
That said, for various reasons many important applications are not meaningfully sandboxed yet (for which they've been harshly criticized). Also, as others have discussed, it's not clear if that kind of sandboxing would have helped in this case.
Posted Aug 27, 2024 17:31 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (2 responses)
The trouble here is that what you want to permit is for this plugin to take screenshots and transmit them over the network to the places the user intended them to be sent to, but not transmitted over the network to places the user did not intend them to be sent to. And that's a really hard problem to solve in a sandboxing mechanism - if the screenshots can't leave your system, then it's secure, but not useful, while if they can leave the system, how do you distinguish "going to the IM contact the user intended" from "going to the IM contact the plugin author has secretly chosen to send things to"?
This is a general problem in securing networked compute - how do I permit non-malicious communication, without permitting malicious communication - and it's arguably impossible to solve, since it requires understanding not just what is happening, but also what the user expects to happen.
Posted Aug 27, 2024 19:08 UTC (Tue)
by gnoutchd (guest, #121472)
[Link] (1 responses)
That said, a good Flatpak-like sandbox should limit what a keylogger could collect, and IMHO that's pretty valuable already.
Posted Aug 27, 2024 19:44 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
Even a high-level message-sending API is impractical here - I, as the malicious attacker, come up with a plausible reason for the plugin to send messages to an account I control and that I also use for data exfiltration. Similar applies to anti-keylogging protection; I come up with a good reason why I "need" the extra permissions to collect all keypresses, not just a shortcut key that triggers the screen capture, and many people will click OK blindly to get the plugin to function.
That said, having a sandbox that makes the attacker's life harder is a good thing - it increases the chance that someone will think "that's weird" (as Andres Freund did when SSHing to a test box consumed a full CPU core for 0.5 seconds) and investigate, catching the malicious code.
Posted Aug 28, 2024 7:45 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (2 responses)
Posted Aug 28, 2024 18:11 UTC (Wed)
by smcv (subscriber, #53363)
[Link] (1 responses)
Flatpak is federated, so you don't *have* to use Flathub (you could download your apps from a different repository), but adding a repository as a source of apps does involve extending some trust to the curators of that repository - not as much as with a traditional packaging framework like apt/dpkg, but some.
The user also has the opportunity to look at what the app is asking for[1] and accept or refuse installation accordingly, again similar to the way some of the more "static" permissions are handled by mobile app-stores.
Of course, if an app legitimately needs a "powerful" permission like screen-sharing in order to do its job, then no amount of review or sandboxing can prevent it from exercising that permission in ways that you didn't want it to, so a Flatpak-style sandbox would not protect you from this particular malware. An app that is malicious or compromised can do anything that its sandbox allows, and its sandbox needs to allow everything that was necessary for it to work as designed; even if the user is asked for permission every time, a competent malware author will disguise their request for access as a request for something that is legitimately needed, for example only starting to exfiltrate screen contents when legitimate screen-sharing functionality is activated.
Also, Flatpak's approach to packaging is focused on apps, more than plugins for those apps; and there is typically no privilege boundary between an app and its plugins, because in a typical plugin architecture they're loaded into the same address space as the app itself and can arbitrarily overwrite its memory. That's another reason why Flatpak-style sandboxing would not have helped a whole lot in this scenario, where a non-malicious app (Pidgin) loads a malicious plugin and becomes compromised as a result. If there is no security boundary within the app, a sandboxing framework like Flatpak can't magic one into existence.
---
[1] assuming a high-quality UI; GNOME Software has this, KDE Discover probably does too, and the flatpak(1) CLI has it but only in a developer-oriented format
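To make the address-space point concrete: a typical C plugin loader is nothing more than dlopen() plus a symbol lookup, after which the plugin's code runs with every privilege the host process has. A minimal sketch (plugin.so and plugin_init are hypothetical names; build with cc loader.c -ldl):

    /* loader.c: sketch of a conventional plugin loader. Once dlopen()
     * succeeds, the plugin lives in the host's address space and can
     * call, hook, or overwrite anything the host can. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("./plugin.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }
        int (*plugin_init)(void) = (int (*)(void))dlsym(handle, "plugin_init");
        if (!plugin_init) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }
        return plugin_init(); /* no sandbox: this is now "the app" */
    }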
Posted Aug 29, 2024 7:11 UTC (Thu)
by ejona86 (subscriber, #43349)
[Link]
... the user is trained to ignore the red security badge. Earlier this year I did a survey of 53 app permissions[2] and only 5 had a useful sandbox, assuming --share=network doesn't have the abstract unix socket escape. It appeared Arch may allow the trivial escape, which would mean only one app had a useful sandbox on Arch[3].
I know the portals are still improving. But it seems a serious failure to think Flatpak provides security above apt/dpkg in practice, or to think it or apps will soon improve enough to matter.
1. https://ejona.ersoft.org/archive/2024/03/02/flatpak-perms...
2. https://ejona.ersoft.org/archive/2024/03/03/flatpak-perm-...
3. Feel free to blame the apps or distros, but the problem remains
Posted Aug 27, 2024 11:27 UTC (Tue)
by pizza (subscriber, #46)
[Link] (10 responses)
"Taking screenshots and logging keys" is a fundamental capability of _any_ X11 application.
Posted Aug 27, 2024 12:04 UTC (Tue)
by atnot (subscriber, #124910)
[Link] (9 responses)
A plugin system's scope should be limited to the application by default, with well-defined, permissioned interfaces for interacting with the rest of the system. Loading an .so file off of the internet in a user-facing application in this day and age is just reckless. Especially when those users have been trained by every other piece of software to install things off of things that look like app stores without having to worry too much about it compromising their *entire system*.
Posted Aug 27, 2024 13:15 UTC (Tue)
by pizza (subscriber, #46)
[Link] (8 responses)
Either it has _full_ access to X, or it has _no_ access to X. There is no middle ground.
Meanwhile, as others have pointed out, this plugin was explicitly billed as a screen sharing tool. So even if it were technically possible to restrict access to the screen and/or input events (e.g. through use of Wayland instead of X), the user would have granted those permissions anyway, as that's the reason they installed the plugin in the first place.
Heck, even if this plugin's source code was provided and Pidgin compiled the binaries itself, it would have taken a careful manual audit of the source code to discover the plugin was sharing with an additional party.
Posted Aug 27, 2024 15:04 UTC (Tue)
by atnot (subscriber, #124910)
[Link]
Yes. And it doesn't need access to X. Just wrap the parts you want to expose to plugins through the plugin API. Just like web browsers do, or as xdg portals on Linux already do for things that aren't X.
> as others have pointed out, this plugin was explicitly billed as a screen sharing tool
Browsers are also screen sharing tools. But they have technical controls in place such that random websites in the background can't just start recording your screen. You have to authorize it every time, choose what you want to share and then you get a persistent screen element alerting you to the fact that your screen is being recorded.
In the rare case where you want it to do things outside of that sandbox, you have to install a separate program that communicates with the browser, making it very obvious that you're breaching a trust boundary. Compromise through malicious browser plugins has plummeted since the removal of Flash.
> even if this plugin's source code was provided and Pidgin compiled the binaries itself, it would have taken a careful manual audit of the source code to discover the plugin was sharing with an additional party.
Yes, which is why you need to set things up such that this is not straightforwardly possible to do. Pidgin had zero of the layers of Swiss cheese that could have prevented this, and the results are predictable.
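A minimal sketch of the "wrap the parts you want to expose" idea from the comment above (all names are hypothetical; this is not Pidgin's actual plugin API): the host hands the plugin a small table of mediated capabilities and keeps the consent UI, the display server, and the network on its own side of the boundary.

    /* narrow-api.c: hypothetical sketch of a mediated plugin API. */
    #include <stddef.h>
    #include <stdio.h>

    typedef struct host_api {
        /* host routes this to whatever contact the USER selected */
        int (*send_to_active_conversation)(const char *text);
        /* host shows its own consent UI before filling the buffer */
        int (*capture_frame)(unsigned char *buf, size_t len);
    } host_api;

    /* --- host-side stub implementations --- */
    static int send_stub(const char *text)
    {
        printf("host: sending to user-chosen contact: %s\n", text);
        return 0;
    }

    static int capture_stub(unsigned char *buf, size_t len)
    {
        (void)buf; (void)len;
        printf("host: asking the user for screen-capture consent...\n");
        return -1; /* user said no */
    }

    /* --- all a plugin ever gets to work with --- */
    static int plugin_init(const host_api *host)
    {
        unsigned char frame[64];
        if (host->capture_frame(frame, sizeof(frame)) == 0)
            return host->send_to_active_conversation("frame sent");
        return 0;
    }

    int main(void)
    {
        host_api api = { send_stub, capture_stub };
        return plugin_init(&api);
    }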
Posted Aug 27, 2024 15:39 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (4 responses)
There is a middle ground, of nested X11 servers. The plugin has full access to an X11 server run by Pidgin (such as Xvfb or Xephyr), and Pidgin takes responsibility for communication between the nested X11 server and the host server, copying data back and forth as needed to make the nested server work the way the user expects.
In this case, though, (and as you point out) X11 is a red herring; what was needed was a way to restrict the plugin's access to networking (including IM networks) so that it couldn't share data it had grabbed with permission with anyone other than the people the user expected it to be shared with.
And reproducible builds is a side-show there - it's no help to anyone if the plugin reproducibly sends keylogger and screenshot data over IM or over a separate IP socket, rather than only doing it on the officially blessed version. You need either auditing of the built version to be sure it's not doing something odd, or sandboxing that results in something odd being noticed. After all, that's basically how the xz vulnerability got noticed - Andres Freund was running sshd in a sandbox setup where the CPU time taken to log in over SSH was visible to him, and trying to untangle that oddity led to the backdoor.
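The nested-server arrangement described above can be sketched in a few lines (the display number, geometry, and plugin-host binary are made-up placeholders): the host spawns Xephyr and points the plugin process's DISPLAY at it, so the plugin's "full access to X" ends at the nested server.

    /* nested-x.c: hypothetical sketch of confining a plugin to a nested
     * X server. The plugin gets full access to :2, but :2 is Xephyr, and
     * the host decides what crosses between :2 and the real display. */
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t xephyr = fork();
        if (xephyr == 0)
            execlp("Xephyr", "Xephyr", ":2", "-screen", "800x600",
                   (char *)NULL);

        sleep(1); /* crude; real code would wait for the server socket */

        pid_t plugin = fork();
        if (plugin == 0) {
            setenv("DISPLAY", ":2", 1); /* the only server the plugin sees */
            execlp("./plugin-host", "plugin-host", (char *)NULL);
        }

        waitpid(plugin, NULL, 0);
        kill(xephyr, SIGTERM);
        return 0;
    }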
Posted Aug 27, 2024 16:39 UTC (Tue)
by pizza (subscriber, #46)
[Link] (3 responses)
Which still wouldn't have accomplished anything useful given the nominal "it can't share your screen without having access to your screen" purpose of this plugin.
I suppose this is an additional sign that F/OSS applications are finally becoming meaningful targets for ne'er-do-wells. Unfortunately, most F/OSS doesn't have the (non-pittance and/or sustainable) revenue streams to pay for the necessary (and perpetual) hardening/re-engineering efforts.
Posted Aug 27, 2024 17:29 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (2 responses)
It could (but practically probably wouldn't) have defanged the keylogging part, but not the screen-sharing part. Given that the plugin didn't need to access keystroke data, Pidgin could have simply not sent keystrokes into a nested X11 server. But malicious people aren't stupid, and I suspect they'd have come up with a plausible reason why they needed keystroke access (hotkey to trigger screen capture, for example), breaking the sandbox.
Posted Aug 28, 2024 7:51 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (1 responses)
Posted Aug 28, 2024 8:04 UTC (Wed)
by farnz (subscriber, #17727)
[Link]
Indicators help, but again are often ignored if the application has a good enough explanation for why it needs more permission and the indicators are on all the time. I already see users ignoring the "microphone in use" and "camera in use" indications on Android and iOS because something they've installed tells them to ignore those indications, and I suspect that the same would be true on Linux.
And time-limited permissions run into the basic problem with all requests to a user for permissions: the permission is one the application needs to do its job, and thus it's relatively easy to socially engineer users into allowing the permission for longer than needed, or to allow it indefinitely, citing nebulous "bugs in the framework" that require it to be indefinite in order to do the job you're using the app for.
Posted Sep 24, 2024 8:22 UTC (Tue)
by daenzer (subscriber, #7050)
[Link] (1 responses)
For the screen sharing part, sure.
However, I don't know of any (generally-available) mechanism which would allow a Wayland client to receive keyboard input made while none of its own Wayland surfaces have input focus. (There is a portal for global shortcuts, but that can't be abused for key-logging.)
Even for Wayland surfaces created by the Pidgin process, if the plugin didn't know their protocol IDs, it would be more difficult for the plugin to access them and their input via the Wayland protocol than it is via X (which makes it trivial really). Might not be impossible though.
Even via Xwayland, in a Wayland session this plugin could only log keystrokes meant for other X clients, not for native Wayland clients, let alone for compositor-internal dialogues.
I'll have to remember this example for next time someone like birdie claims "the Wayland restriction on keyboard input is pointless, there are no key-loggers taking advantage of X".
Posted Sep 24, 2024 8:30 UTC (Tue)
by daenzer (subscriber, #7050)
[Link]
Posted Aug 27, 2024 11:38 UTC (Tue)
by docontra (guest, #153758)
[Link]
A quick DuckDuckGo search yielded that the plugin in question was for screen-sharing; not much you can do with sandboxing to prevent it from taking screenshots and sending them somewhere else.
PS: From the byline of the plugin's site (as seen in DuckDuckGo; the link is dead and I didn't bother to search for it on the Wayback Machine): "Encryption - You can trust that only you & your buddy will be able to see shared screen packets, thanks to LibOTRv4-powered encryption". Yeah, about that...
Posted Aug 27, 2024 23:11 UTC (Tue)
by rcampos (subscriber, #59737)
[Link]
I know the code is not there, but if anyone has analyzed the binary or something, I'm very curious to hear about it.
Posted Aug 28, 2024 5:13 UTC (Wed)
by mb (subscriber, #50428)
[Link]
That does mean you need to have a setup where you can confirm every component the same way.
You get the reproducibility of everything almost for free.