Some donation data prompts are nasty
Posted Jul 25, 2024 21:54 UTC (Thu)
by pizza (subscriber, #46)
In reply to: Some donation data prompts are nasty by decaffeinated
Parent article: Lessons from the death and rebirth of Thunderbird
Yeah, how dare Thunderbird require the same information as any other online financial transaction?
"Payment Processing: When you purchase something via a Mozilla website, contribute funds or make donations, you will send payment through one of our third-party payment providers: Stripe, Apple Pay, PayPal, Venmo or Google Pay. Mozilla receives a record of your account (including your billing address and the last four digits of your payment method) and (where relevant) the status of your account’s subscription; we may also receive your name, mailing address, and/or email address. This data is used for payment processing, fraud detection and record-keeping purposes. "
Posted Jul 25, 2024 22:13 UTC (Thu)
by decaffeinated (subscriber, #4787)
[Link] (13 responses)
If some entity wants my home address for a donation, they're going to get garbage.
Posted Jul 25, 2024 22:43 UTC (Thu)
by pizza (subscriber, #46)
[Link] (12 responses)
They don't care about the *home* address; the *billing* address of the account is all that matters.
Your credit card merchant will give you better [1] rates with verified addresses because it greatly cuts down on fraud.
For charities this doesn't matter as much (they usually operate under an "any money is better than no money" attitude, and most have very high fundraising overhead anyway), but when you're trying to run an actual business, the rate difference can mean the difference between profit and loss.
Meanwhile, US-based entities are legally required to not do business with BadPersons and BadPlaces, and have to be able to prove they undertook at least a minimal amount of diligence should the government come knocking.
Posted Jul 26, 2024 11:41 UTC (Fri)
by kleptog (subscriber, #1183)
[Link] (11 responses)
I see this mostly on US sites. Everywhere else, when I use my credit card it just asks for 2FA (usually an app on your phone, or SMS), which also cuts down on fraud and doesn't require telling the payment processor where I live.
Posted Jul 26, 2024 12:26 UTC (Fri)
by pizza (subscriber, #46)
[Link] (10 responses)
News flash: Organizations have to follow local laws and regulations, and those tend to be quite strict (and voluminous) when money is involved.
> when I use my credit card it just asks for a 2FA (which is usually an app on your phone, or SMS)
That partially [1] addresses the fraud aspects; it doesn't address the "doing business with a sanctioned entity and/or country" aspects.
[1] SMS is nearly worthless as a 2FA mechanism. Putting aside the fundamental flaws in its signalling protocol, if someone steals your phone (or force-ports your phone number), you're pretty much screwed, as countless cryptobros have discovered. App-based 2FA fares little better.
Posted Jul 30, 2024 6:45 UTC (Tue)
by LtWorf (subscriber, #124958)
[Link] (9 responses)
How?
If someone steals your phone, you've got the same problem.
It's actually even worse, because you can report a SIM card as stolen and deactivate it, while you can't do that with a seed and a clock.
Posted Jul 30, 2024 15:17 UTC (Tue)
by DanilaBerezin (guest, #168271)
[Link] (8 responses)
Posted Jul 30, 2024 15:22 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
You don't even need to do a SIM swap; if you have sufficient access to the SS7 signalling network, you arrange for all SMS to a given number to route via your systems. And as they're unencrypted, you get to inspect the contents before forwarding them to the original recipient.
This particular hole is going to go away eventually - once there are no more 2G or 3G networks anywhere in the world, nobody will consult SS7 systems as part of SMS handling - but not in the next decade or so. LTE and later standards can avoid this particular hole, because they can do everything via IMS (over IP), which has been secured a lot better than SS7 was (SS7's "security" is "only trusted telcos have access - and no-one working for a telco would ever do a bad thing").
Posted Jul 30, 2024 15:47 UTC (Tue)
by paulj (subscriber, #341)
[Link] (6 responses)
The time interval is almost always easily obtainable; in the rare cases where it isn't, there are only a few common values to try. The current time is known.
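For illustration, here is a minimal sketch of how little is left to guess once the seed has leaked -- just the current time and one of the handful of common step values. (This assumes the third-party pyotp package; the seed below is a made-up example, not a real credential.)

    # Sketch only: assumes the third-party "pyotp" package (pip install pyotp).
    import pyotp

    leaked_seed = "JBSWY3DPEHPK3PXP"  # hypothetical captured base32 seed

    # The only remaining unknowns are the time step (almost always 30s,
    # occasionally 60s) and the current time, which the attacker's own
    # clock supplies.
    for step in (30, 60):
        print(step, "second step ->", pyotp.TOTP(leaked_seed, interval=step).now())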
Posted Jul 30, 2024 16:10 UTC (Tue)
by pizza (subscriber, #46)
[Link] (5 responses)
If someone steals or otherwise gains access to your phone when it's not locked (and sometimes even if it is locked if you have sufficient resources to expend on unlocking it quickly) they typically will get full access to your 2FA/TOTP client _and_ the communication channels that are typically used to reset credentials.
(Most "2FA clients" don't have any access control, such as an additional PIN. Worse, some are effectively "new account sign-in request, tap here to grant access" tissue paper)
In other words, when the 2FA device is also the communication device, you've effectively reduced your 2FA to 1 (if not 0) FA for many attack scenarios.
(Granted, this is more of a problem for the device owner should it get damaged, lost, or stolen -- how do they regain legitimate access? And how can that necessary backchannel not become the weak attack vector?)
Posted Jul 30, 2024 16:21 UTC (Tue)
by paulj (subscriber, #341)
[Link] (1 responses)
However: FreeOTP+ lets you turn on "authentication", which means you must pass the system authentication (e.g., the system PIN unlock, or whatever you have configured) to open the app. If you are diligent about swiping away/closing FreeOTP+ once you're done with it, this can give an additional layer of protection in the general phone-stolen-while-unlocked case.
I assume anyone with TOTP codes protecting anything important is using an app with such security, and has it enabled.
Posted Jul 30, 2024 17:54 UTC (Tue)
by mb (subscriber, #50428)
[Link]
> and has it enabled.

I don't use any of the "normal" apps.
TOTP is trivial to implement in a few dozen lines of Python code:
https://github.com/mbuesch/pwman/blob/master/libpwman/otp.py
You can quickly write an authenticator with any additional access control and security guarantees that you want. (or just use mine ;-)
And an attacker probably won't even know it's there, if you wrote it yourself.
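To illustrate the "few dozen lines" point, here is a minimal, standard-library-only sketch of RFC 6238 TOTP. It is not the otp.py linked above, just an example of the idea, and the seed in the demo is a made-up placeholder:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(seed_b32, digits=6, period=30):
        """Compute the current RFC 6238 TOTP code for a base32-encoded seed."""
        seed_b32 = seed_b32.strip().replace(" ", "").upper()
        key = base64.b32decode(seed_b32 + "=" * (-len(seed_b32) % 8))
        # The moving factor is the number of whole periods since the Unix epoch.
        counter = struct.pack(">Q", int(time.time()) // period)
        # HOTP (RFC 4226): HMAC-SHA1 the counter, then dynamically truncate.
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    if __name__ == "__main__":
        # Placeholder seed for demonstration only.
        print(totp("JBSWY3DPEHPK3PXP"))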
Posted Jul 31, 2024 13:12 UTC (Wed)
by kleptog (subscriber, #1183)
[Link] (2 responses)
The threat model is that someone has downloaded or somehow otherwise captured a whole lot of usernames/passwords and is trying them on all sorts of websites. For that, 2FA works perfectly, because they don't have your phone. They don't even know who the users are, so they couldn't find the phone even if they wanted to.
Against targeted attacks 2FA is obviously less useful, though still a step up from the example that started this conversation, which is asking people to include their zipcode when using a credit card.
You don't need to outrun the leopard, you just need to be faster than the next person.
(The bank's 2FA does require a biometric or separate pin code to unlock.)
Posted Jul 31, 2024 13:31 UTC (Wed)
by somlo (subscriber, #92421)
[Link] (1 responses)
Keeping the specific threat model in mind is important, and unfortunately underrated. When we lose track of that, we end up looking for *perfect* security that's somehow also palatable to the average normie user, which so far hasn't happened.
It's important to distinguish between the zombie that's chasing after *you* specifically, in which case you need to prepare by focusing on Rule #1 (Cardio) -- vs. a bear that's just chasing after *lunch*, in which case outrunning the poor sod next to you is perfectly adequate. :)
I find this very insightful on the topic: https://scholar.harvard.edu/files/mickens/files/thisworld...
Posted Jul 31, 2024 15:14 UTC (Wed)
by farnz (subscriber, #17727)
[Link]
It's also worth being realistic about the outcome of defending against a specific threat; I can promise you now that if a sufficiently capable bad actor has taken me and my family hostage, and is going to kill us all unless we give them everything they need to get into my accounts, they're getting what they ask for, because the consequence of not giving them everything is bad enough that I don't want to risk it.
There is, of course, a relevant XKCD comic about this, with the bad guys not giving up because the computer security is too good, but instead assaulting the computer owner to get access, and we forget that observation at our peril.