
RFC 7258

The Internet Engineering Task Force has adopted RFC 7258, titled "Pervasive Monitoring Is an Attack." It commits the IETF to working against pervasive monitoring (PM) in the design of its protocols going forward. "In particular, architectural decisions, including which existing technology is reused, may significantly impact the vulnerability of a protocol to PM. Those developing IETF specifications therefore need to consider mitigating PM when making architectural decisions. Getting adequate, early review of architectural decisions including whether appropriate mitigation of PM can be made is important. Revisiting these architectural decisions late in the process is very costly."


RFC 7258

Posted May 13, 2014 18:07 UTC (Tue) by danielpf (guest, #4723) [Link] (2 responses)

Excellent work; making explicit what pervasive monitoring means is a necessary step toward fighting it better.
It is amazing how Edgard's actions have fallout that is just as pervasive, yet goes unnamed for well-understood reasons.


RFC 7258

Posted May 14, 2014 7:22 UTC (Wed) by rvfh (guest, #31018) [Link] (1 responses)

By Edgard I suppose you mean Edward Snowden :-)

RFC 7258

Posted May 14, 2014 10:18 UTC (Wed) by ballombe (subscriber, #9523) [Link]

Shhh do not blow his secret identity.

Some of us don't want to live in an armed camp

Posted May 13, 2014 19:56 UTC (Tue) by BrucePerens (guest, #2510) [Link] (19 responses)

I would prefer the pervasive monitoring to the steps that the IETF will lead us down in fighting it, which lead to a completely locked-down internet. The natural follow-on to end-to-end security is to secure the browser by requiring properly signed queries using keys granted only to well-known browsers and their properly-identified users. This makes it very easy to discriminate against Open Source browsers and operating systems.

Some of us don't want to live in an armed camp

Posted May 13, 2014 20:16 UTC (Tue) by luto (guest, #39314) [Link] (2 responses)

I don't understand why this would follow. To me, it seems natural to use something like DNSSEC as the root of trust here, possibly along with some mechanism to make sure that forged DNSSEC results don't start showing up.

Some of us don't want to live in an armed camp

Posted May 14, 2014 5:47 UTC (Wed) by Lennie (subscriber, #49641) [Link]

Yes, that is what the IETF has been working towards.

Adoption hasn't been great so far, though some of the deployment hurdles should improve over time.

Anyway, I don't see how improving privacy has anything to do with changing how certificates are handled.

These are separate things.

Some of us don't want to live in an armed camp

Posted May 14, 2014 14:04 UTC (Wed) by jch (guest, #51929) [Link]

Perhaps a better example would be tcpcrypt, an extension to TCP to transparently enable opportunistic encryption in unsecured application-layer protocols (such as HTTP):

http://tcpcrypt.org/

Yes, it originally came from academia, but work on it is being done at the IETF now.

--jch
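
tcpcrypt itself lives inside the TCP stack, so it cannot be shown in a few lines of portable code, but the opportunistic-encryption idea behind it can be illustrated at the application layer. The Python sketch below is only an analogy, not tcpcrypt's actual mechanism, and the host and port are placeholders: it attempts an unauthenticated TLS handshake and falls back to cleartext, i.e. confidentiality against passive capture without any authentication of the peer.

    # Illustrative only: an application-layer analogue of opportunistic encryption.
    # Unauthenticated TLS defeats passive capture, not an active man-in-the-middle.
    import socket
    import ssl

    def connect_opportunistic(host, port):
        raw = socket.create_connection((host, port), timeout=5)
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False       # deliberately skip authentication...
        ctx.verify_mode = ssl.CERT_NONE  # ...we only want confidentiality
        try:
            return ctx.wrap_socket(raw, server_hostname=host), True
        except ssl.SSLError:
            raw.close()
            # the peer does not speak TLS: fall back to a cleartext connection
            return socket.create_connection((host, port), timeout=5), False

    sock, encrypted = connect_opportunistic("example.net", 443)
    print("encrypted" if encrypted else "cleartext")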

Some of us don't want to live in an armed camp

Posted May 13, 2014 20:29 UTC (Tue) by wahern (subscriber, #37304) [Link]

Not necessarily. Most pervasive monitoring is passive, not active. That means you need encryption, not necessarily authentication.

Once we have pervasive encryption in place, if the attackers (governments, criminal organizations) up their game, then we can huddle and figure out what our next steps are.

If we can successfully move key publishing into DNS, then we can keep the status quo in terms of centralization of power (DNS is already centralized, unfortunately), but get the benefit of pervasive authentication. No regression necessary.

Ultimately we can never be truly safe from targeted attacks. But it's a false dilemma to say we must remain as vulnerable as we are now, or we have to give up other substantial freedoms. We can have our cake and eat it too, as long as we work on developing open, simple, easily deployed software and protocols.
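
The "key publishing into DNS" idea is roughly what DANE (RFC 6698) does with TLSA records. As a minimal sketch, assuming the dnspython package (the domain is a placeholder, and a real deployment would also need to validate the answer with DNSSEC), looking up a published key association looks like this:

    # Sketch: fetch the TLSA record that publishes a service's TLS key material
    # in DNS (DANE, RFC 6698). Requires dnspython: pip install dnspython
    import dns.resolver

    name = "_443._tcp.example.org"   # TLSA records live under _port._proto.host
    try:
        for rr in dns.resolver.resolve(name, "TLSA"):
            # certificate usage, selector, matching type, and association data
            print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print("no TLSA record published for", name)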

Some of us don't want to live in an armed camp

Posted May 13, 2014 20:29 UTC (Tue) by proski (subscriber, #104) [Link]

I think the IETF has something different in mind. For example, periodically changing IPv6 addresses.

Also, I guess the IETF won't approve unencrypted protocols anymore unless they operate within a subnet (ARP, DHCP) or deal primarily with public information (DNS, BitTorrent).
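
"Changing IPv6 addresses" presumably refers to RFC 4941 privacy extensions (temporary addresses). As a rough illustration, on Linux these can usually be enabled and preferred via sysctl settings along the following lines (the file location and per-interface defaults vary by distribution):

    # illustrative sysctl settings; 2 = generate RFC 4941 temporary addresses
    # and prefer them for outgoing connections
    net.ipv6.conf.all.use_tempaddr = 2
    net.ipv6.conf.default.use_tempaddr = 2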

Some of us don't want to live in an armed camp

Posted May 13, 2014 22:37 UTC (Tue) by Lennie (subscriber, #49641) [Link]

They committed themselves to this at the technical plenary in November of last year:

http://www.youtube.com/watch?v=oV71hhEpQ20#t=23m23s

Here is the vote (following IETF tradition, done by humming):

http://www.youtube.com/watch?v=oV71hhEpQ20#t=148m15s

There was an IETF meeting earlier this year, and the first draft documents used to start discussions were also created. Privacy was a topic in the discussions of newer protocols like HTTP/2 and WebRTC.

Some of us don't want to live in an armed camp

Posted May 14, 2014 0:36 UTC (Wed) by dkg (subscriber, #55359) [Link] (9 responses)

> I would prefer the pervasive monitoring to the steps that the IETF will lead us down in fighting it

Really? The steps we need to take to prevent pervasive monitoring will lead to a freer society than we would otherwise have, and there is no requirement that all communications be signed by some sort of proprietary keying scheme, or even that any particular client identify itself. There is strong support for maintaining endpoint privacy within the IETF. Please read RFC 6973, "Privacy Considerations for Internet Protocols": it's not perfect, but the issues are being taken seriously.

Some of us are working to defend against both pervasive monitoring and any sort of fully-authenticated-only internet. Please join us. The choice is not between living in an armed camp and living in a police state. We need technical mechanisms and social and political pressure to keep the net open and free for everyone, without giving the folks who control the network the ability to conduct experiments in social control on a global basis.

Accepting pervasive monitoring as a fact of life is not the right answer for a better world.

Some of us don't want to live in an armed camp

Posted May 14, 2014 0:45 UTC (Wed) by josh (subscriber, #17465) [Link] (8 responses)

> Some of us are working to defend against both pervasive monitoring and any sort of fully-authenticated-only internet.

Can you clarify what you mean by this? "fully authenticated only" would be a feature, as long as the authentication is not tied to any real-world identity.

Some of us don't want to live in an armed camp

Posted May 14, 2014 10:28 UTC (Wed) by farnz (subscriber, #17727) [Link] (4 responses)

The problem with "fully authenticated only" is that you can't allow people to create IDs arbitrarily; if you do that, I can create and use a different identity for every unit of work I do, and I become effectively anonymous.

Once you've limited IDs in some way, it's impossible to prevent a sufficiently motivated attacker from tying your set of IDs to your real world identity. This only needs to happen once, and then there is no way for you to untie yourself.
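
To make the "new identity per unit of work" point concrete: minting a fresh, perfectly valid cryptographic identity costs essentially nothing. A minimal sketch using the Python 'cryptography' package (the sign/verify round trip stands in for one authenticated transaction; it is not any particular protocol):

    # Sketch: a brand-new signing identity per transaction is essentially free,
    # so "authenticated" does not imply "linkable" unless ID creation is limited.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def do_transaction(payload: bytes) -> None:
        key = Ed25519PrivateKey.generate()         # fresh, unlinkable identity
        signature = key.sign(payload)              # fully authenticated unit of work
        key.public_key().verify(signature, payload)

    for i in range(3):                             # three transactions, three identities
        do_transaction(f"transaction {i}".encode())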

Some of us don't want to live in an armed camp

Posted May 14, 2014 11:04 UTC (Wed) by renox (guest, #23785) [Link] (1 responses)

> I can create and use a different identity for every unit of work I do, and I become effectively anonymous.

1) most people wouldn't bother to do so.

2) Even with several IDs, you're only effectively anonymous if you've been very careful to ensure that every persistent identifier that could link the IDs together (cookies, IP address, and so on) is either changed each time you switch IDs or never sent over the network.
Otherwise the use of different IDs only creates a false sense of anonymity (of course, that depends on who is monitoring).

Some of us don't want to live in an armed camp

Posted May 14, 2014 11:23 UTC (Wed) by farnz (subscriber, #17727) [Link]

If the goal is "fully authenticated", that means no one can mount the attack on authentication I suggested; what you describe reduces the proposal from "fully authenticated" to "mostly authenticated, except where the attacker is a meticulous and careful worker".

Now, this may be good enough - it's certainly good enough for Facebook and Google, to take two examples - but it's not "fully authenticated". You can be anonymous and use Facebook, with careful use of Tor and lots and lots of accounts; it's just too challenging for most people to bother with.

Some of us don't want to live in an armed camp

Posted May 14, 2014 15:57 UTC (Wed) by josh (subscriber, #17465) [Link] (1 responses)

> The problem with "fully authenticated only" is that you can't allow people to create IDs arbitrarily; if you do that, I can create and use a different identity for every unit of work I do, and I become effectively anonymous.

That was exactly my point. Why can't you allow that? Depending on the mechanism to mint new IDs, and the value of the authentication, that may still provide an advantage over unauthenticated connections, while not mandating a real-world identity for everything.

Some of us don't want to live in an armed camp

Posted May 14, 2014 16:40 UTC (Wed) by farnz (subscriber, #17727) [Link]

If you allow that, anyone who cares enough can create unlimited authenticated IDs for themselves and use one for each transaction. This means you're not really authenticating malicious users, only users who are either incompetent or friendly; competent evil becomes effectively anonymous by using a new identity for every transaction. Note that you can completely automate this if you so desire (Tor does this for tracking by IP address, for example).

If you limit the number of IDs a single real-world entity can use in some way, you lose anonymity - I can, given enough motivation and money, tie together all your permitted identities, and link them to a single real-world entity. At that point, all your pseudonyms trace back to you.

Put slightly differently; the value in an identity is the ability to use the historic behaviour of that identity as a predictor for the future behaviour of that identity. If you simply use identities to give advantages to well-behaved individuals, then you make monitoring easier - I have to continue to use my "good" identity, otherwise you treat me as one of the οἱ πολλοί instead of a good citizen. If you also use identities to help reduce the impact of bad actors, you have to do something to stop me from creating a new identity per bad action I wish to take; otherwise, I can have my "good" identity for when I'm being a "decent citizen", and a collection of "bad" identities for when I'm being a troublemaker.

Some of us don't want to live in an armed camp

Posted May 14, 2014 13:03 UTC (Wed) by dkg (subscriber, #55359) [Link] (2 responses)

> Can you clarify what you mean by this? "fully authenticated only" would be a feature, as long as the authentication is not tied to any real-world identity.

You're advocating for pseudonymity rather than anonymity. There are cases where robust pseudonymity can be useful, but there are also many situations (including quite common ones) where one party to the communication actively wants to remain entirely anonymous to the other.

Consider the example of your web browser connecting to an advertising network's server to pull ads (if you're elite enough to have disabled advertising in your web browser, consider someone you care about who still browses an ad-infested web). Let's assume a modern web where all connections use HTTPS. Your browser wants to ensure that the remote party is properly authenticated (after all, it is injecting content into the web page you're viewing, and you want to be sure that no one unauthorized gets a chance to do that). But now consider the other direction of authentication. If the connections to the advertising server are pseudonymous but linkable, then the advertising network can quickly build up a profile of who the user is, even if it is not explicitly tied to the user's name or legal identity.

For example, if advertisements from this server are regularly tied to a specific local newspaper, and the custom ads it is prompted to serve the user are frequently about certain topics, the advertising network can get a sense of who the user is: age, gender, location, interests, etc. If the user were not pseudonymously authenticated to the advertising network's server, it would be much harder for the server to build a robust profile of the user.

I'd prefer to live in a world where powerful computing machinery wasn't trying to track me all the time, so in some sense I can appreciate the sentiment of Bruce's original comment. But that ship has clearly sailed; the surveillance machinery is in place (and growing), and there's no going back to the friendly 'net of the mid-'90s, where it was mostly just individually-administered servers whose job was simply to provide information to anyone who came asking. So we need to protect both data confidentiality and anonymity in our protocols. Leaving traffic in cleartext, without integrity protection, on the public network leaves everyone vulnerable to powerful mechanisms of social control. Let's fix that rather than stick our heads in the sand.
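
The profiling described above needs nothing more than a linkable pseudonym. A toy sketch (the pseudonym, sites, and topics are made up) of how signals accumulate against an identifier that is never tied to a name:

    # Toy sketch: with a linkable pseudonym, every ad request adds to a profile,
    # even though no real-world identity is ever transmitted.
    from collections import defaultdict

    profiles = defaultdict(list)   # pseudonym -> observed signals

    def log_ad_request(pseudonym, referrer, topic):
        profiles[pseudonym].append((referrer, topic))

    log_ad_request("client-7f3a", "smalltown-gazette.example", "gardening")
    log_ad_request("client-7f3a", "smalltown-gazette.example", "retirement planning")
    print(profiles["client-7f3a"])   # location and age hints accumulate, no name needed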

Some of us don't want to live in an armed camp

Posted May 15, 2014 10:28 UTC (Thu) by etienne (guest, #25256) [Link] (1 responses)

> the advertising network can quickly build up a profile of who the user is

That is why you do not want to disable advertising in the web browser: as long as they send me ads about "courses to become a plumber", I feel anonymous.
Sometimes it also makes me think I am in the wrong job: a plumber gets paid whatever the task and has the protection of the law, whether it is changing a gasket to better fit one pipe to another, tightening a bolt that has become loose with use, or opening a tap to bleed air from the central heating system... Try getting paid to do the equivalent work on GPL software...

Some of us don't want to live in an armed camp

Posted May 15, 2014 14:40 UTC (Thu) by ortalo (guest, #4654) [Link]

Yep. So do I. But a decently secure "Internet of People" would allow us to design something much, much better than coins or bitcoins for ensuring such pay.
Honestly, if it weren't for these additional mouths the wife insisted on creating, I would already have been working for a while on removing money altogether from this world. Sincere and accurate comments like yours have been upsetting me for too long (not to speak of the shared feeling about plumbing).
Let's control all these leaks and take back control of the flow... (especially before getting too far off topic).

Some of us don't want to live in an armed camp

Posted May 14, 2014 0:44 UTC (Wed) by josh (subscriber, #17465) [Link]

That sounds more like the W3C and its "standard" interfaces to talk to non-standard DRM modules and remote attestation mechanisms.

From the IETF, I could imagine seeing a standard for pervasive end-to-end cryptography, but not a standard for locked-down browsers.

Some of us don't want to live in an armed camp

Posted May 14, 2014 9:01 UTC (Wed) by gdt (subscriber, #6284) [Link] (1 responses)

See the full video at around 1h04m for the proposed work program. I didn't see any reference to "keys granted only to well-known browsers and their properly-identified users"; rather, the emphasis was on opportunistic encryption, owing to the often-observed complexity of certificate issuance and federated authentication.

Some of us don't want to live in an armed camp

Posted May 15, 2014 14:59 UTC (Thu) by ortalo (guest, #4654) [Link]

I have yet to see the whole video, but I don't see why the two scenarios should be opposed. Anonymity or partial anonymity (pseudonymity?) is certainly desirable in some situations (transportation, random chatting, browsing, some spending), but pretty good identification is desirable in others (some other spending, working, etc.), and authentication that is as strong as possible in still others (law enforcement, voting, birth registration, etc.).
Both requirements exist and, in my opinion, both should be targeted in order to fully secure the Internet.
By the way, imagine the flexibility and convenience of such security systems: in some cases you would not need any authentication at all, while in others you would be guaranteed strong authentication properties...
That could be extremely interesting to use, if only the properties were not set up upside down. Up to now, what we get is strong authentication for simple text chats between students (if they are taught to use PGP), flimsy username/password authentication for public services (in most "advanced" countries) or even for voting (if you live in an unfortunate country), and hash-based tracking for advertising; we should certainly expect efforts in the opposite direction.

And, by the way, if money cannot buy us security, let's forget about... money.

RFC 7258

Posted May 13, 2014 22:21 UTC (Tue) by raven667 (subscriber, #5198) [Link] (13 responses)

It's kind of sad we have to go down this road; all this security engineering against pervasive monitoring is an added cost of doing business, and a double cost, actually, because we are also paying _for_ the pervasive monitoring that we are then paying to engineer around. We should also work on taking control of the organizations that are doing the monitoring and shutting it down, so that the environment is less hostile in the first place. The pervasive monitoring is more a people problem than a technology one.

Of course there will always be some small amount of monitoring going on, lawful or not, but the highly-resourced, pervasive monitoring should be stopped in preference to just working around it.

RFC 7258

Posted May 13, 2014 23:20 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link] (8 responses)

As long as there are organizations that can benefit from pervasive monitoring, there will be an incentive to create it. We're most worried about governments doing it now, but there is plenty of incentive for businesses and criminal enterprises to get in on it. I'd rather engineer around it now than discover in a few years that my ISP has been monitoring everything I do and selling my personal information to the highest bidder.

RFC 7258

Posted May 14, 2014 3:34 UTC (Wed) by raven667 (subscriber, #5198) [Link] (7 responses)

A lot of organizations have an incentive to create bad things they benefit from; that's why we have laws and audit standards: to detect this some percentage of the time and drive the risk up until it is greater than the reward. I'd rather knock this stuff down using regulation, laws, and audits, so that we don't each have to, individually and together, spend all our time doing security dances rather than whatever productive labor we actually want to do. It's all loss prevention and not value creation.

RFC 7258

Posted May 14, 2014 8:41 UTC (Wed) by nim-nim (subscriber, #34454) [Link]

Since Snowden pretty much proved the NSA was not bothering with network interception when it could just ask Google or Facebook, this RFC will change nothing on the data-collection side (it will change a lot about the leeway *you* have to scrub what Google or Facebook wants you to see).

RFC 7258

Posted May 14, 2014 9:17 UTC (Wed) by Seegras (guest, #20463) [Link] (1 responses)

As long as certain governments ignore their own constitution, which might say something like this:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

which explicitly forbids wholesale surveillance of all _people_ (note that it does not say "citizens"; it really means all people), we can only conclude that such laws and regulations are useless, and that those criminal organizations and governments will go on with their pervasive monitoring.

RFC 7258

Posted May 14, 2014 14:04 UTC (Wed) by raven667 (subscriber, #5198) [Link]

The US Constitution is just a piece of paper without people who believe in what it describes and enforce that standard of behavior. No law is worth anything without a credible threat of enforcement.

RFC 7258

Posted May 14, 2014 13:45 UTC (Wed) by rgmoore (✭ supporter ✭, #75) [Link] (1 responses)

It is illegal to break into people's houses and steal their stuff, and we have police departments to enforce those laws. Wise people still invest in locks and security systems. It's one of the things we do to drive up the cost of theft.

RFC 7258

Posted May 14, 2014 14:12 UTC (Wed) by raven667 (subscriber, #5198) [Link]

Sure, but beyond some fairly simple door locks you quickly run into the land of diminishing returns in most locations. If your neighborhood is particularly bad you might invest in window bars, but you are not going to clad your house in steel, for example. Those are also just static defenses, a capital expense; computer security tends to add operational expense to every operation and reduces the utility of the machine in a way that door locks and window bars do not reduce the utility of a house (well, maybe it would be more analogous if you put deadbolts and biometrics on every interior door, which would be hella annoying).

RFC 7258

Posted May 16, 2014 11:54 UTC (Fri) by ortalo (guest, #4654) [Link] (1 responses)

In my opinion this view is a little misleading; real security is not about dancing at all.
Many laws, audits, and standards already exist. But laws, audits, and standards cannot do everything. It seems to me the current state of affairs demonstrates it pretty blatantly.
Some things need to be made impossible, not only forbidden. Revolutions occur specifically in order to bring the system into such a satisfying state, generally by eliminating those who manipulate the rules to change the definition of "forbidden" to match their interests. However, even with such momentum (which generally does not last long), in the digital world we do not even state clearly which mechanisms are really expected to provide satisfying security properties for the society we desire (here).
There is at least some enlightening hope in the IETF's reaction: we knew, and they now state, that publicly available cryptographic mechanisms are part of the solution, and we also now know that the IETF is an acceptable body to work on the technical part of the problem.

(BTW, personally, I was extremely happy to rediscover that the IETF can be such a trustworthy organization. A huge thank you to all those who belong to it, past and present!)

RFC 7258

Posted May 16, 2014 18:20 UTC (Fri) by raven667 (subscriber, #5198) [Link]

> real security
> Some things need to be made impossible, not only forbidden.

Real security is a platonic ideal, like a frictionless surface, which doesn't exist in the real world.

> Many laws, audits, and standards already exist. But laws, audits, and standards cannot do everything. It seems to me the current state of affairs demonstrates it pretty blatantly.

There are many laws and audit steps that could exist but do not, such as data-retention rules forbidding service providers from keeping profiling information, and there are many laws that do exist but for which there is no credible threat of enforcement, like the Fourth Amendment in the US.

RFC 7258

Posted May 14, 2014 0:48 UTC (Wed) by josh (subscriber, #17465) [Link]

Pervasive monitoring should be stopped, but regardless, we also need to take the technical measures to block it and all other security holes. Among other things, traffic that isn't encrypted end-to-end is a bug.

Even if we thought we'd stopped pervasive monitoring by policy, we should still have protocols that prevent it.

RFC 7258

Posted May 14, 2014 6:42 UTC (Wed) by Lennie (subscriber, #49641) [Link] (2 responses)

The problem is that the organizations doing the monitoring might not be under your control.

For example, you might be in the West while the monitoring is done by the Chinese.

RFC 7258

Posted May 14, 2014 12:27 UTC (Wed) by niner (subscriber, #26151) [Link]

Ironically, it is much more likely:
* to be Chinese
* to be monitored by the West
* that all of the above is true

RFC 7258

Posted May 14, 2014 13:58 UTC (Wed) by raven667 (subscriber, #5198) [Link]

Sure, but you are responsible for the agencies run by your government. If you set the standard in your local area that pervasive monitoring is not tolerated, and make a serious effort to enforce that standard, you have a better chance of throwing attackers out of your systems, and you can make a more credible case when asking them to stop attacking you.

RFC 7258

Posted May 13, 2014 23:32 UTC (Tue) by xtifr (guest, #143) [Link]

So does this mean that I can now complain that Facebook (to pick a random example) is not RFC7258-compliant? :)

RFC 7258

Posted May 14, 2014 2:30 UTC (Wed) by pabs (subscriber, #43278) [Link]

RFC 7258

Posted May 14, 2014 8:37 UTC (Wed) by nim-nim (subscriber, #34454) [Link] (2 responses)

The problem is in the details. What you and I understand by "no monitoring" is no data collection. What the Internet giants that pay most of the IETF working groups' permanent participants mean is:
1. pervasive monitoring on the Google, Facebook, etc. side
2. generalised encryption, so they only share this data with their own trusted/paying parties (they don't collect data to let it gather dust, you know)
3. every possible form of lockdown, so other parties get no chance to modify network flows (like scrubbing ads, removing malware, or blocking spam via proxy systems or extensions). In their ideal world the central site controls the client 100% and the user is a passive, helpless couch potato (that is what Chromebooks are about). After all, it worked for DVDs: "you are not allowed to skip ads, that is an attack on my revenue"…

Read the RFC. In effect, it plainly says "I don't care how I screw users as long as I can preserve protocol sanctity" several times.

I hope it gets a stake through the heart and people move to constructive approaches. This text puts way too much power on the sender/provider side.

RFC 7258

Posted May 14, 2014 16:07 UTC (Wed) by dkg (subscriber, #55359) [Link] (1 responses)

The provider side already has too much power. You're absolutely right that people need to think much more carefully than they currently do about who they entrust their data to. And we need better end-user tools that help people to be aware of these relationships and risks and allow them to manage them more cleanly.

But that doesn't mean that fixing the protocols is unimportant. If we don't fix the protocols, then anyone sitting on the major network backbones and inspecting/tampering with traffic can perform surveillance and censorship in addition to the providers.

RFC 7258

Posted May 14, 2014 18:26 UTC (Wed) by nim-nim (subscriber, #34454) [Link]

But they don't want to fix protocols; they want to lock them down. A protocol that permits you to install an ad blocker, for example, is an insecure protocol in their view. To scrub ads you need access to the traffic, and no matter how you indicate your consent, "you may have been tricked", so it must be prevented at all costs. With this logic, any processing component not under the provider's control may be compromised (and, you know, users are sheep/idiots who can't be trusted to make any decision), so in the end you give providers all the power, with the best of intentions.

They live in an uncompromising, Manichean world. And they don't want to look closely at the provider side, because it is their living. This kind of blind idealism has never produced anything but pain.

None of the companies that Snowden named are going to lose sleep over this RFC. Why should they? Instead of protecting users from them, it anoints them as all-powerful arbiters.

RFC 7258

Posted May 14, 2014 9:58 UTC (Wed) by jb.1234abcd (guest, #95827) [Link]

Protocols, tracking IP addresses, encryption, authentication, and other technical problems, and the means to overcome them, are just that: technicalities.

The real problem is an economic and political model that is based on pervasive monitoring.

"Stalker Economy" Here to Stay
https://www.schneier.com/essay-467.html
"... Surveillance is the business model of the Internet ...".

IETF - Internet Engineering Task Force.
"The mission of the IETF is to make the Internet work better by producing high quality, relevant technical documents that influence the way people design, use, and manage the Internet."

Well, is the IETF between a rock and a hard place?

