
The perils of federated protocols

Posted May 19, 2016 3:29 UTC (Thu) by josh (subscriber, #17465)
Parent article: The perils of federated protocols

A better example of a successful federated protocol would be the modern web. Usable by any browser and any web server, pages regularly make connections to third-party servers, and despite all that, web technologies move forward rapidly. And rather than holding each other back, multiple competing browsers spur each other on, and many web authors rapidly adopt new technologies as they become available.

Perhaps the vendor of a successful centralized technology just can't imagine anyone else doing what they do and keeping up with them, or doesn't want anyone else to start imagining that. If you look at other developers as those who help drive you forward and vice versa, rather than looking down on them as those who hold you back, then openness, collaboration, and federation start making a lot more sense.



The perils of federated protocols

Posted May 19, 2016 5:34 UTC (Thu) by wahern (subscriber, #37304) [Link] (17 responses)

That's a really good point. However stunted and slow the evolution of HTTP has been, the browser environments have evolved quickly. And though complex websites might not render correctly, or at all, on the vast majority of browser implementations (think Lynx, NetSurf, etc.), all that matters are the browsers from the big three or four: Microsoft, Mozilla, Google, and Apple. And for the most part, at any one point in time it has been one of those vendors leading the way forward.

A new federated framework could pick up steam as long as there were one or a few primary implementations that the vast majority of people used. Those implementations wouldn't become bogged down in interoperability as long as there was some kind of tacit understanding among users that interoperability problems were the fault of marginal implementations dragging their feet, as is the case with browsers. An end-user might not have any understanding of the technology, but the vast majority still understand the concept of switching client software. However slow, that's a significantly faster process in practice than administrators upgrading backend software, despite the fact that there are far fewer backend systems.

I think the underlying mechanism here relates back to the end-to-end principle: put as much of the logic as possible at the end nodes, keeping the transport layer as simple as possible. That was one of the flaws of XMPP, IMO. Too much logic lives in the server-side code[1]. But server-side software is much less responsive to user complaints. Corporate and ISP IT departments aren't especially well-known for quickly upgrading servers to support fancy new user features, whereas users will naturally migrate to the client software providing the better experience.

OTOH, designing protocols and architecting software which minimize dependencies on intermediate nodes is very difficult. It's just too easy to put logic in the middle, especially when you're on a time crunch. And if you're furthering proprietary interests, well then it's a no-brainer.

[1] For example, I never understood why anybody ever thought it was a good idea to use out-of-band channels for XMPP file transfer and voice, or to use in-band channels which relied on server support. I understand performance concerns, but it was destined for failure rather than failure only being a possibility. Those decisions created dependencies that required a substantial number of server systems upgrading, and upgrading responsively in step with user preferences. The only way intermediate nodes get upgraded like that is when they're centrally controlled.

When XEPs emerged which were more server-agnostic, there was no predominant client-side software to carry the day. Google's Jingle failed, I would argue, because support was never added to libpurple. If Google was serious about it, they would have added it to libpurple, or forked it and taken the lead in pushing features to XMPP clients. Though, my knowledge of the history here is limited, so I'm probably missing critical details. My analysis may be factually wrong, but I think my point is valid.

The perils of federated protocols

Posted May 19, 2016 6:16 UTC (Thu) by smcv (subscriber, #53363) [Link]

> I never understood why anybody ever thought it was a good idea to use out-of-band channels for XMPP file transfer and voice

The reason Jingle does that is precisely to route around the servers: any server that doesn't break the most basic level of extensibility (passes unknown client-to-client messages through unaltered) does not need changes to support Jingle. Google designed it like this for the reasons you describe: whenever XEPs required special server-side support, most servers didn't implement that in practice, leaving those XEPs unavailable even in clients that theoretically supported them.

Unfortunately, Facebook's and MSN's "XMPP" bridges didn't have even that level of extensibility: they dropped messages they didn't understand, even messages they didn't need to understand by design (because they were bridging into an internal protocol that had no corresponding concept). As a result, "works on any server" became "works on any server except Facebook's and MSN's".
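A minimal sketch of that forwarding rule, in illustrative Python (the dict-shaped "stanza" and the deliver callback are hypothetical stand-ins, not real XMPP server code):

    KNOWN_TYPES = {"chat", "presence"}

    def forward(stanza, deliver):
        # A well-behaved server: route client-to-client stanzas unaltered,
        # even ones whose type it has never seen (e.g. a Jingle
        # session-initiate), so new client features need no server upgrade.
        deliver(stanza["to"], stanza)

    def bridge_forward(stanza, deliver):
        # A Facebook/MSN-style bridge: translating into an internal protocol,
        # it silently drops anything it does not recognize, so client-to-client
        # extensions break even though, by design, they needed no server support.
        if stanza["type"] in KNOWN_TYPES:
            deliver(stanza["to"], stanza)

    inbox = {}
    deliver = lambda to, s: inbox.setdefault(to, []).append(s)
    jingle = {"to": "bob", "type": "jingle-session-initiate"}
    forward(jingle, deliver)         # bob receives the unknown stanza
    bridge_forward(jingle, deliver)  # silently dropped on the floor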

The perils of federated protocols

Posted May 19, 2016 11:58 UTC (Thu) by khim (subscriber, #9252) [Link] (15 responses)

> However stunted and slow the evolution of HTTP has been, the browser environments have evolved quickly.

If you compare it to the non-existent development of SMTP or the decades-long process of switching to IPv6, then yes, sure. If you compare it to other competing, non-federated technologies… then it's slow as a snail. What have we gotten on the web lately? WebGL, WebRTC, SPDY/HTTP/2… and now there are “exciting” new features: low-level bytecode, GPU computation with WebCL (or maybe with compute shaders?)… they were discussed years ago and are still not usable on the web.

Compare that to the development of non-federated platforms: when an old API is no longer suitable, it's replaced quickly (both Metal and Vulkan were introduced and implemented in a span of about one year), and political discussions don't bog down development (think of the NaCl/asm.js/WebAssembly fiasco: Android and even laggard Windows Phone 8 got a way to run fast, native code in a couple of years, while the “quickly evolving” world of web browsers was left behind…).

Sorry, but the development of web browsers shows just how right Marlinspike is: federated worlds can exist, but only if the non-federated alternative is unusable. The Internet and the web won not because they were federated, but because they were large: they just had more users than AOL or CompuServe.

When you need federated protocols and clients/servers to reach billions of users, such protocols win by default. When you have the ability to develop a non-federated solution… well, it's not even a contest.

In the hardware world federated solutions often win because they spread development costs (if one company develops a non-federated solution and dozens or hundreds of companies develop a federated one, then sheer money power often prevails), but in the world of software this factor does not apply: companies can simply pool their resources and develop one single solution instead.

The perils of federated protocols

Posted May 19, 2016 12:09 UTC (Thu) by pizza (subscriber, #46) [Link] (1 responses)

> Internet and web have won not because they were federated, but because they were large: they just had more users than AOL or CompuServe.

They didn't start out large. The question you should be asking is how/why they became large, given their early disadvantages.

And that answer is ... federation.

The perils of federated protocols

Posted May 19, 2016 18:11 UTC (Thu) by khim (subscriber, #9252) [Link]

> They didn't start out large. The question you should be asking is how/why they became large, given their early disadvantages.
> And that answer is ... federation.

That's the right answer to the wrong question. Of course federation makes it possible to build large systems and, indeed, when a large system can't be monolithic, it becomes federated. Even today there are many systems which are federated: not just ISPs, but cellular networks, railroads, airlines and many other systems are federated today! Heck, you can even find popular federated systems developed in the XXI century (here is one, e.g.).

But they all share one important quality: they have some kind of “ceiling”, some reason which limits the growth of the unfederated alternative. It may be a technical reason (AOL/CompuServe growth hit its limit when it reached US borders: it was impossible to provide cheap enough access to people in Asia or Europe because intercontinental phone calls were incredibly expensive), or it may be a non-technical reason (the RIAA and MPAA made sure that there would be no huge torrent sites with millions of users, thus we naturally got DHT), but if there is no “ceiling” then there is no reason for federation. It's a more cumbersome and thus less attractive solution, chosen by users out of necessity, not out of desire.

The perils of federated protocols

Posted May 20, 2016 8:28 UTC (Fri) by niner (subscriber, #26151) [Link] (7 responses)

Odd, I can use WebGL, WebRTC and HTTP/2 just fine, while I can use neither Metal nor Vulkan right now. Your examples seem to express the opposite of what you intended.

The perils of federated protocols

Posted May 22, 2016 16:54 UTC (Sun) by jospoortvliet (guest, #33164) [Link] (6 responses)

I guess the point the parent wanted to make is that while WebGL, WebRTC, etc. are already very old, they still don't work everywhere, while Vulkan, not more than a year old, is announced to be supported almost everywhere. I agree with you that an announcement doesn't mean it IS supported. Time will tell if it will go as expected...

The perils of federated protocols

Posted May 22, 2016 17:35 UTC (Sun) by flussence (guest, #85566) [Link] (5 responses)

I remember all the hand-wavy hype about GPGPU and OpenCL when AMD opened up all the R600 hardware docs. Ten years later, still no sign of it ever becoming usable. One day Vulkan might work, but I'm not holding my breath for it.

The perils of federated protocols

Posted May 22, 2016 19:34 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

I've done some OpenCL on R600 hardware. It works, but is not complete by any means. This was probably 2.5-3 years ago now.

The perils of federated protocols

Posted May 23, 2016 14:16 UTC (Mon) by khim (subscriber, #9252) [Link] (3 responses)

> Ten years later, still no sign of it ever becoming usable.

Define “usable”, please. All the HPC is built today around GPGPU and similar architectures (things like Xeon Phi, it's used on your mobile phone (to process photos in real-time and do other compute-intensive things) and so on.

The fact that Linux distros (and the desktop in general) are no longer at the center of this development is unfortunate (especially since a lot of that development is based on Linux), but it does not mean that all that development has just stopped and disappeared.

> One day Vulkan might work, but I'm not holding my breath for it.

Vulkan works today (although not that many apps use it), and Metal is used by real apps too (look here, there are links to the app store where they can be found). Sure, you can't use it on your device if you insist on a 100% open OS, but most users out there don't care and use it just fine.

And while WebGL is usable today, don't forget that we are talking about technology which is almost two decades old (DirectX was released in 1996 and OpenGL is even older than that).

Sure, you could counter that other unfederated APIs (things like Glide, e.g.) have died, but there are emulators and people still play those games.

My point was that most platforms had 3D by the end of the XX century, while the web needed another decade before it got it, and, ironically enough, exactly when the web finally, finally arrived on that decade-old platform, the rest of the world had already moved on and shifted to a significantly different 3D API! When will the web have things like Metal, Renderscript or Vulkan? My guess: the most likely answer is “never”. And if Android does, indeed, arrive on the desktop, even WebGL and WebRTC could become unavailable eventually (since people would use native apps instead of webapps for things like videocalls or maps), although that is not guaranteed; a freeze a la SMTP is more likely.

The perils of federated protocols

Posted May 23, 2016 15:52 UTC (Mon) by pizza (subscriber, #46) [Link]

> Define “usable”, please. All the HPC is built today around GPGPU and similar architectures (things like Xeon Phi, it's used on your mobile phone (to process photos in real-time and do other compute-intensive things) and so on.

I think it's fair to say that GPGPU is only now becoming "usable" without relying on (highly) proprietary software stacks.

The perils of federated protocols

Posted May 24, 2016 22:43 UTC (Tue) by flussence (guest, #85566) [Link] (1 responses)

> Define “usable”, please.
In the way VDPAU is today. Able to do more than run the demo/test code shipped with Mesa. Reducing power consumption by offloading work to an appropriate device instead of increasing it by being dead weight to compile.

> things like Xeon Phi, it's used on your mobile phone
I don't think my phone has a 300W processor (it seems to cope with image/photo editing fine regardless). Did you mean to say Someone Else's Computers? Those kinds of services are best enjoyed as schadenfreude.

The perils of federated protocols

Posted May 25, 2016 1:24 UTC (Wed) by nybble41 (subscriber, #55106) [Link]

>> things like Xeon Phi, it's used on your mobile phone
> I don't think my phone has a 300W processor

Indeed, a mobile phone with a 300W Xeon Phi processor would last about five seconds before either draining the battery or setting itself on fire, whichever comes first. Possibly both.

There should have been a closing parenthesis after "Phi". That was meant to be read as "GPGPU is used on your mobile phone".

The perils of federated protocols

Posted May 20, 2016 8:36 UTC (Fri) by paulj (subscriber, #341) [Link] (3 responses)

IPv6 was slow to be rolled out not because of federation, but because of second-system-syndrome effects that led IPv6 designers to ignore backward compatibility.

SMTP hasn't developed much because it's very mature and basically does what's needed. The maturity of SMTP hasn't stopped development at the layers above it. Also, if you want to blame SMTP for identity and abuse issues: no one has solved those any better in any other protocol, in a way that couldn't also be applied to SMTP. SMTP is actually wildly successful, because it is "federated", distributed and decentralised.

The perils of federated protocols

Posted May 29, 2016 23:47 UTC (Sun) by HelloWorld (guest, #56129) [Link] (2 responses)

So how come I *never* get any spam on WhatsApp and dozens to hundreds of spam emails every day?

The perils of federated protocols

Posted May 30, 2016 1:17 UTC (Mon) by Fowl (subscriber, #65667) [Link]

WhatsApp spam exists. (I've received it)

The perils of federated protocols

Posted Feb 6, 2019 9:18 UTC (Wed) by jond (subscriber, #37669) [Link]

Is that figure the spam you receive after filtering, or before?

The perils of federated protocols

Posted May 24, 2016 7:35 UTC (Tue) by micka (subscriber, #38720) [Link]

I wonder how you define the federated/centralized aspect when writing about graphics APIs that are purely local. Unless you're thinking about something like rendering distributed across multiple computers? I don't think Metal/Vulkan does that.

The perils of federated protocols

Posted May 19, 2016 6:06 UTC (Thu) by roc (subscriber, #30627) [Link] (3 responses)

Quite so.

Marlinspike carefully phrased his mention of HTTP to dance around the fact that HTTP/2 is being deployed right now (and experimental predecessors have been deployed for years). That section of his post is deliberately misleading.

He's right that decentralized evolution imposes costs, including delays. But he's not right that centralized always wins.

The perils of federated protocols

Posted May 19, 2016 11:58 UTC (Thu) by khim (subscriber, #9252) [Link] (2 responses)

It's a bit dishonest, but not by much. By your own admission: HTTP/2 is being deployed right now and experimental predecessors have been deployed for years.

Basically it shows that the federated world can be moved along, if you are willing to spend about 10x more resources and accept about ⅒ of the development speed.

The perils of federated protocols

Posted May 19, 2016 12:47 UTC (Thu) by hkario (subscriber, #94864) [Link] (1 responses)

we already had a web with just one client that actually worked: IE6

did we learn nothing?

The perils of federated protocols

Posted May 19, 2016 17:11 UTC (Thu) by khim (subscriber, #9252) [Link]

> did we learn nothing?

Sure. The lesson is obvious: no matter how dominant your platform is, if you stay dormant for years, sooner or later someone will bypass you.

The web which we enjoy today is the result of Microsoft's attempt to rebuild it: the architecture astronauts won and, instead of quickly adding features to MS IE which would have made a breakout attempt impossible, Microsoft decided to rebuild everything from scratch.

The end result arrived years later, with reduced functionality and insane resource consumption.

This gave a chance to Firefox/Safari/Chrome, but it also gave the developers of these monsters a false sense of security: they decided that since Microsoft was stupid, all other contenders for the “try before you buy” app deployment platform would be just as stupid. The height of folly is, of course, the stillborn Firefox OS, but I think the ball was lost when Mozilla decided that it could afford to dictate the rules to app developers: “it's my way or the highway”… most developers have chosen the highway… well, some have picked some other highway, but almost everyone left anyway…

Some still believe that they will return, but I seriously doubt it: Apple and Google are not like Microsoft (at least not yet); they iterate fast and have already made web development mostly irrelevant. I fully expect to see regression of the web platform in the next few years; it'll be interesting to see what this process looks like.

The perils of federated protocols

Posted May 19, 2016 9:05 UTC (Thu) by mjthayer (guest, #39183) [Link] (1 responses)

The web is a server-to-client protocol, not peer-to-peer. What would the equivalent be for messaging? An app in which I type a message and a recipient, and which tries all known protocols and servers to deliver it?

Actually, why not? Security and secrecy would be acceptable if both parties wanted them, and you would have the convenience of one app for all communications. Not much is lost otherwise: if your partner does not value secrecy, the best protocol in the world will not stop them republishing your message over a different medium.
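A minimal sketch of what such a client could look like, in Python (the backend classes and their send() interface are hypothetical placeholders, not real libraries):

    class DeliveryError(Exception):
        pass

    class XmppBackend:
        name = "xmpp"
        def send(self, recipient, message):
            # Placeholder: pretend we know no XMPP address for the recipient.
            raise DeliveryError("no XMPP route")

    class SmtpBackend:
        name = "smtp"
        def send(self, recipient, message):
            # Placeholder: pretend SMTP delivery succeeds.
            print(f"[smtp] delivered to {recipient}")

    def deliver(recipient, message, backends):
        # Try every known protocol in order of preference; first success wins.
        for backend in backends:
            try:
                backend.send(recipient, message)
                return backend.name
            except DeliveryError:
                continue  # fall through to the next protocol
        raise DeliveryError(f"no known protocol could reach {recipient}")

    used = deliver("alice@example.org", "hello", [XmppBackend(), SmtpBackend()])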

The perils of federated protocols

Posted May 19, 2016 9:19 UTC (Thu) by josh (subscriber, #17465) [Link]

> The web is a server-to-client protocol, not peer-to-peer.

The web is client-to-many-servers, not just client-to-one-server. And with WebRTC, the web also supports peer-to-peer.

The perils of federated protocols

Posted May 19, 2016 12:08 UTC (Thu) by smoogen (subscriber, #97) [Link]

These things seem to come in waves. Federation of protocols in RFCs came up largely because the many walled gardens of proprietary mainframes kept various people from being able to inter-operate. This was the 'cloud' of the 1960s and 1970s, where it was really convenient to be able to do something with someone else on a similar IBM 7030, but woe betide you if you tried to communicate with the guys down the hall on the PDP-8 or the Burroughs. And it became more important when various 'features' you had built your system around changed underneath you. That feature you had come to rely on for communication on the 360? It doesn't work the same way, or is a paid upgrade, on the 370. However, there was also a lot of room to innovate and figure out which features were useful and which weren't during this time.

I expect that once Google does a spring cleaning, figures out a way to charge for certain features that make using their closed garden useful, or it turns out the metadata being shared was a useful side channel for the real communication… then there will be a push for federated protocols. By that point, hopefully, it will be known which features are useful and which are not.

