Resisting the centralization of network infrastructure
GnuPG maintainer Werner Koch presented the Sunday keynote at GUADEC 2016 in Karlsruhe, Germany. He used the session to push back against the present trends in Internet architecture, which he called a reversion to the centralization of the mainframe computing era.
Network evolution
The talk began with a quick history lesson. In the early days,
computers were all custom-built machines of which there were but a
few, and all of them resided in laboratories. Importantly, there was
no communication between machines. Standardization efforts in the
1960s (beginning with IBM's System/360) made it possible to
communicate between machines, but only by physically transporting
paper punch cards from one to another. The only networks of computers
were those for large projects like NASA's moon program and the civil
aviation infrastructure.
"In the Seventies, things got more interesting with the 'minis'," he said. Physically smaller machines like the PDP8 meant that smaller groups could share a machine, which led to Thompson and Ritchie's development of Unix. Networking at that time relied on leased lines and used store-and-forward protocols like UUCP. Then, with the advent of TCP/IP, he said, came the really interesting developments. The proliferation of PCs and workstations meant a lot more networked communication was needed, so new protocols followed.
But the Eighties were marked by a competition between the centralized mainframe model and the decentralized Unix model. Ethernet and local-area networks allowed PC and workstation vendors to offer stiff competition, but mainframe companies had their successes as well. It was only the invention of the World Wide Web in the Nineties that broke the stalemate permanently, he said. There was no longer any denying that having all machines interconnected was the best approach—and the web at that point was fully decentralized; anyone with an IP address could run a server.
And, of course, many people did, which led to the search-engine industry. Before Google, AltaVista was the dominant search engine and indexed everything on the web. But this millennium, after the dotcom bubble burst, things changed. The most important change, Koch said, was that search engines stopped being in the "search" business, which is to say that they stopped existing to point users to information elsewhere.
Instead, they shifted into the "user profiling" business. The search engine became a central place for information, rather than a portal to somewhere else. That led to the engines filtering the information they return on a per-user basis, storing user data centrally, and retrieving information from the users themselves.
Search engines are certainly not alone in the user-profiling game (other notable players include social-media services), Koch added, but the crux is that the Internet has shifted back to a model where there are just a few sites that host information, and users connect to them directly. Ultimately, he said, these services "trick the users into believing there is an Internetwork, but in reality users are connecting to a few data centers." It is more like CompuServe and similar dial-up services from the Eighties than it is the original World Wide Web.
The state of services
Koch then examined the most popular network services of today and pointed out ways in which recent changes have returned users to a dependence on centralized servers.
The first he looked at was the web. Most sites today run JavaScript programs on the user's browser, but those sites increasingly use JavaScript served up from a central service rather than hosting it locally. As a consequence, the site does not function without the services of api.google.com or some other server; offline usage no longer works, and users not using JavaScript lose access to content.
Even more troubling is the fact that the set of trusted server certificates is no longer under the user's control. Instead, it ships with the browser itself and is updated at will by Google, Microsoft, Apple, and Mozilla. While this development was a necessity caused by the failed certificate-authority model, it is still harmful: the browsers "phone home" regularly, and what is in theory a decentralized service is, in practice, fully centralized. The solution, he said, is for server operators to host their own JavaScript, or to avoid it altogether.
Email is older than the web, so its design is older, too. It is fully decentralized and resistant to disruptions in connectivity lasting hours or even days, because email was designed from the start to support multiple routing options. But, today, most email really goes through only a few providers (primarily Google). The practical problem is that these providers impose rules (such as blacklisting other servers and disabling Sender Policy Framework support) that interfere with others' ability to run their own services. The alleged reason for these rules is to stop spam but, Koch said, that justification does not hold: large-scale analysis by the big mail services already makes it easier for them to detect and block spam without hampering interoperability. The solution is simply for users to host their own mail on their own boxes, he said. "Take back your mailbox."
The keyservers that support GnuPG and other OpenPGP implementations are another service that has slipped away from decentralization. Originally, keyservers were a loosely coupled network of independent machines. There have been moves toward a centralized design using services like keyserver.net and pgp.com, although most have failed. Today, the new attempt is Keybase.io, which many users like for its convenience (linking PGP keys to social media accounts). But it fundamentally violates the end-to-end privacy principle of PGP by binding keys to privacy-invading services. Periodically, he said, proposals pop up to implement "validating" PGP keyservers—but none of them work in a decentralized fashion. He urged users to stand up against all attempts to centralize PGP.
Finally, he looked at federation in general. Mail servers have more and more difficulty interoperating, he said, and XMPP has "lost its track" and is being replaced by centralized systems like WhatsApp and Signal. He encouraged developers to make federation a priority and to design for it from the beginning.
The state of the desktop
Koch then looked at desktop environments—although, he said, "I mainly have questions." These days, he continued, online services are constantly on our (and our users') minds. But developers need to be asking themselves how hard it will be to get rid of the data that has been relayed to these services. Similarly, desktop projects need to ask whether or not users can effectively use their system when offline. "If the network goes out, can you still work with your data?"
He pointed out a few network services essential to desktop projects like GNOME, starting with package updates. Who, he asked, is in control of the update channel? Are the updates properly signed, and is the update system easy to use? Can the desktop be used on an air-gapped computer, with "swapping around USB sticks" as the only available method to access files?
There are positive signs that some network services take decentralization seriously, he said, like the Briar mesh network and the GNUnet peer-to-peer network. Desktops and other projects should support use of these types of services, he said. The online connections you have are not under your control, he concluded; you must be prepared for their disruptions and not find yourself relying on an Internet of centralized services.
In the brief question-and-answer period at the end of the talk, Koch added that he thinks SMTP needs to be replaced (and that it will be, eventually), but that none of the proposed replacements has so far solved the spam problem. That makes them non-starters, even if they improve on other features, like "trust on first use" encryption.
He also responded to a question about the use of JavaScript and remote services for other applications (specifically, for GNOME Maps and its remote map tile service support). JavaScript is acceptable for some applications, he said, but not where a user's personal data is involved. There are good alternatives to webmail, he said, such as Mailpile, and almost all of the page styling people do with JavaScript today could also be done using CSS.
Koch told another audience member that isolating browser tabs in separate processes improves security, but that it does not solve the webmail problem because users must still place trust in their browser. He thinks it likely that GnuPG will see progress on convincing applications to support end-to-end encryption, but even then, "that doesn't solve the infrastructure problem. Someone has to set it all up."
Koch stayed after the session and answered questions for a number of people about GnuPG and security. The talk itself had other benefits, including re-igniting a discussion over how GNOME should address security and privacy. That discussion continued in the lightning talks, unconference sessions, and birds-of-a-feather days; more information from those meetings is still to come.
[The author would like to thank the GNOME Foundation for travel assistance to attend GUADEC 2016.]
Index entries for this article
Security: Internet
Security: Privacy
Conference: GUADEC/2016
Posted Aug 18, 2016 9:21 UTC (Thu)
by eru (subscriber, #2753)
[Link] (70 responses)
That is not really a feasible solution for most people. It requires setting up an always-on (or "mostly-on") server with a public address. Apart from being technically challenging for most non-IT people, it also requires a peaceful spot with a good net connection for the server, or else paying for a virtual host, which costs something like $15/month for even a low-end option.
Posted Aug 18, 2016 11:18 UTC (Thu)
by robbe (guest, #16131)
[Link] (9 responses)
And if the IoT hype is only half true, we will soon be surrounded by stuff that is capable of running a mail server (and capable of being a spam source, no less).
Posted Aug 18, 2016 19:46 UTC (Thu)
by reedstrm (guest, #8467)
[Link] (6 responses)
Haven't you followed the trends? Desktop computer sales are going the way of the land-line phone. Most "normal" people are using a large phone, a tablet, and/or at most a laptop for their computing needs. The thing most likely to be on and internet connected is their game/entertainment console, since the "off" button on those is really only a "sleep". Should that be running email?
Posted Aug 18, 2016 20:26 UTC (Thu)
by excors (subscriber, #95769)
[Link] (5 responses)
Posted Aug 18, 2016 23:37 UTC (Thu)
by bfields (subscriber, #19510)
[Link] (3 responses)
Posted Aug 18, 2016 23:58 UTC (Thu)
by Jonno (subscriber, #49613)
[Link] (2 responses)
Not really, most home routers lack the storage necessary to host even a single mailbox. My personal IMAP server serving 4 people uses 2.4 GiB of storage, while most home routers feature 4-128 MiB of storage.
Posted Aug 19, 2016 3:14 UTC (Fri)
by JanC_ (guest, #34940)
[Link] (1 responses)
Posted Aug 19, 2016 13:30 UTC (Fri)
by corbet (editor, #1)
[Link]
Posted Aug 19, 2016 6:08 UTC (Fri)
by eru (subscriber, #2753)
[Link]
Not sure if it is sensible to run an important server on a device that can be lost, stolen, or dropped and crushed under a bus. In many places, the cost of mobile data would also be a problem.
Posted Aug 22, 2016 12:08 UTC (Mon)
by NAR (subscriber, #1313)
[Link] (1 responses)
Posted Aug 22, 2016 13:20 UTC (Mon)
by pizza (subscriber, #46)
[Link]
I've been self-hosted for nearly 20 years now. It started out pretty simple, but over time, not so much.
Keeping my mail services going doesn't require much attention (or money) on an ongoing basis, but starting from scratch today is another matter.
On a pure cost basis, I've paid a lot less to self-host email than I'd have paid to 3rd party providers, especially given that the hardware (and bandwidth) cost is shared with other stuff I would still have to self-host.
Posted Aug 18, 2016 11:52 UTC (Thu)
by guus (subscriber, #41608)
[Link] (15 responses)
A fixed domain name and a public (but not necessarily static) IP address are required. I don't know how easy that is. In the EU, you usually get a public IPv4 address with your cable/xDSL, although normally you have to pay extra to get a fixed IP address. You can get dyndns service for free, or pay $10 to $15 a year for a domain name (I can recommend Gandi). But the problem is usually that even if you have a public IP address, some ports are blocked, notably incoming traffic to port 25 (SMTP). Many DNS providers, however, offer some form of email forwarding service for free with a domain name, which you can use to get around this problem.
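As a rough sketch of what such a setup can look like in DNS (all names and the 203.0.113.45 address below are placeholders for illustration, not anything from this thread):

    ; zone fragment for a hypothetical example.org
    example.org.       IN  MX  10  mx.forwarder.example.net.  ; provider's forwarding MX, works around a blocked port 25
    home.example.org.  IN  A       203.0.113.45               ; kept current by a dyndns client on the home connection

The forwarding MX then relays accepted mail on to home.example.org over a port the ISP does not block.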
Of course, if you are behind a carrier-grade NAT or can't/won't run a server at home, there is indeed the option of using a VPS. For VPSes, the main cost drivers are RAM, disk storage, and IPv4 addresses. You can do a lot with a VPS with only 256 MB RAM and a few GB of storage, but indeed you won't find much lower than $15/month if you want your own IPv4 address. But there are also so-called NAT VPSes, where you share an IP address with many other VPSes. You have some statically assigned ports that you can use, but of course not the ones you want (25, 80, 443). They usually do provide forwarding services, though: you tell them which domain you use for email, point your MX to their servers, and they forward the mail to your VPS. The same goes for websites. These NAT VPSes are available for $10 a year.
As always, the problem is human nature. While it is perfectly possible and not even that hard to set up your own infrastructure, most people do not want to deal with it at all and will just go for the easiest solution. There are projects like FreedomBox that try to make it as easy as possible to have your own infrastructure running on a small computer, but even that is too difficult.
But the main problem is not the non-tech-savvy people; it's actually that a lot of tech-savvy people do not even want to take a little bit of time to set such a thing up. Yes, it takes some time to configure things the first time, but it is certainly not hard. Once set up, it takes very little maintenance. And you'll have learned some things while doing it that might even enhance your CV.
Posted Aug 19, 2016 0:35 UTC (Fri)
by bronson (subscriber, #4806)
[Link] (3 responses)
It's very hard and shockingly time-intensive to run your own mail server, even for tech-savvy people familiar with system administration and networking. I switched my mail server to mail-in-a-box because I just couldn't keep up with the ongoing changes to SPF, DKIM, DNSSEC, RBLs, and software upgrades. And all that is on top of monitoring the server's logs and send failures (Yahoo, you really suck).
And I don't even bother with webmail! Providing a more complete experience with server-side contacts, mobile clients, custom filters, 2factor, automatic expiry, a proper Sent Mail folder, reactive spam filtering, etc... Good luck with that. You'll never finish, no matter how much time you can sink into it.
If you run your own mail server, pray tell your setup?
Posted Aug 19, 2016 7:36 UTC (Fri)
by guus (subscriber, #41608)
[Link] (1 responses)
> If you run your own mail server, pray tell your setup?
I am running Postfix, with Postgrey to do greylisting. Postgrey is already doing 50% of the spam filtering. I used to use RBLs, which were even more effective, but unfortunately the most useful ones shut down. The rest of the spam filtering is done with SpamAssassin, but that is started by individual users' mail filters. SpamAssassin is also quite effective, and when it lets something through it is a single press of a button in mutt to let SpamAssassin learn that an email is spam.
> I just couldn't keep up with the ongoing changes to SPF, DKIM, DNSSEC, RBLs,
Apart from some useful RBLs disappearing, and having to set SPF records on my mail domain names once, I don't seem to need to keep up with anything, at least not more than once in half a year or so. I also don't have to monitor the logs. The few times I do, it's mostly because something is stuck in the greylist, and that normally solves itself by being more patient.
> And I don't even bother with webmail! Providing a more complete experience with server-side contacts, mobile clients, custom filters, 2factor, automatic expiry, a proper Sent Mail folder, reactive spam filtering, etc...
Hm, maybe this is the difference? I have the mail delivered to a Maildir, and use mutt to read it directly from there. I'm not running an IMAP or webmail server to access it. If I want to access my email remotely, I just SSH into the server.
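For readers who want a concrete starting point, a minimal sketch of that kind of setup might look like the following; the port, paths, and package choices are typical Debian-style defaults and assumptions, not the commenter's actual configuration:

    # /etc/postfix/main.cf (fragment)
    # greylisting via postgrey's policy daemon (commonly 127.0.0.1:10023 on Debian)
    smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        check_policy_service inet:127.0.0.1:10023
    # hand local delivery to each user's own filters
    mailbox_command = /usr/bin/procmail

    # ~/.procmailrc (fragment): pipe everything through SpamAssassin, deliver to a Maildir
    DEFAULT=$HOME/Maildir/
    :0fw
    | /usr/bin/spamc

Mail then lands in the per-user Maildir, where a local client such as mutt can read it directly.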
Posted Aug 19, 2016 8:03 UTC (Fri)
by mbunkus (subscriber, #87248)
[Link]
One problem with publishing a strict SPF record is mail that you send through mailing lists: the list resends your message from its own servers, so recipients that check SPF will see a non-matching source. There are several ways to deal with this, e.g. the mailing list manager can send from "Your Name via Mailing List <mailing-list-manager-address@domain>" instead of your address. But a lot of mailing lists don't support such a feature or aren't configured for it.
Another way would be for you to include the outgoing mail servers used by those mailing lists in your own SPF record.
The problems don't stop with mailing lists which you might not even use. Even worse are cases you cannot do anything about such as when a receiver has set up forwarding on the mail server level.
Think of Postfix's virtual alias maps. For example, on the main mail server for $customer, the $customer has set up a forwarding to a private account with something like "jdoe@custom.er jdoe@custom.er,john.doe@googlemail.com". What now happens when you send to jdoe@custom.er is: 1. $customer's mail server may check your SPF record and see that it's good. 2. $customer's mail server stores one copy and forwards a second copy to Google Mail. 3. Google Mail checks your SPF record (because the sender of said email is still you) and sees that the server the mail originates from ($customer's server) is not listed in your SPF record.
That second mail is now silently stored in Google's spam folder, and there's nothing you can do about it. You're at the mercy of everyone else setting up their forwardings in a way that's SPF compliant.
The error reporting is pretty bad, too. For the example above you might only note there's something wrong when $customer comes to you and says "hey, your mails are all marked as spam!"
The situation gets even more complicated when you add DKIM to the mix. Similar problems arise. Basically both schemes rely on everyone having their systems set up correctly, which is a practical impossibility.
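To make the failure mode concrete, suppose (purely as an illustration, with placeholder names) that your domain publishes a strict SPF record like:

    example.com.  IN  TXT  "v=spf1 a:mail.example.com -all"

The forwarded copy arrives at Google Mail from $customer's server, which is not mail.example.com, so the "-all" tells the receiver to treat it as a failure even though your own server was never involved in that hop.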
Posted Aug 20, 2016 5:20 UTC (Sat)
by gfa (guest, #53331)
[Link]
Posted Aug 19, 2016 6:03 UTC (Fri)
by eru (subscriber, #2753)
[Link] (3 responses)
Actually, I agree getting started is not necessarily hard, but that is really only a part of the problem. Others in the thread have pointed out problems with ensuring delivery, and keeping out of spam filters. I would add the constant maintenance burden of keeping the system up to date and safe. I used to rent a virtual host to play with, and was amazed at the constant barrage of break-in attempts I saw in the logs. None successful, as far as I know, but that is mainly because I had no services running apart from ssh. I realized that if I ran something interesting on the server, I would have to dedicate some time every day, 7 days a week, to check for anomalies and ensure the kernel and servers have security patches applied. As I didn't have time for that, and had no particular projects in mind at the time, I gave up on that virtual server.
The "typical user" would probably forget about his private mail server as long as it worked somehow, meaning it would eventually be doing hidden work for others than him, and/or spying on his mail. Home routers and IoT devices clearly have a similar "negligence problem", but they are usually more limited than the mail server would be. But I'm pretty sure a significant number of home routers are currenly pwned and part of malware networks. Most users never update their software, and none of the models I have used have automatic updating.
Posted Aug 19, 2016 6:10 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Aug 19, 2016 7:57 UTC (Fri)
by guus (subscriber, #41608)
[Link] (1 responses)
Then I would really suggest you use a distribution that automates this for you as much as possible.
If I look in the logs, I also see attacks every day. However, SSH (coupled with something like denyhosts) and Postfix stand up very well against them. I've only had a break-in once, and that was because I had a user on the server who had a common nickname and unfortunately had used a very weak password. I never had any break-in through a bug in a daemon in the ~18 years or so that I've been running my own mail/web/IRC/shell server. I'd like to chalk that up to the automatic updates.
Maybe I'm just lucky? I don't know. But I still think you should at least try to set up your own infrastructure if you have the possibility.
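One common way to get the automatic updates mentioned here on Debian and derivatives is the unattended-upgrades package; a sketch, with the stock package and file names (the exact update policy is up to you):

    apt-get install unattended-upgrades

    # /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";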
Posted Sep 5, 2016 13:33 UTC (Mon)
by mgedmin (subscriber, #34497)
[Link]
(I filed a bug. It was closed as WONTFIX.)
Posted Aug 19, 2016 19:48 UTC (Fri)
by Felix (guest, #36445)
[Link]
Not sure if I took your statement out of context, but at least in Europe a VPS (KVM-based) with a public IPv4 address (plus IPv6, of course) is a lot cheaper than USD 15 per month. For example, a VM with 1 CPU core, 1 GB RAM, and 25 GB storage costs about USD 5.25 at Hetzner. OVH is even cheaper than that (although with less storage included).
I would advise against sending mail directly from any "dial up" account (even with a fixed ip address). These are *very* likely blocked by the big mail providers. Delivering email to big providers is not trivial but as a start you pretty much need an IP address inside a "real" data centre.
Posted Aug 22, 2016 19:41 UTC (Mon)
by ibukanov (subscriber, #3942)
[Link] (5 responses)
Posted Aug 22, 2016 20:44 UTC (Mon)
by lsl (subscriber, #86508)
[Link] (4 responses)
Posted Aug 22, 2016 21:15 UTC (Mon)
by nybble41 (subscriber, #55106)
[Link] (3 responses)
Posted Aug 22, 2016 21:47 UTC (Mon)
by bronson (subscriber, #4806)
[Link] (2 responses)
Will be interesting to see if this changes.
Posted Aug 23, 2016 4:19 UTC (Tue)
by nybble41 (subscriber, #55106)
[Link]
I expect that the various blacklists that currently work on individual IPv4 addresses will eventually start tracking /64 blocks for IPv6, since that is the smallest block which is intended to be assigned to an end user. At that point having all your server customers sharing a single /64 will be a liability, even if they only use one address each. Any one of them misbehaves (or is simply misconfigured) and all of them end up on the blacklist together.
Also, you might not need "thousands" of IP addresses for a single low-end server, but it wouldn't be unreasonable to use at least a couple of them to run multiple instances of the same service on a standard port, or to give a container its own public address.
Posted Aug 27, 2016 18:18 UTC (Sat)
by flussence (guest, #85566)
[Link]
Or simply wants "net.ipv6.conf.all.use_tempaddr=2" to work, so that people up to no good in the same server rack can't trivially correlate an outgoing request to http://distro.example.net/security-backports/ with an nmappable IP…
Posted Aug 18, 2016 13:23 UTC (Thu)
by jrigg (guest, #30848)
[Link] (4 responses)
Another reason this isn't a feasible solution for most is that the large web mail providers are increasingly rejecting any mail that doesn't come from their own or another large provider's servers. I run my own server hosted in a large data centre in London. It's not on any blacklist that I've checked, but I've had to resort to using web mail much of the time. The mail I send from my own server is typically received by the destination server but then disappears silently before reaching the addressee's inbox. I've seen comments from others around the web that suggest this is not uncommon.
Posted Aug 19, 2016 3:21 UTC (Fri)
by JanC_ (guest, #34940)
[Link] (3 responses)
Google, Hotmail, etc. will happily accept without errors and then drop your mail if you don't have them configured.
Posted Aug 19, 2016 3:47 UTC (Fri)
by bronson (subscriber, #4806)
[Link] (2 responses)
Still, if you don't send much mail, and someone marks one of your messages spam, your entire domain can get in trouble quick. Good luck getting back off their secret blacklists.
Deliverability is hard!
Posted Aug 20, 2016 5:20 UTC (Sat)
by gfa (guest, #53331)
[Link]
After one or two years of continued usage of the Japan server, gmail/hotmail started to accept our emails. I'd say you need to do your homework (SPF, DKIM, PTR records, etc.) and wait some time; people sending big volumes of email do something similar, called IP warming.
I'm not talking about big volumes here: 10-20 emails/day, a small company and personal email servers.
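For anyone wondering what that homework looks like in DNS, here is a sketch with placeholder names and addresses (the DKIM key material is elided):

    ; SPF: which hosts may send mail for the domain
    example.com.                  IN  TXT  "v=spf1 ip4:203.0.113.45 -all"
    ; DKIM: public key for the selector "mail"
    mail._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=<public-key>"
    ; PTR: reverse DNS for the sending address
    45.113.0.203.in-addr.arpa.    IN  PTR  mail.example.com.

The PTR record usually has to be requested from the hosting provider or ISP, since it lives in their reverse zone.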
Posted Aug 25, 2016 11:33 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
I'm on Freegle (the UK version of Freecycle). I use a throwaway yahoo address, because Freegle are hosted on yahoo. Yet ALL my freegle mail is spambinned :-(
I use thunderbird, so I just tell it to download the spambin every day or so, but it's bl**dy annoying. AND YOU CAN'T TURN THE DAMN SPAMFILTER OFF!!! This account receives no spam whatsoever, which means the spam-filter is scoring a 100% ham-hit :-(
Cheers,
Wol
Posted Aug 22, 2016 16:39 UTC (Mon)
by n8willis (subscriber, #43041)
[Link] (38 responses)
Making it easy remains an important problem space, and his references to projects like Mailpile demonstrate that he's aware work needs doing.
Nate
Posted Aug 27, 2016 0:45 UTC (Sat)
by Garak (guest, #99377)
[Link] (37 responses)
Posted Aug 27, 2016 0:58 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (36 responses)
It's quite simple - running a home server requires A LOT of work to do backups, provide uninterrupted power and so on. Why would you want to do that?
Posted Aug 27, 2016 6:34 UTC (Sat)
by Garak (guest, #99377)
[Link] (9 responses)
Perhaps you just did a horrible job of making the point that there are also countless reasons to choose not to operate a server from home. But most of us here have the basic understanding of what a server is generally and what it might be useful for.
One very specific reason to operate a mail server at home in the U.S. is because at least historically (still?) there was some law (that the NSA must have known was horrific all along but kept silent about) stating that email left on a remote server more than 60 or 180 days was granted a lesser status than 4th amendment 'papers'. Not to mention how generally the 'third party doctrine' involving the third party servers gives less legal privacy protection to data than if it physically resides in your home. AFAIUI data on a computer in your physical residence has always enjoyed the strictest possible 4th amendment protections.
The next most noteworthy reason I would add, would be to exercise free speech on the internet, without being under the auspices of any unnecessary gatekeeper. E.g. the person who should decide what speech of mine is publishable on the internet is me, not an employee of twitter or google. Obviously if I publish a bounty on someone's head, the police should pay attention to it, or anybody that complains about it and come and arrest me. But if I want to publish a video of me burning in a fireplace some book I legally purchased, I ought to be able to do that, even if it offends an entire religion.
It seems to me that in order for free speech to exist in any meaningful way on the internet, all endpoints must not be forbidden from operating servers. Otherwise we see things approaching the old ways where massive corporations were the ultimate gatekeepers deciding which speech was published and to how wide an audience. In this day and age, every ordinary internet user ought to be able to decide that something they have to say and want published to world is able to be published to the world. The simplest way I know how to do that is to host my own LAMP server. I don't see another way realistically.
I don't know about other countries, but with Russia's recent history of oppression of journalism, I doubt there are many people interested in pushing free speech boundaries from there. I'm not saying the U.S. is some free speech panacea. In fact, I kind of get the feeling that keeping the power of home server operation out of the hands of the masses is precisely how establishment forces in the U.S. maintain much of their power in the face of the 'disruptive' technology of the internet.
Posted Aug 28, 2016 6:51 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link]
> I don't know about other countries, but with Russia's recent history of oppression of journalism, I doubt there are many people interested in pushing free speech boundaries from there.
Posted Aug 29, 2016 21:39 UTC (Mon)
by bronson (subscriber, #4806)
[Link] (7 responses)
You claim there are countless reasons to run your own server and it's so easy... so why isn't everyone doing it?
Posted Aug 29, 2016 23:50 UTC (Mon)
by Garak (guest, #99377)
[Link] (6 responses)
Posted Aug 30, 2016 3:06 UTC (Tue)
by bronson (subscriber, #4806)
[Link] (5 responses)
What does that even mean? Do you or don't you want to run a mail server? Because obviously that's allowed by the TOS.
Since you're advocating running a server at home, what do you think of your internet provider's TOS? I doubt it's any better.
> more computers along the communication path between you and your intended audience.
um... Have you ever run traceroute?
Posted Aug 30, 2016 4:03 UTC (Tue)
by Garak (guest, #99377)
[Link] (4 responses)
Posted Aug 30, 2016 4:55 UTC (Tue)
by bronson (subscriber, #4806)
[Link] (3 responses)
> I'm guessing amongst those pages of ToS are various catch all clauses that reserve for them the right to cease doing business with you, if for instance you engage in extremely controversial free speech (e.g. flag/bible burning videos).
Maybe, same as your home DSL line or any other ISP connection. It's standard boilerplate. They all have intentionally vague terms that let them shut you down if you're being a nuisance. It's because of the immense variety of spammers and abusers out there.
Here's the thing: they don't want to shut you down. Go ahead and host your flag/bible burning videos. Unless you get yourself DDoSed, they will be happy to help you do it.
Running your own server at home probably won't get you better ToS, but you also won't have enough upstream bandwidth to cause much of a nuisance.
>>> more computers along the communication path between you and your intended audience.
There are probably no less than six or seven computers on the communication path between you and your intended audience, and maybe far more. If you're both hosting in the same ISP, that number might be as low as three. Hosting at home only makes that number go up.
Posted Aug 30, 2016 6:34 UTC (Tue)
by Garak (guest, #99377)
[Link] (2 responses)
Think about it like this- The most atomic form of communication on the internet might be a trivial sending of the string "Hello World" via a simple standard tcp/ip listen/bind/bla/bla. Imagine that is your atomic unit of free speech. Communicating a string to a requesting audience. How can adding an extra colocated (virtual) server subtract hops from that atomic transaction? The answer is that it doesn't, it adds hops.
As for all the ToS discussion, please don't misunderstand me. I understand full well that I could intentionally simply ignore my ISP's ToS and run my server just fine. In truth, I'm not all that big a bible/flag burner, but I'm big on demanding that I have the freedom to do so. And not the freedom to "get away with it" under the radar of my ISP, but the freedom to do so with no shadow of impropriety over the transaction.
Likewise, when an innovator is developing a new client/server (~or~) p2p FOSS app, one is at a tremendous advantage if there is no implicit shadow of impropriety hanging over end users utilizing that app.
At the end of the day, there is no differential resource burden on the ISP depending on whether my utilization was as a client sending and receiving 1kb each way, versus if I was a server doing the same thing with the same other end/edgepoint. This is why it bothers me that ISPs impede such communications. I think it has become a standard practice, because the general case is that people who use their bandwidth as a server tend to be commercially profiting from that bandwidth more so than those who use the precise same up and down bandwidth as a client. As such, the ISPs are effectively taking a 'cut' of presumed profits by charging more. Which in a libertarian capitalist sense is fine, but that's the point at which 'free speech on the internet' clearly becomes 'as much free speech as you can afford'. Which is a sad evolution of the internet as a platform for free speech, from where it was evangelized early on.
If it weren't for the curious subject matter overlap with the Hillary Clinton home email server, I'd wonder why I don't see many other people understanding my points. But I think people are hesitant to discuss this openly for political reasons. (I certainly don't want, nor expect Trump to win, but I suppose it's still theoretically possible). Because I don't think it's difficult to consider this problem from a free speech perspective, asking the simple question "Is there any kind of hard-line free speech on the internet in any truly new way, or is the internet merely a technological evolution of the old media printing press technology, and a new generation of media barons who can fight their battle of ideas by buying ink by the barrel (or transit by the terabyte)... The U.S. FCC's Network Neutrality makes it sound like the internet is a revolutionary new com tool that somehow provides "free speech". I want some of that good stuff, and I don't know how else to try to delineate its existence via the internet other than how I have been. And I challenge anyone else to demonstrate how there is free speech on the internet without the 'right' to operate a server from your access point. Just think about good old Alice and Bob wanting to communicate personal messages back and forth. If neither is allowed to host a server of any kind, that constraint necessitates that they utilize someone else's server. That someone else now has the power to limit those personal messages (free speech). My solution is simple- get "someone else" out of the picture, or rather, don't have an environment where Alice and Bob have no other choice than to bring "someone else" into their communication path.
Posted Aug 30, 2016 14:37 UTC (Tue)
by bronson (subscriber, #4806)
[Link]
If you're looking for an ISP with zero restrictions (and it sounds like you are), you'll never find it. Just like, if you're looking for a street corner with zero free speech restrictions, you'll never find it. There are too many people who want to abuse the privilege.
The rules must allow for the abusers to be shut down.
Posted Aug 30, 2016 15:13 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
But note that there are always restrictions on your freedom to act - even in your part of the USA, there are things that I might think are a good idea as a matter of political speech that are prohibited outright.
For example, I might think that recreating Joseph Standing's mission to Georgia as it ended in 1879 is a good way to remind Americans of how their state governments do not always protect freedom of religion - however, to do that would be illegal, and I would not be permitted to speak out against state governments in that manner.
Posted Aug 27, 2016 6:56 UTC (Sat)
by Garak (guest, #99377)
[Link] (25 responses)
Posted Aug 27, 2016 7:35 UTC (Sat)
by micka (subscriber, #38720)
[Link] (22 responses)
And a laptop battery is not equivalent to a UPS. Does the laptop battery power the DSL modem? The router? Does it provide network access? (One year ago, workers cut the optical fiber with an excavator; the whole neighbourhood was cut off from the internet for 3 days.)
I'm winding down my dependency on the services I serve from home; it wastes too much energy (literally) and pours it into the immediate environment. It's 35 degrees this week and I have no way to get rid of the excess heat. I set it up to be a low-power computer at the time; now I will do it again on an ARM or other thingy. But I will still not set up a mail server.
Posted Aug 27, 2016 8:28 UTC (Sat)
by Garak (guest, #99377)
[Link] (21 responses)
Posted Aug 28, 2016 7:00 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (20 responses)
Posted Aug 28, 2016 20:11 UTC (Sun)
by Garak (guest, #99377)
[Link]
Posted Aug 28, 2016 20:57 UTC (Sun)
by Garak (guest, #99377)
[Link] (18 responses)
Posted Aug 29, 2016 7:03 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (17 responses)
Obviously, if you don't get over-enthusiastic about a God-given right to run private email server then you are probably Hitler who wants to put everyone in Gitmo.
Posted Aug 29, 2016 18:16 UTC (Mon)
by Garak (guest, #99377)
[Link] (16 responses)
Posted Aug 29, 2016 21:29 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (15 responses)
Unless you make home hosting easy and practical it won't be used in any significant roles by a significant amount of users. And I just don't see how it can be practical. There are actually more important projects worth pursuing (like https://matrix.org/ ) that can actually make an impact.
And on a sidenote, rants like the one above are a major reason why people think that techies are completely clueless.
No, Virginia, there is no Santa Claus and your triple-encrypted PGP signed network-of-trust email server won't promote free speech in any meaningful way.
Posted Aug 29, 2016 22:22 UTC (Mon)
by Garak (guest, #99377)
[Link] (14 responses)
Posted Aug 29, 2016 22:35 UTC (Mon)
by micka (subscriber, #38720)
[Link] (11 responses)
Does it exist in many countries? In just one? I sure don't see it here. Does it exist just in _your_ country? If so, is it really an impediment to development, testing and deployment of new solutions/evolution?
Posted Aug 29, 2016 22:51 UTC (Mon)
by Garak (guest, #99377)
[Link] (10 responses)
Shortly after my 50-plus-page complaint to the FCC in 2012, a family in Utah (the other major GFiber deployment at the time) got their small children to hold some picket signs; within 48 hours GFiber relaxed their ToS language to allow 'non-commercial' server usage. Which actually feeds my anti-commercial-competitive suspicion of motivation on their part.
As for really being an impediment? I'd sure say so. I can easily enough shell out more money to my ISP or colo or vps provider, but all recipients of any FOSS solutions I develop would need to do likewise. That's enough of a hurdle, I think, to slow development to a relative crawl. Also going with a colo or vps introduces a new layer of free speech limiting terms of service / gatekeeper being added to the equation. But many other developers no doubt have less concern there.
Posted Aug 30, 2016 7:00 UTC (Tue)
by micka (subscriber, #38720)
[Link] (9 responses)
How would you conclude, at this point, that it's even marginally an impediment to the creation of these programs when maybe 1/30th of the world population has this problem (and most of them don't care about it)?
Ah, and also, the US is not the center of the world.
Posted Aug 30, 2016 7:28 UTC (Tue)
by Garak (guest, #99377)
[Link] (8 responses)
Posted Aug 30, 2016 14:51 UTC (Tue)
by bronson (subscriber, #4806)
[Link] (7 responses)
Posted Aug 30, 2016 16:20 UTC (Tue)
by Garak (guest, #99377)
[Link] (6 responses)
Posted Aug 30, 2016 17:05 UTC (Tue)
by bronson (subscriber, #4806)
[Link] (5 responses)
The rest of the world has already passed the US by in broadband speeds, mobile coverage, and IPv6 adoption. Perceived ToS limitations seem pretty small in comparison.
Posted Aug 31, 2016 12:25 UTC (Wed)
by pizza (subscriber, #46)
[Link] (4 responses)
Not quite so sure about lumping things in together there.
High broadband speeds are generally available to most of the population. Unfortunately due to the lack of meaningful competition in most markets, it'll cost you.
Mobile coverage is primarily a matter of population density. These days, unless you truly live in the middle of nowhere, you're fine. (FWIW, I recently purchased some property that qualifies -- but even there, there is good mobile coverage, just not with my current carrier.)
When it comes to *fixed* IPv6, Comcast alone puts the US into the top tier of IPv6 deployment. Mobile IPv6 is similarly carrier dependent, but at least two of the national carriers here support it across their entire footprint.
ToS limitations are not "perceived" but actual; In my case, without paying about double the residential rates, I'd be categorically forbidden from running a server of any kind, no option of a static address, and port 25 and 80 blocked upstream.
Posted Aug 31, 2016 14:15 UTC (Wed)
by bronson (subscriber, #4806)
[Link] (3 responses)
> High broadband speeds are generally available to most of the population
Only by the old definition. 4Mbps download is NOT broadband, no matter what the FCC says.
Besides, I wasn't contesting that. I was just saying relative to the world, it looks like this: https://en.wikipedia.org/wiki/List_of_countries_by_Intern...
Mobile coverage is the same story -- it only seems good if you don't travel: https://opensignal.com/reports/2015/09/state-of-lte-q3-2015/ We've been catching up in the last year but we still have a long way to go.
If you have data that shows otherwise, please share!
You're right about IPv6 -- I was laboring under antiquated information. I'm happy to no longer worry about this one.
Posted Aug 31, 2016 15:55 UTC (Wed)
by pizza (subscriber, #46)
[Link] (2 responses)
That data shows a pretty wide gap between *average measured* vs *peak measured* speeds. Honestly I'm not sure how useful the latter is -- For example, Singapore utterly dominates peak speeds but on average ranks lower than the US -- but the Q3 2015 data does show that the 80% of the US's population has at least 4Mbps downstream -- but given that the average measured is over 12Mbps, there's a substantial part of the population that has much higher speeds. Those numbers have only improved since then -- the Q1 2016 shows average connection speed is 15.3Mbps, peak is 67.7Mbps, Measured 4/10/15Mbps penetration is now 85.7/56.7/35.1%, a substantial improvement for six months -- and even that data is still nearly six months out of date.
That supports my point that higher speeds are usually _available_ to most of the populace in the US, but are often priced beyond what most folks would consider affordable or worthwhile. Meanwhile, elsewhere in the world, those same higher speeds are not only available but far more reasonably priced.
Of course, what's not mentioned in any of these metrics is the *upload* speed, which is far more critical to running a server.
Posted Aug 31, 2016 18:37 UTC (Wed)
by bronson (subscriber, #4806)
[Link]
Agreed, things are getting better, but we're still disturbingly slow in comparison to the rest of the world.
> higher speeds are usually _available_ to most of the populace in the US, but are often priced beyond what most folks would consider affordable or worthwhile
That's always true everywhere. You can pay for a dedicated satellite link if you want. It's another way of saying higher speeds aren't really available, right?
Posted Sep 1, 2016 5:47 UTC (Thu)
by Garak (guest, #99377)
[Link]
Posted Aug 30, 2016 1:37 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Posted Aug 30, 2016 2:06 UTC (Tue)
by Garak (guest, #99377)
[Link]
Posted Aug 29, 2016 9:30 UTC (Mon)
by NAR (subscriber, #1313)
[Link] (1 responses)
Posted Aug 29, 2016 18:18 UTC (Mon)
by Garak (guest, #99377)
[Link]
Posted Aug 18, 2016 10:15 UTC (Thu)
by szbalint (guest, #95343)
[Link] (2 responses)
I'll cite Moxie Marlinspike's article about this as he puts it better than I ever could: https://whispersystems.org/blog/the-ecosystem-is-moving/
Posted Aug 21, 2016 19:46 UTC (Sun)
by debacle (subscriber, #7114)
[Link] (1 responses)
Like almost everybody in the intersection of free software, open communication, and privacy, I read the linked article back then. And I am very grateful for it, because it reinforced my opinion, that to me decentralisation is an essential value, just as is software freedom.
A centralised system is practically not free anymore, because the user can't change or improve it. They may have the source code, but it could as well be closed. It is TIVOized by infrastructure. This is different in distributed or federated systems. You can change the code of your MTA, and as long as its dialect of SMTP is still understood by its peers you will get along with it. You can write your own XMPP server, and even have private extensions, and we still can chat with each other.
The author's excuse, that you have the source code so that in case of "horrible changes" you can run your own alternative instead, does not hold. In case of a centralised service, all users would have to find the same trigger value of "horrible".
There are other points, the author missed. E.g.: Centralised services are much more prone to DoS or shutdown-by-law than federated services (XMPP) or fully distributed ones (Ring). Same goes for meta-data. It is much harder to fetch everything in federated and distributed systems than in centralised ones.
Talking about a bad example of a centralised service: Signal! It has good encryption, which is surely nice, esp. if you are Edward Snowden. To me it matters more whether I can use it on my free operating system (here: Debian) without using a phone number as identifier, which is a not-so-clever idea, IMHO. How could users possibly improve this aspect of Signal?
Let's put it that way:
Crypto-strength vs crypto-weakness is not the only consideration, or even the main one that should matter.
Posted Aug 22, 2016 5:46 UTC (Mon)
by liw (subscriber, #6379)
[Link]
Posted Aug 18, 2016 23:05 UTC (Thu)
by ssmith32 (subscriber, #72404)
[Link]
Posted Aug 22, 2016 20:27 UTC (Mon)
by apiontek (guest, #106869)
[Link]
There's no reason that everyone should have to run their own mailserver, any more than everyone should have to grow their own food or sew their own clothes. Skill specialization is an important part of human life. Sharing of resources is a fundamental human strength.
The problem is the economics of advertising. Why would anyone design for federalizing when that means some eyeballs don't need your service? The incentive of advertising economics is visitor lock-in. And if you choose another route, you have to charge for the service you offer people, or starve. The problem is capitalism.
*Maybe* with a basic income, and/or a 20-hour work week, stuff like that -- *maybe* there'd be a significant increase of the technically inclined running federalized servers, for fun, for family & friends, offering some disruption to the predominant model. Maybe.
Posted Aug 25, 2016 16:13 UTC (Thu)
by curaga (guest, #106812)
[Link] (1 responses)
Posted Aug 26, 2016 6:15 UTC (Fri)
by zdzichu (subscriber, #17118)
[Link]
Resisting the centralization of network infrastructure
Your home router would be sufficient to run a basic mail server, wouldn't it?
Resisting the centralization of network infrastructure
Running spamassassin on a home router could be a bit challenging, though, even if you get past the storage issue. SA can be a bit of a hog...
Resisting the centralization of network infrastructure
Their phone is a computer that's turned on and connected to the internet 24 hours a day [...]
Resisting the centralization of network infrastructure
I guess less and less people (at least in the first world) have those machines turned on and connected. And even of those who have it - how many have fault tolerant setups? Multiple disks in RAID, regular backups in safe location, etc? One wouldn't want to lose 2 years of e-mails due to an untimely black out or bad sectors on an HDD. I'm not arguing it's impossible to achieve - but it is complicated and costly.
Resisting the centralization of network infrastructure
A personal server running an email server, website, IRC etc. does not require much. [...]
Resisting the centralization of network infrastructure
I'm running Debian stable with the unattended-upgrades package installed.
Resisting the centralization of network infrastructure
I had to send all my mail for those domains to the US before submitting it to hotmail/gmail.
hillary-like personal email server appliances
There are no such limitations in countries like Russia or France. Yet nobody runs home servers.
hillary-like personal email server appliances
My point is that there's nobody in Russia forbidding you to run your own mail server. Yet pretty much nobody does.
hillary-like personal email server appliances
> I'm guessing your $3.50 endpoint plan involves free speech limiting terms of service,
It means that I was speculating as to which service your $3.50 plan was. Having to speculate, I presumed something like, oh lets just say AWS/linode/whatever. I'm guessing AWS, and I recall linode, have many pages of terms of service as part of renting a virtual server from them. I'm guessing amongst those pages of ToS are various catch all clauses that reserve for them the right to cease doing business with you, if for instance you engage in extremely controversial free speech (e.g. flag/bible burning videos). But by all means, lets end the speculation- which specific $3.50 service are you talking about?
> What does that even mean? Do you or don't you want to run a mail server? Because obviously that's allowed by the TOS.
> Since you're advocating running a server at home, what do you think of your internet provider's TOS? I doubt it's any better.
Yes, that is sort of what this entire debate from my side has been about. My belief remains that the FCC's Network Neutrality uses the right language to foster the internet as a platform for free speech (not letting the network provider favor one type of usage over another, at least anything based on type of speech, i.e. flag/bible burning vids).
> more computers along the communication path between you and your intended audience.
Yes, your point being?
hillary-like personal email server appliances
> It's quite simple - running a home server requires A LOT of work to do backups, provide uninterrupted power and so on. Why would you want to do that?
And I get that this must be a troll, but I don't mind feeding this one- You can buy a UPS or a laptop with a battery (effectively a UPS) for less than $100USD. I'd guess no small percentage of commenters here have one of those lying around, quite possibly collecting dust. I'd also guess that no small percentage of commenters here would disagree with your characterization of the level of work that backups require. You'd be amazed at the number of ways that computers can help you automate tasks. Good grief.
hillary-like personal email server appliances
Keep in mind, that a personal mail server is nigh indistinguishable from a spam bot. And it looks like people prioritize spamless email over being able to run a private server.
For my own amusement, let me paraphrase almost as incendiarily as I can-
"Keep in mind, that a purple person walking down the street with a toolbelt including a hammer, though they be only a carpenter, is nigh indistinguishable from a person who recently bludgeoned to death someone with a hammer. It looks like the orange people prioritize a harmonious society with more easily investigatable murders over allowing purple carpenters to walk down the street while wearing their toolbelts"
Alternately, if the crime of littering was your highest priority, you could merely imprison anyone who had the physical capability to litter. But we don't do that. As a general rule, humanity seems to have found a balance where giving everyone liberty, including the liberty to commit crimes, outweighs the alternative.
I suppose your argument is that you, and the rest of the world, are perfectly fine with a world where the internet does not involve anyone operating a server from home. Myself, I'm not fine with that at all, because it sounds like a blueprint for how tyrannical authoritarians can turn the 'disruptive' technology of the internet into something where they get to maintain their position as tyrannical gatekeeper over all the new communication the new internet tech allows.
Not cool.
OMG, forget the murderers, I think yesterday I was the victim of receiving an unsolicited email. Stop the presses, re-open GITMO, break out the black hoods. Jesus, overconcerned about the harms of spam much? Oh, but you say that the environment of spam allows even more nefarious crimes to go on. I'm sure the gubernments that managed to whack bin laden couldn't ever track down those spammers and bring them to justice. It's all lies.
hillary-like personal email server appliances
> Unless you make home hosting easy and practical it won't be used in any significant roles by a significant amount of users. And I just don't see how it can be practical.
To be blunt- I think you lack imagination. You clearly see no merit in my theory- that the double/triple-price "business class / server allowed" impediment to development, testing, and deployment of new solutions/evolutions, is very precisely the one and only real impediment to "making home hosting easy and practical".
We disagree, that's fine, it happens.
hillary-like personal email server appliances
I don't know if it's the case in one, some or many countries, and I'm not sure you know either.
hillary-like personal email server appliances
As far as we both know now, there is one country in the world where some of the people can't do it.
Maybe there are other places, but as for now, we don't know.
hillary-like personal email server appliances
ToS limitations are not "perceived" but actual; In my case, without paying about double the residential rates, I'd be categorically forbidden from running a server of any kind, no option of a static address, and port 25 and 80 blocked upstream.
Indeed. As far as the nuance between 'actual at the business contract / terms of service' level and 'actual at the ISP gateway filtering' level goes, I merely restate my original point that such a difference simply makes or breaks large swaths of potential competitive solutions.
> Of course, what's not mentioned in any of these metrics is the *upload* speed, which is far more critical to running a server.
And here I'll try to bow out of this debate with a final compulsive pedanticism- Not 'critical'. As far as free speech is concerned, as long as there is enough bandwidth for plenty of text communication, that has tremendous utility sans any large amounts of upload bandwidth. Certainly with every increasing order of magnitude of upload bandwidth, your server/s can do more interesting things. Text, then gaming, then voice, then video, then high def, then etc... But never forget, even limited to 56kbps, you can engage in some amazing levels of liberating free (text) speech on the internet. Kids probably take that stuff for granted these days.
Get off my virtual lawn.
hillary-like personal email server appliances
I certainly don't have any UPSs around at home. 15 years ago I did have an old 486 server running at home. In those days only a dial-up internet connection was available, charged by the minute. One small configuration error on my part led to an uninterrupted two-week-long connection and a telephone bill of more than half my monthly wage. Definitely wasn't fun. It is just one of the things that can go wrong. Running a safe, secure and highly available service at home is complicated, otherwise the wages of sysadmins wouldn't be at the level they are now. It's so much easier to access gmail.com.