Moglen on Freedom Box and making a free net
Posted Feb 8, 2011 20:02 UTC (Tue) by maniax (subscriber, #4509)
Parent article: Moglen on Freedom Box and making a free net
Posted Feb 8, 2011 20:08 UTC (Tue) by ejr (subscriber, #51652)
Posted Feb 8, 2011 20:34 UTC (Tue) by fuhchee (guest, #40059)
> to have such a mesh network with more than a 1000 nodes?
Perhaps you'd consider UUCP or BGP routing as a positive example.
Posted Feb 8, 2011 20:59 UTC (Tue) by maniax (subscriber, #4509)
There are such protocols (like BATMAN) that can create such mesh networks, but they still fall short of that many nodes. Not to mention the whole hell of assigning IP addresses (it would work with v6 and SLAAC, but IPv4, rendezvous, and similar things will be a problem), the level of trust for neighbors, etc., etc. In the end, what we would need is BGP's policy routing with zero need for configuration and convergence times of under a minute, which might just be too hard to do.
(and there's a reason I say this has to be automatic: just think of a bunch of average users doing policy routing, and the resulting mess)
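For the addressing part, at least, v6 really does make it automatic. A minimal sketch of the SLAAC-style EUI-64 derivation; the prefix and MAC here are invented for illustration, and real deployments now tend to prefer privacy addresses over EUI-64:

    import ipaddress

    def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
        """Derive an address from an advertised /64 prefix and a MAC, EUI-64 style."""
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02                               # flip the universal/local bit
        eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # stuff ff:fe into the middle
        iid = int.from_bytes(bytes(eui64), "big")       # 64-bit interface identifier
        return ipaddress.IPv6Network(prefix)[iid]       # prefix bits + interface ID

    print(slaac_address("2001:db8::/64", "52:54:00:12:34:56"))
    # 2001:db8::5054:ff:fe12:3456

No coordination is needed beyond a router advertising the prefix, which is exactly why the v4 equivalent is so painful.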
Posted Feb 8, 2011 21:26 UTC (Tue) by JoeBuck (subscriber, #2330)
> UUCP is a hierarchical network (people configure a few paths to neighbors and they take care of the rest of the delivery)
It was worse than that: the sender had to specify the full path. In the mid-80s you had to know the topology of the UUCP network to send mail. This was later automated, so you could send mail to enduser@utzoo.uucp instead of oliveb!ihnp4!decvax!utzoo!enduser, but this required your local machine to have a copy of the connection map so it could compute the path. Many people chose very bad paths, because they replied to Usenet postings and sent their mail back along the circuitous delivery route (requiring, say, 10 hops instead of 4).
As it got easier to get on the real Internet, sites with only UUCP connectivity could get MX records for mail delivery from the Internet with normal domainized email addresses instead of the fake UUCP domain, and they only needed a path to a "smart host" to get their mail sent.
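The map-based automation amounted to shortest-path search over the published link list. Something like this toy version of what pathalias did; the hosts, link costs, and map format are all invented here:

    from heapq import heappush, heappop

    LINKS = {                      # host -> {neighbor: cost of using that link}
        "oliveb": {"ihnp4": 1},
        "ihnp4":  {"oliveb": 1, "decvax": 1},
        "decvax": {"ihnp4": 1, "utzoo": 1},
        "utzoo":  {"decvax": 1},
    }

    def bang_path(src: str, dst: str, user: str) -> str:
        """Cheapest route through the map, rendered as host!host!...!user."""
        heap, seen = [(0, src, [src])], set()
        while heap:
            cost, host, path = heappop(heap)
            if host == dst:
                return "!".join(path[1:] + [user])   # the route as typed on src
            if host in seen:
                continue
            seen.add(host)
            for nbr, c in LINKS.get(host, {}).items():
                heappush(heap, (cost + c, nbr, path + [nbr]))
        raise LookupError(f"no route from {src} to {dst}")

    print(bang_path("oliveb", "utzoo", "enduser"))   # ihnp4!decvax!utzoo!enduser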
Posted Feb 8, 2011 21:29 UTC (Tue) by fuhchee (guest, #40059)
Posted Feb 8, 2011 21:32 UTC (Tue) by JoeBuck (subscriber, #2330)
But maybe you're thinking of Usenet and not UUCP. Usenet used a flooding algorithm and recorded paths: when two machines connected, each would offer the other a set of messages, by message ID, excluding any that had the other machine's name in the delivery path. The delivery path could be used to trace not only the origin of a message but also who's connected to whom.
Usenet had a "cancel" control message, allowing any user to delete a message. It was completely insecure, but it was the only thing that kept Usenet alive once the spammers discovered it. If cancels were made cryptographically secure, there would need to be some mechanism to control spam or vandalism.
Posted Feb 9, 2011 7:49 UTC (Wed) by Cyberax (✭ supporter ✭, #52523)
FIDONet is an even better example.
As for full mesh networks, it seems that they are simply not technically feasible at large scale.
Posted Feb 8, 2011 21:33 UTC (Tue) by sspr (guest, #39636)
Look at the routing of car traffic: if highways are lacking (or obstructed), the secondary roads get congested quickly. Or would people settle for a 'degraded' network in difficult times? Maybe offering telephony over meshes could be what pulls people over the line to buy these devices.
I was also thinking that this would have to run IPv6, but as I understand it, IPv6 routing is very hierarchical, designed to fit today's global routing infrastructure rather than meshes.
Posted Feb 8, 2011 21:58 UTC (Tue) by kleptog (subscriber, #1183)
Additionally, by grouping addresses a bit based on upstream providers, it saves people from having to maintain routes for entities on the other side of the world. Instead you can have a general "north-america thataway" route.
I think that for mesh networking you practically can't avoid having everybody know about everybody else, unless you can find a way to build addresses that encode a location somehow. Suppose you assume that the mesh is approximately a 2D space: you could make addresses correspond to locations. Then each node only needs to know exactly how to route to the nodes nearby, while routes to faraway nodes can be intelligently "guessed".
The trick would be to make the addresses an emergent property of the network. But there's no way to deal with "wormholes": a (wired) connection joining some point to some faraway point without passing through the nodes in between.
I'm sure this is an area that needs a lot of research.
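A minimal sketch of the forward-to-whoever-is-closest rule that location-encoding addresses would enable. The coordinates and topology are invented, and real schemes also need an escape from the dead ends this greedy rule can hit, which is essentially the wormhole problem above:

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def greedy_route(neighbors: dict, src, dst) -> list:
        """Hop greedily toward the destination's coordinates."""
        path, here = [src], src
        while here != dst:
            nxt = min(neighbors[here], key=lambda n: dist(n, dst))
            if dist(nxt, dst) >= dist(here, dst):
                raise LookupError(f"stuck in a local minimum at {here}")
            path.append(nxt)
            here = nxt
        return path

    neighbors = {                      # each node knows only who it can hear
        (0, 0): [(1, 0), (1, 1)],
        (1, 0): [(0, 0), (1, 1), (2, 0)],
        (1, 1): [(0, 0), (1, 0)],
        (2, 0): [(1, 0)],
    }
    print(greedy_route(neighbors, (0, 0), (2, 0)))   # [(0, 0), (1, 0), (2, 0)]

The appeal is that no node ever stores a global routing table; the cost is that geography, not the address authority, has to be honest.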
Posted Feb 9, 2011 1:38 UTC (Wed) by imitev (guest, #60045)
http://en.wikipedia.org/wiki/List_of_ad_hoc_routing_proto...
I worked on routing and security in ad-hoc networks 10 years ago, when it was still "academic bleeding-edge" (though the technology was already well known to the military: I remember one application where they would drop many sensors from a plane, and these sensors would manage to communicate with each other and route data back to a central station).
The key thing is that nodes do not know the full network topology, only their first neighbors (i.e., those within radio range). If you want to make the routing protocol more efficient, nodes can also discover 2nd-range neighbors (the neighbors of your neighbors), and even 3rd-range, but beyond that it's not very efficient: you begin to generate a lot of traffic just to update each node's database as nodes move in and out of radio range.
Routing, then, has a lot to do with graph theory, except that you only know your own neighbors. So based on some logic you send to the neighbor you think is best, that neighbor does the same, and so on. You have to record the path in the packet, both to avoid loops and to know which nodes to avoid if the packet comes back to you. There's also a handful of other problems (e.g. rogue nodes dropping packets, ...) that make all of this a poor-performance network.
However, for "static" networks with no moving nodes (sensors, alarm transmitters, ...) it works much better, since you can learn a lot more about the network topology (Xth-range neighbors) and only do path discovery when you didn't receive an ACK that your packet reached its destination (in a very dynamic network you would do that for almost every packet).
A lot has surely changed in the last 10 years, but at the time, the "real-world" applications that would Just Work used static ad-hoc networks, while dynamic ones were a geek/academic research area with too many shortcomings. So, yes, a static ad-hoc network could be used in a city, but then you still need to get "out" of the city, and for that you'll need uplinks to the internet, and those can be switched off (not so easily if, say, you use several satellite uplinks).
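A toy version of that record-the-path forwarding rule; node names are placeholders, and the "best neighbor" choice is left as a stub where a real protocol would rank candidates by some link metric:

    def forward(topology: dict, src: str, dst: str) -> list:
        """Hop-by-hop search; the packet carries its path to avoid loops."""
        path = [src]
        tried = {src: set()}            # next hops each node has already tried
        while path and path[-1] != dst:
            here = path[-1]
            candidates = [n for n in topology[here]
                          if n not in path and n not in tried[here]]
            if not candidates:          # dead end: the packet goes back one hop
                path.pop()
                continue
            nxt = candidates[0]         # stand-in for "the neighbor you think is best"
            tried[here].add(nxt)
            tried.setdefault(nxt, set())
            path.append(nxt)
        return path                     # an empty list means no route was found

    topology = {
        "a": ["b", "c"], "b": ["a"],    # b is a dead end
        "c": ["a", "d"], "d": ["c"],
    }
    print(forward(topology, "a", "d"))  # ['a', 'c', 'd'], after backing out of b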
Posted Feb 9, 2011 6:32 UTC (Wed) by jmorris42 (guest, #2203)
Were any of those (or any since) designed to work in an environment where all of the nodes weren't ultimately under one command-and-control authority? Sure, you can airdrop a load of self-organizing sensors or repeaters IF they are all equipped with common keys. But what if the opposing force gets its mitts on one for a few days? Detecting a node sending bogus (hacked, or just malfunctioning) route information doesn't sound easy even in a controlled situation like an airdropped network; doing it in a "download this .deb and install it on your plug computer" setting is a nightmare.
Making a network of semi-reliable links work isn't that hard; see the FTN networks from the pre-Internet BBS world for just one example. Making them work in a real-time, Internet-like environment is a little harder. Making them work without any central authority at all (even IANA becomes a central Internet chokepoint in any situation lasting longer than a few weeks) hasn't been done yet. Then add the design requirement that it stand up to rogue nodes, spam gangs, criminal organizations trying to hijack the network and/or individual nodes for identity theft, credit-card harvesting, etc., and finally the dedicated physical and R&D might of nation-state actors, and you're asking a bit much of a rag-tag band of hackers. A way needs to be found to level the field a bit. We probably need a nation state or other similarly empowered actor involved in the design; they have the resources.
> these can be switched off (not so easily if - say - you use several satellite uplinks).
I can't believe anyone clueful enough to have written the rest of the post added that. Sat links are the easy chokepoint if one of the very few nation states that operate the birds becomes involved. For example, just how many ground-control stations are responsible for every sat with a footprint covering Egypt? In the case of the US, I'd bet ~100% of Internet- or video-capable birds with a footprint on the US are ground-controlled from CONUS.
Yes, it would work in the current case of Egypt, since one could probably find a bird willing to sell you airtime: the other Middle Eastern despotic regimes haven't circled their wagons, and there are probably a couple of birds operated from Europe, Israel, India, or the US with footprints covering some or all of Egypt. But depending on them gives only a false sense of security, since if the poo were ever truly in the fan (the exact sort of situation this sort of planning is intended for), it would fail.
Posted Feb 9, 2011 7:11 UTC (Wed) by imitev (guest, #60045)
I still don't think that was a clueless example.
How do you cross international borders? Cables, or radio (or people moving their arms in pre-agreed patterns, or sending Morse code, but let's say that's not convenient). Forget about cables, since they're so easily cut or switched off by any government with decent control of its borders. That leaves radio, which is (just) a bit harder to cut off. But then either you have a PtP wifi link crossing the border from one country to the other, or you have full ad-hoc connectivity, meaning a very dense node network right up to the border, or you use satellite links. All examples assume that the other side of the link is in a friendly country.
For geographic reasons, any city that isn't within a few miles of the border can't have PtP links. It's also not realistic to think you can go "ad-hoc" all the way to a country's border. That's why I mentioned satellite uplinks.
You understood that, since you mention having to rely on an operator covering many countries, which is not necessarily so easily shut down at the request of a single country (see for instance [1] for the whole of Europe). And yes, you're right that it's not suitable for large countries like the US, Russia, ..., but they're not the only countries out there.
Maybe your post and mine differ in that you're thinking about a 100% failsafe network, while I'm only looking at how to use existing and available technology to mitigate the risk of a global government "switch".
[1] - http://www.tooway.com
Posted Feb 9, 2011 8:44 UTC (Wed) by elanthis (guest, #6227)
Not necessary. That's the thing about software. Or math. You don't need lots of expensive resources. You just need a sharp mind and some inexpensive commodity resources.
Remember, all software can be "executed" by a human with pencil and paper. Even full H.264 movie decoding could be done by hand on paper, if you give a human the algorithm in English, a stream of paper tape with the raw digits of the encoded movie, and a stream of paper tape to write the decoded frame color values to. It'll be god-awful slow, but it's 100% possible.
Wireless networks can be replaced with people using mechanical means of communication. Displays can be replaced with paint and canvas. Input devices can be replaced with vocal cues or hand gestures. Storage can be replaced with paper or stone. Compasses and gyroscopes can be replaced with simple mechanical devices. Even complex electronic interfaces like a GPS can be replaced by an external GPS unit with a digital display, read by the human "interpreter" of the code.
This is different from physics, in which the laws and math we are given come not from thought but from observation, with increasingly accurate physics formulas requiring increasingly intricate and expensive equipment (like the LHC) to observe things we can't otherwise see.
It's also different from engineering, in which the thoughts cannot be executed by a human. Even if you think up the perfect catapult design, at some point you need to gather the wood and metal and stone to actually build a catapult if you want to launch rocks.
When it comes to software, resources have always been pretty much irrelevant. All you need is the right brilliant person trying to solve the right problem. Software is just math, and as in math, the big breakthroughs almost never come from governments or megacorps. They come from universities, or often just from independent minds working on a problem that piqued their interest.
When it comes to manufacturing chips and devices that encode software, you need resources. That ceases to be pure CS/math and becomes engineering. If the world simply needs new algorithms and protocols and designs that run over existing wireless hardware to get these new mesh networks, though, then the engineers don't need to get involved, nothing but commodity hardware needs to be paid for, and big resources cease to be relevant.
Posted Feb 9, 2011 19:27 UTC (Wed) by martinfick (subscriber, #4455)
Posted Feb 15, 2011 5:37 UTC (Tue) by jmorris42 (guest, #2203)
> and big resources cease to be relevant.
Really. Creating Linux/GNU/X/etc. was basically a rewrite project for the first decade; even now, most of the work is improving the existing codebase rather than inventing major new things. Yes, some major new things do happen these days, but they aren't the majority of the effort.
Now compare that to this proposed "Internet as it was supposed to be, nuke-proof, private, perfected" idea. There aren't even any good theories in the academic world to take and run with. So phase one isn't even something for code monkeys; it is for math geeks, network- and game-theory nerds, etc. And remember that the final product has to withstand active attack by the almost limitless resources and manpower of nation-state actors.
So what is the biggest Free Software organization currently? Does it have the resources to build thousand-node test networks and then fund thousands of man-hours by "the best of the best" trying to break them? If any organization did, is it not reasonable to assume it would already have made such a hardening effort on things like Firefox, glibc, Apache, BIND, etc.?
That is the big problem with this notion: it not only requires a new breakthrough in network design, it will almost certainly require a level of software reliability, in the face of an unprecedented level of active attack, that has never been achieved to date. Children break Windows and IE; serious black hats break LAMP servers. Is there anything the NSA couldn't break if they were desperate? Or the Chinese intelligence agencies? If the final proposed system can't claim, with high confidence backed by hard math, to be resistant to such determined attackers, then it isn't worth a damn.
Deploying a system like the one under discussion that isn't secure will only get a lot of people killed when they foolishly rely on it and, at the critical juncture, the secret police round them all up. So either the required miracle in design also has to be 100% private, untraceable, and yet verifiable regardless of implementation bugs or compromised communication links, or the fielded implementation has to prove 100% reliable when the day comes... and it will probably be put to the ultimate test at most once or twice.
Posted Feb 9, 2011 0:13 UTC (Wed) by dlang (guest, #313)
The problem is creating high-density mesh networks, where each node can hear hundreds or thousands of other nodes; that's where they fall apart.
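A back-of-the-envelope illustration of why density hurts: if every audible node re-broadcasts its neighbor list once a second, which is roughly what a naive link-state mesh would do, the control traffic alone outgrows a fixed shared channel. All the numbers here are invented:

    def control_load(n: int, addr_bytes: int = 8, header: int = 20,
                     interval_s: float = 1.0, channel_bps: float = 54e6) -> float:
        """Fraction of the shared channel spent on neighbor-list broadcasts."""
        msg_bits = (header + n * addr_bytes) * 8   # each update lists all n neighbors
        return n * msg_bits / interval_s / channel_bps

    for n in (10, 100, 1000):
        print(f"{n:>5} audible nodes: {control_load(n):.2%} of a 54 Mbit/s channel")

At ten audible nodes the updates are noise; at a thousand they need more than the entire channel, before any user data moves at all.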
Posted Feb 9, 2011 3:48 UTC (Wed) by butlerm (subscriber, #13312)
It seems to me that the real problem with mesh networks is that the links can't carry enough bandwidth to support anything resembling broadband access on a large scale. But if you only need narrowband access, it ought to be much more practical.
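There's theory behind that hunch: Gupta and Kumar's classic capacity result puts per-node throughput in a random wireless network at around W/sqrt(n log n). Plugging in numbers, and taking the unknown constant factor as 1, which is a deliberate simplification:

    import math

    W = 54e6                        # raw shared channel rate, bit/s (802.11g-ish)

    def per_node_bps(n: int) -> float:
        """Gupta-Kumar scaling for per-node throughput in a random network."""
        return W / math.sqrt(n * math.log(n))

    for n in (100, 10_000, 1_000_000):
        print(f"{n:>9} nodes: ~{per_node_bps(n) / 1e3:8.1f} kbit/s each")

Broadband evaporates by the time a city is on the mesh, but a few kbit/s, enough for text or compressed voice, survives even at a million nodes.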
Posted Feb 17, 2011 19:14 UTC (Thu) by hozelda (guest, #19341)
Imagine a node somewhere in the middle of "the action", where there are many inefficient or changing paths in the mesh. At any point in time, besides trying to access what it wants, it has to deal with routing queries and actual data from endpoints that can cover almost all n nodes, and it may have to pass each of those chunks along multiple times, since the number of paths being tried out (in a very bad case) could be something as ridiculous as n factorial. To avoid all of that duplication, we might need to do a lot of accounting per node (trading space for time).
I think we should consider dynamic meshes for small local areas. In the case of a revolution, this would suffice for locals to communicate. Then we'd need to share key information among such clusters; this would be limited to high-content information based on noisy decisions taken at the local level. At the highest level, we'd want broadcasting, but of largely noncontroversial organizational decisions or news.
For wide-scale Internet purposes, the mesh may never come close to substituting, on any efficiency measure, for an efficient network with a relatively static central core. But an inefficient mesh system can still be very useful in a more limited capacity during real emergencies. We also have cars and two-way radio, and a whole framework that can leverage these, albeit much less efficiently than a full Internet.
[Note that if we generally have very limited bandwidth between any two neighboring nodes, then we need some sort of relay or p2p system for efficiency, at least for access to highly desirable data.]
Posted Feb 15, 2011 16:00 UTC (Tue) by zooko (guest, #2589)
It should also be possible to have *secure* decentralized routing, but I'm not aware of good published research on that. Maybe Bryan Ford touched on it at some point.
I have some ideas about how to accomplish that, but I'm rather busy with Tahoe-LAFS, $dayjob, and so on at the moment. :-)
Posted Feb 15, 2011 17:27 UTC (Tue) by paulj (subscriber, #341)
Have a look at Radia Perlman's byzantine-robust protocol work: her PhD thesis, "Network layer protocols with byzantine robustness", MIT, 1988, http://hdl.handle.net/1721.1/14403; and "Routing with Byzantine robustness", Sun tech report TR-2005-146, 2005. The latter is a brief description of the original, with extensions, but I don't have a URL to hand.
Posted Feb 15, 2011 21:38 UTC (Tue) by tpo (subscriber, #25713)