
On technological liberty

By Jake Edge
April 24, 2019

In his keynote at the 2019 Legal and Licensing Workshop (LLW), longtime workshop participant Andrew Wilson looked at the past, but he went much further back than, say, the history of free software—or even computers. His talk looked at technological liberty in the context of classical liberal philosophic thinking. He mapped some of that thinking to the world of free and open-source software (FOSS) and to some other areas where our liberties are under attack.

He began by showing a video of the band "Tears for Fears" playing their 1985 hit song "Everybody wants to rule the world", though audio problems made it impossible to actually hear the song; calls for Wilson to sing it himself were shot down, perhaps sadly, though he and the audience did give the chorus a whirl. In 1985, the band members were young and so was open source, he said. But there were new digital synthesizers available, with an open standard (MIDI) that allowed these instruments to talk to one another. It freed musicians from the need for expensive studio time, since they could write and polish their music anywhere: a great example of technological freedom.

[Andrew Wilson]

They were singing about freedom, he said, and how fragile it is. It is a political song that describes the threats to freedom if people are inattentive.

His talk would look both backward and forward, he said, taking the novel approach, perhaps, of viewing software freedom through the lens of the thinking of three philosophers. He is "standing on the shoulders of the proverbial giants": John Stuart Mill, Isaiah Berlin, and Erich Fromm. Those three were "brutally intellectually honest" and he would do the same, he said. These thinkers all used the terms "freedom" and "liberty" interchangeably, so he would follow their lead.

His proposition in the talk is that FOSS embodies classical liberalism. The three giants had powerful ideas; from a 30,000-foot view, Mill was concerned with the sovereignty of the individual in their person and their mind; Berlin wrote about positive and negative liberty, which look at "who governs me?" and "how am I governed?"; and Fromm discussed the psycho-social barriers to actualizing liberty. FOSS owes part of its success to these time-tested principles from classical liberal thought, he said.

Copyleft and permissive licensing are both valid and, in fact, necessary tools of software freedom. They represent the ideas of positive and negative liberty. The FOSS licensing model is a deep concept that is implemented narrowly. There are so many threats to freedom in our world, and to technological liberty, that cannot be addressed by copyright licensing alone; more tools are required.

Three giants

Mill was probably not much fun at parties, Wilson said, since he spent much of his time contemplating the errors of the human race. Mill is the father of political liberalism and his book, On Liberty, gives the underpinnings of that philosophy: "Over himself, over his own body and mind, the individual is sovereign." He believed in a sharp dichotomy between public and private life; Mill was a libertarian, not an anarchist or socialist. He believed in regulation, but only in the public sphere, not on private life, and only to preserve the rights of other individuals.

He also believed that no intellectual argument is fully resolved. In the mid-1850s, earth-shaking discoveries were regularly being made that were upending conventional thinking. Mill fought against the "tyranny of the majority"; he believed that the majority opinion contained flaws and that the minority opinion had elements of truth. Anyone claiming a 100% solution was claiming "infallibility"; that term was a reference to the Pope, which would have been a "mortal insult" to Englishmen at that time, Wilson said.

Wilson has no doubt that Mill would be a "formidable advocate" for free software and free culture if he were alive today. He would particularly like the "right to fork" since it provides effective protection against the tyranny of the majority.

Skipping ahead a century, Wilson turned to Berlin, whose family fled the Baltic states during the Russian revolution and who was eventually knighted by Queen Elizabeth. Berlin has two concepts of liberty, negative and positive. If the answer to the question "by whom am I governed?" for a particular area is only the individual, they are experiencing negative liberty; negative "as in a no-fly zone", Wilson said. If there are constraints from the outside, that individual is experiencing positive liberty.

For example, the internet protocols are not governed by any entity—people can use them as they see fit—which means they are experiencing negative liberty in that realm. It also means that people can use those protocols for good or bad (e.g. human trafficking). On the other hand, positive liberty can degenerate into paternalism, where restrictions are placed "for your own good". Both facets can cause problems and the balance between them shifts over time as concerns over bad actors versus paternalism wax and wane.

These concepts map directly to the differences between permissive licenses and copyleft, Wilson said. Permissive licenses are almost purely negative liberty, but not quite; adding things like defensive patent clauses adds more positive liberty into the mix. Copyleft adds even more positive liberty to prevent the hoarding of the source code. It is interesting to note that 100% negative liberty, that is the public domain, is not considered FOSS. Another thing to consider is that the GPL allows adding negative liberty in the form of additional permissions, but reserves the positive liberty additions for itself.

Thus, there is no war between copyleft and permissive licenses, both are needed, he said. There is some sibling rivalry between them, but no war. No "Sophie's choice" is necessary or desirable to permanently choose between them; it is not really meaningful to think of choosing only one of the two. "If you go down to the deep philosophical roots of open source, we need both."

He then moved on to Fromm, who is not really a philosopher but is, instead, a social psychologist. Fromm believes that ideas can become powerful forces, but only if they answer specific human needs in a social context; it requires some group of people to buy into the idea. He is also a pragmatist: freedom is only real when it is exercised. Theoretical freedom is not a freedom at all.

FOSS has attracted an identifiable "psycho-social group": software developers. It is a movement that resonates with certain kinds of people who lean toward "nerdiness", Wilson said, self-identifying as a nerd in order to be able to make that observation. Nerdiness can be a solitary pursuit, but those who buy into FOSS can gain some concrete benefits, including exercising personal creativity, gaining status and recognition with the movement, and even potentially establishing a successful career.

Wilson said that he would have liked to also talk about some of the ideas of Martin Heidegger, who had important things to say about technology. But Heidegger was a Nazi, so Wilson did not feel that he could present those ideas in a talk about freedom.

Today

It should not come as a surprise to anyone, Wilson said, that liberal democracy is under siege—again. All over the world we are seeing this, from right-wing populists attacking any form of perceived "elitism" to left-wing radicals who claim that nothing of value can be learned from dead white men. There are also the masses who don't engage on any thoughtful level at all; they do not educate themselves about what is going on in their countries or the world.

Beyond that, privacy is under siege. We are beset by obsessive data gathering by businesses whose models are built on monetizing everything they collect about us. He suggested that attendees look up Stingray devices if they have not heard of these cell-phone surveillance tools.

And truth is under attack as well, sometimes from surprising sources. There is intentional cheating, such as in the Volkswagen emissions scandal, but there is also "fake news", of course. Beyond that, there is p-hacking, which is not that well known outside of the social sciences; researchers have been losing their careers for intentionally biasing the results of their studies. Medical scientists are publishing non-reproducible research in what is supposed to be the gold standard: peer-reviewed medical journals. There is a "rush to publish", which is understandable on some level, but, as a patient, he is not terribly excited by being treated with non-reproducible medicine. And on and on.

So, he asked, "is Mill still alive?" Is there still a separation between private and public lives? He respectfully disagrees with Scott McNealy, who famously said: "Privacy is dead. Get over it." Wilson is in the "cautious yes" camp with regard to Mill's ideas still being valid today. He believes that individuals are still sovereign, but only over their physical self and their own mind. Privacy does not extend into cyberspace, which is part of the public life of an individual.

Other communities

How can the free-software movement engage with other communities such as those in the free-culture world, he wondered. They are potential allies, but we should not insist they adopt our methods. We have had lots of success with FOSS, but that does not mean other communities must copy us exactly. They can learn from what we have done, but we must be respectful as we proselytize; different things will resonate with groups that are not made up of software developers. We should work with those communities with humility and understanding, he said.

There is the ancient cliche that if the only tool you have is a hammer, everything looks like a nail. Copyright licensing has been our hammer, Wilson said. But many of the problems in learning, privacy, and truth are outside the reach of copyright licensing. That will not solve the problem of social scientists twisting their research, for example.

But we have another tool, Wilson said: the open-source development model. "This is one of our great gifts to the world." That model provides ways to track contributions, signoffs, and approvals via Git metadata. There is also a human-readable discussion list that explains why certain design decisions were made; "this is knowledge and it is trackable". Perhaps coupling that with the blockchain would create something that is "legally admissible" and tamper resistant. Getting others "spun up" on this kind of model might be the "biggest gift of all".
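
As a rough illustration of the kind of Git metadata Wilson was referring to, here is a minimal sketch (mine, not from the talk) that tallies Signed-off-by trailers in a repository's history. It assumes a kernel-style workflow that records signoffs as commit trailers, a git binary on the PATH, and Python 3.7 or later.

    #!/usr/bin/env python3
    # Hypothetical sketch: count Signed-off-by trailers in Git history to show
    # the sort of contribution/approval metadata the development model records.
    # Assumes the current (or given) directory is a Git repository whose
    # commits carry kernel-style Signed-off-by trailers.
    import collections
    import subprocess

    def signoff_counts(repo="."):
        # Ask git to print only the Signed-off-by trailer values, one per line.
        log = subprocess.run(
            ["git", "-C", repo, "log",
             "--format=%(trailers:key=Signed-off-by,valueonly)"],
            capture_output=True, text=True, check=True)
        return collections.Counter(
            line.strip() for line in log.stdout.splitlines() if line.strip())

    if __name__ == "__main__":
        for person, count in signoff_counts().most_common(10):
            print(f"{count:6}  {person}")

Run against a project that uses signoffs, it gives a rough picture of who has been approving what; the discussion list then supplies the "why" behind those decisions.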

Going forward

He is not Moses, who had ten (commandments), nor Richard Stallman, who had four (freedoms), but he has two ideas to start a discussion on a broader definition for technological liberty. The first is "freedom from technology". This is based on "deep humanism", he said; it is the idea that "humans must be in charge". If we ever lose that control, our individual liberty is gone.

He gave a "horrible example" from recent news: the Boeing 737 MAX airplane has technology that can't easily be overridden by humans, which apparently led to two separate crashes killing all aboard. "Off" must mean off, not just "slightly less on". That is "freedom zero".

His second thought is about "ethical technology". There are three pillars to that, he said. The first is "explicability"; if no human can understand what the machine is doing, then the machine is in charge. There must be a human-understandable description of how the machine makes its decisions. "Demonstrability": there must be some form of test that can be repeated that shows that the technology works. "If you can't demonstrate it, then it's not ethical." Lastly, there must be a human or other entity "that takes responsibility for the technology and can fix it when it breaks".

Open source meets the first two criteria for ethical technology, Wilson said, but falls down on the third. "The world is littered with the corpses of dead open-source projects."

We have done great things in 30 years, he said. He wishes he could be here in another 30 to see where things go, but that seems unlikely to him. His hope is that organizations with our "same ideological beliefs" will exist and that we, collectively, will have made great strides toward a technological landscape that is humanist.

[I would like to thank the FSFE and the LLW Diamond sponsors, Intel, the Linux Foundation, and Red Hat, for their travel assistance to Barcelona for the conference.]


Index entries for this article
Conference: Free Software Legal & Licensing Workshop/2019


On technological liberty

Posted Apr 24, 2019 21:41 UTC (Wed) by nix (subscriber, #2304) [Link] (4 responses)

That sounds like an utterly fascinating talk. Thanks for the writeup!

(But no, blockchain won't solve this, even if it were legally admissible, which it more or less isn't. Even ignoring its enormous running costs, it is very vulnerable to centralization, and once a single clique exceeds 50%, even for a moment, tamper-resistance is lost.)

On technological liberty

Posted Apr 27, 2019 8:55 UTC (Sat) by jezuch (subscriber, #52988) [Link] (3 responses)

What you're talking about is "blockchain as in bitcoin" and I'm pretty sure this is not the only way to create a blockchain even if all the other "coins" parrot its wasteful election protocol. I believe that another is... git.

On technological liberty

Posted Apr 30, 2019 14:58 UTC (Tue) by nix (subscriber, #2304) [Link] (2 responses)

Distributed systems based on a Merkle hash are not all blockchains. git has no fixed block sizes, no regular every-$interval additions to the chain, routine long-term forking, and no proof-of-*anything* to allow additions to the consensus chain (indeed, it may have no consensus chain at all, though most projects do).

Most of these things are considered essential if a blockchain of whatever sort is not to be just another word for "slow and inefficient database". So git is not a blockchain, though it *is* an existence proof that the useful parts of blockchains are not original :)

On technological liberty

Posted May 1, 2019 7:15 UTC (Wed) by raof (subscriber, #57409) [Link] (1 responses)

> Most of these things are considered essential if a blockchain of whatever sort is not to be just another word for "slow and inefficient database".

Wait - blockchain is *not* just another word for “slow and inefficient database”? ☺

On technological liberty

Posted May 2, 2019 13:45 UTC (Thu) by nix (subscriber, #2304) [Link]

Yeah, I think I lost track of that sentence somewhere in the middle :)

On technological liberty

Posted Apr 24, 2019 22:36 UTC (Wed) by nilsmeyer (guest, #122604) [Link]

> It is a movement that resonates with certain kinds of people who lean toward "nerdiness", Wilson said, self-identifying as a nerd in order to be able to make that observation. Nerdiness can be a solitary pursuit, but those who buy into FOSS can gain some concrete benefits, including exercising personal creativity, gaining status and recognition with the movement, and even potentially establishing a successful career.

It's still an incredibly negative connotation, so I don't think people should keep using that word and expect any respect as a social movement outside of a narrow niche of technology. Calling people nerds is a form of anti-intellectualism that the victims have actually bought into. You can of course then talk about (actual) liberalism, but don't expect anyone to take you seriously when you have been pigeonholed as someone who only knows things about technology or a singular area of (theoretical) science. And don't be surprised when experts are being ignored at every turn by people who think themselves their betters when you keep on self-deprecating and apologizing for your intellect.

On technological liberty

Posted Apr 25, 2019 6:56 UTC (Thu) by halla (subscriber, #14185) [Link] (1 responses)

"His proposition in the talk is that FOSS embodies classical liberalism."

Not for me. I don't stand in the grand old tradition of begging the King of England to pretty please not steal the money the merchants have made. Vide Adam Smith.

I am a socialist: I create free software so I can disrupt capital and give labour ownership of the tools of production.

" FOSS has attracted an identifiable "psycho-social group""

What an asinine thing to claim. No such thing as a "psycho-social group" exists. It's a meaningless phrase.

" to left-wing radicals who claim that nothing of value can be learned from dead white men"

Which is also a stupid claim: nothing like that exists in the real world. Nobody is claiming anything like that, but we're not living in the age of Kipling and Oppenheim anymore.

This thread is going to be a complete wasteland, of course, but that's what you get when you cover sheer idiocy like this presentation seems to have been.

On technological liberty

Posted Apr 25, 2019 11:48 UTC (Thu) by mbg (subscriber, #4940) [Link]

It's clear there's a strong left libertarian/anarchist strain in free software ideas too, which I would place in a different tradition entirely. But I guess the speaker is not a fan of RMS.

Overall it seems the talk was a little politically incoherent; I think Fromm was influenced by Marxist thought rather than classical liberalism.

On technological liberty

Posted Apr 25, 2019 8:46 UTC (Thu) by shiftee (subscriber, #110711) [Link] (1 responses)

"It is interesting to note that 100% negative liberty, that is the public domain, is not considered FOSS."

Is public domain software not considered FOSS?
Does it not have the four freedoms?

Public Domain

Posted Apr 27, 2019 13:36 UTC (Sat) by smurf (subscriber, #17840) [Link]

https://opensource.org/faq#public-domain

Basically you can't disavow being the copyright holder in all jurisdictions, and in those where you can't, what rights does a non-license convey if it doesn't legally exist?

This is a surprisingly thorny question, legally speaking. My recommendation is: if you want permissiveness, then use Apache or something MIT-ish (and if not, use the GPL).

On technological liberty

Posted Apr 25, 2019 12:47 UTC (Thu) by excors (subscriber, #95769) [Link] (15 responses)

> The first is "freedom from technology". This is based on "deep humanism", he said; it is the idea that "humans must be in charge". If we ever lose that control, our individual liberty is gone.
>
> He gave a "horrible example" from recent news: the Boeing 737 MAX airplane has technology that can't easily be overridden by humans, which apparently led to two separate crashes killing all aboard.

I don't really understand that example. Humans *were* in charge of the plane's behaviour. Specifically, the humans at Boeing who designed a system to reduce the risk of the pilot accidentally stalling the plane. (That system had some unintended side-effects, but being in charge doesn't mean the thing you're in charge of does precisely what you expect.)

I assume the suggestion is that the pilot should be more in charge than the plane's designers. But why? What principle is used to decide between them?

If it's because the pilot has more at stake (i.e. their life): so do the passengers, and they obviously shouldn't be in charge of the plane.

If it's because the pilot has better data to make real-time decisions, and e.g. won't get so confused by a single broken AoA sensor: that seems false. There are plenty of fatal air crashes where the pilot missed very basic information the plane was trying to show them, because they got overwhelmed in an emergency situation. A designer sitting in a comfortable office has a much better ability to consider all the available data and work out the correct response and encode it as an algorithm. It won't be perfect, but on average it'll probably do a better job than the pilot. Planes are getting more complex and more automated over time, and accident rates are continually going down. And it's still a human in charge of those decisions, even if that human is many miles and many years away from those decisions being put into practice.

"Individual liberty" seems kind of meaningless in something as incredibly complicated as a plane flight. That flight is the result of collaboration between millions of people over decades; it can't be reduced to one individual.

On technological liberty

Posted Apr 25, 2019 13:30 UTC (Thu) by kmweber (guest, #114635) [Link]

Pretending that incredibly complicated things are a lot simpler than they actually are is par for the course for these people. It's just intellectual laziness, one with nightmarish results.

On technological liberty

Posted Apr 25, 2019 15:48 UTC (Thu) by ededu (guest, #64107) [Link] (9 responses)

> I assume the suggestion is that the pilot should be more in charge than the plane's designers. But why? What principle is used to decide between them?
>
> If it's because the pilot has more at stake (i.e. their life): so do the passengers, and they obviously shouldn't be in charge of the plane.
>
> If it's because the pilot has better data to make real-time decisions, and e.g. won't get so confused by a single broken AoA sensor: that seems false.

I would say because the pilots are the users. The principle stated by the presenter is to give control to the users (who use the plane), whereas the designers (who built the plane) only *help* them.

On technological liberty

Posted Apr 25, 2019 17:28 UTC (Thu) by excors (subscriber, #95769) [Link] (3 responses)

In that case I'd think the passengers are the closest thing to "users". They're the ones making use of the plane to achieve some external goal, and paying for the benefit it provides to them. But if a passenger tries to take charge of the plane, they get called a hijacker.

More accurately they're users of a system that consists of the plane itself, the pilot, the other crew, air traffic control, airline ground staff, etc. The pilot is just one of that system's components, who is using some other components while being used by yet others. The pilot doesn't deserve the special status of "the user" since they're not at the top of the hierarchy; there is no hierarchy, just a complex mesh of organic and technological components.

Because it's a complex system, safety needs to be considered systemically and can't be simply deferred to the pilot. Maybe it's safer to give more direct control of the plane to the pilot (sometimes they will override a failed automated system), maybe it's safer to take away control (sometimes they crash through malice or incompetence) - that can be evaluated on a case-by-case basis in terms of the universally-accepted goal of improving safety. It can't be evaluated in terms of liberty or of giving humans control over technology - there are far too many humans involved to meaningfully decide whose liberty and whose control should be prioritised - so they don't seem like useful ways of looking at the problem. (And that makes me doubt their usefulness for any moderately complex problem.)

On technological liberty

Posted Apr 25, 2019 18:20 UTC (Thu) by ededu (guest, #64107) [Link] (1 responses)

Well, there are indeed several/numerous people involved in that flight, and no hierarchy, but each of them has his distinct role, and has to be in control of his job.

In this specific case, the plane itself was not doing well, and the user whose job is to pilot it is the pilot (not the passengers, nor the ground staff); the technology (i.e. the plane designers) should only help him make the decision.

Alternatively, if the problem was that someone A from tower control told the pilot "go up", and the communication system modified this into "go down", then again there would be a problem: the system decided instead of A. In this case, the user is A, and he must have control over the system (its designers included).

If the problem was that the steward put coffee in some passenger's cup, but the machine changed it to something else (because it thought it was better), there would be a problem again: the user is the steward, etc.

And generally, the principle is that each system should have a human controlling it, with the system only helping/advising him. By the way, I wonder if there is a system without a human fully controlling it.

On technological liberty

Posted Apr 25, 2019 20:27 UTC (Thu) by somlo (subscriber, #92421) [Link]

I think this is an illustration of the principle of separation of mechanism and policy (https://en.wikipedia.org/wiki/Separation_of_mechanism_and...). The designers and builders of the plane provide the mechanism. I'd say the pilot is the only one that should be allowed to set policy. If the mechanism interferes with the ability of the pilot to set policy, then the mechanism is clearly broken.

On technological liberty

Posted Apr 26, 2019 11:51 UTC (Fri) by nix (subscriber, #2304) [Link]

> that can be evaluated on a case-by-case basis in terms of the universally-accepted goal of improving safety

Except that of course safety is not the only goal, even if it is universally accepted as an important one. It is always safer to not fly and just not travel to where you wanted to go than it is to get into an aircraft and fly there. Safety is an important goal, but the goal of getting people to their destinations is clearly valued by the aviation industry and its users as well, and highly -- safety-related no-flys hardly ever happen, and even then never last for long unless they affect only one aircraft type, etc.

On technological liberty

Posted Apr 25, 2019 18:31 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (4 responses)

Giving control to users does not work for safety-assistance systems like the MCAS.

They exist because computers can react infinitely faster than humans, and keep cool under stress. (The recordings of the Paris-Rio crash, where the safety systems did detect sensor failure and did give back control to the human pilots, who then proceeded to crash the plane from high altitude, should be a cautionary tale.)

The basic problem with the MCAS is not its piss-poor implementation (and I include the giving-back-control-to-pilots-or-not part of the implementation); it's that it is used to lie about the plane's actual flight envelope, much like the dieselgate software was used to lie about actual engine emissions.

Software is a hardware assist; it cannot change actual hardware properties.

On technological liberty

Posted Apr 25, 2019 19:15 UTC (Thu) by antiphase (subscriber, #111993) [Link] (3 responses)

MCAS is not directly a safety system, though it may contribute to the pilot's ability to prevent a stall by changing the response of the aircraft to control inputs.

This is the irony; these aircraft and their passengers were destroyed by something which isn't even important to the stability of the aircraft - many people seem to be erroneously parroting that this airframe is in some way "dynamically unstable" and that MCAS is required to keep it in the air under some circumstances, which is just plain wrong.

On technological liberty

Posted Apr 28, 2019 9:35 UTC (Sun) by nim-nim (subscriber, #34454) [Link] (2 responses)

Human pilots are perfectly able to get a plane into a stall by applying too much power to the engines on lift off/down, even without giving them a badly balanced plane to start with (see the AF 206 crash, for example). People are much too quick to move from "MCAS implementation was horrible" (true) to "MCAS was not needed in the first place" (false; pilots are not superhumans, and if you want to play at raw airplane control, do it on your private time, with a plane small enough to react quickly, and not one full of passengers). The two planes crashed in good weather conditions; do you want to see what happens when one of them hits very bad weather?

The MCAS concept is sound. The implementation is not. The best MCAS implementation, or pilot, cannot make a badly balanced body correctly balanced. Boeing had 15 years to produce this body, since the A320neo, which newspapers say Boeing went into a panic about, is itself a response to the Bombardier C Series program, which started in 2004.

Airbus’ response to the C Series was to design the neo. Boeing’s was to ask its lobbyists to ban the plane from North America. End result: Airbus has two working, efficient narrow-body programs today; Boeing has a frankenplane and people debating whether the various kludges that went into its assembly make it more or less viable.

On technological liberty

Posted Apr 28, 2019 13:41 UTC (Sun) by farnz (subscriber, #17727) [Link] (1 responses)

It's also connected to trying to avoid retraining pilots; the 737 MAX is, for regulatory purposes, just another 737 variant, and as such a pilot who obtained type-certification on a 737 in 1968 and has maintained it since is permitted to fly the 737 MAX with no further qualifications. As the 737 is the only narrow body aircraft Boeing sell, this means that a discount airline using 737 family planes only needs its pilots to get one set of certifications to fly any plane in the fleet, from an original 737-200 that's now coming up to 6 decades old, through to a brand new 737 MAX.

This amounts to brand lock-in; the A320 family (single type for regulatory purposes, covering A318/A319/A320/A321 and neo variants) means retraining your pilots, as does the A220 family (former Bombardier C Series being sold under the Airbus brand). If Boeing had produced a new narrow body aircraft type, instead of adding a new variant to the 737 family, then their potential customers would not be seeing potential savings from not retraining; as the A320neo was a strong enough competitor to overcome airlines' reluctance to retrain pilots for new types, Boeing had to either work out how to make the 737 continue to work for the airlines, or had to produce a sufficiently better product that airlines would preferentially retrain people for the new Boeing instead of going for an alternative type from Airbus, Bombardier, or other manufacturer (Comac, Embraer, Irkut and Tupolev all have models targeting the 737's niche).

On technological liberty

Posted Apr 30, 2019 11:33 UTC (Tue) by NRArnot (subscriber, #3033) [Link]

Amusing. The only country left with competition between two indigenous narrow-body airliner manufacturers is ... Russia!

On technological liberty

Posted Apr 25, 2019 18:49 UTC (Thu) by flussence (guest, #85566) [Link] (2 responses)

> I don't really understand that example.
The fly-by-wire system physically overpowered the human holding the flight stick. Other planes don't do that.

> I assume the suggestion is that the pilot should be more in charge than the plane's designers. But why? What principle is used to decide between them?
Presumably the principle that it's the pilot's job to land the plane in such a way that all occupants walk away from the landing, with the requisite thousands of hours of training that job entails. The pilot should have been allowed to do his job.

On technological liberty

Posted May 2, 2019 8:26 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

The problem is that all of these situations need resolving on a case-by-case basis ...

> > I don't really understand that example.

> The fly-by-wire system physically overpowered the human holding the flight stick. Other planes don't do that.

You mean like the Paris Rio crash? Where the plane (an Airbus I believe) did not do what the "pilot in charge" told it to do?

> > I assume the suggestion is that the pilot should be more in charge than the plane's designers. But why? What principle is used to decide between them?

> Presumably the principle that it's the pilot's job to land the plane in such a way that all occupants walk away from the landing, with the requisite thousands of hours of training that job entails. The pilot should have been allowed to do his job.

I think you have a very rose-tinted view of pilot experience. Most of today's pilots probably have very LITTLE experience - airlines do not like pilots flying planes, and usually George is in command.

The least-experienced pilot in the Paris Rio crash had, I believe, about 4000 hours. And that inexperience was a direct causative factor in the crash.

Contrast that with "Miracle on the Hudson" where - as I understand it - the pilot probably had *more* "pilot in charge" hours in a glider on his own dollar than he had as an airline pilot being paid to fly. That experience saved the plane when it suddenly turned into a large, and expensive, glider.

And then let's look at the crash in Europe a year or so ago, where the pilot deliberately did a "controlled descent into terrain", killing everyone on board somewhere, iirc, in the Pyrenees.

People like simple answers. Unfortunately, in reality, simple answers don't exist.

Cheers,
Wol

On technological liberty

Posted May 3, 2019 1:35 UTC (Fri) by flussence (guest, #85566) [Link]

My main point there could have been better worded, but in *this case* we're talking about the 737-MAX crashes, and it's already known why it went wrong. To restate it more thoroughly:

737 pilots are trained and certified *once* to fly *any* plane in the 737 line, which is half a century old. This is because, as should now be blatantly obvious from two crashes in quick succession, Boeing are corner-cutting greedheads, and they pretend every variant is close enough to the previous in order to sell more of them to similarly-minded airlines looking to cut costs (i.e. lower their pilots' job security and salary by commoditizing them). Until now they mostly got away with it, because other planes in the line handled similarly despite stretching the design further and further from what the original certifications were intended to cover.

This time that technical debt caught up with them, because the 737-MAX is a frankensteined-together assembly of an old fuselage, modern engines that are so much larger they have to be mounted in front of the wing instead of under, and software that's _supposed to_ hide the resulting change in handling from the user.

That software has to be *very insistent* that it's right, because if it allowed the pilots to fly this thing like a 737 - throttle it up and down like how they're used to - _they_ would've been the ones to crash it. But the software was broken. The pilots would have been able to look out the window and see it was wrong, but they could not override its faulty decision-making.

Because the software gaslights its users by design — as a workaround for them being misled about this being a plane they were capable of flying — this happened.

Several people failed to uphold their responsibility: Boeing management (and it's their heads that should roll for it), their engineers and developers (whether getting the thing to fly safely or whistleblowing that it can't), and the FAA, in making sure a certification is applicable to the vehicle. It was also in the news that some safety mechanisms for this, an aeroplane with several hundred person capacity, were being sold as optional extras. Would that have saved them? Who knows. By the time it reached the assembly line, the pilots were already prevented from doing their job - even in an emergency, which is a large part of the reason they're there.

The main point of my first post is that humans are *required* to be kept in the loop when — not if — the machine cocks up. Anyone who's been on the wrong end of someone else's faulty software (say, Google) should already understand why. The secondary point is that handwaving that away was such a crass viewpoint it didn't merit a full response, but whatever.

On technological liberty

Posted Apr 26, 2019 9:03 UTC (Fri) by nilsmeyer (guest, #122604) [Link]

> "Individual liberty" seems kind of meaningless in something as incredibly complicated as a plane flight.

The whole airport experience should already disabuse you of the notion that you enjoy any liberty, rights or dignity.

On technological liberty

Posted Apr 26, 2019 6:51 UTC (Fri) by LtWorf (subscriber, #124958) [Link]

> Thus, there is no war between copyleft and permissive licenses, both are needed, he said.

In my view, if you are not releasing software to make money, copyleft makes the most sense. Everyone can use it and companies can't improve on it and release their own proprietary version, so your software gets more contributions.

You might get less users but who cares about users that don't contribute back? My ego is big enough as it is, really.

Permissive licenses would be great if everyone played fair, but it's not the case. And it is the same reason why the capitalistic ideal doesn't work and you need to regulate the market.

The view from academic philosophy

Posted Apr 26, 2019 23:39 UTC (Fri) by spwhitton (subscriber, #71678) [Link] (3 responses)

I'm a Philosophy Ph.D. student in a department which is strong in political philosophy, and I've always been disappointed by the lack of engagement with the Free Software movement on the part of political philosophers. So I read this article with interest. (My own view is that one of the fascinating things about Free Software, as a political position, is that both libertarians and socialists have reasons to endorse its foundational principles.)

A few things stood out for a note of caution:

> He believed in a sharp dichotomy between public and private life; Mill was a libertarian, not an anarchist or socialist.

Scholars have argued that Mill is in fact a socialist, and it might even be the standard view at this point. The interpretive question is certainly not as simple as suggested by the quoted sentence. I'm afraid I don't have references to hand.

The term 'libertarianism' usually refers to an economic view about public life, rather than a view about the distinction between public and private lives. The sharp dichotomy is certainly not something that anarchists and socialists have to reject. The view that there's a sharp dichotomy is usually called 'liberalism' rather than 'libertarianism', at least within academic philosophy.

> Permissive licenses are almost purely negative liberty, but not quite; adding things like defensive patent clauses adds more positive liberty into the mix. Copyleft adds even more positive liberty to prevent the hoarding of the source code.

This is an interesting idea but I don't think enough has been said about the distinction between positive and negative liberty for the reader to see exactly what is being claimed. Possibly it just went by too fast in the talk.

The view from academic philosophy

Posted Apr 29, 2019 15:31 UTC (Mon) by nilsmeyer (guest, #122604) [Link] (2 responses)

> Scholars have argued that Mill is in fact a socialist, and it might even be the standard view at this point.

What is the standard definition of a socialist in that context? I find these labels usually pretty meaningless.

The view from academic philosophy

Posted Apr 29, 2019 19:18 UTC (Mon) by spwhitton (subscriber, #71678) [Link] (1 responses)

I meant to contrast libertarianism and socialism as economic positions. Libertarianism means a very minimal state, while socialism wants at least some substantial state intervention in markets.

The view from academic philosophy

Posted May 2, 2019 8:36 UTC (Thu) by Wol (subscriber, #4433) [Link]

Libertarianism, as I understand it, places the emphasis on *your* freedom to do what you want, provided it doesn't harm me.

Socialism, on the other hand, restricts *my* freedom, to benefit *everyone*.

Health care is a very good example. In the US, it is libertarian: you can spend what you like (or can afford) on it. That - allegedly - doesn't harm me.

In the UK we have the NHS. That provides health care to everyone on the basis that (a) it's cheaper that way, and (b) providing Public Health means for example that *you* benefit when *my* kids are vaccinated.

The problem, as always, comes when people try to force one solution on all problems, and at the moment I feel that the American Philosophy is forcing Libertarian solutions to problems that can only be solved by a Socialist approach. Public Service in the UK is collapsing as companies take on outsourcing contracts, milk the profits, then hand the wreckage back to the Public Purse. But, and again I feel the NHS is a good example, we've long suffered by telling people they CAN'T have private health care, with the result that peoples' lives and careers have been ruined by a long wait, when if they'd been treated quickly it would have been far more cost-effective.

Cheers,
Wol

