Leading items
LGM: Collaboration and animation
There were no major new software releases at this year's Libre Graphics Meeting (LGM) in Madrid, although there were updates from most of the well-known open source graphics and design projects. But the annual gathering also offers an instructive look at where development is heading, based both on the new problems that programmers are tackling and on what creative professionals report as their areas of concern. This year, two themes recurred in many of the sessions: collaboration (often in real time) is coming to many creative-suite tools, and 2D animation is gaining momentum.
LGM was hosted at Medialab Prado, a brand-new facility in central Madrid, from April 10 through 13. As is typical of the event, talks were divided among software development teams, artists, educators, and theorists exploring more abstract topics like consensus-building in communities. By coincidence, the best-known application projects from the graphics community (GIMP, Krita, Inkscape, Blender, Scribus, MyPaint) all happen to be between stable releases, but there was still plenty to discuss. The conference runs a single track for all sessions (plus side meetings and workshops); altogether there were 68 presentations this year, representing a broad sample of the open source graphics and design community.
Join together
One oft-repeated subject this year was development work to enable collaboration. Ben Martin of FontForge showcased the font editor's recently added ability to share a simultaneous editing session between multiple users. The work was underwritten by Dave Crossland, principally for use in type-design workshops, although Martin pointed out that it has other applications as well, such as running task-specific utilities in the background (for example, ttfautohint), or tweaking a font in the editor while showing a live preview of the result in a typesetting program.
It is sometimes assumed that collaborative editing requires deep changes to an application, but the FontForge implementation touches relatively little of the existing codebase. It hooks into the existing undo/redo system, which already handles serializing and recording operations. The collaboration feature is enabled by the first user session starting a server process; subsequent client sessions connect to that server. The server sends a copy of the open file to each new client; afterward, any change on any client is relayed to all of the others. The code assumes a low-latency network at present, which simplifies synchronization, but the ZeroMQ library that handles the underlying networking is capable of running over almost any channel (including TLS and other secure options).
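The relay pattern itself does not need much code. Here is a deliberately minimal sketch of such a design using ZeroMQ's Python bindings (pyzmq); it is not FontForge's actual implementation, and the socket layout, port numbers, and message format are invented for illustration:

    # Minimal relay sketch, loosely modeled on the design described above.
    # Not FontForge's code; ports and message format are invented.
    import zmq

    def run_server():
        ctx = zmq.Context()
        incoming = ctx.socket(zmq.PULL)    # clients push their edits here
        incoming.bind("tcp://*:5556")
        outgoing = ctx.socket(zmq.PUB)     # edits are rebroadcast to every client
        outgoing.bind("tcp://*:5557")
        while True:
            change = incoming.recv()       # one serialized undo/redo operation
            outgoing.send(change)          # relay it to all connected sessions

    def run_client(on_remote_change):
        ctx = zmq.Context()
        push = ctx.socket(zmq.PUSH)        # send local edits to the server
        push.connect("tcp://localhost:5556")
        sub = ctx.socket(zmq.SUB)          # receive edits from the server
        sub.connect("tcp://localhost:5557")
        sub.setsockopt(zmq.SUBSCRIBE, b"")
        # After each local edit the client would call push.send() with the
        # serialized operation; a real implementation would also filter out
        # the echo of its own edits arriving back on the SUB socket.
        while True:
            on_remote_change(sub.recv())   # apply a remote edit locally

As Martin's talk made clear, the transport is the easy part; the hard work is deciding what counts as a single relayable operation in the undo/redo stack.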
Though the overall design of the collaboration feature is straightforward, there are several challenges. First, undo/redo does not always correspond to a single action that needs to be relayed. There are no-ops, such as selection and deselection, which do not alter the file but still need to be tracked locally. FontForge also has seven different internal flavors of "undo" to handle special cases—such as when moving a spline point affects the total width of a glyph. When that happens, the user sees it only as moving a point on the canvas, but fonts store the width and side-bearings of a glyph as separate data. So one operation affects several parts of the file, but must still be undoable as a single operation. And users must be able to use the existing Undo and Redo commands from the menu, so the local undo/redo stack must be tracked separately from the collaboration stack.
There are also some editing tools that have not yet been hooked into the
collaboration feature, as well as non-canvas editing tasks like adding
layers or editing metadata tables. In addition, at the moment only
FontForge's native SFD file format is supported. But Martin argued
that more open source graphics applications ought to pursue real-time
collaboration features. Proprietary applications by and large do not
offer it, but with a robust undo/redo stack, implementation using
ZeroMQ is within reach.
The 2D animation studio Tupi also supports real-time collaborative editing, as Gustav Gonzalez explained. Gonzalez's talk did not go into as much detail as the FontForge talk did, but that was in part because the Tupi team was making its first LGM appearance and had far more ground to cover. Still, Gonzalez told the audience that nothing would prepare them for the strangeness of collaborative editing, when the elements in a video frame suddenly start to move without their intervention.
Real-time editing, with multiple users altering one file at the same time, certainly has its uses; Gonzalez observed that animation projects deal with far more data than still-image editing does, seeing as there are dozens of frames per second to worry about. But enabling better collaboration between teams working asynchronously came up in multiple presentations, too. Julien Deswaef presented a session on using distributed version control in design projects. Deswaef noted that many of the file formats that graphic designers use today are text-based, including SVG, DXF, OBJ, and STL, and that designers are often accustomed to using version control on web-design projects.
But while version control is quite easy to get started with when developing a web site based on Bootstrap.js or another GitHub-hosted framework, Git support is not integrated into most desktop tools. Consequently, Deswaef has started his own branch of Inkscape that features Git integration. The basic commands are in place, such as init, commit, and branch, but he is still working on the more complicated ones, such as rolling back.
The "holy grail," he said, is a "visual diff" that will allow users to highlight changes in a file. How best to implement visual diff in the interface is an ongoing debate in design circles, he said. Currently Adobe offers only screenshot-like snapshots of files in its versioning system, which is not a solution popular with many users. SVG's XML underpinnings ought to allow Inkscape to do better, perhaps via CSS styling, he said. Ultimately, not every file format would integrate well with Git (raster formats in particular), but illustrators sharing and reusing vector art on GitHub could be as important as web designers using Git for site designs.
It's sunny; let's share
Of course, collaboration is not a concept limited to end users. Tom Lechner spoke about "shareable tools," his proposal that graphics applications find a common way to define on-canvas editing tools, so that they can be more easily copied between programs. Lechner is famous for the constantly evolving interface of Laidout, his page-imposition program; in another session he showcased several new Laidout editing tools, including a new on-canvas interface for rotating and reflecting objects. Graphics applications' tools change regularly, so perhaps there is hope that the proposal will attract interest; during a workshop session later in the week, developers from MyPaint, Krita, GIMP, and other applications did hash out the basics of a scheme to share paintbrushes across applications.
Several other sessions touched on collaboration features. Susan
Spencer expressed interest in adding real-time collaboration to Tau Meta Tau Physica, her
textile-pattern-making application, and several artists commented that
collaboration had become a critical part of their workflows. But
without doubt, the most unusual demonstration of collaborative
features was the Piksels and
Lines Orchestra, which performed a live "opera" on its own branches of
Scribus, MyPaint, and Inkscape.
The applications were modified to hook sound events into each editing action (cut/copy/paste, painting, transforming images, and so on); the sounds were mixed together live using PulseAudio. Four artists on stage drew and edited in the various applications, while Jon Nordby of MyPaint conducted and Brendan Howell of PyCessing narrated. If you are having a hard time imagining the performance, you would be forgiven; experiencing it is really the only solution. Video of the sessions is scheduled to be archived at the Medialab Prado site shortly.
Animated discussions
The Tupi animation program was a welcome addition to the LGM program. Gonzalez provided an overview of the application (which began as a fork of the earlier project KToon), showed a video made with Tupi (featuring the voice of Jon "Maddog" Hall), and discussed the development roadmap. An Android version is currently in the works, and plug-ins are planned to simplify creating and importing work from Inkscape, MyPaint, and Blender.
There was no session from the team behind Synfig, the other actively developed 2D animation tool, but there was both a Synfig birds-of-a-feather meeting and a hands-on Synfig workshop, run by Konstantin Dmitriev from the Morevna Project. Perhaps the highest-profile discussion about 2D animation came in the talk delivered by animator Nina Paley, who lamented the state of open source 2D animation programs. Paley is quite experienced and fairly well known as an animator, but she expressed frustration with the open source tools available, particularly in terms of their usability and discoverability.
Paley has been trying to move to an open source animation suite ever since Adobe canceled its Flash Studio product, she said, eager to avoid being locked into another proprietary tool that could be discontinued. But the difficulty involved in figuring out Synfig, Blender, and the other options makes staying with Flash on an old Mac preferable. People tell her she should just learn to program, she said, but that kind of comment misses the point: both her passion and her talent are for creating animations; advising her to stop doing that and start programming instead would not result in good animation or good software.
As one might expect, Paley's dissatisfaction with Synfig elicited a passionate response from Dmitriev during the question-and-answer session; he argued that Synfig is perfectly usable, as he had demonstrated to Paley during the workshop and repeatedly shows by holding training sessions for children. Paley agreed that the workshop had been useful, but said it also revealed the trouble: Synfig currently requires hands-on education. She is, she said, willing to keep learning. As it stands, she is stuck on Flash, but she emphasized that this is a purely practical choice. Personally, she is very committed to free and open culture, which she demonstrated by showing an "illegal" animated short she had made using music clips that are still under copyright.
Paley's talk expressed frustration, but in the larger scheme of things, the fact that 2D animation was discussed at all was a new development. Several other talks touched on it, including Thierry Ravet's session about teaching stop-motion animation at Numediart, and Jakub Steiner's presentation about creating animated new-user help for GNOME's Getting Started. Getting Started used Blender for its animation; Numediart used Pencil (which was also mentioned in passing during several other sessions).
While it is true that 2D animation with open source tools is currently a trying endeavor, not too long ago it was impossible. The growth of the topic at LGM 2013 bodes well for the future; not too many years ago, the pain points at LGM were things like color management and professional print-ready output, and now those features are a given.
Hearing criticisms about the
projects can be uncomfortable, but it is part of the value of
meeting in person. As Steiner observed on his
blog: "Feedback from an animator struggling to finish a task is
million times more valuable than online polls asking for a feature
that exists in other tools." Collaborative editing tools are
an area where open source may be a bit ahead of the proprietary
competition, while in animation, open source may be a bit behind. But
considering their frequency in this year's program, one should expect
both to be major growth areas in the years ahead.
[The author wishes to thank Libre Graphics Meeting for assistance with travel to Madrid.]
Surveying open source licenses
In adjacent slots at the 2013 Free Software Legal and Licensing Workshop, Daniel German and Walter van Holst presented complementary talks related to measuring the use of free and open source software (FOSS) licenses. Daniel's talk considered the challenges inherent in trying to work out which are the most widely used FOSS licenses, while Walter's talk described his attempts to measure license proliferation. Both talks served to illustrate just how hard it can be to produce measurements of FOSS licenses.
Toward a census of free and open source software licenses
Daniel German is a Professor in the Department of Computer Science at the University of Victoria in Canada. One of his areas of research is a topic that also interests many other people: which are the most widely used FOSS licenses? His talk considered the methodological challenges of answering that question. He also looked at how those challenges were addressed in studying license usage in a subset of the FOSS population, namely, Linux distributions.
Finding the license of a file or project can be difficult, Daniel said. This is especially so when trying to solve the problem for a large population of files or projects. "I'm one of the people who has probably seen the most different licenses in his lifetime, and it's quite a mess." The problem is that projects may indicate their license in a variety of ways: via comments in source code files, via README or COPYING files, via project metadata (e.g., on SourceForge or Launchpad), or possibly other means. Other groups then abstract that license data. For example, Red Hat and Debian both do this, although they do it in different ways that are quite labor intensive.
"I really want to stress the distinction between being empirical and being anecdotal". Here, Daniel pointed at the widely cited statistics on FOSS license usage provided by Black Duck Software. One of the questions that Daniel asks himself, is, can he replicate those results? In short, he cannot. From the published information, he can't determine the methodology or tools that were used. It isn't possible to determine the accuracy of the tools used to determine the licenses. Nor is it possible to determine the names of the licenses that Black Duck used to develop its data reports. (For example, one can look at the Black Duck license list and wonder whether "GPL 2.0" means GPLv2-only, GPLv2 or later, or possibly both?)
Daniel then turned to the challenges that are faced when trying to take a census of FOSS licenses. When one looks at the question of which licenses are most used, one of the first questions to answer is: what is "the universe of licenses" that are considered to be FOSS? For example, the Open Source Initiative has approved a list of around 70 licenses. But there are many more licenses in the world that could be broadly categorized as free, including rather obscure and little-used licenses such as the Beerware license. "I don't think anyone knows what the entire universe is." Thus, one must begin by defining the subset of licenses that one considers to be FOSS.
Following on from those questions is the question of what constitutes "an individual" for the purpose of a census. Should different versions of the same project be counted individually? (What if the license changes between versions?) Are forks individuals? What about "like" forks on GitHub? Do embedded copies of source code count as individuals? Here, Daniel was referring to the common phenomenon of copying source code files in order to simplify dependency management. And then, is an individual defined at the file level or at the package level? It is very important from a methodological point of view that we are told what is being counted, not just the numbers, Daniel said.
Having considered the preceding questions, one then has to choose a corpus on which to perform the census. Any corpus will necessarily be biased, Daniel said, because the fact that the corpus was gathered for some purpose implies some trade-offs.
Two corpuses that Daniel likes are the Red Hat and Debian distributions. One reason that he likes these distributions is that they provide a clearly defined data set. "I can say, I went to Debian 5.0, and I determined this fact." Another positive aspect of these corpuses is that they are proxies for "successful" projects. The fact that a project is in one of those distributions indicates that people find it useful. That contrasts with a random project on some hosting facility that may have no users at all.
While presence in a Linux distribution can be taken as a reasonable proxy of a successful project, a repository such as Maven Central is, by contrast, a "big dumpster" of Java code, but "it's Java code that is actually being used by someone". On the other hand, Daniel called SourceForge the "cemetery for open source". In his observation, there is a thin layer of life on SourceForge, but nobody cares about most of the code there.
Then there are domain-specific repositories such as CPAN, the Perl code archive. There is clearly an active community behind such repositories, but, for the purpose of taking a FOSS license census, one must realize that the contents of such repositories often have a strong bias in favor of particular licenses.
Having chosen a corpus, the question is then how to count the licenses in the corpus. Daniel considered the example of the Linux kernel, which has thousands of files. Those files are variously licensed GPLv2, GPLv2+, LGPLv2, LGPLv2.1, BSD 3 clause, BSD 2 clause, MIT/X11, and more. But the kernel as a whole is licensed GPLv2-only. Should one count the licenses on each file individually, or just the individual license of the project as a whole, Daniel asked. A related question comes up when one looks at the source code of the FreeBSD kernel. There, one finds that the license of some source files is GPLv2. By default, those files are not compiled to produce the kernel binary (if they were, the resulting kernel binary would need to be licensed GPL). So, do binaries play a role in a license census, Daniel asked.
When they started their work on studying FOSS licenses, Daniel and his colleagues used FOSSology, but they found that it was much too slow for studying massive amounts of source code. So they wrote their own license-identification tool, Ninka. "It's not user-friendly, but some people use it."
Daniel and his colleagues learned a lot writing Ninka. They found it was not trivial to identify licenses. The first step is to find the license statement, which may or may not be in the source file header. Then, it is necessary to separate comments from any actual license statement. Then, one has to identify the license; Ninka uses a sentence-based matching algorithm for that task.
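To give a flavor of what sentence-based matching involves, here is a deliberately simplified sketch. It is not Ninka: the fingerprint sentences are abbreviated stand-ins, the normalize() and identify() helpers are invented for the example, and a real tool has to cope with far more variation in wording and formatting.

    # Simplified illustration of sentence-based license identification.
    # Not Ninka; the fingerprints below are abbreviated stand-ins.
    import re

    KNOWN_SENTENCES = {
        "GPLv2+": "either version 2 of the license or at your option any later version",
        "MIT/X11": "permission is hereby granted free of charge to any person obtaining a copy",
    }

    def normalize(text):
        # Lower-case, strip comment markers and punctuation, collapse whitespace.
        text = re.sub(r"[/*#]+", " ", text.lower())
        text = re.sub(r"[^a-z0-9 ]+", " ", text)
        return re.sub(r"\s+", " ", text).strip()

    def identify(header_comment):
        norm = normalize(header_comment)
        matches = [name for name, sentence in KNOWN_SENTENCES.items()
                   if sentence in norm]
        # Like Ninka, report "Unknown" rather than guessing when nothing matches.
        return matches or ["Unknown"]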
Daniel then talked about some results that he and his colleagues have obtained using Ninka, although he emphasized repeatedly that his numbers are very preliminary. In any case, one of the most interesting points that the results illustrate is the difficulty of getting accurate license numbers.
One set of census results was obtained by scanning the source code of Debian 6.0. The scan covered source code files for just four of the more popular programming languages that Daniel found particularly interesting: C, Java, Perl, and Python.
In one of the scans, Ninka counted the number of source files per license. Unsurprisingly, GPLv2+ was the most common license. But what was noteworthy, he said, is that somewhat more than 25% of the source code files have no license, although there might be a license file in the same directory that allows one to infer what the license is.
In addition, Ninka said "Unknown" for just over 15% of the files. This is because Ninka has been consciously designed to have a strong bias against mis-identifying licenses. If it has any doubt about the license, Ninka will return "Unknown" rather than trying to make a guess; the 15% number is an indication of just how hard it can be to identify the licenses in a file. Ninka does still occasionally make mistakes. The most common reason is that a source file has multiple licenses and Ninka does not identify them all; Daniel has seen a case where one source code file had 30 licenses.
The other set of results that Daniel presented for Debian 6.0 measured packages per license. In this case, if at least one of the source files in a package uses a license, then that use is counted as an individual for the census. Again, GPLv2+ is the most common of the identified licenses, but comparing this result against the "source files per license" measure showed some interesting differences. Whereas the Eclipse Public License version 1 (EPLv1) easily reached the list of the top twenty most popular source-file licenses, it did not appear in the top twenty package licenses. The reason is that there are some packages (Eclipse itself, for example) that consist of thousands of files that use the EPLv1 license; however, the number of packages that make any use of the EPLv1 is relatively small. Again, this illustrated Daniel's point about methodology when it comes to measuring FOSS license usage: what is being measured?
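A toy calculation, with invented numbers rather than Daniel's data, makes the difference between the two counting methods concrete:

    # Why per-file and per-package counts diverge: one package containing
    # thousands of EPLv1 files dominates the file count, but adds only one
    # to the package count. The numbers here are invented for illustration.
    from collections import Counter

    scan = {
        "eclipse": {"src/file%d.java" % i: "EPLv1" for i in range(5000)},
        "pkg-a":   {"a.c": "GPLv2+"},
        "pkg-b":   {"b.c": "GPLv2+"},
    }

    per_file = Counter(lic for files in scan.values() for lic in files.values())
    per_package = Counter(lic for files in scan.values() for lic in set(files.values()))

    print(per_file)      # EPLv1: 5000, GPLv2+: 2  -- EPLv1 leads by file count
    print(per_package)   # GPLv2+: 2, EPLv1: 1     -- GPLv2+ leads by package count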
Daniel then looked at a few other factors that illustrated how a FOSS license census can be biased. In one case, he looked at the changes in license usage in Debian between version 5.0 and 6.0. Some licenses showed increased usage that could be reasonably explained. The GPLv3 was one such license: as a new, well-publicized license, the reasons for its usage are easily understood. On the other hand, the EPLv1 license also showed significant growth. But, Daniel explained, that was at least in part because, for legal reasons, Java code that uses that license was for a long time under-represented in Debian.
Another cause of license bias became evident when Daniel turned to per-file license usage broken down across three languages: Java, Perl, and Python. Notably, around 50% of Perl and Python source files had no license; for Java, that number was around 12%. "Java programmers seem to be more proactive about specifying licenses." Different programming language communities also show biases toward particular licenses: for example, the EPLv1 and Apache v2 licenses are much more commonly used with Java than with Python or Perl; unsurprisingly, the "Same as Perl" license is used only with Perl.
In summary, Daniel said: "every time you see a census of licenses, take it with a grain of salt, and ask how it is done". Any license census will be biased, according to the languages, communities, and products that it targets. Identifying licenses is hard, and tools will make mistakes, he said. Even a tool such as Ninka that tries to very carefully identify licenses cannot do that job for 15% of source files. For a census, 15% is a huge amount of missing data, he said.
License proliferation: a naive quantitative analysis
Walter van Holst is a legal consultant at the Dutch IT consulting company mitopics. His talk presented what he describes as "an extremely naive quantitative analysis" of license proliferation.
The background to Walter's work is that in 2009 his company sponsored a Master's thesis on license proliferation that produced some contradictory results. The presumption going into the research was that license proliferation was a problem. But some field interviews conducted during the research found that the people in free software communities didn't seem to consider license proliferation to be much of a problem. Four years later, it seemed to Walter that it was time for a quantitative follow-up to the earlier research, with the goal of investigating the topic of license proliferation further.
In trying to do a historical analysis of license proliferation, one problem that Walter encountered is that there were few open repositories that could be used to obtain historical license data. Thus, trying to use one of the now-popular FOSS project-hosting facilities would not allow historical analysis. Walter instead chose to use data from a software index, namely Freecode (formerly Freshmeat, before an October 2011 name change). Freecode provides project licensing information that is available for download from FLOSSmole, which acts as a repository for dumps of metadata from other repositories. FLOSSmole began adding Freecode data in 2005, but Walter noted that the data from before 2009 was of very low quality. On the other hand, the data from 2009 onward seemed to be of high enough quality to be useful for some analysis.
How does one measure license proliferation? One could, Walter said, consider the distribution of license choices across projects, as Daniel German has done. Such an analysis may provide a sign of whether license proliferation is a problem or not, he said.
Another way of defining license proliferation is as a compatibility problem, Walter said. In other words, if there is proliferation of incompatible licenses, then projects can't combine code that technically could be combined. Such incompatibility is, in some sense, a loss in the value of that FOSS code. This raises a related question, Walter said: "is one-way license compatibility enough?" (For example, there is one-way compatibility between the BSD and GPL licenses, in the direction of the GPL: code under the two licenses can be combined, but the resulting work must be licensed under the GPL.) For his study, Walter presumed that one-way compatibility is sufficient for two projects to be considered compatible.
Going further, how can one assign a measure to compatibility, Walter asked. This is, ultimately, an economic question, he said. "But, I'm still not very good at economics", he quipped. So, he instead chose an "extremely naive" measure of compatibility, based on the following assumptions:
- Treat all open source projects in the analysis as nodes in a network.
- Consider all possible links between pairs of nodes (i.e., combinations of pairs of projects) in the network.
- Treat each possible combination as equally valuable.
This is, of course, a rather crude approach that treats a combination between, say, the GNU C library (glibc) and some obscure project with few users as being equal in importance to, say, the combination of glibc and gcc. This approach also completely ignores language incompatibilities, which is questionable, since it seems unlikely that one would want to combine Lisp and Java code, for example.
Given a network of N nodes, the potential "value" of the network is the maximum number of possible combinations of two nodes. The number of those combinations is N*(N-1)/2. From a license-compatibility perspective, that potential value would be fully realized if each node were license-compatible with every other node. So, for example, Walter's 2009 data set consisted of 38,674 projects, and, following the aforementioned formula, the total possible interconnections would be approximately 747.8 million.
Walter's measure of license incompatibility in a network is then based on asking two questions:
- For each license in the network, how many combinations of two nodes in the network can produce a derived work under that license? For example, how many pairs of projects under GPL-compatible licenses can be combined in the network?
- Considering the license that produces the largest number of possible connections for a derived work, how does the number of connections for that license measure up against the total number of possible combinations?
Perhaps unsurprisingly, the license that allows the largest number of derived work combinations is "any version of the GPL". By that measure, 38,171 projects in the data set were compatible, yielding 728.5 million interconnections.
Walter noted that the absolute numbers don't matter in and of themselves. What does matter is the (proportional) difference between the size of the "best" compatible network and the theoretically largest network. For 2009, that loss is the difference between the two numbers given above, which is 19.3 million. Compared to the total potential connections, that loss is not high (expressed as a proportion, it is 2.5%). Or to put things another way, Walter said, these figures suggest that in 2009, license proliferation appears not to have been too much of a problem.
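As a quick sanity check on the arithmetic (your editor's calculation, not Walter's code), the 2009 row can be reproduced from the two project counts given above:

    # Reproduce the 2009 figures from the project counts in the talk.
    def pairs(n):
        # Number of distinct pairs among n nodes: n * (n - 1) / 2.
        return n * (n - 1) // 2

    total_projects = 38674    # projects in the 2009 Freecode data set
    gpl_compatible = 38171    # projects combinable under "any version of the GPL"

    potential = pairs(total_projects)    # ~747.8 million
    realized  = pairs(gpl_compatible)    # ~728.5 million
    loss      = potential - realized     # ~19.3 million

    print(potential, realized, loss)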
Walter showed corresponding numbers for subsequent years, which are tabulated below. (The percentage values in the "Value loss" column are your editor's addition, to try and make it easier for the reader to get a feel for the "loss" value.)
Year    Potential value    Value loss       GPL market
        (millions)         (millions)       share

2009    747.8              19.3 (2.5%)      72%
2010    534.6              30.8 (5.7%)      63%
2011    565.9              56.4 (9.9%)      61%
2012    599.6              79.8 (13.3%)     59%
2013    621.6              60.3 (9.7%)      58%
The final column in the table shows the proportion of projects licensed under "any version of the GPL". In addition, Walter presented pie charts that showed the proportion of projects under various common licenses. Notable in those data sets was that, whereas in 2009 the proportions of projects licensed GPLv2-only and GPLv3 were 3% and 2% respectively, by 2013 those numbers had risen to 7% and 5%.
Looking at the data in the table, Walter noted that the "loss" value rises from 2010 onward, suggesting that incompatibility resulting from license proliferation is increasing.
Walter then drew some conclusions that he stressed should be treated very cautiously. In 2009, license proliferation appears not to have been much of a problem. But looking at the following years, he suggested that the increased "loss" value might be due to the rise in the number of projects licensed GPLv2-only or GPLv3-only. In other words, incompatibility rose because of a licensing "rift" in the GPL community. The "loss" value decreased in 2013, which he suggested may be due to an increase in the number of projects that have moved to the Apache License version 2 (which has better license compatibility with the GPL family of licenses).
Concluding remarks
In questions at the end of the session, Daniel and Walter both readily acknowledged the limitations of their methodologies. For example, various people raised the point that the Freecode license information used by Walter tends to be out of date and inaccurate. In particular, the data does not seem to be precise about which version of the GPL a project is licensed under; the license for many projects is defined simply as "GPL", which is what produced the "any version of the GPL" measure above. Walter agreed that his source data is dirty, but pointed out that the real question is how to get better data.
As Walter also acknowledged, his measure of license incompatibility is "naive". However, his goal was not to present highly accurate numbers. Instead, he wants to get some clues about possible trends and suggest some ideas for future study. It is easy to see other ways in which his results might be improved. Comparing his presentation with Daniel's, one can immediately come up with ideas that could lead to improvements. For example, approaches that consider compatibility at the file level or bring programming languages into the equation might produce some interesting results.
Inasmuch as one can find faults in the methodologies used by Daniel and Walter, that is only possible because, unlike the widely cited Black Duck license census, they have actually published their methodologies. In revealing their methodologies and the challenges they faced, they have shown that any FOSS licensing survey that doesn't publish its methodology should be treated with considerable suspicion. Clearly, there is room for further interesting research in the areas of FOSS license usage, license proliferation, and license incompatibility.
Current challenges in the free software ecosystem
Given Eben Moglen's long association with the Free Software Foundation, his work on drafting the GPLv3, and his role as President and Executive Director of the Software Freedom Law Center, his talk at the 2013 Free Software Legal and Licensing Workshop promised to be thought-provoking. He chose to focus on two topics that he saw as particularly relevant for the free software ecosystem within the next five years: patents and the decline of copyleft licenses.
The patent wars
"We are in the middle of the patent war, and we need to understand where we are and where the war is going." Eben estimates the cost of the patent war so far at US$40 billion—an amount spread between the costs of ammunition (patents and legal maneuvers) and the costs of combat (damage to business). There has been no technical benefit of any kind from that cost, and the war has reached the point where patent law is beginning to distort the business of major manufacturers around the world, he said.
The effort that gave rise to the patent war—an effort primarily driven by the desires of certain industry incumbents to "stop time" by preventing competitive development in, for example, the smartphone industry—has failed. And, by now, the war has become too expensive, too wasteful, and too ineffective even for those who started it. According to Eben, now, at the mid-point of the patent war, the costs of the combat already exceed any benefit from the combat—by now, all companies that make products and deliver services would benefit from stopping the fight.
The nature of the war has also begun to change. In the US, hostility to patents was previously confined mainly to the free software community, but has now widened, Eben said. Richard Posner, a judge on the US Court of Appeals for the Seventh Circuit, has spoken publicly against software patents (see, for example, Posner's blog post and article in The Atlantic). The number of American-born Nobel Prize winners who oppose software patents is rising every month. The libertarian wing of the US Republican party has started to come out against software patents (see, for example, this Forbes.com article, and this article by Ramesh Ponnuru, a well-known Republican pundit).
Thus, a broader coalition against software patents is likely to make a substantial effort to constrain software patenting for the first time since such patenting started expanding in the early 1990s, Eben said. The dismissal of the patent suit by Uniloc against Red Hat and Rackspace was more than a victory for the Red Hat lawyers, he said. When, as in that case, it is possible to successfully question the validity of a patent in a motion to dismiss, this signals that the economics of patent warfare are now shifting in the direction of software manufacturers. (An explanation of some of the details of US legal procedure relevant to this case can be found in this Groklaw article.)
Illustrating the complexities and international dimensions of the patent war, Eben noted that even as the doctrine of software patent ownership is beginning to collapse in the US, the patent war is spreading that doctrine to other parts of the world. Already, China, the second largest economy in the world, is issuing tens of thousands of patents a year. Before the end of the patent war—which Eben predicts will occur two to four years from now—China's software industry will be extensively patented. The ownership of those patents is concentrated in the hands of a few people and organizations with extremely strong ties to government in a system of weak rule of law, he said.
Long before peace is reached, the strategists and lawyers who got us into the patent war will be asking how to get out of the mess that the war has gotten them into, and everyone else in the industry is going to feel like collateral damage, Eben said. As usual, the free (software) world has been thinking about this problem longer than the business world. "We are going to save you in the end, just as we saved you by making free software in the first place."
We're at the mid-point of the patent war over mobile, Eben said. The "cloud services ammunition dumps [patents] will begin to go up in flames" about a year and a half from now. Those "ammunition dumps" are the last ones that have not yet been exploited in the patent wars; they're going to be exploited, he said. He noted that some companies will be feeling cornered after IBM's announcement that its cloud services will be based on OpenStack. Those companies will now want to use patents to stop time.
As the patent wars progress, we're going to become more dependent on organizations such as the Open Invention Network (OIN) and on community defense systems, Eben said. OIN will continue to be a well-funded establishment; SFLC will continue to scrape by. Anyone in the room who isn't contributing to SFLC through their institutions is making a serious mistake, Eben said, because "we're able to do things you [company lawyers] can't do, and we can see things you cannot; you should be helping us, we're trying to help you".
The decline of copyleft
Eben then turned to a discussion of copyleft licenses, focusing on the decline in their use and the implications of that decline for industry sectors such as cloud services.
The community ecosystem of free software that sustains the business of "everyone in this room" is about to have too little copyleft in it, Eben said. He noted that from each individual firm's perspective, copyleft is an irritation. But seen across the industry as a whole, there is a minimum quantity of copyleft that is desirable, he said.
Up until now, there has been sufficient copyleft: the toolchain was copyleft, as was the kernel. That meant that companies did not invest in product differentiation in layers where such differentiation would cost more than it would benefit the company's bottom line. While acknowledging that there is a necessary lower bound to trade secrecy, Eben noted the "known problem" that individual firms always over-invest in trade secrecy.
The use of copyleft licenses has helped all major companies by allowing them to avoid over-investment in product differentiation, Eben said. In support of that point, he noted that the investments made by most producers of proprietary UNIX systems were an expensive mistake. "It was expensive to end the HP-UX business. It cost a lot to get into AIX, and it cost even more to get out." Such experiences meant that the copyleft-ness of the Linux kernel was welcomed, because it stopped differentiation in ways that were more expensive than they were valuable.
Another disadvantage of excess differentiation is that it makes it difficult to steal one another's customers, Eben said. And as businesses move from client-server architectures to cloud-to-mobile architectures, "we are entering a period where everyone wants to steal everyone else's customers". One implication of these facts is that more copyleft right now, at the layers where new infrastructure is being developed, would prevent over-investment in (unnecessary) differentiation, he said. In Eben's view, people will come to look on OpenStack's permissive licensing with some regret, because they're going to over-invest in orchestration and management software layers to compete with one another. "I am advising firms around the world that individually are all spending too much money on things they won't share, which will create problems for them in the future." Eben estimates that several tens of millions of dollars are about to be invested that could have been avoided if copyleft licenses were used for key parts of the cloud services software stack.
There are other reasons that we are about to have too little copyleft, Eben said. Simon Phipps is right that young developers are losing faith in licensing, he said. Those developers are coming to the conclusion that permission culture is not worth worrying about and that licenses are a small problem. If they release software under no license, then "everyone in this room" stands to lose a lot of money because of the uncertainty that will result. Here, Eben reminded his audience that Stefano Zacchiroli had explained that the free software community needs help in explaining why license culture is critically important. (Eben's talk at the Workshop immediately followed the keynote speech by Stefano "Zack" Zacchiroli, the outgoing Debian Project Leader, which made for a good fit, since one of Eben's current roles is to act as pro bono legal counsel for the Debian community distribution.)
Eben also noted that SFLC is doing some licensing research on over three million repositories and said that Aaron Williamson is presenting the results at the Linux Foundation Collaboration Summit. Some people may find the results surprising, he said.
Another cause of trouble for copyleft is the rise in copyright trolling around the GPL. That is making people nervous that the license model that has served them well for twenty years is now going to cause them problems. Asked if he could provide some examples of bad actors doing such copyright trolling, Eben declined: "you know how it is"; one presumes he has awareness of some current legal disputes that may or may not become public. However, Eben is optimistic: he believes the copyright trolling problem will be solved and is not overly worried about it.
Eben said that all of the threats he had described—educating the community about licenses, copyright trolls, and over-investment in differentiation in parts of the software stack that should be copyleft but are instead licensed permissively—are going to be problems, but he believes they will be solved. "I'm going to end on a happy note by explaining a non-problem that many people are worrying about unnecessarily."
The OpenStack revolution is putting companies into the software-as-a-service business, which means that instead of distributing binaries they are going to be distributing services. Because of this, companies are worrying that the Affero GPL (AGPL) is going to hurt them. The good news is that it won't, Eben said. The AGPL was designed to work positively with business models based on software-as-a-service in the same way that the GPL was designed to work with business adoption of free software, he said. "We will teach people how the AGPL can be used without being a threat and how it can begin to do in the service world what the GPL did in the non-services software world."
Your editor's brief attempt at clarifying why the AGPL is not a problem but is instead a solution for software-as-a-service is as follows. The key effect of the AGPL is to make the GPL's source-code distribution provision apply even when providing software as a service over a network. However, the provision applies only to the software that is licensed under the AGPL, and to derived works that are created from that software. The provision does not apply to the other parts of the provider's software stack, just as, say, the Linux kernel's GPLv2 license has no effect on the licensing of user-space applications. Thus, the AGPL has the same ability to implicitly create a software development consortium effect in the software-as-a-service arena that the GPL had in the traditional software arena. Consequently, the AGPL holds out the same promise as the GPL of more cheaply creating a shared, non-differentiated software base on which software-as-a-service companies can then build their differentiated software business.
As Eben noted in response to a question later in the morning, if businesses run scared of the AGPL, and each company builds its own specific APIs for network services, then writing software that talks to all those services will be difficult. In addition, there will be wasteful over-investment in duplicating parts of the software stack that don't add differential value for a company and it will be difficult for companies to steal each other's customers. There are large, famous companies whose future depends on the AGPL, he said. "The only question is if they will discover that too late."
Eben concluded on a robustly confident note. "Everything is working as planned; free software is doing what Mr. Stallman and I hoped it would do over the last twenty years." The server market belongs to free software. The virtualization and cloud markets belong to free software. The Android revolution has made free software dominant (as a platform) on mobile devices. The patent wars are a wasteful annoyance, but they will be resolved. The free software communities have answers to the questions that businesses around the world are going to ask. "When Stefano [Zacchiroli] says we are going to need each other, he is being modest; what he means is you [lawyers] need him, and because you need him, and only because you need him, you need me."
Looking in at GNOME 3.8
On March 27, the GNOME project announced the release of GNOME 3.8. It has a variety of new features, including new privacy settings, desktop clutter reduction, improved graphics rendering and animation transitions, new searching options, and, perhaps most significantly, a Classic mode that restores some of the appearance and usability features of GNOME 2. With these additions, the GNOME team is attempting to broaden the appeal of GNOME 3, so that it will be more attractive to old-time GNOME 2 users, while also being a viable alternative to proprietary systems in the business and professional world.
GNOME 3 was designed to be flexible and highly configurable. As
part of that effort, the GNOME extensions web site was introduced six
months after GNOME 3. The new Classic mode bundles some of those
extensions to give users a way to configure their desktop to
look more like GNOME 2. In Classic mode, there are menus for
applications and places; windows have minimize and maximize buttons; there
is a taskbar that can be used to restore minimized windows and to switch
between them; and Alt-Tab shows windows instead of applications.
A video of Classic mode shows some of these features in a pre-release version of GNOME 3.8; some rough spots in the interface are on display, but they have likely been fixed in the final release. GNOME 3.8 has been released, but it is not yet available except by using a testing distribution or building it yourself; general availability will be governed by the release schedules of the underlying distributions.
As part of the goal of increasing its attractiveness to the business
world, GNOME 3.8 continues to work to keep the desktop uncluttered and
streamlined. To this end, the application-launching view has a new
"Frequent/All" toggle that will display either recently used programs or
all of those available. In addition, some applications are grouped into
folders. Each folder icon is a black box that displays mini-icons
representing the folder's contents. The effect is to see all the available
applications at a glance.
This interface choice can produce a few problems. The initial state of the Frequent/All toggle is "Frequent", but new users won't have any entries there. Since there is insufficient contrast between the toggle and the default wallpaper, a person who just wades in to use the system can easily miss it, and then no applications will display.
Another problem is that "Help" has been moved to one of the new groups of programs, in this case the "Utilities" group. For former users of GNOME who know what Help looks like and that it is considered an application, it does not take long to search for. There are, at the moment, only two groups. However, for a new user, there is insufficient information to suggest where to look for Help when it is needed most.
Searching from the "Activities" overview has been improved in several ways, both in the way the results are presented and in the new settings available. These allow you to specify a subset of applications and limit your search to that subset. This is useful because it allows you to quickly narrow in on an application of interest.
One of the bigger new items in GNOME 3.8 is the privacy settings. These are designed for people whose desktop is not in a physically private space, allowing a person to keep her name, activities, and viewing history private. There are a number of practical uses for this.
If you are in a public space, such as working in a coffee shop or on an
airplane, you might want to preserve your anonymity. There is a new
setting that ensures that your name is not displayed on the computer.
When "Name & Visibility" is marked "Invisible", the name in the upper right
corner disappears. Beyond that are settings for web and usage history
retention, screen locking, and Trash and temporary file cleanup.
There are many other new usability features. For example, better rendering of animated graphics provides smoother transitions in the interface. There is also greater support for internationalization: many more languages are supported, and there are improvements to, and expansion of, GNOME's input methods. And, of course, there were numerous bug fixes throughout.
In an interview with GNOME 3 designer Jon McCann that appeared on GNOME's website along with the release announcement, he said the overriding goal of GNOME, starting with GNOME 3, is to make it more accessible to application developers. To that end, GNOME 3.8 uses a number of new interfaces and widgets for GTK+ that were not present in 3.6, Allan Day of the GNOME team explained. "These widgets are not available to application developers yet, but will become available in future releases." The new widgets also take advantage of GNOME's improved graphics and animation.
There is a new Weather application for viewing current weather conditions and forecasts for various locations. Weather, per the decisions made at the February hackfest, is written in JavaScript.
The GNOME developers have also begun work on future plans, including creating a testable development environment, so that application developers can develop for and test against soon-to-be released versions of GNOME. In the medium term, adding application bundling and sandboxing is in the works, and, in the long term, providing better coding and user interface development tools is planned.