
LWN.net Weekly Edition for April 10, 2014

Project updates from Libre Graphics Meeting 2014

By Nathan Willis
April 9, 2014
LGM 2014

Last week we took a brief look at the many new projects that were represented on the initial day of Libre Graphics Meeting (LGM) 2014 in Leipzig. Although there were a few other newcomer projects presented in the remaining three days, the schedule for the latter part of the event was slanted more toward updates from existing projects, user presentations, and slots for team meetings, workshops, and hackfests. All of these are valuable, of course—in particular, LGM routinely does an exceptional job soliciting talks from real-world software users. But the updates from established projects, particularly those that set out short- and medium-term roadmaps, are likely of interest to many of those who could not attend in person.

The opening day's overview talk (mentioned last week) included slides provided by many project teams, among them GIMP and Inkscape, which are two of the most widely used open source graphics programs. GIMP's current development branch is 2.9, which is now feature-frozen. The plan is for that branch to be released as GIMP 2.10, which will debut a number of significant new features, such as a rotatable canvas window, integration of the former plugins IWarp and Seamless Clone as built-in tools, many more operations ported to the new GEGL processing core, greatly improved metadata support (courtesy of Yorba's metadata wrappers), and the ability to search for actions by name rather than hunting for them in the menu hierarchy. After the 2.10 release, the project will then work on its 3.0 branch, the main focus of which will be porting to GTK+ 3.

[Kolås at LGM]

Bumping the version number of GIMP will no doubt attract quite a bit of attention (perhaps mostly from people less familiar with free software's "release early, release often" approach to versioning). The same issue came up during the Inkscape update. Inkscape has been a reliable cross-platform vector editor for years, but its version number still hovers at 0.48, several years after talk of version 0.49 began. But that number will change in the second half of 2014, the project announced, when it will bump the next stable release up to version number 0.91. That release (which would otherwise have been 0.49) will incorporate changes like the new Cairo-based rendering engine, a new tracing engine, substantially reduced memory usage, a library for incorporating symbols and other reusable drawing elements, and the new on-canvas measuring tool.

0.91 will also solve two longstanding complaints, said Krzysztof Kosinski: the fact that Inkscape's native coordinate system places the origin in the bottom-left corner as opposed to SVG's in the upper left, and Inkscape's broken support for flowed text. The program's flowed text implementation is based on the abandoned SVG 1.2 specification, making it incompatible with most other SVG tools; the new solution will support files using the old implementation and the modern fix. Subsequent to 0.91, though, the project will work on a 1.0 release. That number is largely a public-relations target, the team admitted, but one that will ease the minds of many would-be users. The 1.0 release will also denote full support of the SVG specification, which was the original target of the project. Or, at least, Kosinski said, it will support all of SVG that makes sense for a vector editor to implement.
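The coordinate-system mismatch Kosinski described is easy to picture: SVG puts the origin at the top-left corner with y increasing downward, while Inkscape's internal model put it at the bottom left with y increasing upward. A minimal sketch of the conversion (a hypothetical illustration, not Inkscape's actual code):

```python
def svg_y_to_bottom_left(y, page_height):
    """Map a y coordinate from SVG's top-left origin (y grows
    downward) to a bottom-left origin (y grows upward)."""
    return page_height - y

# A point 10 units below the top of a 100-unit-tall page sits
# 90 units above the bottom edge.
print(svg_y_to_bottom_left(10, 100))  # → 90
```

Every y value (and the sign of every vertical offset) has to be translated at the boundary between the two models, which is why the mismatch was a long-standing source of confusion and bugs.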

Also from the GIMP camp, interaction designer Peter Sikking spoke about the project's future, in particular the user interface (UI) challenges being addressed for the application's move to using entirely non-destructive editing tools based on GEGL. Historically, other non-destructive editing applications like video compositors have taken a "node graph"-based approach, with users hooking up boxes and tubes on a canvas to represent operations. That is almost always an awkward approach, he said, particularly given the considerably larger set of operations involved in painting as compared to video compositing. Consequently, he has been working on techniques to simplify the UI for GEGL-based GIMP, such as automatically hiding portions of the node graph and compressing or de-emphasizing subtrees of the graph. It is ongoing work, without an announced release date as of yet.

[Schäfer at LGM]

Christoph Schäfer and Jason Harder spoke about the desktop publishing (DTP) application Scribus, focusing primarily on real-world Scribus deployments, but addressing the project's upcoming roadmap as well. The next stable release will be version 1.6, which will introduce a significant number of long-requested features, including vertical text alignment, footnotes and endnotes, the ability to clone document objects, "real" table objects (as opposed to workarounds like constructing large grids of small text objects), orphan/widow control, and support for layered SVGs.

[Lechner at LGM]

Artist and developer Tom Lechner also provided an update about his DTP application Laidout, which was originally created to do prepress impositioning, but has since expanded to tackle several other page-layout, formatting, and prepress tasks. The latest updates in Laidout include a tool for creating engraving-style images (as transformable vectors), new tools for using meshes to transform shapes and gradients, and a tessellation tiling tool that can produce a wide array of M.C. Escher-like tilings from any source images.

The 3D modeling and animation tool Blender was the subject of several talks. The most significant news for Blender's future is the recent launch of Project Gooseberry, the latest in the Blender Foundation's "open movie" projects, which provide what speaker Francesco Siddi called "production-driven development." Each project sets out a target set of features needed to complete the chosen film project. Gooseberry differs significantly from its predecessors, however, in that it will produce a feature-length title rather than a short. Consequently, it will require a significantly larger team of modelers, animators, and effects experts; the project's answer to this challenge is to partner with a dozen or so independent animation studios around the world.

[Siddi at LGM]

While the features to be added to Blender in support of Gooseberry have not yet been entirely set in stone, they will naturally focus on improving Blender's support for large projects. One feature sure to be included, however, is integration with online asset-tracking and project-management tools. Siddi explained that Blender is rolling out such online services as another way for interested users to contribute to Project Gooseberry; by signing up, users can help provide funding to the project over the lengthier production schedule Gooseberry will entail.

High-quality color management in free-software graphics applications is one of LGM's most widely cited legacies. By and large, the available tools just "do the right thing" whether the task at hand is 2D or 3D, vector or raster. But colord developer Richard Hughes and color workflow consultant Chris Murphy appealed to LGM on behalf of the OpenICC project to do even better. OpenICC is a Freedesktop.org-hosted effort to define and implement a full color-management framework for Linux-based desktops. Hughes and Murphy told the LGM audience that the project would like to move beyond working with individual application-level projects and instead push color-management into the toolkit level. The result would be less for application authors—and end users—to worry about, but the project needs more input and feedback from GTK+, Qt, and other toolkits in order to move forward.

[Murphy and Hughes at LGM]

Hughes, of course, is also known to the LGM crowd for his ColorHug open hardware colorimeter (which we looked at in 2012). He provided a somewhat sobering update on the project in a separate session. Although ColorHug has been successful, he said, the rapid rise of Organic LED (OLED) screens has posed a serious challenge: OLEDs use significantly different light sources, with primaries that the ColorHug cannot measure nearly as accurately as those of older display types.

The "correct" solution, he said, is a true spectrometer, which entails significantly more engineering: precise illuminants, components guaranteed to have temperature-stable characteristics, even handling ultraviolet. He has explored the idea, but reported that the engineering costs seem to make it prohibitive. In recent polling, he had only 81 people willing to commit to buying such a device, which is not enough to recoup costs. There might be a few more, of course, but he said he would probably need to find a significant new market segment for the device or else he could not move forward with it. Linux users and open hardware buffs are not enough, he concluded; all ideas and prospects are welcome.

Among the other projects that presented progress updates at the event were SuperGlue, GlitterGallery, and GStreamer Editing Services (GES). SuperGlue is a self-hosted web publishing platform that combines ideas found in FreedomBox (like independence from proprietary web services) and longstanding (if rarely implemented) web concepts like making all page content editable in the browser. The latest SuperGlue builds provide a built-in grid system that makes constructing page content simple, and a nice suite of WYSIWYG on-canvas editing tools. The project made its debut at LGM 2013, and has made significant progress over the past year.

GlitterGallery is an initiative that started in Fedora's design team; it aims to build a file collaboration tool that is as useful to graphic designers as Git is to software developers. Sarup Banskota presented an update on its progress, including his own work as a Google Summer of Code (GSoC) student. The tool runs on the OpenShift platform, and provides version control, issue tracking, and pull-request management for SVG-based image projects. Future work, he said, will tackle related collaboration tasks like user-to-user messaging and file synchronization with SparkleShare.

GES is the GStreamer-based editing library that powers the Pitivi non-linear video editor. Thibault Saunier and Mathieu Duponchelle provided a brief update, emphasizing GES's stable support for timelines, clips, and other video-editing primitives. The goal set out for GES, they said, is to move beyond the basics and implement what the video editing community demands. On that front, Pitivi has had some success recruiting new developers, and is running a crowd-funding campaign to push toward its own 1.0 release.

[Bah at LGM]

Last but not least, there were two update talks about progress in open standards. Tavmjong Bah reported on the progress of SVG 2, and Chris Lilley reported on color glyphs in the OpenType font format. SVG 2, most notably, splits a number of features that were in SVG 1.1 out into separate standards—text handling and image filters, for example, will be standardized in the CSS specification, rather than SVG 2. This change simplifies the standards overall (reducing duplication of effort), but developers need to be aware of it.

We have looked at the color OpenType font proposals in previous editions; in addition to explaining them, Lilley's presentation hit on two other key points. First, all of the proposed standards were adopted in January by the MPEG committee that oversees the OpenType standard; that means the marketplace, in effect, will determine which formats take off and which will be relegated to historical footnotes. Second, however, Lilley reported that the "marketplace" for color OpenType font software so far consists entirely of proprietary applications; if the open source community wants to play a role in this new standard, then developers need to get started quickly.

Progress in free-software development, of course, is an ongoing thing. Still, it is especially instructive to look at how far the individual efforts that make up the community have come, all in one place. It is a welcome reminder that the free-software community—where people so often work in relative isolation—is a large and diverse space.

[The author would like to thank Libre Graphics Meeting for assistance with travel to Leipzig for LGM 2014.]

Comments (10 posted)

US Supreme Court looks at patents again

April 9, 2014

This article was contributed by Adam Saunders

For the first time in over forty years, the Supreme Court of the United States is evaluating the patent-eligibility of software. On Monday March 31, the Court heard oral arguments in Alice Corp. v. CLS Bank International [PDF]. How it rules may dramatically affect the future of patent law in the US. Given some of its earlier rulings, though, there is another possibility: a narrow ruling that gives no real guidance for other, similar cases.

Alice holds patents on a system and a process for hedging the risk that one party to a set of financial transactions will fail to pay at one or more points in the transaction. This risk is known as "settlement risk". The "invention" requires using a computer to account for the transactions between the parties; if the computer determines that a party does not have sufficient funds to pay its obligations to the other side, then the transaction is blocked. The relevant patents are #5,970,479, #6,912,510, #7,149,720, and #7,725,375.

History

The litigation started in 2007 at the district court level. CLS, a competitor to Alice, moved for a declaratory judgment; it sought rulings that Alice's patents were unenforceable and invalid, and that CLS didn't infringe. Fighting back, Alice claimed that CLS was indeed infringing.

After a discovery period, in 2009 CLS asked for a summary judgment ruling that, among other things, the patents were invalid because the claims were abstract ideas. Alice moved against that request. Both parties referred to the Bilski ruling at the Federal Circuit. The district court refused to make a determination until after the Supreme Court heard the Bilski appeal.

Patent eligibility was brought up again in 2010, following the Supreme Court's ruling in Bilski, and the district court heard oral arguments about that in early 2011. The court then ruled that both the method and system claims were not eligible for patentability, as they did not qualify as eligible subject matter. As a result, these patent claims were invalidated. The court struggled with determining what the threshold should be for valid patents involving computers: "nominal recitation of a general-purpose computer in a method claim does not [...] save the claim from being found unpatentable [...] On the other hand, a computer that has been specifically programmed to perform the steps of a method may [...] be considered [...] a particular machine."

Alice appealed to the Federal Circuit, where the case was heard by a panel of three judges. Alice argued, essentially, that the patent claims were not abstract ideas because they are "tied to a particular machine or apparatus". The court ruled in July 2012, 2-1 in favor of Alice, overturning the district court. In its ruling, the majority determined that the patent claims spoke to a limited form of risk-hedging, and left "broad room for other methods".

The lone dissenter sharply criticized the majority, accusing it of "resist[ing] the Supreme Court's unanimous directive to apply the patentable subject matter test". The dissenting judge also criticized the majority for failing to adequately address the issue and provide "any explanation for why the specific computer implementation in this case brings the claims within patentable subject matter". The dissenter would have upheld the district court's finding of ineligibility.

CLS asked the Federal Circuit to hear an appeal en banc (with all the judges of the court). It granted that request, and issued a remarkably fragmented ruling in May 2013. The ten judges produced seven opinions among them, with no opinion supported by more than four judges. The court did manage to form a binding opinion upholding the district court's ruling that the patents were invalid, but the court remained split on whether or not an invention executed by a computer is inherently ineligible for patents.

On to the Supreme Court

The Supreme Court of the United States (SCOTUS) is extremely selective with the cases it hears: in the Court's words, it "receives approximately 10,000 petitions for a writ of certiorari each year. The Court grants and hears oral argument in about 75-80 cases." Yet I suspect that seeing such a fragmented ruling from the Federal Circuit, which has exclusive appellate jurisdiction across all of America on patent issues, on a core element of patent law, made SCOTUS's decision to hear an appeal relatively straightforward.

After the filing of dozens of amicus briefs from concerned organizations and individuals, SCOTUS heard oral arguments from Alice, from CLS, and from the Solicitor General as amicus on March 31. Alice's lawyer, Carter Phillips, started the session by getting right to the point and noting that "[t]he only argument between the parties is the abstract idea exception" for patent eligibility. Justice Ginsburg pointedly asked how these patent claims could not be considered abstract when SCOTUS recently ruled in "the Bilski case [...] that hedging qualified as an abstract idea". Phillips claimed that the patents weren't abstract because they read on a "very specific way of dealing with" this issue, which involves using computers.

Justice Kennedy said: "All you're talking about is — if I can use the word — an 'idea'". Phillips replied "I prefer not to use that word for obvious reasons", which prompted laughter from the courtroom. When Phillips tried to refer to the patents as speaking instead to "a method or a process", Justice Breyer asked "why is that less abstract?", giving an example of King Tut paying workers in gold, and "hir[ing] a man with an abacus" to account for the transactions, to illustrate his concerns about abstract ideas. When Phillips tried to defend the patents, Justice Sotomayor leapt in, stating "all I'm seeing in this patent is the function of reconciling accounts, the function of making sure they're paid on time".

Justice Scalia expressed an interest in the broader philosophical issue of the nature of invention, willing to allow some computer-implemented patents but not all: "If you just say use a computer, you haven't invented anything. But if you come up with a serious program that — that does it, then, you know, that may be novel." Phillips would later argue that "this is not something that simply says use a computer. [...] It — it operates in a much more specific and concrete environment". Phillips would also insist that the "invention" would need to be implemented by a computer "to make [it] effective", as without that level of automation, it would be impossible to manage a large number of transactions at the same time. Justice Scalia would later indicate that he thought the patents were valid: "we haven't said that you can't take an abstract idea and then say here is how you use a computer to implement it [...] which is basically what you're doing."

Justice Kagan asked specifically what part of Alice's "invention" goes beyond the mere step of "use a computer". Phillips was unable to cite any; he could only speak vaguely of "simultaneously" managing transactions in "a global economy".

Mark Perry, the lawyer for CLS, got right to it: "Bilski holds that a fundamental economic principle is an abstract idea and Mayo [another recent SCOTUS ruling regarding patent-eligibility] holds that running such a principle on a computer is, quote, 'not a patentable application of that principle.' Those two propositions are sufficient to dispose of this case." After Perry replied to some relatively friendly queries from Breyer and Sotomayor, Chief Justice Roberts noted that the instructions for implementing the invention looked complex. Perry replied that those instructions, which Phillips had referenced in his oral argument, refer to patent claims that were not asserted at all in this case.

Addressing Scalia's earlier remarks, Perry argued (likely to the dismay of many LWN readers), that at least some computer-implemented inventions are patentable: "a patent that describes sufficiently how a computer does a new and useful thing [...] would be within the realm of [...] the patent laws". However, the patent asserted in this case "is not such a patent". For such inventions to succeed, Perry argued that "the computer must be essential to that operation and represent an advancement in computer science or other technology." When pressed by Sotomayor to define some examples, Perry noted that "e-mail and word processing [...] would have been technological advances that were patentable." Kagan asked what the "threshold" is that needs to be crossed for a computer-implemented invention to be patentable; Perry replied that there would need to be "something significantly more than the abstract idea itself".

Donald Verrilli, as the Solicitor General, argued in favor of CLS, taking up generally the same argument Perry used: that simply adding, in effect, "use a computer" to an abstract idea doesn't give you a patentable invention. What is patentable, according to Verrilli, is an "improvement in computing technology or an innovation that uses computing technology to improve other technological functions." After being questioned by Ginsburg on the patentability of software, Verrilli explicitly stated that software would remain patentable, subject to the limitations he described.

Sotomayor asked if SCOTUS has to specifically address the patentability of software in its ruling. Verrilli said that was not needed; all that needs to be addressed is the nature of the abstract idea exception to patentability. Kennedy asked for an example of a patentable business process not needing a computer; Verrilli gave "a process for additional security in point-of-sale credit card transactions using particular encryption technology". It's a bit hard to see how that example answered Kennedy's question, as the encryption mentioned would likely be performed by a computer.

Chief Justice Roberts took issue with what he saw as the Solicitor General's complicated solution outlined in their brief; Verrilli clarified by referring to the limitations he mentioned earlier.

On rebuttal, Phillips, for the patent-asserting Alice, disputed the claim that the instructions on using the computer for the invention did not apply to the case. He also asserted that certain business method ideas, like a frequent flier program (referred to earlier by Verrilli as an example), would be invalid for obviousness.

Analyzing the statements of the justices in court, and looking at the Bilski ruling, the safest prediction one can make is that Alice's patent will likely be invalidated by the court for being an abstract idea, just as Bilski's patent was. From there things get somewhat more uncertain, including where the Supreme Court will (likely) draw a line between patentable and unpatentable computer-implemented inventions. It is possible that the vast majority of software patents will remain valid. It is also possible that we will see a narrow ruling that avoids making significant changes to the status quo.

However, given some of the skepticism expressed by several justices, it is possible that a large chunk of software patents might be invalidated. In particular, software patents that boil down to implementations of basic mathematical concepts, such as eHarmony's patent on singular value decomposition with romantic-compatibility indicators assigned to the variable names, could run afoul of new patentability guidelines. That patent was specifically called out by Ben Klemens in the film Patent Absurdity. The ruling, which should come out in the next few months, will make for interesting reading — and analysis. Stay tuned.
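To see how thin such patents can be, consider that singular value decomposition, the mathematical core of the eHarmony patent mentioned above, is a single library call in any numerical package. A purely illustrative NumPy sketch (it has nothing to do with eHarmony's actual implementation):

```python
import numpy as np

# Any real matrix A factors as U @ diag(s) @ Vt; this is the
# singular value decomposition at the heart of such patents.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
U, s, Vt = np.linalg.svd(A)

# Multiplying the factors back together recovers A
# (up to floating-point error).
print(np.allclose(A, U @ np.diag(s) @ Vt))  # → True
```

Under a patentability bar like the one Perry and Verrilli argued for, wrapping a routine computation like this in domain-specific variable names would not, by itself, clear the "significantly more" threshold.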

Comments (18 posted)

Much ado about debugging

By Jonathan Corbet
April 8, 2014
Recently, an interaction problem between systemd and the kernel was reported. After a calm discussion, developers of both projects found ways in which behavior could be improved and set about coding up the solutions. The technical press was filled with glowing reports on another success of collaborative problem solving... or, perhaps, most of the preceding text is entirely fictional and the systemd "debug flag" problem spiraled out of control in several ways at once.

Actually, that description is not entirely fantasy, if one looks at the problem the right way. It turned out that systemd was using the debug argument from the kernel command line to turn on much of its own debugging output. As Linus Torvalds noted, that is exactly how this flag was intended to be used. But a mistake in the systemd camp caused an assertion to fire, generating so much output that the system was rendered unbootable. After some discussion, a couple of decisions were made:

  • Systemd will stop logging through the kernel once the journald logging daemon is available; that will cause much of that output to be directed elsewhere. There are also patches floating around to cause systemd to recognize systemd.debug, rather than plain debug, as the signal to turn on its own debugging options. If merged into systemd, this change will make it easier to turn on kernel debug output without also enabling systemd's output (something which is already possible, but not in The Way Kernel Developers Have Always Done It).

  • The kernel developers have realized that it should not be possible to incapacitate a system by logging too much data from user space. Consequently, some sort of rate limiting will be applied to the /dev/kmsg interface. The proper nature of that limiting and how it will be controlled are still under discussion, but chances are good that some sort of change will find its way into the 3.15 kernel.
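The namespacing change in the first item is simple to see in code. This hypothetical sketch (not systemd's actual implementation) parses a kernel command line the way a user-space daemon might, so that the kernel's generic debug token and a namespaced systemd.debug token can be handled independently:

```python
def parse_cmdline(cmdline):
    """Split a kernel command line (as read from /proc/cmdline)
    into a dict; "foo=bar" maps "foo" to "bar", bare flags to True."""
    options = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        options[key] = value if value else True
    return options

opts = parse_cmdline("ro root=/dev/sda1 debug systemd.debug")

kernel_debug = "debug" in opts          # the kernel's generic flag
daemon_debug = "systemd.debug" in opts  # systemd's own namespace
print(kernel_debug, daemon_debug)  # → True True
```

With a scheme like this, booting with only debug turns on kernel debugging without also flooding the log with systemd's own output.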

In other words, appropriate fixes are being applied on both sides to prevent this kind of problem from recurring. So a reasonable observer might well wonder why the technical press is full of headlines like Linus Torvalds suspends key Linux developer and Open war in Linux world. That comes down to less-than-optimal behavior on both sides of the fence — and even worse behavior in the press.

When Borislav Petkov first encountered this problem, he filed a bug against systemd, asking that its behavior be changed. A little over one hour later, systemd developer Kay Sievers closed it as "NOTABUG," saying that the behavior was expected and that the kernel is not the sole keeper of the debug flag: "Generic terms are generic, not the first user owns them." A lengthy back-and-forth followed, with developers reopening the bug and Kay closing it several times. Eventually the discussion spilled over onto the linux-kernel list when Steven Rostedt proposed hiding the debug flag from user space entirely.

Shockingly, the move to linux-kernel did little to calm the conversation. Eventually Linus announced that he was not interested in accepting any patches from Kay until Kay's pattern of behavior (as seen by Linus) changed. It didn't take that long, though, for things to calm down and for various developers to start looking at real solutions to the problem. As of this writing, that thread has been silent for a few days.

In other words, what we have here is a story that has been seen many times over. A problem turns up that reveals suboptimal behavior by two interacting pieces of software. Developers for both projects are slow to acknowledge that they could be doing things better and point fingers at the other camp. Certain high-profile community members known for their occasionally over-the-top rhetoric live up to their reputations. But once people have some time (measured in hours) to calm down, the problems are fixed and everybody moves on.

That, alas, is not a story that plays well in much of the press. So, instead, various reporters tried to inflate it into some sort of spectacular showdown. The development community was not portrayed in a good light, and perhaps some of that was even deserved. But what was really conveyed by all those articles was that, after all these years, much of the technical press still has a poor (at best) understanding of how free software development communities work.

Proprietary software tends not to be followed by stories like this because the inevitable politics, profanity, and chair-throwing are kept behind closed doors and firewalls. We, instead, do most of it in the open — though flying furniture still tends to be an exceptional occurrence. These events can be fun to watch from a suitable distance and with enough popcorn. But they mean less than the hidden corporate disagreements that we never hear about — and much less than the public accomplishments that we almost never hear about. The 3.15 merge window, ongoing while this debate was happening, has seen (as of this writing) the merging of well over 10,000 changesets from 1100 developers, most of whom are working together smoothly. But none of the press accounts mentioned that.

That's just life in the free software world. Or almost anywhere else, for that matter; where there are people, there will be misunderstandings, blowups, and the occasional failure to immediately recognize a problem. Somehow, we manage to muddle through anyway and create lots of high-quality free software. But that is so normal and mundane that it doesn't qualify for consideration as news.

Comments (49 posted)

Page editor: Jonathan Corbet

Inside this week's LWN.net Weekly Edition

  • Security: Heartbleed; New vulnerabilities in kernel, openssl, tomcat, xen, ...
  • Kernel: 3.15 Merge window; New perf features; Sealed files.
  • Distributions: CoreOS: A different kind of Linux distribution; AVLinux, ...
  • Development: Font development at LGM; MongoDB 2.6; Transmageddon 1.0; merging Xwayland; ...
  • Announcements: The LLVM Foundation to launch, Brendan Eich Steps Down as Mozilla CEO, Crowdfunding the Novena Open Laptop, ...

Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds