LWN.net Weekly Edition for May 7, 2015
FontForge and moving forward
The annual Libre Graphics Meeting conference provides a regular update on the development of a number of free-software applications for visual graphics and design work. At this year's event in Toronto, one of the most talked-about sessions was Dave Crossland's take on the future of FontForge, the free-software font-editing tool.
Despite all of the progress that FontForge has made in the past few years, he said, recent developments have convinced him that the momentum lies elsewhere. In particular, newer projects started by dissatisfied FontForge users are likely to close the feature gap, and the type-design community seems more interested in engaging with those efforts. Perhaps predictably, that assessment of the current state of affairs sparked quite a bit of discussion—including a debate on the relative merits of desktop versus web-based applications.
Crossland began by recounting his personal experience with FontForge. He discovered the project while studying graphic design in college, and subsequently used it during his graduate work in the University of Reading's type-design program. Needless to say, virtually every other student used one of the proprietary, commercial font-design applications instead, and FontForge proved difficult to use in a number of ways.
![Dave Crossland at LGM 2015](https://static.lwn.net/images/2015/05-lwn-crossland-sm.jpg)
It was after that experience that Crossland got directly involved in FontForge's development. Working with Google's Web Fonts office (which exclusively uses open fonts and relies on an open-source toolchain), he was able to fund some contractors to improve FontForge. He personally underwrote additional development with his share of the proceeds from Crafting Type, a series of type-design workshops that he co-hosted.
The workshops used FontForge, and the development work that they prompted led to important bug fixes and new features (such as collaborative editing). They also led to a significantly improved packaging situation for Windows and Mac users. Nevertheless, even after several hundred students went through the workshops, only a tiny fraction of them stayed with FontForge (in the post-talk discussion, Crossland estimated the number was in the single digits).
FontForge's technical debt was clearly a problem. The application had initially been developed as a one-off project by a developer who later lost interest and moved on. Although it more or less worked, the code was not organized in a way that let new developers get involved: it contained a lot of hard-coded design decisions that were difficult to change, and it relied on a custom widget set that was hard to maintain.
Crossland cited one example for reference: when users asked that a certain text-entry box be made elastic in size rather than fixed-width (so it could expand to fill the available horizontal space), implementing the change took more than three hours. That was far from ideal, but, he said, it also shed light on a deeper problem: users would not stick with the project and help improve FontForge because contributing small design patches, or even observational notes, was difficult, if not impossible.
Freedom to compete
Meanwhile, a growing collection of new free-software font editors was cropping up, the vast majority of which are implemented as web applications. He cited four in particular: Typism, Fontclod, Prototypo, and Glyphr Studio. These web-based tools are not as full-featured as FontForge, he said, but they are developing at a rapid clip and, even more importantly, they are attracting considerable input and involvement from working type designers.
Thus, when Crossland got involved with the Metapolator project (a whole-font-interpolation program that can be used, for example, to rapidly generate multiple weights from one font), he pushed that team to adopt a similar model: build a web-based application, and solicit input from type designers. That strategy has been successful enough, he said, that he decided he could no longer justify making further contributions to FontForge.
The latest round of FontForge development has given the application robust support for importing and exporting the Unified Font Object (UFO) interchange format. Soon, users will be able to create a basic font in FontForge, interpolate its weight and other properties in Metapolator, then perform any additional tweaks in FontForge.
But he expects Prototypo, Glyphr Studio, and other UFO editors to catch up to FontForge's functionality; that and the already existing ecosystem of open-source UFO scripts and utilities (most of which originate from users of other, proprietary font editors) may make FontForge irrelevant. "It seems like a lot of people want a free-software font editor and get so frustrated with FontForge that they leave and start their own," he said in summary. "Maybe we need to work on 'conservation' of FontForge rather than 'restoration' work trying to turn it into a modern editor."
Moreover, the web-based applications have demonstrated an ability to draw end users into the development and design process—something that desktop applications rarely, if ever, achieve. One of the ultimate goals of free software's "four freedoms" is to enable the user to participate in development, Crossland said. The newer, web-based font-development applications can do so in a way that FontForge has never been able to. JavaScript and CSS are easier to understand and tweak than are C and C++.
Feedback
At the end of the session, a number of audience members took issue with what they interpreted as advice from Crossland for developers to stop working on FontForge. For example, one audience member noted that he had looked at Glyphr Studio and found it to be far behind FontForge in terms of its supported features.
Another audience member suggested that if a shortage of contributors was holding FontForge back, then the project should figure out what it needs and run a public crowdfunding campaign to raise support. Crowdfunding is an approach that has been increasingly successful for free-software graphics applications of late, and Crossland had mentioned successful Kickstarter drives by the competing web-based font editors in his talk.
To those points, Crossland responded that the technical debt of FontForge is beyond what a crowdfunding campaign can raise to fix, and that the web-based editors may be behind today, but are catching up quickly. Furthermore, he added, C-based desktop applications still separate users and developers into disjoint classes. Type designers are required to learn CSS and JavaScript as part of their jobs, so they can easily get involved with web-application development. "The ideal cultural impact of software freedom is co-creation. We want a proactive culture of design and development, and I don't think traditional desktop software is the ideal way to create that."
Øyvind "Pippin" Kolås from GIMP then disagreed strongly with the notion that web-development languages could compete with C and C++. CSS and JavaScript user interfaces rely on levels of abstraction to keep things simple, he said; the core development underneath is just as complex in a "web stack" as it is in native code.
It is easy to get started writing something new (regardless of the development platform); but over time, he said, web applications will become just as complicated as native ones—if not worse, given the abstraction layers required. Thus, ditching FontForge in favor of partially completed web applications—just to solicit UI patches from type designers—is throwing the baby out with the bathwater. The community has something that works now, while the alternatives do not do much of anything by comparison.
Crossland replied that he was not trying to make a blanket statement about native development, much less telling anyone to stop working on GIMP, Scribus, or other desktop tools. He conceded Kolås's point about the underlying complexity of applications, regardless of the programming stack. He would also be happy to see others push FontForge development forward, he added; he just cannot justify it for himself. As time on the clock was running out, FontForge developer Frank Trampe cheerfully assured the crowd that he was continuing to contribute to the project, which got a round of applause.
Because the session was the last talk of the day, the discussion carried over into the evening event that followed. Ultimately, most people seemed to come away with a clearer understanding of the distinct points under debate, which were a bit conflated at the outset. The size and scope of the technical debt in FontForge is one issue; the broader competition between web-development and native application stacks is a separate one. Crossland later posted a FAQ entry on the Metapolator wiki to explain his concerns and Kolås's, together and in more detail.
The larger question of when a project has accumulated too much technical debt to remain manageable was one that resonated with many attendees at the event. There were, in fact, several other web-based applications on display in the other sessions, some of which are taking on established free-software desktop application projects. Technical debt and support problems for aging code can plague any project; no one seems to have a silver bullet.
It was also noted in the discussion that web and desktop programming platforms have begun to overlap in a number of ways. GNOME Shell is scripted in JavaScript, for example, while GTK+3 and Qt's QML both rely on CSS-like styling. On the other hand, Mozilla and Google are both exploring approaches (such as asm.js and Native Client) that bring C and C++ to web applications. As hard as it may be to predict where FontForge and Glyphr Studio will be in a year's time, it is clear that the tug-of-war between desktop and web development is far from over.
A Libre Graphics Meeting showcase summary
Every year, there are a variety of talks at Libre Graphics Meeting that showcase entirely new work. While these new-project sessions frequently highlight still-in-development ideas or code that may not be quite ready for packaging and mass distribution, they are always a fascinating counterpoint to the progress reports from established application projects. At this year's event, the new projects showed off work designed to help crowdsource image collection, to do desktop publishing with HTML, and to transform 3D meshes into 2D images and back again.
The List
Matt Lee, technical lead at Creative Commons, introduced The List in a session on the first day of the event. The List is, in essence, a social-networking system in which users post public requests for images, and other users respond by posting matching images—images they have created from scratch, photographs they have taken, physical artwork they have scanned, and so forth. The project uses a free-software Android app as its interface, with an iOS app to follow shortly.
![Matt Lee at LGM 2015](https://static.lwn.net/images/2015/05-lgm-lee-sm.jpg)
The crux of the problem that The List sets out to solve is that everyone needs images to communicate, but few people have the skills to create high-quality imagery (even among people for whom open content is a priority). The main example Lee discussed was contributions to Wikipedia; his talk was (not coincidentally) scheduled right after one from a Wikipedia volunteer who described that project's efforts to generate higher-quality SVG illustrations.
The app was developed with support from the Knight Foundation, and ultimately aims to be useful for a range of purposes, including collecting images for journalists, non-profits, and cultural institutions. The goal is a lofty one; harnessing the collective power of crowds to find obscure or out-of-print cultural works, for example, is arguably a higher calling than filling in missing photographs of buildings on Wikipedia. Creative Commons also plans to use The List app as a means to explain the values behind Creative Commons itself: at first startup, users are given a walkthrough of Creative Commons licenses before they are presented with the categorized list of requested images.
At the end of the talk, Lee outlined several of the challenges facing the project team. Internationalizing the requests for images is a hard problem, he said, relying as it does on human language and cultural context. There is also concern about whether or not the average smartphone camera will produce photos good enough to meet the needs of the people making image requests. The List's backend is not limited to smartphone photos, of course (all of the uploaded images are stored at The Internet Archive, which can handle essentially any file type), so a web or application-based interface could easily allow the upload of other image types. The future of The List may include support for these other upload types, among other features, like using the device's geographic location to alert the user to nearby image-taking opportunities.
Finally, there is the question of image and request moderation. Balancing the desire for an open community with the Internet's tendency to attract trolls is a difficult challenge, to be sure. But it is one that Creative Commons, Wikipedia, and the Internet Archive have all grappled with for several years now.
html2print
On the second day of the conference, Stéphanie Vilayphiou of Open Source Publishing (OSP) presented a talk about html2print, its tool for doing print-ready page layout using HTML and CSS. Printable web documents are not a new idea, of course, but many of the contemporary projects rely heavily on the HTML5 <canvas> element. That makes the resulting output more like a drawing than a text document: individual elements are not accessible through the document object model (DOM), and there is no separation between the code and the design.
![Stéphanie Vilayphiou at LGM 2015](https://static.lwn.net/images/2015/05-lgm-html2print-sm.jpg)
Html2print relies on some lightweight CSS tools (such as the preprocessor LessCSS). It defines page dimensions (including paper size and margins) as CSS properties, so that they can be adjusted and inspected easily, and allows the creation of re-flowable text elements using CSS Regions. It even allows the user to define CMYK spot colors as separate "inks" that can then be "mixed" in individual elements (e.g., 40% of chartreuse_1 and 15% eggshell_2). That feature is particularly useful for print work, because by default HTML and CSS only support RGB color at 8-bit-per-color granularity.
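As a rough illustration of the ink-mixing idea (html2print's actual implementation is not shown here), a spot ink can be modeled as a CMYK tuple and an element's color as a weighted sum of the inks it uses. Only the ink names below come from the article's example; the CMYK values and the `mix()` helper are invented for illustration.

```python
# Hypothetical spot-ink definitions; the CMYK values are invented.
INKS = {
    "chartreuse_1": (0.25, 0.00, 0.90, 0.05),  # C, M, Y, K
    "eggshell_2":   (0.02, 0.04, 0.12, 0.00),
}

def mix(*parts):
    """Combine (ink_name, fraction) pairs into one CMYK value, capped at 1.0."""
    mixed = [0.0, 0.0, 0.0, 0.0]
    for name, fraction in parts:
        for i, channel in enumerate(INKS[name]):
            mixed[i] += fraction * channel
    return tuple(min(c, 1.0) for c in mixed)

# The article's example: 40% of chartreuse_1 and 15% of eggshell_2
color = mix(("chartreuse_1", 0.40), ("eggshell_2", 0.15))
```

A real print workflow would hand each ink off as its own separation rather than collapsing to one CMYK value, but the weighted-sum model captures why named inks are more convenient than raw RGB for print work.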
OSP likes to use collaborative editors like Etherpad for all of its projects, and html2print is designed to support that. It lets multiple users edit a large project simultaneously. It also supports what she called "responsive print" (a reference to responsive web design). She showed a demonstration using a booklet project: resizing the page-size parameters automatically adjusts other properties (like font size) to match. Large-format, book-like dimensions trigger a lot of text per page, while scaling the page dimensions down to pocket size proportionally scales the text so that its printed size will remain readable.
Vilayphiou explained a few of the challenges that OSP had to tackle along the way, such as implementing a new zoom tool. Relying on the web browser's zoom function was a bad idea, she said, because zooming invariably triggers a re-render operation, which often causes difficult-to-fix artifacts. The zoom tool was more or less a solved problem, she said, but others have proven trickier—such as the inherent instability of the various browser engines. Two years ago, for instance, Chrome worked perfectly, but then the project dropped WebKit for its own Blink engine and discontinued support for CSS Regions.
Vilayphiou closed out the session by noting that html2print does not yet have a proper license attached, which she said was simply an accident. She encouraged anyone with interest in preparing print documents for publication to take a look and offer feedback, although at this stage new users might need to set time aside to learn their way around the tool.
Embedding 3D data in 2D images
![Phil Langley at LGM 2015](https://static.lwn.net/images/2015/05-lgm-langley-sm.jpg)
Architect Phil Langley presented what he called a "digital steganography" project in his session on day three. Steganography is often associated with hiding information, he said, but the term originally just referred to embedding one message inside another. The most common example is encoding a string inside the least-significant bits of an RGB image, but Langley showed several other real-world techniques, such as an audio track embedded in a photograph, and concealing one text message within another. He also noted that depth-map images such as those produced by the Microsoft Kinect camera are actually encoding 3D data in a 2D form.
That final example leads into the project that Langley undertook. A course he teaches at the Sheffield School of Architecture involves advanced 3D modeling, and he recently set up a Twitter bot to send out status reports as the software progressed through its calculations. The trouble was that tweets are only 140 characters long, which is hardly enough to explain progress in detail. So the team set out to hack a solution that used a different feature instead: Twitter allows large images to be attached to tweets, so the team looked for a way to encode their 3D meshes as 2D images.
The initial attempt was fairly straightforward. Each face of a typical 3D model is a triangle, and each vertex of that triangle includes three coordinates (x,y,z). They tried mapping the (x,y,z) triples to RGB values, thus using three pixels to represent each triangle. While this worked, it did not provide much resolution: 8-bit-per-channel graphics allowed just 256 units in each spatial dimension, which is quite coarse.
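The naive scheme can be sketched in a few lines of Python (an illustration, not the project's actual code): each coordinate in a unit cube is quantized to eight bits, so one vertex becomes one RGB pixel and a triangle becomes three pixels. The coarseness shows up immediately, since nearby vertices collapse into identical pixels.

```python
def encode_vertex(x, y, z):
    """Map coordinates in [0, 1) to one 8-bit-per-channel RGB pixel."""
    q = lambda v: min(int(v * 256), 255)
    return (q(x), q(y), q(z))

def decode_vertex(r, g, b):
    """Recover approximate coordinates; only 256 steps per axis survive."""
    return (r / 256.0, g / 256.0, b / 256.0)

# One triangle: three vertices, three pixels
triangle = [(0.1, 0.5, 0.9), (0.1004, 0.5, 0.9), (0.2, 0.6, 0.7)]
pixels = [encode_vertex(*v) for v in triangle]
# The first two vertices differ by less than 1/256 in x, so they
# quantize to the same pixel -- the resolution problem described above.
```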
![A 3D mesh and corresponding 2D PNG](https://static.lwn.net/images/2015/05-lgm-3D-sm.png)
After further refinement, they settled on a plan that used the 24 bits of an RGB pixel to encode vertex coordinates more efficiently. The 3D model is first sliced into thin horizontal bands; each band has far fewer units in the z direction, so more resolution is available for the x and y coordinates. Furthermore, the resulting image somewhat resembles a 2D projection of the model.
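The refined idea can be sketched similarly, though the exact bit allocation here is an assumption (the talk did not give the numbers): if the model is sliced into 16 horizontal bands, z needs only 4 of the pixel's 24 bits, freeing 10 bits each for x and y, or 1024 steps per axis instead of 256.

```python
X_BITS, Y_BITS, Z_BITS = 10, 10, 4  # 24 bits total: one RGB pixel

def encode_banded(x, y, z):
    """Pack one vertex (coordinates in [0, 1)) into one RGB pixel."""
    qx = min(int(x * (1 << X_BITS)), (1 << X_BITS) - 1)
    qy = min(int(y * (1 << Y_BITS)), (1 << Y_BITS) - 1)
    qz = min(int(z * (1 << Z_BITS)), (1 << Z_BITS) - 1)  # band index
    packed = (qx << (Y_BITS + Z_BITS)) | (qy << Z_BITS) | qz
    # Split the 24-bit value across the R, G, B channels
    return ((packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF)

def decode_banded(r, g, b):
    """Invert encode_banded(), up to quantization error."""
    packed = (r << 16) | (g << 8) | b
    qz = packed & ((1 << Z_BITS) - 1)
    qy = (packed >> Z_BITS) & ((1 << Y_BITS) - 1)
    qx = packed >> (Y_BITS + Z_BITS)
    return (qx / (1 << X_BITS), qy / (1 << Y_BITS), qz / (1 << Z_BITS))
```

With this split, x and y survive a round trip to within 1/1024, while z is only resolved to its band, which matches the trade-off the talk described.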
This encoding scheme allowed the class's Twitter bot to generate and send PNG images that could quickly be converted into 3D models. The compression ratio was impressive, too: Langley showed a 6.7KB PNG image that represented a 156.8KB file in the STereoLithography (STL) format.
![Mesh manipulated in GIMP](https://static.lwn.net/images/2015/05-lgm-mesh-sm.png)
But that was not the end of the experimentation. Once the image-based format was working, Langley and the students started experimenting with altering the PNGs in GIMP and other image-editing tools. Some image transformations would have a predictable effect on the resulting 3D model (such as stretching or compressing), while others were more chaotic. There were practical uses to treating the models as 2D images, too—Langley showed a tool that graphed the changes in a model over time by plotting each step's 2D image together in a sequence.
These three sessions might not be described as a representative sample of LGM 2015, consisting, as they do, of such wildly different projects. But they do provide a glimpse into what makes the conference interesting every year. Html2print has clear application for those in the publication-design field, while Langley's 3D-to-2D object transformation is notably more experimental, and The List presents an interesting new take on consumer-grade content creation. Together, they are nothing if not thought-provoking.
Video editing and free software
Two talks at the 2015 Libre Graphics Meeting in Toronto came from video-editing projects. One was an update from Natron, a relatively young project that deals with video compositing, while the other was a reflection on ten years' worth of development on the general-purpose non-linear editor (NLE) Pitivi. Both are active projects, but they take markedly different approaches: one aims to support an existing industry standard, while the other must build its core functionality from the ground up.
Natron
Alexandre Gauthier-Foichat gave the first report of the two, describing progress in the Natron visual effects (VFX) compositing application. Gauthier-Foichat began with a brief discussion of the project's background: it is developed at the French Institute for Research in Computer Science and Automation (INRIA), which means "it is funded by people's taxes," and the team consists of developers, a vision scientist, and several visual effects artists, among others.
![Alexandre Gauthier-Foichat](https://static.lwn.net/images/2015/05-lgm-natron-sm.jpg)
Natron's 1.0 release was made in December 2014, and the project has attracted considerable attention in recent months—presenting at SIGGRAPH and other conferences. In January, it won a "best innovation" award at the inaugural Paris Images Digital Summit. Gauthier-Foichat gave a preview of what has been happening on the development side—work that is expected to be released as Natron 2.0 by the end of May.
Natron provides a node-based editing environment in which users can visually construct an effects pipeline (by connecting processing nodes on a canvas) that is then used to process video. It implements an industry-standard plugin interface called OpenFX that is supported by a wide variety of other VFX applications. That well-supported standard helps Natron gain acceptance, and not just through compatibility with other applications' plugins. The reality of large productions like television and movies requires small VFX studios to collaborate and exchange data, so many of the studios write their own scripts and tools to be compatible with OpenFX, too.
![Node editing in Natron 2](https://static.lwn.net/images/2015/05-lgm-natronnode-sm.png)
For the upcoming 2.0 release, the Natron team has reworked the user interface, in particular making the node-connection window easier to use. The team also spent a lot of time talking to VFX artists, who began bringing feature requests to the project as Natron grew in popularity. The number one request, he said, was for "interaction with 3D," but it took considerable research to narrow that request down into a specific feature set. Natron 2.0 will feature 3D support in the form of interaction with Blender, allowing users to place depth into the arrangement of the images and clips that make up their scene, rather than treating all of the scene elements as flat pieces on the same plane.
It will also add new scripting features. Python scripting (using Python 3) will be supported, but not just for automating tasks. The new version will be the first to feature scriptable nodes—meaning that a video element can be manipulated by custom code in the middle of a processing pipeline. Natron's implementation will adhere to another widely used industry standard, SeExpr.
He showed several examples of SeExpr scripts in action, including an interactive lighting tool that lets users move illumination sources around the screen and immediately see updated results, without having to re-run the rendering process. Pixar offers a similar tool called LPics, he said, but Natron does the same thing in just a few lines of code. More work is still to come, including G'MIC filter support and CUDA hardware acceleration.
Pitivi
Jean-François Fortin Tam presented the second of the talks, about the GTK+-based NLE Pitivi, which recently passed its tenth birthday.
![Jean-François Fortin Tam](https://static.lwn.net/images/2015/05-lgm-pitivi-sm.jpg)
Ten years is a noteworthy milestone, Fortin Tam pointed out. Free-software NLE projects have a habit of starting strong and then dying out; he showed a timeline graphic depicting the rise and fall of many other free-software NLEs over the past decade. NLEs have developed their own form of the classic "pick any two" conundrum, he said: users can choose a subset of "soon," "cheap," and "complete," but they cannot have all three. "You can get something fairly usable, but if you want it in the next decade, you need to pay people to work on it."
This is a problem, he said, because there is ultimately no business model that supports building a free-software NLE. Even a successful Kickstarter campaign can only generate enough funds to pay "a McDonald's salary" for three to six months. The reason is that less than one percent of computer users use Linux, and of those less than one percent use NLEs. As if one percent of one percent wasn't bad enough, he estimated only a fraction of that number would ever donate more than a dollar toward development.
But financing is not the only hurdle. There are "DevOps challenges" facing an NLE project, too. Whether they are paid or not, most teams consist of only one to three people, which is nothing compared to the hundreds employed by each proprietary NLE company. Furthermore, he said, the technology making up the development platform keeps changing out from under the developers.
He then played a clip from the ending of Rambo III over which Pitivi-related dialog had been superimposed. In Fortin Tam's version of the scene, two protagonists on foot congratulate each other on having completed a port to GStreamer Editing Services, only to be surprised by the arrival of an ever-growing horde of armed adversaries in tanks and helicopters, labeled "GTK+3," "GObject Introspection," "GStreamer 1.0," and so on, who demand that the pair "throw down their old libraries" in surrender. Keeping up with the pace of the desktop application stack, it seems, is not easy for Pitivi's small team. The code has undergone several rewrites in its ten-year history.
Then again, he continued, the video industry changes rapidly, too. Ten years ago, capturing video from external analog and digital sources was critical; today it is ancient history. High-definition has given way to 4K, 8K, and 3D video in rapid succession, and users get mad when their HD movies take more than five minutes to render. "When I was young," he said, "I was happy if it took less than a day." On the plus side, users "no longer have to take out a mortgage" to make a quality movie, and computing power has "gone through the roof."
The big question is how a project like Pitivi can make forward progress under such circumstances. Fortin Tam said that the project's strategy is to focus on enhancing the core, not the user interface. The next release will "finally kill off" GNonLin (the NLE library that originated in the GStreamer project) in favor of a new, integrated editing layer. Design flaws in GNonLin have held back the Pitivi core for years, and the project finally decided that fixing the library was not possible.
Moving forward, he said, Pitivi has to ignore legacy support issues (such as external capture) and focus solely on the modern video-editing workflow. It has to ignore the frequent calls for collaborative-editing features as well, he said: people often say they want them, but surveys of working NLE users show that nobody actually uses the collaboration features in other products.
The project also needs to focus on being a storytelling tool, he said, and not try to cover every part of the production workflow from caption editor to VFX compositor to render farm. That means adjusting to the notion of being one part of a "creative suite" of applications, he said, like making sure Pitivi works well with Natron. There are still technical features that need attention, like color management and hardware acceleration, he said. But while the future may hold amazing new features, too ("3D video, holograms, and ponies," he said), focusing on letting users tell their stories is what will keep Pitivi relevant for the next ten years.
One topic that Fortin Tam did not directly address in his talk was the influence that Blender has on the various efforts to develop an NLE. Like it or not, Blender has steadily expanded the scope of its built-in toolset, which has had the side effect of stealing away a large number of Linux NLE users. The talk schedule for the week included several sessions led by video artists describing their work; the vast majority use Blender for editing and exporting their output.
Video is a rapidly changing area for software developers. Natron has grown into a popular and stable application in a short amount of time (a bit less than two years). But it also has the advantage of sticking to the widely used OpenFX specification—so it gains a lot of functionality for free, so to speak—and it has a larger potential user base by virtue of being cross-platform. Pitivi does not have either luxury; after ten years and several under-the-hood rewrites, the project may finally have found a solid footing on which to build—but it will still be a challenge.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: SpamAssassin 3.4.1; New vulnerabilities in clamav, dnsmasq, net-snmp, owncloud, ...
- Kernel: Year-2038 preparations; OrangeFS; String processing.
- Distributions: Packaging QtWebEngine, OpenBSD, Debian GNU/Hurd, Fedora 22 Beta for aarch64 and POWER, ...
- Development: Opposition to type hints in Python; Git hosting on Launchpad; Jython 2.7.0; community adoption of App Container; ...
- Announcements: International Day Against DRM, PGConf Silicon Valley CfP, ...