By Nathan Willis
April 24, 2013
Libre Graphics Meeting always boasts a diverse program; as we
mentioned last week, the 2013 edition in Madrid was a
prime example. Among the project updates, reports from artists, and
philosophical sessions, though, there were also several talks which
were noteworthy for raising issues to which open source
developers—in graphics and in other fields—should give
further thought. For example, the topic of meshing print and online
design in a single application came up multiple times, and there were
sessions that dealt with how open source works (or does not work) with proprietary formats and Internet standards bodies.
Liquid layout
A popular refrain among futurists is that paper printing is
dying, to be wholly replaced by electronic publishing. But as Claudia
Krummenacher observed, futurists have been saying that for decades, at
least since the days of radio, and they have not been right yet.
Moreover, she said, even if paper does eventually disappear, the reality is that software applications will have to deal with both print and electronic output for the foreseeable future. And, right now, they do not deal with them well.
Krummenacher described her experiences trying to produce
good-looking output for both EPUB and PDF with the Scribus desktop
publishing application. Scribus gained support for EPUB (through an
HTML intermediary) output a year ago, but its tool set is still
designed strictly around the assumptions of the print world. For
example, pages have fixed margins, a fixed grid on which elements are
laid out, and a fixed resolution. EPUB reading devices like Kindles
and smartphones can and often do violate all of these assumptions: the page size and resolution vary, the user can switch to a different font or font size (thus throwing off the grid alignment), and so on.
The "correct" way to design EPUB documents is with liquid layout
rules: relative measurements such as "75% column width" and floating
positions for graphics and other elements. Web design tools already
support these conventions, but desktop publishing applications do
not. Applications like Scribus will need to support both fixed and
liquid layouts, Krummenacher said, or they will not be relevant.
She offered two possible paths forward: either design documents
using fixed measurements and improve the "best guess" approach of HTML
exporters, or change the design tools so that liquid measurements are
the default—and let the PDF exporter convert documents to fixed
measurements. Perhaps there are lessons about liquid layout and exporting to a fixed layout to be learned from other systems, such as LaTeX or the various word processors, she suggested.
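As a rough illustration of that second path, here is a sketch (in Python, and not Scribus code) of the sort of conversion a "best guess" exporter performs: fixed measurements in points are re-expressed as percentages of the page width. The frame data and page size are invented for the example.

    # Hypothetical sketch: converting a fixed (print) layout to liquid
    # (relative) measurements, as an HTML/EPUB exporter might. The frame
    # geometry and page size below are invented for illustration.

    PAGE_WIDTH_PT = 595.0   # A4 page width in points; the fixed design target

    def to_liquid(frame, page_width=PAGE_WIDTH_PT):
        """Express a frame's horizontal geometry as percentages of page width."""
        return {
            "left": f"{frame['x'] / page_width * 100:.1f}%",
            "width": f"{frame['width'] / page_width * 100:.1f}%",
        }

    # A text frame laid out on a fixed grid: 72pt left margin, 446.25pt wide.
    text_frame = {"x": 72.0, "width": 446.25}
    print(to_liquid(text_frame))   # {'left': '12.1%', 'width': '75.0%'}

A real exporter would, of course, have to handle vertical flow, font-relative units, and floating elements as well; the hard part is that none of that information is recorded when the designer works purely in fixed units.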
Other speakers echoed Krummenacher's comments, since
several of them dealt with projects that were designed for both print
and electronic output (such as JoaquĆn Herrera's Blender book).
Despite all of this discussion, however, the conversation over how best to proceed is only beginning.
Scribus developer Ale Rimoldi offered some hope
for the future in his talk, which described the side projects he has
been working on over the past twelve months. Those twelve months were
spent in a (self-imposed) hiatus from Scribus, during which time
Rimoldi experimented with writing small, stand-alone programs meant to
be chained together. His inspiration was smartphone apps, he said,
where no app is very complex, but he thought that smaller applications
meant to be chained together like Unix pipes could be useful on the
desktop, too. There are existing applications, like Phatch, that recreate the Unix pipe model, so he thought the same approach might work for improving Scribus as well. He did not have any code to release, unfortunately.
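Since Rimoldi released no code, the following is only a hypothetical sketch of the model he described: tiny stand-alone programs that each do one job on a document stream and compose like Unix pipes.

    # Hypothetical sketch of the "small chained tools" idea: each program
    # does one job, reading a document stream on stdin and writing the
    # transformed result to stdout. Nothing here is actual Scribus code.
    import sys

    def main():
        # This filter's single job: drop empty lines from the stream.
        for line in sys.stdin:
            if line.strip():
                sys.stdout.write(line)

    if __name__ == "__main__":
        main()

    # Chained with other equally small (and here imaginary) tools:
    #   extract-text book.sla | strip-empty.py | number-paragraphs > out.txt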
The great thing about standards
Inkscape developer Tavmjong Bah presented an overview of the
changes coming in SVG 2.0. It was through his work on Inkscape that
he was invited to work on the SVG standard, he said. SVG is a World
Wide Web Consortium (W3C) standard, created to merge the competing
proposals Vector
Markup Language (which was backed by Macromedia, among others) and
Precision
Graphics Markup Language (whose backers included Adobe). But
despite its origins with graphics application vendors, it has evolved
over the years to become mainly the concern of web browser vendors.
After Adobe bought Macromedia, it lost interest in SVG development,
but the format saw a resurgence in popularity with the rise of HTML5,
which includes SVG and MathML.
SVG 2.0 is now in rapid development, Bah reported. There are 45 members
of its W3C working group, although only ten or so are active
participants. The big picture is that SVG 2.0 will retain most of SVG
1.1 (the current standard), but it is being "ripped apart" into
multiple modules, most of which will be shared with CSS or other W3C
standards. For example, SVG 2.0 will no longer include its own
definitions of colors, but will instead refer to the CSS Color Module. Similar
moves are found in relation to backgrounds, fonts, animations, and
other components.
For the most part, Bah said, sharing modules with the CSS
specification is a good thing, but it does have some drawbacks for
graphics application developers. The biggest problem is that the CSS
working group is dominated by the browser vendors, who do not always
share the same concerns as other application developers. As an
example, he cited image scaling. CSS specifies that images must be
scaled up with a particular algorithm, which in extreme cases results
in blurry pictures. But SVG users would often rather have an image
scaled up and retain its "blocky" look—think of a scaled-up icon,
for example. SVG cannot change the CSS specification on its own, so
graphics applications like Inkscape must break the specification or
find a sneaky way to get around it.
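The difference is easy to reproduce outside of any browser or SVG renderer. This short script uses Pillow (illustrative only, not Inkscape code) to scale a tiny checkerboard standing in for an icon, once with a smoothing filter and once with nearest-neighbor sampling:

    # Demonstrating blurry versus blocky upscaling with Pillow.
    from PIL import Image

    # Build a 4x4 two-color checkerboard as a stand-in for a small icon.
    icon = Image.new("RGB", (4, 4))
    icon.putdata([(0, 0, 0) if (x + y) % 2 else (255, 255, 255)
                  for y in range(4) for x in range(4)])

    blurry = icon.resize((128, 128), Image.BILINEAR)  # CSS-style smooth scaling
    blocky = icon.resize((128, 128), Image.NEAREST)   # crisp, "blocky" pixels

    blurry.save("blurry.png")
    blocky.save("blocky.png")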
Much of Bah's session was devoted to updates on the many changes coming in SVG 2.0, which are certainly of importance to LGM attendees. But
it was also interesting to hear how the W3C standards process works,
particularly since open source applications rarely participate.
Mozilla and Google's Chrome teams do participate in most W3C working
groups, but they only bring the perspective of the web browser to
bear. Other projects need to get involved if they want their concerns
to be addressed. The good news, Bah said, is that W3C policy dictates
that there must be two implementations of a feature for it to be
considered for a standard. There is no rule that either
implementation must be in a browser, however—so many other
projects that rely on W3C standards can influence the outcome by
participating and taking on some responsibility for the features they
care about.
Reversal of fortune
Standards bodies are of critical importance to ensuring
interoperability between open source and proprietary applications, but
there will always be a need for reverse engineering, too. Fridrich
Strba and Valentin Filippov from LibreOffice presented a talk on their
efforts to reverse engineer several proprietary file
formats—work that benefits numerous other projects.
Strba started out by commenting that LibreOffice is really a bunch
of libraries with various interfaces put on top. That means that
there is a standalone framework for generating the ODF file format
that the various LibreOffice applications use, and libraries for
converting it to other formats. Many other applications can and do
make use of these libraries, including Inkscape, Calligra, and
Scribus. The reuse is great from LibreOffice's standpoint, he said,
since the project gets far more bug reports and sample files than it
would see otherwise.
The recent reverse-engineering work has centered on Microsoft's
Visio diagram format, Corel Draw, and Microsoft Publisher, Strba
said. LibreOffice frequently uses Google Summer of Code projects to
advance its support for these and other formats, a strategy that has
paid off. LibreOffice is now capable of reading every version of Visio
ever released, including the recently-released Visio 2013. Filippov
described the reverse-engineering process in more detail, including
the use of custom hex viewers and parsers which he wrote.
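Filippov's own tools were not shown, but a minimal hex viewer of the kind he described takes only a few lines of Python:

    # A minimal hex viewer in the spirit of the custom tools Filippov
    # described; this sketch is not his code, just the general idea.
    import sys

    def hexdump(path, width=16):
        with open(path, "rb") as f:
            offset = 0
            while chunk := f.read(width):
                hexed = " ".join(f"{b:02x}" for b in chunk)
                text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
                print(f"{offset:08x}  {hexed:<{width * 3}} {text}")
                offset += len(chunk)

    if __name__ == "__main__":
        hexdump(sys.argv[1])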
At the moment, Filippov said, the team is planning to tackle
Freehand, a comment which prompted an audience member to ask about
Adobe InDesign. Everybody asks for InDesign, Filippov replied, but
the problem is that few people understand what it takes to pitch in
and help with the reverse engineering. It is not enough to simply
send over a complete file, he said. A volunteer needs to be prepared
to create multiple files, with specific changes, and describe how they
should look to the developers. That is how the reverse engineers
locate and interpret the many undocumented fields in a file format.
It takes a lot of specifics, and a lot of iterations. "It's good if
it's not somebody addicted to sleep," the pair concluded. "We can
promise that you'll be famous, but not that you'll be rich."
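To make that volunteer workflow concrete: the developers compare two saved files that differ by one deliberate change, and the byte offsets where they differ point at the fields that change controls. A toy sketch of the comparison step (the file names are placeholders):

    # Sketch of the workflow Filippov outlined: save two copies of a
    # document that differ by a single deliberate change, then diff the
    # raw bytes to locate the undocumented fields that change touched.
    def changed_offsets(path_a, path_b):
        with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
            a, b = fa.read(), fb.read()
        return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

    # e.g. the same drawing saved before and after turning one line red:
    # print(changed_offsets("line-black.vsd", "line-red.vsd"))

Repeat that over many single-variable changes, and the format's structure slowly emerges, which is why the process demands so many iterations from its volunteers.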
In a sense, all three topics deal with the uncomfortable realities
that confront open source developers. Print is not going away, but
then again neither are e-readers. Many standards bodies are ostensibly
open to all participants, but they can still be dominated by big
players with orthogonal or opposing concerns. Even when an open
format is demonstrably superior, the world will continue to produce
millions of documents in closed, proprietary formats. But the talks
at LGM all offered some positive steps that projects can take: reverse
engineering is hard work but it is not impossible, standards bodies
are much easier to work with when you understand how you can influence
them, and whatever the future holds for document creation, open source
can adapt to meet it.
[The author wishes to thank Libre Graphics Meeting for assistance with travel to Madrid.]