Your editor has long enjoyed photography. As a high school student, he
even pondered, briefly, the idea of pursuing photography as a career; for
better or for worse, common sense won out and your editor went to
engineering school instead. But taking pictures has remained an active
hobby, even if it has tended to degrade to the creation of a stream of
snapshots of the kids for grandparent consumption in recent years. The
advent of digital photography has brought a couple of your editor's
passions back together, with only one thing - free time - missing. But,
your editor has discovered, one of the keys to the finding of free time is
to take an activity of interest and redefine it as "work." Thus, this article.
A while back, your editor stumbled across the Flickr HDR pool; some
of the photos in that pool were sufficiently amazing to inspire an
immediate "I wanna do that" reaction. The better part of a year later, it
finally became possible to learn a bit more about the process behind
those pictures. HDR (or high dynamic range)
photography is a set of techniques for overcoming the limitations of
contemporary hardware and, in the process, generating images which better
represent a scene as viewed by the human eye - or which appear to come from
a work of fantasy art.
The sensors in today's digital cameras have gotten good, but they still
fall short of the human eye in a few ways. In particular, the range of
light levels which can be captured by the sensor is not yet up to what film
can handle, and is far from what the eye can do. Anybody who has spent any
time taking pictures is familiar with this problem: one can take a
beautiful landscape picture, but, in the end result, the wild cloud
formations are washed out completely and the shadows just go black. Being
unable to capture a scene that one can see quite well can be most frustrating.
The idea behind HDR, as it is used with photography, is to extend the
available dynamic range by taking multiple shots at different exposure
levels. For a given exposure, there will be a range of light levels which
will be captured with good resolution by the sensor; everything else gets
compressed at one end or the other. If one has a series of images at
different exposures and a reasonable model of the camera's response curve,
one can generate a composite image by using the parts of each source image
which are in the good part of the curve. So, in that landscape picture, a
very dark exposure can be used for the bright parts of the scene - clouds,
for example - while a bright exposure yields low-light details. By mixing
them together, the HDR algorithm can produce an image with full sensor
resolution across a much wider dynamic range.
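The merging step described above can be sketched in a few lines. This is an illustration of the general technique, not any particular tool's implementation; the weighting function, the `inv_response` callback standing in for the camera's response curve, and all the names are assumptions made for the sketch:

```python
import numpy as np

def merge_hdr(images, exposure_times, inv_response):
    """Merge bracketed 8-bit exposures into a single radiance map.

    images:         list of float arrays with pixel values in 0..255
    exposure_times: exposure time (in seconds) for each image
    inv_response:   maps a pixel value back to relative sensor irradiance
    """
    # Weight pixels by their distance from the extremes: values near
    # 0 or 255 are noisy or clipped, so they contribute little.
    def weight(z):
        return np.minimum(z, 255.0 - z)

    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = weight(img)
        # Irradiance estimate from this exposure: E = f^-1(z) / t
        num += w * inv_response(img) / t
        den += w
    return num / np.maximum(den, 1e-9)
```

With a linear response, a pixel reading 64 at 1/100s and 128 at 1/50s both imply the same irradiance, and the merge returns exactly that value.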
As an example, consider the photograph below, taken by your editor.
(Larger versions of the images are
available.) In the original, parts of the plant in the foreground are
entirely lost in the shadows. Meanwhile, the breathtaking view of Colorado
suburbia (with mountains in the distance) is washed out entirely. The HDR
version brings all of that detail back.
HDR is not applicable to all situations. It has a tendency to turn people
into cartoon characters. Beyond that, the need for multiple exposures
generally implies setting up a tripod and taking some time for the entire
process. It is thus not well suited to changing scenes, sports photography
(though baseball, perhaps, can be expected to stand still for the requisite
time), etc. It can work well for relatively static scenes: landscapes,
buildings, the SCO case, and so on.
Most of the people playing with HDR seem to be using proprietary plugins
for a proprietary image manipulation program running under a proprietary
operating system. That is, needless to say, not your editor's preferred
mode of operation. Thus your editor began a search for tools which would
perform HDR processing under Linux. It turns out that there are a few such
tools around; there is no need to use proprietary software for this task.
The first step is to look for a way to represent HDR images - normal image
formats are not up to that task. Linux.com ran a reasonable
article on HDR formats
late last year; the end result appears to be
that the OpenEXR
format is the way to
go. The OpenEXR package comes with the libraries needed by other
applications and the deeply painful "exrdisplay" image viewer. The pfstools
package adds a set of pipeline-oriented tools for working with HDR images; it is a
necessary part of any HDR hobbyist's toolkit.
Next, one should come up with a set of source images. Ideally, these images are
taken with a tripod-mounted camera and cover a range of at least two
f-stops above and below the nominally "correct" exposure. Varying the
exposure time is preferred over changing the aperture; if nothing else,
this ensures that all of the images will have the same depth of field. One
can start with images taken without a tripod, but it will be
necessary to register them before continuing. Your editor did not get into
that aspect of the task; tools like hugin and hdrprep
can be used for this job. These tools may be a good topic for your
editor's attention in a future article. One can also apply HDR techniques
to a single image, especially if it is in the camera's raw format, but
multiple exposures give much better results.
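Since each f-stop doubles or halves the amount of light, the bracketed exposure times follow directly from the base exposure; here is a small helper illustrating the arithmetic (the function name and parameters are invented for this sketch):

```python
def bracket_times(base, stops=2, step=1.0):
    """Shutter times (seconds) covering -stops..+stops EV around `base`.

    Each EV step multiplies or divides the exposure time by 2**step,
    so a +/-2 stop bracket around 1/125s runs from 1/500s to 1/31s.
    """
    n = int(round(stops / step))
    return [base * 2.0 ** (i * step) for i in range(-n, n + 1)]
```

For example, `bracket_times(1.0 / 125.0)` yields five exposures, from two stops under to two stops over the nominal one.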
With the images in place, one can look at combining them into an HDR
image. This is a two-stage process (two user-visible stages, at least):
creating a set of response curves and using them to map the images together
into a single dynamic range space. The response curves are a mapping
between some sort of real-world light levels and the resulting sensor
values on all three color channels. When combined with information on the
relative exposure times of two (or more) images, the response curves allow
the HDR program to map pixels from all of the images into the same space.
The response curves can be generated directly from the source images; they
don't normally change, so they can be saved and reused later.
The first HDR-generation tool to look at is cinepaint,
once known as "Film Gimp." This tool is a fork of the GIMP which is aimed
at use by movie studios; its floating-point image data support makes it useful for
HDR processing as well. The generation of HDR is done with the "bracketing
to HDR" plugin which is, happily, packaged with the cinepaint source
distribution. There is a
detailed explanation of what this plugin does and how to use it. Be
warned that it makes for somewhat difficult reading - and it would even if
it weren't originally written in German.
The good news is that actually using this plugin is easy. One selects
"bracketing for HDR" from the File->New from menu, then
selects the set of source images from a simple dialog. The plugin will
then import them. There is no provision for obtaining the relative
exposure information from the image files themselves; instead, the plugin
sorts the images by brightness and applies an assumed (adjustable) exposure
difference between them.
It attempts to feed each image to dcraw for decoding, but your
editor was not able to get raw images to work despite the fact that dcraw
supports his camera just fine; it looks like the raw import plugin was
written for an older version of dcraw. That problem is likely to be easily
overcome; your editor just didn't want to spend much time on it. So TIFF
files were used instead.
Once the images are in, the user can check the exposure values, then hit
the "compute response" button. That yields the two plots shown in the
screenshot. By messing around with the buttons, one can look for the
reference image which yields the smoothest set of response curves - or one
can just accept what the plugin does by default.
Then a click on the "generate HDR" button creates the final
product, which can then be saved out in the OpenEXR format.
Your editor set out to take some amazing pictures for this article. The
area in which your editor resides is widely held to be beautiful, but,
frankly, Colorado is not at its best in early March; perhaps this article
should have been written in June. Nonetheless, the effort
was made. Below is a rather mediocre shot of the Boulder foothills in
original and HDR (with cinepaint) forms (larger versions are available).
The HDR image above shows a halo effect (the bright sky above the mountain)
which is characteristic of some tone mapping algorithms; we'll get into
tone mapping shortly.
An alternative approach is PFScalibration,
a command-line HDR generation utility based on pfstools. These tools work
as a netpbm-like pipeline; their use requires a fair amount of typing,
though much of the work can be scripted. The steps are the following:
- Run jpeg2hdrgen to generate a description file for the source
images. It reads the EXIF information from the source files to get
the relative exposures and outputs it in a simple file. There is a
dcraw2hdrgen tool as well, but the subsequent stages in the
pipeline are not able to work with raw files. Your editor suspects
that TIFF files could be used by creating the hdrgen file by hand, but
the whole process seems to be intended for use with JPEG files. A
lossy file format is not the most auspicious starting point for
somebody interested in high dynamic range imagery, but that's how it is.
- The pfshdrcalibrate utility can then be used to create a set
of response curves; gnuplot can be used to visualize them.
This process can take some time (it's significantly slower than
cinepaint), but the resulting file can be saved and reused with
different images in the future.
- Another pfshdrcalibrate run then uses the response curves to
create the HDR image. Piping the output into pfsoutexr
generates an OpenEXR file.
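The three steps above lend themselves to scripting. Here is a sketch that merely composes the pipeline commands as strings; the `pfsinhdrgen` reader and the `-s`/`-f` flags for saving and reusing the response curve are assumptions about the PFScalibration command set, so check the tools' own help output before relying on them:

```python
import shlex

def hdr_pipeline(jpegs, response="camera.response", output="out.exr"):
    """Compose the PFScalibration steps described above as shell strings.

    The -s (save response) and -f (apply response from file) options are
    assumed; consult pfshdrcalibrate's documentation for your version.
    """
    quoted = " ".join(shlex.quote(j) for j in jpegs)
    return [
        # 1. Describe the source images (reads EXIF exposure data).
        f"jpeg2hdrgen {quoted} > scene.hdrgen",
        # 2. Derive the camera response curves and save them for reuse.
        f"pfsinhdrgen scene.hdrgen | pfshdrcalibrate -s {shlex.quote(response)}",
        # 3. Apply the curves and write the OpenEXR result.
        f"pfsinhdrgen scene.hdrgen | pfshdrcalibrate -f {shlex.quote(response)}"
        f" | pfsoutexr {shlex.quote(output)}",
    ]
```

Feeding the resulting strings to a shell (or `subprocess.run(cmd, shell=True)`) turns the whole sequence into the single-command tool the pipeline deserves.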
Here's an example generated from a series of pictures of your editor's
dungeon office (larger versions are available).
As a general rule, HDR images generated with cinepaint and PFScalibration
tend to look nearly identical. The generation of HDR is not where the real magic
lies, so the results should be close.
For those who don't like command-line HDR processing, the qtpfsgui utility may be worth a
look. It is a graphical wrapper around PFScalibration based on QT4; it
handles both HDR generation and tone mapping. On the HDR side, it puts up
a file selection dialog for the source images followed by the "HDR creation
wizard." The user is asked to select a "creation configuration," from a
list of configurations helpfully named "Configuration 1" through
"Configuration 6". The advice to stick with Configuration 1 was
hard for your editor to ignore; simply hitting "next" generated the image.
Said image appeared in a display window; like exrdisplay, this window can
only show the image in full resolution. Your editor, lacking a
7 megapixel monitor, was thus unable to view the entire image at
once. Even worse, qtpfsgui is one of the family of (generally KDE-based)
graphical tools which feels the need to implement its own window manager.
The display window lives within the larger qtpfsgui window; it cannot be
resized with the usual shortcut your editor is used to. In summary,
qtpfsgui gets the job done, but writing a simple script around
PFScalibration seems like an easier way to go.
While the tools above will generate a fine HDR image, one problem remains:
the dynamic range in that HDR image far exceeds the range of your editor's
monitor (or printer). Turning that image into something which can be
displayed requires a step called tone mapping. This is
where the serious magic comes in: somehow the vast amount of information in
the HDR image must be scaled back in a way which does not compromise the
image quality that was the whole point of this exercise in the first place.
Several tone mapping algorithms exist, and most of them have a number of
mysterious knobs to tweak. While the generation of HDR can be mostly
automated, tone mapping inherently requires experimentation and human judgment.
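To make the compromise concrete, here is a minimal sketch of one of the simplest global operators, Reinhard's L/(1+L) curve. It illustrates the idea only and is not one of the pfstmo implementations; the `key` parameter name follows the common "middle grey" convention:

```python
import numpy as np

def tonemap_reinhard(radiance, key=0.18):
    """Global Reinhard-style operator: compress luminance L to L/(1+L).

    radiance: HDR luminance array (arbitrary positive scale)
    key:      target middle-grey level the scene is scaled to
    """
    # Log-average ("geometric mean") luminance characterizes the scene.
    lavg = np.exp(np.mean(np.log(radiance + 1e-6)))
    scaled = key * radiance / lavg
    # Maps [0, inf) into [0, 1): highlights are compressed hard,
    # shadows nearly linearly - the essence of tone mapping's tradeoff.
    return scaled / (1.0 + scaled)
```

However bright the input gets, the output never reaches 1.0, which is exactly how an unbounded dynamic range is squeezed onto a monitor.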
The bulk of the action appears to be in the pfstmo package, which
implements several tone mapping algorithms as separate, standalone
filters. One can use pfstmo with the rest of the pfstools package to
construct pipelines which generate tone-mapped images. Given the iterative
nature of the task, however, it would be nice if there were a better way.
That better way is qpfstmo,
a Qt-based graphical interface to pfstmo. The interface feels a little
clunky at times, and it would sure be nice to have some online
documentation on what the various parameters do, but qpfstmo does what is
really needed: it lets the user play with tone mapping algorithms and
compare the results. A small image size can be used for trying out
algorithms and parameters - a real time saver, since some of the algorithms
can take a long time on a full-size image - and multiple versions of the image can be on the
screen at once. When a final configuration is found for a given image, it
can be generated in a larger size and saved in any of the usual image
formats. When applied to a large image file, this step can be rather hard
on the hardware; your editor discovered that 1GB of memory was not really enough.
The qtpfsgui tool mentioned above has the ability to drive pfstmo as well.
It is, in fact, clear that this tool shares a lot of code with pfstmo. The
interface is far less friendly, however: everything happens within the One
Big Window and it does not appear to be possible to see the results from
more than one algorithm at the same time. It resets the display image size
every time the user changes algorithm. One assumes that this (fairly new)
tool will improve over time. For now, though, qpfstmo seems like a much
better way to go for tone mapping control.
A different set of tone mapping operators is supplied with the exrtools distribution. Your editor
tried them all; each one is a cumbersome, multi-step process. It can take
a long time to process an image, only to find that the parameters need
quite a bit of tweaking. The tools seem like they will do quality
transformations, but they just cry out for a qpfstmo-like interface which
allows experimentation with smaller-size images and comparison of results.
For what it's worth, here's a shot taken from the hill above your editor's
house mapped with the exrtools non-linear masking method:
See the larger versions for more detail.
Doubtless one could get good results from these tools with enough effort,
but your editor found it easier to get quality images with pfstmo.
For the generation of HDR images, your editor found cinepaint to be faster
and simpler to work with. This does not count, however, the long and
frustrating experience of building the HDR plugin on a Fedora
Rawhide system; one gets the sense that the plugin's author uses a rather
older, less picky version of g++. Longer-term, however, the PFScalibration
suite may prove to be the way to go. It is far more compact and easy to
install on a new system; why lug the weight of cinepaint if one is not
going to use its other features? A bit of scripting will easily turn
PFScalibration into a single-command HDR generation tool.
It's worth noting that there are a couple of other HDR generators for Linux
out there. MakeHDR is
where a lot of it started; one of its authors is Paul Debevec, who did much
of the early research in this area. The code was last touched in 1999,
however, and it comes with an "educational purposes only" license. One can
also look at HDRgen, but it is a
binary-only, free-beer tool. Your editor did not actually try either one
of them; given that the free tools do the job so well, there didn't seem to
be any point.
For tone mapping, pfstmo (and qpfstmo) are the best tools at this point. It is
hard to be entirely satisfied with the state of the art in this area,
though. Tone mapping will always be an exercise in compromises, so it's
not surprising that the results are rarely perfect. There is likely to be
room for improvement - in both the algorithms and the interface to them -
for some time to come.
As is the case in many areas, Linux has the tools one needs to play with
high dynamic range imagery. One just has to work a little harder to get
started than on some other systems. HDR has found its way into your
editor's photographic toolkit; look for the results in the reporting from
some conference in some exotic part of the world. When playing with this
stuff, your editor is far from grumpy.
The problem first came up
in February: the Red Hat Directory Server developers would like to
include the Java Security Services module in the Fedora distribution. The
code, it seems, is free, but there is still a problem: the Java virtual
machine requires that all Java Cryptography Extension providers (of which
JSS is one) be signed with a Sun-approved key. If an application tries
to use a JCE module which lacks the requisite signature, the whole thing
comes crashing down - an experience which probably differs from what the
user had in mind. In practice, this limitation means that users either use
the signed version obtained from Sun, or do not use JSS at all.
Warren Togami recently posted a couple of
possibilities for how Fedora might be able to ship JSS. They were:
- The Fedora team builds the JSS module, then compares it to the
Sun-signed version. Assuming they match, Fedora has proved that it
can rebuild the software. So the project can declare Mission
Accomplished, dump the module it just built, and ship Sun's version.
A variation on this approach suggested later on involved having Red
Hat obtain an approved key and sign the modules that Fedora would
distribute; in this way, Fedora could add its own modifications.
- Fedora ships an unsigned version of JSS. Applications would then have
to be recoded to load the module in a way which shorts out the
signature check. Any applications not fixed up in this way would fail to run.
The first option, at first blush, would appear to work. Fedora would be
able to build its own module and ship the source. It falls down, however,
as soon as a Fedora user tries to make a change; that user will not be able
to rebuild the patched module in a way that will actually work. Derivative
distributions would run into the same problem. As a result, it would
appear that Fedora stopped considering this option fairly quickly.
Not signing the module at all has obvious problems as well. It seems
likely that many potential JSS users have their own applications in mind.
If those applications do not work, they will rightly see Fedora as not
having support for the features they are looking for.
Other alternatives have been considered; one is to emphasize the use of the
GCJ compiler and try to steer users away from Sun's virtual machine. That
approach would certainly offer a higher degree of freedom, at the cost of
not really providing what many Java users appear to want. Additionally,
not everybody is convinced that GCJ has achieved the level of maturity that
many users would expect.
In an interesting way, this is really just the Tivo problem under a
different guise. Locked-down hardware refuses to run software which lacks
the expected signatures. In this case, we have virtual hardware, in the
form of the Java virtual machine, which is doing the same thing. The
result is the same as well: the software is available and nominally free,
but the users of that software cannot create their own versions and expect
to be able to run them.
If Sun follows through on its desire to move to GPLv3, and if that license
retains its requirement that any needed signing keys be distributed with
the source, Sun may find itself in an interesting position. It is hard to
see how the current policy would be compliant with the new GPL's terms.
The upcoming GPL Java release would appear to be the best hope for
distributors trying to deal with this situation. Once the code is free,
distributors can patch it to make the whole of Java distributable as free
software. So the real solution to shipping JSS with a distribution which
insists on freedom would appear to be to wait for a free Java.
Page editor: Jonathan Corbet