Longtime GStreamer hacker Wim Taymans opened the first-ever GStreamer
conference with a look at where the multimedia framework came from, where
it stands, and where it will be going in the future. The framework is a
bit over 11 years old and Taymans has been working on it for ten of those
years, as conference organizer Christian Schaller noted in his
introduction. From a simple project that was started by Eric Walthinsen
on an airplane flight, GStreamer has grown into a very capable framework
that is heading toward its 1.0 release—promised by Taymans by the end of 2011.
Starting off with the "one slide about what GStreamer is",
Taymans described the framework as a library for making multimedia
applications. The core of the framework, which provides the plugin system for
inputs, codecs, network devices, and so on, is the interesting part to
him. The actual implementations of the plugins are contained in separate
plugin modules, with a core-provided "pipeline that allows you to connect
elements".
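For a sense of what that looks like in code, here is a minimal sketch against the 0.10 C API (the element names are just commonly available examples, not anything specific from the talk):

    /* Minimal GStreamer 0.10 sketch: build a two-element pipeline and
     * set it playing.  Error checking is omitted for brevity. */
    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        GstElement *pipeline, *src, *sink;

        gst_init(&argc, &argv);

        pipeline = gst_pipeline_new("example");
        src  = gst_element_factory_make("audiotestsrc", "src");
        sink = gst_element_factory_make("autoaudiosink", "sink");

        /* The core provides the pipeline and the linking machinery; the
         * elements themselves come from separately packaged plugins. */
        gst_bin_add_many(GST_BIN(pipeline), src, sink, NULL);
        gst_element_link(src, sink);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_usleep(3 * G_USEC_PER_SEC);       /* play a test tone briefly */
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }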
When GStreamer was started, the state of Linux multimedia was "very
poor". XAnim was the utility for playing multimedia formats on
Linux, but it was fairly painful to use. Besides GStreamer, various other
multimedia projects (e.g. VLC, Ogle, MPlayer, FFmpeg, etc.) started in the
1999/2000 timeframe, which was something of an indication of where things
were. The competitors were well advanced as QuickTime had appeared in 1991
and DirectShow in 1996. Linux was "way behind", Taymans said.
GStreamer's architecture came out of an Oregon Graduate Institute research
project with some ideas from DirectShow (but not the bad parts) when the
project was started in 1999. Originally, GStreamer was not necessarily targeted at
multimedia, he said.
The use cases for GStreamer are quite varied, with music players topping
the list. Those were "one of the first things that actually
worked" using GStreamer. Now there are also video players (which
are moving into web browsers), streaming servers, audio and video editors,
and transcoding applications. One of the more recent uses for GStreamer,
which was "unpredicted from my point of view", is for
voice-over-IP (VoIP); both the Empathy messaging application and the
Tandberg video conferencing application are using it.
After the plane flight, Walthinsen released version 0.0.1 in June 1999. By
July 2002, 0.4.0 was released with GNOME support, though it was "very
rough". In February 2003, 0.6.0 was released as the first version
where audio worked well. After a major redesign to support
multi-threading, 0.10.0 was released in December 2005. That is still the
most recent major version, though there have been 30 minor releases, and
0.10.31 is coming soon. 0.10.x has been working very well, he said, which
raises the question about when there will be a 1.0.
To try to get a sense for the size of the community and how it is growing,
Taymans collected some statistics. There are more than 30 core developers
in the project along with more than 200 contributors for a codebase that is
roughly 205K lines of code. He also showed various graphs of the
commits per month for the project and pointed out a spike around the time of
the redesign for 0.10. There was also a trough at the point of the Git
conversion. As expected, the trend of the number of commits per month
rises over the life of the project.
In order to confirm a suspicion that he had, Taymans made the same graph
for just the core, without the plugins, and found that commits per month
has trailed off over the last year or so. The project has not been doing
much in the way of new things in the core recently and this is reflected in
the commit rate. He quoted Andy Wingo as an explanation for that:
"We are in 'a
state of decadence'".
When looking at a graph of the number of lines of code, you can see
different growth rates between the core and plugins as well. The core
trend line shows flat, linear growth. In contrast, the trend line for the
plugins shows exponential growth. This reflects the growing number of
plugins, many of which are also adding new features, while the core just gets
incremental improvements and features.
The current state
Taymans then spent some time describing the features of GStreamer. It is
fully multi-threaded now; that code is stable and works well. The advanced
trick mode playback is also a high point, and it allows easy seeking within
audio and video streams. The video editing support is coming along, while
the RTP and streaming support are "top notch". The plugins are
extensive and well-tested because they are out there and being used by lots
of people. GStreamer is used by GNOME's Totem video player, which puts
it in more hands. "Being in GNOME helps", he said.
The framework has advanced auto-plugging features that allow for dynamic
pipeline changes to support a wide variety of application types. It is
also very "binding friendly" as it has bindings for most
languages that a developer might want to use. Developers will also find that
it is "very debuggable".
There are many good points with the 0.10 codebase, and he is very happy
with it, which is one of the reasons it has taken so long to get to a 1.0
release. The design of 0.10 was quite extensible, and allowed many more
features to be added to it. Structures were padded so that additional
elements could be added for new features, without breaking the API or ABI.
For example, the state-change and clock-handling code was rewritten
during the 0.10 lifetime. The developers were also able to add new features like
navigation, quality of service, stepping, and buffering in 0.10.
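The padding trick is simple enough to show in a sketch (the structure and field names below are hypothetical; GStreamer's 0.10 headers use reserved arrays in the same spirit):

    #define MY_PADDING 4

    typedef struct {
        /* existing public fields */
        int       state;
        void     *clock;

        /* Slots reserved for future expansion: a later minor release can
         * turn one of these into a real member without changing the
         * structure's size, so the API and ABI stay intact. */
        void     *_reserved[MY_PADDING];
    } MyElement;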
Another thing that GStreamer did well was to add higher-level objects.
GStreamer itself is fairly low-level, but for someone who just wants to
play a file, there is a set of higher-level constructs to make that easy—like playbin2, for playing video and audio content, and tagreadbin
to extract media metadata. The base classes that were implemented for
0.10, including those that have been added over the last five years, are
also a highlight of the framework. Those classes handle things like sinks,
transforms, decoders, encoders, and so on.
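As a rough sketch of how much those higher-level objects hide, something like the following (0.10 API, with a placeholder URI) is all an application needs in order to play a file with playbin2:

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        GstElement *play;
        GstBus *bus;
        GstMessage *msg;

        gst_init(&argc, &argv);

        /* playbin2 builds the whole decode-and-output pipeline internally. */
        play = gst_element_factory_make("playbin2", "player");
        g_object_set(play, "uri", "file:///tmp/example.ogg", NULL);
        gst_element_set_state(play, GST_STATE_PLAYING);

        /* Wait for an error or end-of-stream message on the bus. */
        bus = gst_element_get_bus(play);
        msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                         GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
        if (msg)
            gst_message_unref(msg);

        gst_element_set_state(play, GST_STATE_NULL);
        gst_object_unref(bus);
        gst_object_unref(play);
        return 0;
    }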
There are also a number of bad points in the current GStreamer. The
current negotiation of formats, codecs, and various other variable
properties is too slow. The initial idea was to have an easy and
comprehensible way to ask an object what it can do. That query will return
the capabilities of the object, as well as the capabilities of everything
that it is connected to, so the framework spends a lot of time generating a
huge list of capabilities. Those capabilities are expressed in a format
that is, in Taymans's opinion, too verbose. Reducing the verbosity and rethinking the
negotiation API would result in major performance gains.
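To give a feel for the verbosity he is talking about, even a single, fully fixed raw-video format takes several typed fields to describe with the 0.10 caps API, and a negotiation query can return long lists of such structures (an illustrative sketch, not code from the talk):

    #include <gst/gst.h>

    /* Assumes gst_init() has already been called. */
    static void show_one_caps_entry(void)
    {
        GstCaps *caps = gst_caps_new_simple("video/x-raw-yuv",
                "format",    GST_TYPE_FOURCC,
                             GST_MAKE_FOURCC('I', '4', '2', '0'),
                "width",     G_TYPE_INT, 640,
                "height",    G_TYPE_INT, 480,
                "framerate", GST_TYPE_FRACTION, 25, 1,
                NULL);
        gchar *s = gst_caps_to_string(caps);

        g_print("%s\n", s);    /* one entry of a potentially huge list */
        g_free(s);
        gst_caps_unref(caps);
    }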
The "biggest mistake of all" in GStreamer is that there is no
extensible buffer metadata. Buffers are passed between the GStreamer
elements, and there is no way to attach new information, like pointers to
multiple video planes or information to handle non-standard strides, to
those buffers. There also need to be generic ways to map the buffer data
to support GPUs and DSPs, especially in embedded hardware. It is very
difficult to handle that with GStreamer currently, and it is increasingly important for embedded devices.
While dynamic pipeline modifications work in the current code, "the
moment you try it, you will suffer the curse of new segments",
Taymans said. Those can cause the application to lose its timing and
synchronization, and it is not easy to influence the timing of a stream, so
it is difficult for an application to recover from. The original idea was that
applications would create objects that encapsulated dynamic modifications,
but that turned out not to be the case. There are also a handful of minor
problems with 0.10, including an accumulation of deprecated APIs, running
out of padding
in some structures, and it becoming harder to add new features without
breaking the API/ABI.
A look to the future
To address those problems, Taymans laid out the plans for GStreamer for the
next year or so. In the short term, there will be a focus on speeding up
the core, while still continuing to improve the plugins. There are more
applications trying to do realtime manipulation of GStreamer pipelines, so
it is important to make the core faster to support them. Reducing overhead
by removing locks in shared data structures will be one of the ways used to do that.
In the medium term, over the next 2 or 3 months, Taymans will be collecting
requirements for the next major version. The project will be looking at
how to fix the problems that have been identified, so if anyone "has
other problems that need fixing, please tell me". There will also
be some experimentation in Git branches for features like adding extensible buffer metadata.
Starting in January, there will be a 0.11 branch and code will be merged
there. Porting plugins to 0.11 will then start, with an eye toward having
them all done by the end of 2011. Once the plugins have started being
ported, applications will be as well. Then there will be a 1.0 release near
the end of 2011, not 2010 as was announced in the past. "This time
we'll do it,
promise". Taymans concluded his talk with a joking
promise that "world domination" would then be the result of a GStreamer 1.0
The term "high dynamic range photography" (HDR) encompasses a variety of
techniques for working with specialized image formats that are capable of
handling extremes of brightness and shadow beyond what can be stored in
more pedestrian file formats like TIFF and JPEG, and beyond what can be
displayed on CRT and LCD monitors. The leading HDR application for desktop
Linux users is Luminance
HDR, though that dominance is mostly by default: Linux-based HDR
applications are quite scarce, and have, if anything, become more so since
the Grumpy Editor's HDR with Linux
article was published in 2007. Luminance recently released an update,
which makes progress
on the usability front, but still leaves considerable room for growth in that department.
Version 2.0.1 was unveiled on October 9, the
first update to the 2.0-series released by the project's new maintainer
Davide Anastasia. Anastasia inherited maintenance duties in September,
making him the fourth project leader in two years. At that time he outlined a short list of
goals on the project blog, beginning with fixing long-standing crashes,
then working to undo feature regressions introduced in the 2.0 release, and
finally improving on what many users and software reviewers have
(accurately) described as a confusing user interface. 2.0.1 introduces a
few cleanups, but primarily consists of bug-fixes.
Linux users can download source code
packages from the project's SourceForge.net site (the URL comes from the
application's original name, Qtpfsgui — arguably the most
intimidating project moniker open source software has ever produced).
There are Mac OS X and Windows binaries provided as well. The code is
simple to compile; it uses the Qmake build tool and depends on Qt4, the
image processing libraries Exiv2, libTIFF, and OpenEXR, and the FFTW3 and
GNU Scientific Library math libraries. The only hiccup that I encountered
in the build process was that Qt4-specific versions of Qmake, the Qt user
interface compiler UIC, and Qt meta-object compiler MOC are required; those
who build Qt4 applications regularly should have no trouble whatsoever.
HDR workflow: image creation
Presumably, anyone with hardware capable of natively capturing and
displaying HDR content also has special-purpose editing software provided
by Skywalker Ranch, Weta Digital, or some other professional studio.
Luminance is designed for the rest of us, with standard-issue digital
cameras and displays. Thus, its workflow consists of two major tasks:
importing a set of low dynamic range (LDR) images to blend into a single
HDR image, and taking an HDR image and mapping it into an LDR format for
distribution over the web or in print.
Most readers have seen HDR-based photos on Flickr or other online sites.
The canonical example scenarios are city streets photographed late at night
(where the buildings, the street lights, and the lit windows are all
visible in one shot) and scenes in broad daylight, where both sun-lit and
shaded subjects are properly exposed. In all of these situations, the key
is taking multiple exposures at different settings: some exposed for the
shadow areas, some exposed for the highlights. In software, we can blend
them together, leaving neither washed-out bright spots nor murky, underexposed shadows.
Creating a new HDR image in Luminance consists of loading in the set of
LDR originals, taken at bracketed exposure settings, lining them up, and
blending the stack down into a single image. The image importer allegedly
supports a wide variety of formats, including any camera raw file type
supported by DCraw,
JPEG, and TIFF. Once imported, Luminance reads the exposure setting from
each file's EXIF tags, or allows you to input it manually if such a tag
cannot be found. There are two automatic image-alignment algorithms
available — an internal scheme labeled "Median threshold bitmap," and
the align_image_stack function from the open source panorama tool Hugin. Alternately, you can
choose to manually align the images using built-in editing tools. Finally,
you must choose an HDR image "profile," consisting of a weighting function,
response curve, and HDR creation algorithm. The settings you choose are
applied to your stack of input images, and the result pops up in a preview
window, where you can inspect it in all of its HDR glory.
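To give a rough idea of what the weighting function is for: a simplified, Debevec-Malik-style merge (an illustration only, not Luminance's actual code) estimates each pixel's radiance as a weighted average over the exposures, trusting mid-tone values the most and dividing each sample by its exposure time:

    #include <stddef.h>

    /* "Hat" weighting: trust mid-tones, distrust values near 0 or 255. */
    static double weight(double z)
    {
        return (z <= 127.5) ? z : 255.0 - z;
    }

    /* values[i]: this pixel's value in exposure i (0..255, assumed linear);
     * times[i]:  the exposure time of image i, in seconds. */
    double merge_pixel(const double *values, const double *times, size_t n)
    {
        double num = 0.0, den = 0.0;

        for (size_t i = 0; i < n; i++) {
            double w = weight(values[i]);
            num += w * values[i] / times[i];   /* per-exposure radiance */
            den += w;
        }
        return den > 0.0 ? num / den : 0.0;
    }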
In my tests, however, there were more than a few pitfalls to this
process. First, selecting and loading the LDR images is more difficult
than it needs to be, because you must select all of the images you
want to use in the file selector, at the same time (i.e., there is no "add
another" button). This means they must all be in the same directory, and on
a practical level it means you must look them up in an image previewer
first, because there is no thumbnail preview, and after a while the
contents of IMG_4342.CR2 and IMG_4243.CR2 become harder to memorize.
Luminance also cannot read Exif tags from TIFF files, and I was unable to
successfully load JPEG conversions of my raw images, with Luminance
complaining that they were an "invalid size," regardless of what size they
were saved at.
The alignment step is also problematic; the Median threshold algorithm
crashed every time I tried it, and align_image_stack tended to hang
indefinitely. Eventually I decided to align my test images in Hugin
directly, but this is also a very trying process. The wiki documentation
is more than two releases out-of-date, and I could not decipher which
combination of checkboxes needed to be set for Hugin to align the images
geometrically without attempting to blend their exposure settings. That
experiment ended up being a useless tangent anyway, however, because
Luminance could not read any of the TIFF files Hugin produced. At the
Hugin wiki's suggestion, I also attempted to use the Perl-based hdrprep
for alignment, but it too failed to read the Exif data from the TIFFs.
The manual alignment tools offer some fine-grained control, including
multiple ways to overlay two images on the canvas in order to eyeball their
overlap and a masking function called Anti Ghosting. Sadly these tools
also fall a bit short, primarily because there is no way to correct
rotation problems, only vertical and horizontal pixel shifts. Even when I
took test photos with a tripod, natural wind and camera shake introduced a
small amount of rotational misalignment.
It is also difficult to make an informed choice about the HDR profiles,
which are named "Profile 1" through "Profile 6." The weighting function,
response curve, and HDR creation algorithm options are similarly opaque,
and because installing 2.0.1 from source evidently does not include the
user manual, looking up the HDR terminology online is the confused
user's only recourse.
HDR workflow: image output
Luminance is capable of saving directly to several HDR image formats,
including floating-point TIFF and OpenEXR.
These formats use floating point numbers rather than integers for pixel
data, allowing them to encode a much wider range of tonal values —
potentially 38 f-stops, depending on the options. Since each f-stop
represents a doubling of light, that amounts to a contrast ratio of
roughly 2^38 between the darkest and brightest representable values.
This is orders of magnitude greater than a modern PC screen can display,
so Luminance provides an HDR "visualizer" that allows you to explore an HDR
image by adjusting the gamma and exposure with sliders. It might be
confusing to new users, because it appears at first glance as though
the process of importing and blending the source images has produced
nothing more than another LDR image, but in fact the visualizer only shows
a portion of the image's range at a time, due to the physical limitations
of the display.
If your goal is to save the image to OpenEXR or another format, you can
do so as soon as the import process is complete. Most of the time,
however, you will be interested in the second of Luminance's major tasks,
compressing the HDR image back into a common LDR format — in such a
way that it preserves as much detail as possible. You do this with the
"Tonemap HDR image" menu entry, which brings up a workspace where you can
select and test nine different tone-mapping algorithms, creating
thumbnail-sized preview images before committing to a final choice.
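For a flavor of what one of those algorithms actually does, here is a rough sketch of the simple global form of the Reinhard '02 operator (an illustration of the published formula, not the code Luminance ships): luminance is scaled by a "key" value relative to the image's log-average, then compressed so that bright values approach white asymptotically.

    #include <math.h>
    #include <stddef.h>

    /* lum[]: per-pixel HDR luminance; out[]: display luminance in [0,1).
     * 'key' (commonly around 0.18) sets the overall brightness. */
    void reinhard_global(const double *lum, double *out, size_t n, double key)
    {
        const double eps = 1e-6;
        double log_sum = 0.0;

        for (size_t i = 0; i < n; i++)
            log_sum += log(eps + lum[i]);
        double log_avg = exp(log_sum / (double)n);  /* log-average luminance */

        for (size_t i = 0; i < n; i++) {
            double l = key * lum[i] / log_avg;      /* scale to the key */
            out[i] = l / (1.0 + l);                 /* compress highlights */
        }
    }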
Here again the user interface confronts the user with a formidable list
of techno-speak options and little in the way of explanation. At some
level, that is expected; the algorithms have scientific (rather than
marketing-approved) names such as Mantiuk '06 and Reinhard '02 because they
are named after their
creators. But without reading the original papers, it is unreasonable
to expect a user to decipher all of the individual settings. The Ashikhmin
algorithm, for example, sports a checkbox labeled "Simple" and a radio
button allowing you to choose between "Equation Number 2" and "Equation
Number 4." Anyone who can guess what that means without looking is my
Still, at least Luminance gets it right by allowing you to experiment
with multiple test images and to compare them side-by-side. Other parts of
the GUI (such as the image loader) have a frustrating lack of backup or
undo operations. The final output, after all, is the end that justifies
all of the means — so if a user can experiment with different
algorithms and eventually stumble across a pleasant result, he or she will
be happy even if the underlying formulas remain a mystery.
The upper-bound on usability
The tone-mapping algorithm "issue" raises an important question for
Luminance and other niche graphics applications, namely: is it
always possible to build a user interface with novice-level
simplicity, or are some tasks inherently complicated? Do users really
need all nine tone-mapping algorithms? Perhaps Luminance could be
refactored to hide all of the mathematical details from the user, or dress
them up in friendlier terms — but maybe that process would destroy
too much of the application itself, turning it into a toy. The same
question could probably be asked about Hugin or any of several complex GIMP plug-ins.
I tend to think that photographers (like everyone else) have a greater
capacity for understanding the scary mathematical and theoretical tasks
than they give themselves credit for. Most have gotten used to the arcane
demosaicing and noise-removal algorithms found in raw image editors, after
all. While Luminance 2.0.1 was frustrating to work with for many reasons,
the bulk of the frustration came not from exposing too much scientific
technobabble, but from the same sort of usability and interface problems
that plague any understaffed project: the lack of thumbnail previews,
vacant tooltips, missing "undo" buttons, unsupported file formats, and
sudden crashes. My guess is that, absent those stumbling blocks, almost
any user could get used to the peculiarities of HDR image creation and tone mapping.
That having been said, Anastasia has his work cut out for him.
Luminance has had many cooks in recent years, a fact that has undoubtedly
contributed to its perplexing user interface and crash-proneness. Cleaning
it up is high on Anastasia's to-do list as project maintainer; those of us
who want to see a high-quality open source HDR tool can only hope he
manages to build some momentum. Version 2.0.1, although it was only a
bug-fix release, is a tantalizing first step because it came mere weeks
after Anastasia took over the reins — the gap between the last 1.9.x
release and 2.0.0 lasted well over a year. Today, Luminance has an active
maintainer, a new release, and a TODO file included with the source code
package. It isn't perfect, but it could be the beginning of something good.
The exploration of design patterns is importantly a historical
search. It is possible to tell in the present that a particular
approach to design or coding works adequately in a particular
situation, but to identify patterns which repeatedly work, or
repeatedly fail to work, a longer term or historical perspective is
needed. We benefit primarily from hindsight.
An earlier series of articles on design patterns took advantage of the
development history of the Linux kernel only implicitly, looking at
the patterns that could be found in the kernel at the time with little
reference to how they got there. Perspective was provided by looking
at the results of multiple long-term development efforts, all included
in the one code base.
For this series we try to look for patterns which become visible only over
an extended time period. As development of a system proceeds, early
decisions can have consequences that were not fully appreciated when
they were made. If we can find patterns relating these decisions to
their outcomes, it might be hoped that a review of these patterns
while making new decisions will help to avoid old mistakes or to
leverage established successes.
A very appropriate starting point for this exploration is the Ritchie
and Thompson paper, published in Communications of the ACM, which introduced Unix.
In that paper the authors claimed that the success of Unix was not in
"new inventions but rather in the full exploitation of a carefully
selected set of fertile ideas."
The importance of "careful selection" implies a historical
perspective much like the one here proposed for exploring design
patterns. A selection can only be made if previous experience is
available which demonstrates a number of design avenues to choose
between. It is to be hoped that identifying patterns would be one
aspect of the care taken in that selection.
Over four weeks we will explore four design patterns which can be traced
back to that early Unix of which Ritchie and Thompson wrote, but which
can be seen much more clearly from the current perspective.
Unfortunately they are not all good, but both good and bad can provide
valuable lessons for guiding subsequent design.
"Full exploitation" is essentially a pattern in itself, and one we
will come back to repeatedly. Whether it is applied to software
development, architecture, or music composition, exploiting a good
idea repeatedly can enhance the integrity and cohesion of the result
and is - hopefully - a pattern that does not need further justification.
That said, "full exploitation" can benefit from detailed
illumination. We will gain such illumination for this, as for the
other three patterns, by examining two specific examples.
Ritchie and Thompson identified in their abstract several features of
Unix which they felt were noteworthy. The first two of these will be our
two examples. Using their words:
- A hierarchical file system incorporating demountable volumes,
- Compatible file, device, and inter-process I/O,
The second of these is sometimes seen as a key hallmark of Unix and
has been rephrased as "Everything is a file". However that term does
the idea an injustice as it overstates the reality. Clearly
everything is not a file. Some things are devices and some things are
pipes and while they may share some characteristics with files, they
certainly are not files.
A more accurate, though less catchy, characterization would be
"everything can have a file descriptor". It is the file descriptor as
a unifying concept that is key to this design. It is the file
descriptor that makes files, devices, and inter-process I/O compatible.
Though files, devices and pipes are clearly different objects with
different behaviors, they nonetheless have some behaviors in common
and by using the same abstract handle to refer to them, those
similarities can be exploited. A program or library routine that does not
care about the differences does not need to know about those differences
at all, and a program that does care about the differences only needs
to know at the specific places where those differences are relevant.
By taking the idea of a file descriptor and exploiting it also for
serial devices, tape devices, disk devices, pipes, and so forth, Unix
gained an integrity that has proved to be of lasting value. In modern
Linux we also have file descriptors for network sockets, for receiving
timer events and other events, and for accessing a whole range of new
types of devices that were barely even thought of when Unix was first
developed. This ability to keep up with ongoing development
demonstrates the strength of the file-descriptor concept and is
central to the value of the "full exploitation" pattern.
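A small sketch makes the point concrete: the loop below copies data without knowing or caring whether its descriptors refer to regular files, pipes, terminals, or sockets.

    #include <unistd.h>

    /* Copy everything readable from 'in' to 'out'; both are just file
     * descriptors, whatever kind of object they happen to refer to. */
    ssize_t copy_stream(int in, int out)
    {
        char buf[4096];
        ssize_t n, total = 0;

        while ((n = read(in, buf, sizeof(buf))) > 0) {
            if (write(out, buf, n) != n)
                return -1;    /* short write treated as an error here */
            total += n;
        }
        return (n < 0) ? -1 : total;
    }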
As we shall see, the file descriptor concept was not exploited as
fully as possibly it could have been, either initially or during ongoing
development. Some of the weaknesses that we will find are in
places where there was missed opportunity for full exploitation of
file descriptors or related ideas, and many of the strengths are in
places where file descriptors were used to enable new functionality.
Single, hierarchical namespace
The other noteworthy feature identified by Ritchie and Thompson (first
in their list) was a hierarchical filesystem incorporating demountable volumes.
There are three key aspects to this file system which are particularly
significant for the present illustration.
- It was hierarchical. We are so used to hierarchical namespaces
today that this seems like it should be a given. However at the time
it was somewhat innovative. Some contemporaneous filesystems, such as
the one used in CP/M, were completely flat with no sub-directories.
Others might have a fixed number of levels to the hierarchy, typically
two. The Unix filesystem allowed an arbitrarily deep hierarchy.
- It allowed demountable volumes. While each distinct storage
volume could store a separate hierarchical set of files, this
separation was hidden by combining all of these file sets into a
single all-encompassing hierarchy. Thus the idea of hierarchical
naming was exploited not just for a single device, but across the
union of all storage devices.
- It contained device-special files. These are filesystem objects
that provide access to devices, both character devices like modems
and block devices like disk drives. Thus the hierarchical naming
scheme covered not only files and directories, but also all devices.
The design idea being fully exploited here is the hierarchical namespace.
The result of exploiting it within a single storage device,
across all storage devices, and providing access to devices as well as
storage, is a "single namespace". This provides a uniform naming
scheme for accessing a wide variety of the objects managed by the kernel.
The most obvious area where this exploitation continued in subsequent
development is the area of virtual filesystems, such as procfs and
sysfs in Linux. These allowed processes and many other entities which
were not strictly devices or files to appear in the same common namespace.
Another effective exploitation is in the various autofs or auto-mount
implementations which allow other objects, which are not necessarily
storage, to appear in the namespace. Two examples are
/net/hostname which includes hosts on the local
network into the namespace, and /home/username which
allows user names to appear. While these don't make hosts and users
first-class namespace objects they are still valuable steps forward.
In particular the latter removes the need for the tilde prefix
supported by most shells and some editors (i.e. the mapping from
~username to that user's home directory). By incorporating
this feature directly in the namespace, the functionality becomes available to all programs, not just those that understand the tilde convention.
As with file descriptors, the hierarchical namespace concept was not
exploited as fully as might have been possible so we don't really have
a single namespace. Some aspects of this incompleteness are simple
omissions which have since been rectified as mentioned above. However
there is one area where a hierarchical namespace was kept separate,
with unfortunate consequences that still aren't fully resolved today.
That namespace is the namespace of devices. The
device-special files used to include devices into the single
namespace, while effective to some degree, are a poor second cousin to
doing it properly.
A little reflection will show that the device namespace in Unix is a
hierarchical space with three or more levels. The top level
distinguishes between 'block' and 'character' devices. The second
level, encoded in the major device number, usually identifies the driver which
manages the device. Beneath this are one or two levels encoded in bit
fields of the minor number. A disk drive controller might use some
bits to identify the drive and others to identify the partition on
that drive. A serial device driver might identify a particular
controller, and then which of several ports on that controller
corresponds to a particular device.
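A device-special file does let a program peek at that encoding: stat() reports the (major, minor) pair packed into st_rdev (the path below is only an example):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>
    #include <sys/types.h>

    int show_device(const char *path)      /* e.g. "/dev/sda1" */
    {
        struct stat st;

        if (stat(path, &st) != 0)
            return -1;
        if (S_ISCHR(st.st_mode) || S_ISBLK(st.st_mode))
            printf("%s: %s device, major %u, minor %u\n", path,
                   S_ISCHR(st.st_mode) ? "char" : "block",
                   major(st.st_rdev), minor(st.st_rdev));
        return 0;
    }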
The device special files in Unix provide only limited access to this
namespace. It can be helpful to see them as symbolic links into this
alternate namespace which add some extra permission checking.
However while symlinks can point to any point in the hierarchy,
device special files can only point to the actual devices,
so they don't provide access to the structure of the namespace.
It is not possible to examine the different levels in the
namespace, nor to get a 'directory listing' of all entries from some
particular node in the hierarchy.
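Creating a device node shows the same limitation from the other side: mknod() names one specific (major, minor) pair and nothing more, so the node gives no view of the hierarchy around it (the numbers here are hypothetical, and the call normally requires root):

    #include <sys/stat.h>
    #include <sys/sysmacros.h>
    #include <sys/types.h>

    int make_example_node(void)
    {
        /* A character device node pointing at the single device 42:0. */
        return mknod("/tmp/example-node", S_IFCHR | 0600, makedev(42, 0));
    }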
Linux developers have made several attempts to redress this omission
with initiatives such as devfs, devpts, udev, sysfs, and more recently
devtmpfs. Given the variety of attempts, this is clearly a hard
problem. Part of the difficulty is maintaining backward compatibility
with the original Unix way of using device special files which gave,
for example, stable permission setting on devices. There are
doubtless other difficulties as well.
Not only was the device hierarchy not fully accessible, it was not
fully extensible. The old limit of 255 major numbers and 255 minor
numbers has long since been extended with minimal pain. However the
top level of "block or char" distinction is more deeply entrenched and harder to
change. When network devices came along they didn't really
fit either as "block" or "character" so, instead of being squeezed
into a model where they didn't fit, network devices got their very
own separate namespace which has its own separate functions for
enumerating all devices, opening devices, renaming devices etc.
So while hierarchical namespaces were certainly well exploited in the
early design, they fell short of being fully exploited, and this led to
later extensions not being able to continue the exploitation fully.
These two examples - file descriptors and a uniform hierarchical
namespace - illustrate the pattern of "full exploitation" which can
be a very effective tool for building a strong design. While we can
see with hindsight that neither was carried out perfectly, they both
added considerable value to Unix and its successors, adequately
demonstrating the value of the pattern. Whenever one is looking to
add functionality it is important to ask "how can this build on what
already exists rather than creating everything from scratch?" and
equally "How can we make sure this is open to be built upon in the
The next article in this series will explore two more examples, examine their historical
development, and extract a different pattern -- one that brings
weakness rather than strength. It is a pattern that can be recognized
early, but still is an easy trap for the unwary.
The interested reader might like to try the following exercises to
further explore some of the ideas presented in this article. There
are no definitive answers, but rather the questions are starting
points that might lead to interesting discoveries.
- Make a list of all kernel-managed objects that can be referenced
using a file descriptor, and the actions that can be effected through
that file descriptor. Make another list of actions or objects which do
not use a file descriptor. Explain how one such action or object
could benefit by being included in a fuller exploitation of file descriptors.
- Identify three distinct namespaces in Unix or Linux that are not
primarily accessed through the "single namespace". For each,
identify one benefit that could be realized by incorporating the
namespace into the single namespace.
- Identify an area of the IP protocol suite where "full exploitation"
has resulted in significant simplicity, or otherwise been of benefit.
- Identify a design element that was fully exploited in the NFSv2
protocol. Compare and contrast this with NFSv3 and NFSv4.