Leading items
Welcome to the LWN.net Weekly Edition for February 15, 2024
This edition contains the following feature content:
- A turning point for CVE numbers: the kernel project is taking on the responsibility of issuing its own CVE numbers; the approach being taken to that task may have wide-ranging repercussions.
- KDE Plasma X11 support gets a reprieve for Fedora 40: there is a strong desire to drop support for the X Window System in Fedora, but that support will live on for at least a little longer.
- Another runc container breakout: leaked file descriptors offer another way to escape from a container.
- Pitchforks for RDSEED: random-number generators built into the CPU are subject to entropy exhaustion, with possibly catastrophic consequences for some use cases.
- A look at dynamic linking: an overview of how dynamically linked programs are launched on a Linux system.
- Gnuplot 6 comes with pie: new features in the Gnuplot 6.0 release.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
A turning point for CVE numbers
The Common Vulnerabilities and Exposures (CVE) system was set up in 1999 as a way to refer unambiguously to known vulnerabilities in software. That system has found itself under increasing strain over the years, and numerous projects have responded by trying to assert greater control over how CVE numbers are assigned for their code. On February 13, though, a big shoe dropped when the Linux kernel project announced that it, too, was taking control of CVE-number assignments. As is often the case, though, the kernel developers are taking a different approach to vulnerabilities, with possible implications for the CVE system as a whole.
CVE numbers can be useful for anybody who is interested in whether a given software release contains a specific vulnerability or not. Over time, though, they have gained other uses that have degraded the value of the system overall. Developers within software distributors, for example, have been known to file CVE numbers in order to be able to ship important patches that would, without such a number, not be accepted into a stable product release. Security researchers (and their companies) like to accumulate CVE numbers as a form of resume padding and skill signaling. CVE numbers have become a target in their own right; following Goodhart's law, they would appear to have lost much of their value as a result.
Specifically, in many cases, the CVE numbers resulting from these activities do not correspond to actual vulnerabilities that users need to be worried about. The assignment of a CVE number, though, imposes obligations on the project responsible for the software; there may be pressure to include a fix, or developers may have to go through the painful and uncertain process of contesting a CVE assignment and getting it nullified. As this problem has worsened, frustration with the CVE system has grown.
One outcome of that has been an increasing number of projects applying to become the CVE Numbering Authority (CNA) for their code. If a CNA exists for a given program, all CVE numbers for that program must be issued by that CNA, which can decline to issue a number for a report that, in its judgment, does not correspond to a real vulnerability. Thus, becoming the CNA gives projects a way to stem the flow of bogus CVE numbers. In recent times, a number of projects, including curl, PostgreSQL, the GNU C Library, OpenNMS, Apache, Docker, the Document Foundation, Kubernetes, Python, and many others have set up their own CNAs. The OpenSSF has provided a guide to becoming a CNA for other projects that might be interested in taking that path.
Corporations, too, can become the CNA for their products. Many companies want that control for the same reasons that free-software projects do; they grow tired of responding to frivolous CVE-number assignments and want to put an end to them. Of course, control over CVE assignments could be abused by a company (or a free-software project) to try to sweep vulnerabilities under the rug. There is an appeal process that can be followed in such cases.
The kernel CNA
The kernel project has, for the most part, declined to participate in the CVE game. Famously, the project (or, at least, some of the most influential developers within it) has long taken the position that all bugs are potentially security issues, so there is no point in making a fuss over the fixes that have been identified by somebody as having security implications. Still, the kernel has proved fertile ground for those who would pad their resumes with CVE credits, and that grates on both developers and distributors.
The situation has now changed, and the kernel will be assigning CVE numbers for itself. If that idea brings to mind a group of grumpy, beer-drinking kernel developers reviewing and rejecting CVE-number requests, though, then a closer look is warranted. The key to how this is going to work can be found in this patch to the kernel's documentation:
As part of the normal stable release process, kernel changes that are potentially security issues are identified by the developers responsible for CVE number assignments and have CVE numbers automatically assigned to them. These assignments are published on the linux-cve-announce mailing list as announcements on a frequent basis.
Note, due to the layer at which the Linux kernel is in a system, almost any bug might be exploitable to compromise the security of the kernel, but the possibility of exploitation is often not evident when the bug is fixed. Because of this, the CVE assignment team is overly cautious and assign CVE numbers to any bugfix that they identify. This explains the seemingly large number of CVEs that are issued by the Linux kernel team.
(Emphasis added). What this text is saying is that anything that looks like a bug fix — meaning many of the changes that find their way into the stable kernel updates — will have a CVE number assigned to it. Bear in mind that, as of 6.1.74, the 6.1 kernel (which has been out for just over one year) has had 12,639 fixes applied to it. The 4.19 kernel, as of 4.19.306, has had 27,952. Not all of these patches will get CVE numbers, but many will. So there are going to be a lot of CVE numbers assigned to the kernel in the coming years.
Back in 2019, LWN covered a talk by Greg Kroah-Hartman about the CVE-number problem. From that article:
Kroah-Hartman put up a slide showing possible "fixes" for CVE numbers. The first, "ignore them", is more-or-less what is happening today. The next option, "burn them down", could be brought about by requesting a CVE number for every patch applied to the kernel.
It would appear that, nearly five years later, a form of the "burn them down" option has been chosen. The flood of CVE numbers is going to play havoc with policies requiring that shipped software contain fixes for all CVE numbers filed against it — and there are plenty of policies like that out there. Nobody who relies on backporting fixes to a non-mainline kernel will be able to keep up with this CVE stream. Any company that is using CVE numbers to select kernel patches is going to have to rethink its processes.
A couple of possible outcomes come to mind. One is that the CVE system will be overwhelmed and eventually abandoned, at least with regard to the kernel. There was not much useful signal in kernel CVE numbers before, but there will be even less now. An alternative is that distributors will simply fall back on shipping the stable kernel updates which, almost by definition, will contain fixes for every known CVE number. That, for example, is the result that Kees Cook seemed to hope for:
I'm excited to see this taking shape! It's going to be quite the fire-hose of identifiers, but I think that'll more accurately represent the number of fixes landing in stable trees and how important it is for end users to stay current on a stable kernel.
It is easy to get the sense, though, that either outcome would be acceptable to the developers in charge of mainline kernel security.
However it plays out, it is going to be interesting to watch; popcorn is recommended. The CVE system has been under increasing stress for years, and it hasn't always seemed like there has been much interest in fixing it. The arrival of the kernel CNA will not provide that fix, but it may reduce the use of kernel CVE numbers as resume padding or ways to work around corporate rules and, perhaps, draw attention to the fact that keeping a secure kernel requires accepting a truly large number of fixes. That might just be a step in the right direction.
KDE Plasma X11 support gets a reprieve for Fedora 40
The Fedora Project is working toward the release of Fedora Linux 40, and (as with each release) that means changes to the way the project works and the software included in its repositories. Most of the changes set for Fedora 40 are uncontroversial, but one is causing quite a stir: the KDE Special Interest Group's (SIG) proposal to adopt KDE Plasma 6 with only Wayland session support, which the SIG interpreted as a mandate to block any X11 packages for Plasma. Others saw that as an overreach by the SIG, and an attempt to block users and contributors from maintaining software they needed.
During planning for Fedora 40 in 2023, members of the SIG had proposed to adopt KDE Plasma 6 under Wayland as the sole supported version of KDE in the release and, simultaneously, drop support for a Plasma X11 session entirely. The proposal was accepted as a change for Fedora 40, over objections by Fedora contributor Kevin Kofler. When the change was put forward, Kofler said he would submit packages to support X11 sessions for Plasma. In late January, Kofler requested review and inclusion of two packages that would provide the support for Fedora 40.
At the time the change proposal was discussed, KDE SIG member Timothée Ravier said that the SIG "certainly can’t stop you" from submitting packages, but it would not maintain them. However, after Kofler submitted the packages, the SIG did try to block their inclusion by appealing to the Fedora Engineering Steering Committee (FESCo), saying that the packages would undo the approved change. FESCo applied a "preliminary injunction" on the packages until it could consider the issue.
As one might imagine, the idea of blocking packages that are otherwise "legal" under Fedora guidelines sparked a lengthy discussion about a number of things, including Wayland's shortcomings, whether it is ever appropriate to block volunteers from including otherwise acceptable packages, and how to wean users from X11 for Plasma.
Stephen Gallagher, one of the members of FESCo, apologized for the use of the term "injunction", because it was a stronger word than was necessary. The January 29 FESCo meeting lacked a quorum to vote on the request to exclude the packages, he said, and the idea was simply to avoid wasting anyone's time should FESCo vote to block the packages. In the same message, Gallagher suggested a proposal that the packages be allowed in the main repositories, but "they may not be included by default on any release-blocking deliverable (ISO, image, etc.)".
During its regular meeting on February 12, FESCo approved a modified version of Gallagher's proposal with five in favor and one opposed:
KDE packages which reintroduce support for X11 are allowed in the main Fedora repositories, however they may not be included by default on any release-blocking deliverable (ISO, image, etc.). The KDE SIG should provide a notice before major changes, but is not responsible for ensuring that these packages adapt. Upgrades from F38 and F39 will be automatically migrated to Wayland.
In addition, Gallagher added a clarification that FESCo was not providing detailed technical requirements in the proposal but "we expect all parties to follow the spirit of this decision when making technical decisions". One might think the matter would end there, but a Fedora user (who is not a member of the KDE SIG) has requested that the Fedora Council overrule FESCo on the matter and block Kofler's packages from official Fedora repositories.
A long road to Wayland
Fedora has been working to move to Wayland for many years now. Wayland became the default for Fedora Workstation in Fedora 25 in November 2016, with a fallback option to an X11 session for users who need (or want) it. Despite making this change nearly eight years ago, the project has yet to remove the X11 session option entirely. Instead, the plan seems to be to wait for GNOME upstream to remove support and then remove it from Fedora Workstation. However, as part of his explanation of the "injunction," Gallagher asserted that Fedora is "a Wayland distribution first and foremost" with X11 support "a migration-support tool rather than a first-class citizen".
This came as a surprise to some in the discussion. Richard W.M. Jones wrote that being Wayland first "is very much news to me" and pointed to the Fedora Xfce spin and others that also require X11. Jones added that "Wayland doesn't actually replace all the basic functionality of X11 even after all these years, which is why I need to use it".
One might argue, as some have, that there are willing maintainers for X11 packages—so why not let them do their thing? Gallagher wrote that this line of thinking is misleading. "The move to Wayland is specifically because the upstream of X11 *itself* is largely unmaintained. These packages are not maintaining X11, they are adding new dependencies on it."
Why, if X11 is "largely unmaintained", are so many users eager to cling to it? Some users are stuck with X11 because they have NVIDIA GPUs that only work well under Wayland with the proprietary drivers, rather than the open-source Nouveau drivers. On February 9, KDE SIG lead and FESCo member Neal Gompa wrote that many recent NVIDIA GPUs "will hopefully work much better out of the box starting with tonight's compose", but "that doesn't help folks with older GPUs."
Other users are reluctant to give up features that they won't find with Wayland. Steven A. Falco cited two bugs he filed about Plasma under Wayland failing to honor window positioning, and failing to save windows between sessions. Falco said that he understands that these projects are driven by volunteers and he can't expect them to address the missing features, but he still needs the functionality. "I'm delighted that there are like-minded folks who want to maintain X11. Please allow them to do so."
Forcing users to Wayland
Whether one agrees that Fedora is "Wayland first," the project has tried to encourage users to use Wayland in its primary Workstation (GNOME-based) edition—without forcing them to do so. The KDE SIG, however, is trying to force a hard break with X11 support in Fedora 40, even though the Plasma 6 upstream will have X11 support and there are willing packagers in the Fedora community. X11 session support is expected to be removed by upstream at some point in Plasma 6's lifetime, but no date is set.
The SIG's change proposal acknowledged there's no good time to drop X11, but argued dropping the X11 session would "drastically reduce our support burden and give us the ability to focus on quality for the KDE Plasma stack and continue our feature-forward nature." The proposal also claimed that "we feel users will be more accepting of the change on the major version upgrade rather than some point later on in a minor version". The reaction to the KDE SIG's proposal, and its attempts to block inclusion of Kofler's packages, suggests that many users are not more accepting of the change just because it's a major version upgrade.
Gompa had argued that allowing the packages in Fedora 40 main, rather than a Copr repository, "would destroy the main point of our efforts: driving KDE Plasma Wayland to be the best and most complete experience". Gompa wrote that a "huge burst of feature development occurred precisely because of the feedback from the discussion in the Change proposal". Providing the "crutch of Plasma X11" means that there's "simply not enough motivation to find solutions to the remaining missing features/capabilities". Steve Cossette weighed in that forcing Wayland has been very effective in getting users to hit snags and file bugs so issues can be fixed. "You'd be amazed how many issues I've seen being fixed in the last couple months".
On the other hand, the SIG does not want to see bugs about X11 packages. One of the concerns about allowing Kofler's packages is that they will end up making much more work for the SIG, because users will report bugs in packages whether they come from the SIG or not. Gompa also argued that inclusion of the packages will add complications for the SIG when shipping updates to KDE Plasma "on the cadence we usually do"; he would prefer that the X11 packages live in Copr.
Zbigniew Jędrzejewski-Szmek replied that what Gompa wrote was true, but that if other contributors want to work on an alternate approach "we shouldn't _force_ the obsolescence of older software, especially if it still has users". Jędrzejewski-Szmek wrote that "the situation with X11 is complicated: many people report that it still works better for them (whatever the reasons may be)". Therefore, he wrote, it's fine for the KDE SIG to move ahead without waiting for X11, but "it's not fine to say 'we will forbid having KDE-X11 packages in Fedora'".
As the dust settles
As with any compromise, Gallagher noted (as seen in the meeting transcript), "no one is getting everything they wanted". The SIG doesn't get to exclude X11 support, but it will be able to automatically migrate users to Wayland. While this doesn't entirely defeat the purpose of including X11-supporting packages in Fedora, users who want KDE Plasma with an X11 session will not have an entirely seamless installation or upgrade experience for Fedora 40, though they will not have to look to Copr repositories to enable support.
In the end, the reluctance to forbid the inclusion of packages seems likely to carry the day. FESCo allowed Kofler to reintroduce KDE X11 support for Fedora 40, and Gompa said that the KDE SIG is not contesting the decision or participating in the effort to overrule FESCo.
Another runc container breakout
Once again, runc—a tool for spawning and running OCI containers—is drawing attention due to a high-severity container-breakout attack. This vulnerability is interesting for several reasons: its potential for widespread impact, the continued difficulty in actually containing containers, the dangers of running containers as a privileged user, and the fact that this vulnerability is made possible in part by a response to a previous container-breakout flaw in runc.
The runc utility is the most popular (but not only) implementation of the Open Container Initiative (OCI) container runtime specification and can be used to handle the low-level operations of running containers for container management and orchestration tools like Docker, Podman, Rancher, containerd, Kubernetes, and many others. Runc also has had its share of vulnerabilities, the most recent of which is CVE-2024-21626 — a flaw that could allow an attacker to completely escape the container and take control of the host system. The fix is shipped with runc 1.1.12, with versions 1.0.0-rc93 through 1.1.11 affected by the vulnerability. Distributors are backporting the fix and shipping updates as necessary.
Anatomy
The initial vulnerability was reported by Rory McNamara to Docker, with further research done by runc maintainers Li Fu Bang and Aleksa Sarai, who discovered other potential attacks. Since version 1.0.0-rc93, runc has left open ("leaked") a file descriptor to /sys/fs/cgroup in the host namespace that then shows up in /proc/self/fd within the container namespace. The reference can be kept alive, even after the host namespace file descriptors would usually be closed, by setting the container's working directory to the leaked file descriptor. This allows the container to have a working directory outside of the container itself, and gives full access to the host filesystem. The runc security advisory for the CVE describes several types of attacks that could take advantage of the vulnerability, all of which could allow an attacker to gain full control of the host system.
Mohit Gupta has published an analysis of the flaw; he reported that it took about four attempts to successfully start a container with its working directory set to "/proc/self/fd/7/" in a scenario using "docker run" with a malicious Dockerfile. Because runc did not verify that the working directory is inside the container, processes inside the container can then access the host filesystem. For example, if an attacker is successful in guessing the right file descriptor and sets the working directory to "/proc/self/fd/7", they can run a command like "cat ../../../../etc/passwd" and get a list of all users on the system—more damaging options are also possible.
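The underlying primitive is easy to demonstrate in isolation. The short C sketch below is purely illustrative and is not runc code; the choice of file descriptor 7 and the path "etc/hostname" are assumptions for the example. It lists the descriptors visible under /proc/self/fd and shows how openat() resolves a relative path against an inherited directory descriptor rather than against the current working directory.
/* Illustrative sketch only (not runc code): an inherited directory file
 * descriptor shows up under /proc/self/fd and can be used with openat()
 * to resolve paths relative to that directory. */
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* List every open descriptor this process inherited. */
    DIR *d = opendir("/proc/self/fd");
    struct dirent *e;
    while (d && (e = readdir(d)) != NULL)
        printf("open fd: %s\n", e->d_name);
    if (d)
        closedir(d);

    /* Hypothetical: assume fd 7 was leaked by a parent and refers to a
     * directory outside this process's intended view. openat() resolves
     * the path relative to that directory, not the current directory. */
    int fd = openat(7, "etc/hostname", O_RDONLY);
    if (fd >= 0) {
        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("read via inherited fd: %s", buf);
        }
        close(fd);
    }
    return 0;
}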
Sarai noted in the advisory that the exploit can be considered critical if using runtimes like Docker or Kubernetes "as it can be done remotely by anyone with the rights to start a container image (and can be exploited from within Dockerfiles using ONBUILD in the case of Docker)".
On the Snyk blog, McNamara demonstrated how to exploit this using "docker build" or "docker run", and Gupta's analysis demonstrated how one might deploy an image into a Kubernetes cluster or use a malicious Dockerfile to compromise a CI/CD system via "docker build" or similar tools that depend on runc. What's worse, as McNamara wrote, is that "access privileges will be the same as that of the in-use containerization solution, such as Docker Engine or Kubernetes. Generally, this is the root user, and it is therefore, possible to escalate from disk access to achieve full host root command execution."
This isn't the first time runc has had this type of vulnerability; it also suffered a container-breakout flaw (CVE-2019-5736) in February 2019. That issue led Sarai to work on openat2(), which was designed to place restrictions on the way that programs open a path that might be under the control of an attacker. It's somewhat amusing then, for some values of "amusing" anyway, that Sarai mentioned that "the switch to openat2(2) in runc was the cause of one of the fd leaks that made this issue exploitable in runc."
While this vulnerability has been addressed, Sarai also wrote that there's more work to be done to stop tactics related to keeping a handle to a magic link (such as /proc/self/exe) on the host open and then writing to it after the container has started. He noted that he has reworked the design in runc, but is still "playing around with prototypes at the moment".
Other OCI runtimes
While the vulnerability is confirmed for runc, the advisory contains a warning for other runtimes as well. Sarai writes, "several other container runtimes are either potentially vulnerable to similar attacks, or do not have sufficient protection against attacks of this nature." Sarai specifically talks about crun (a container runtime implemented in C, used by Podman by default), youki (a container runtime implemented in Rust), and LXC.
According to Sarai, none of the other runtimes leak useful file descriptors to their "runc init" counterparts—but he finds crun and youki wanting in terms of ensuring that all non-stdio files are closed and that the working directory for a container is inside the container. LXC also does not ensure that a working directory is inside the container. He suggests that maintainers of other runtimes peruse the patches for runc's leaked file descriptors. While these implementations are not exploitable ("as far as we can tell"), they have issues that could be addressed to prevent similar attacks in the future.
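As a rough sketch of the kind of hardening Sarai describes (not code from runc, crun, youki, or LXC; the container-root path is a placeholder), a runtime can close every non-stdio descriptor with close_range() and refuse to continue if the resolved working directory lies outside the container root:
/* Hedged sketch of the hardening described above; the container root
 * path is hypothetical. close_range() needs Linux 5.9+ and glibc 2.34+. */
#define _GNU_SOURCE
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *root = "/var/lib/containers/rootfs";  /* placeholder */
    char cwd[PATH_MAX];

    /* Close everything above stderr so no descriptor inherited from the
     * runtime leaks into the container process. */
    if (close_range(3, ~0U, 0) != 0)
        perror("close_range");

    /* Verify that the working directory is still inside the container. */
    if (getcwd(cwd, sizeof(cwd)) == NULL) {
        perror("getcwd");
        return 1;
    }
    size_t len = strlen(root);
    if (strncmp(cwd, root, len) != 0 || (cwd[len] != '/' && cwd[len] != '\0')) {
        fprintf(stderr, "cwd %s escaped container root %s\n", cwd, root);
        return 1;
    }
    /* ... continue with container setup ... */
    return 0;
}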
The good news is that, despite many container vulnerabilities over the years, so far we haven't seen reports of a widespread exploit of these flaws. Whether the credit belongs to the vigilance of container-runtime maintainers and security reporters, disciplined use of trusted container images, use of technologies like AppArmor and SELinux, the difficulty in using these flaws to conduct real-world attacks, or a combination of those is unclear. This shouldn't be taken to suggest that those threats are not real, of course.
Pitchforks for RDSEED
The generation of random (or, at least, unpredictable) numbers is key to many security technologies. For this reason, the provision of random data as a CPU feature has drawn a lot of attention over the years. A proper hardware-based random-number generator can address the problems that make randomness hard to obtain in some systems, but only if the manufacturer can be trusted to not have compromised that generator in some way. A recent discussion has brought to light a different problem, though: what happens if a hardware random-number generator can be simply driven into exhaustion?
As background, it is worth looking at two related instructions provided on x86 systems:
- RDSEED is a true random-number generator; it reads entropy collected from the environment and stores a random bit pattern into a CPU register. It is provided primarily for the seeding of pseudo-random number generators or other applications where the highest-quality randomness is needed.
- RDRAND obtains a random pattern from a deterministic random-number generator built into the hardware and stores it into a register. The generator used by RDRAND is periodically reseeded from the source used by RDSEED.
RDRAND is recommended for most uses; it is faster and can significantly stretch the amount of entropy that is available to the system. Within the kernel, RDSEED is used to seed the kernel's random-number generator, but for little else. An important point to note is that neither of these instructions is privileged; they are equally available to user-space code. Also important is the fact that either instruction can fail if there is not sufficient entropy available. (See this page for lots of details on how these instructions work).
On the x86 architecture, the kernel has a pair of functions, rdseed_long() and rdrand_long(), that make use of the above instructions. Kirill Shutemov recently observed that, while rdrand_long() will retry ten times if RDRAND fails, rdseed_long() gives up immediately. He posted a patch adding a retry loop to rdseed_long(), and another to issue a warning if even ten tries were not enough to get a successful result. That is where the discussion began.
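The kernel's helpers are thin wrappers around the instructions, but the retry idea is easy to see in a user-space sketch using the compiler intrinsics; this is not the kernel's rdrand_long()/rdseed_long() code, and the ten-try limit below simply mirrors the behavior described above.
/* User-space sketch of the retry behavior discussed above.
 * Build: gcc -O2 -mrdrnd -mrdseed retry.c */
#include <immintrin.h>
#include <stdbool.h>
#include <stdio.h>

#define RETRIES 10  /* the same ten-try limit used for RDRAND in the kernel */

static bool rdrand_retry(unsigned long long *out)
{
    for (int i = 0; i < RETRIES; i++)
        if (_rdrand64_step(out))  /* returns 1 on success, 0 on failure */
            return true;
    return false;
}

static bool rdseed_retry(unsigned long long *out)
{
    for (int i = 0; i < RETRIES; i++)
        if (_rdseed64_step(out))
            return true;
    return false;
}

int main(void)
{
    unsigned long long v;

    if (rdrand_retry(&v))
        printf("RDRAND: %016llx\n", v);
    else
        fprintf(stderr, "RDRAND failed %d times in a row\n", RETRIES);

    if (rdseed_retry(&v))
        printf("RDSEED: %016llx\n", v);
    else
        fprintf(stderr, "RDSEED failed %d times in a row\n", RETRIES);
    return 0;
}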
Can they fail (and do we care)?
Initially, there was some uncertainty over whether those instructions can fail at all. RDSEED is able to produce entropy at a high rate, and RDRAND multiplies that entropy considerably, so exhausting them seems unlikely. Jason Donenfeld, who maintains the kernel's random-number generator, worried that, if the CPU's random-number generator could be driven to failure, denial-of-service problems could result. Dave Hansen, though, thought that such worries were misplaced; he spoke authoritatively:
Despite the SDM allowing it, we (software) need RDRAND/RDSEED failures to be exceedingly rare by design. If they're not, we're going to get our trusty torches and pitchforks and go after the folks who built the broken hardware.
Repeat after me:
Regular RDRAND/RDSEED failures only occur on broken hardware
It seems, though, that there may just be a use for those pitchforks. Donenfeld wrote a little program to stress the hardware by repeatedly executing RDRAND and RDSEED instructions; he observed that RDSEED failed nearly 30% of the time — with a single-threaded test. The hardware random-number generator, though, is shared between all CPUs in a socket, so a multi-threaded test can be expected to show worse results. Indeed, Daniel P. Berrangé reported that, with a multi-threaded test, RDSEED only succeeded 3% of the time. Nobody was able to demonstrate RDRAND failures but, as Berrangé pointed out later in the discussion, that situation could change with a larger number of threads. Elena Reshetova also acknowledged that it may be possible to force RDRAND to fail.
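Donenfeld's test program is not reproduced here, but a comparable single-threaded measurement takes only a few lines with the same intrinsics; the iteration count is arbitrary, and the failure rate printed will depend entirely on the hardware.
/* Rough single-threaded RDSEED failure-rate measurement, in the spirit of
 * (but not identical to) the test described above.
 * Build: gcc -O2 -mrdseed rdseed-stress.c */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    const long iterations = 10000000;  /* arbitrary */
    long failures = 0;
    unsigned long long v;

    for (long i = 0; i < iterations; i++)
        if (!_rdseed64_step(&v))  /* zero return means no entropy available */
            failures++;

    printf("RDSEED failed %ld of %ld attempts (%.1f%%)\n",
           failures, iterations, 100.0 * failures / iterations);
    return 0;
}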
The clear outcome is that, with a determined effort, RDSEED can be made to fail and the reliability of RDRAND is not guaranteed. The logical next question is: how much of a concern is this? For most use cases, there is little to worry about. The kernel's random-number generator will continue to produce unpredictable output if the hardware random-number generator fails occasionally, even in the absence of other entropy sources. As Ted Ts'o pointed out, the kernel makes use of any other entropy sources it can find (such as the variation in interrupt timings) and is intended to be robust even if RDRAND were to turn out to be entirely compromised. Since most applications get their random data from the kernel, even an unreliable RDRAND should not be a concern.
There is, however, one noteworthy exception: the use case known as confidential computing (sometimes referred to as "CoCo"), which is intended to guarantee the security and confidentiality of virtual machines even if they are running on a compromised or hostile host computer. Techniques like secure enclaves and encrypted memory have been developed to protect virtual machines from a prying host; these technologies may, someday, work as intended, but they are absolutely dependent on the availability of trustworthy random data. If a "confidential" virtual machine can be launched with a known random seed, the game may be over before it starts.
Availability of entropy at boot time has long been a problem for Linux systems, so a number of mechanisms have been developed to seed the random-number generator as quickly as possible. These can include using random data injected by the bootloader and collecting entropy from the environment. A confidential-computing system, though, cannot trust inputs like that. The bootloader is under the host's control, but so is environmental entropy. As Reshetova explained, the host is able to control events like the expiration of timers and the delivery of interrupts. The only source of entropy that is not, at least theoretically, under the host's control is the hardware random-number generator. If that, too, is compromised, the entire confidential-computing dream falls apart.
What to do
That dream has been somewhat controversial in kernel circles from the beginning, and this revelation has not helped; at one point Donenfeld asked directly: "Is this CoCo VM stuff even real? Is protecting guests from hosts actually possible in the end?" Most of the ensuing discussion, though, was focused on what the appropriate response should be.
Outside of the confidential-computing use case, the consensus seems to be that little needs to be done. Adding a warning if RDSEED or RDRAND fail (as Shutemov proposed at the beginning of the discussion) is being considered, but even that is not clearly the right thing to do. Many systems run with panic_on_warn enabled; on such systems, a warning will cause a system crash. That would turn a random-number-generator failure into a denial-of-service problem. Even in this case, though, a failure during early boot seems worth a warning; if that happens, either the random-number generator is simply broken, or there is clearly some sort of attack underway.
When the system is running in a confidential-computing mode, though, the situation is a bit more complicated. Among other things, that is not a mode that the kernel as a whole recognizes; Ts'o suggested adding a global flag for this purpose, since other parts of the kernel are eventually likely to need it as well. But, even with that in place, there are a number of alternatives to consider; Donenfeld spelled them out in detail. They come down to whether the kernel should warn (or panic) on failure, whether that should happen at any time or only during early boot, and whether this response should change in "confidential-computing mode".
The emerging consensus would seem to be that the first step is simply retrying a failing operation, as is already done for RDRAND. Even if a single RDRAND can be made to fail, doing so ten times in a row is a more difficult prospect. That said, one should remember that, as Reshetova pointed out, the host controls a guest's scheduling and could, in theory, interfere with a retry loop as well. If retries are forced to happen when the random-number generator is exhausted, they will never succeed.
At a minimum, a warning should be issued if these instructions fail during early boot, since that is a clear sign that something is going wrong. For the confidential-computing case, a randomness failure means that the system is unable to provide the guarantees upon which the whole edifice is built, so the system should simply refuse to run. That could be achieved with a panic, or by simply looping on RDRAND indefinitely, locking up the virtual machine if the instruction never succeeds.
Beyond that, there is not a whole lot that needs to be done.
There is one part of the discussion that is not visible, though: this concern seems to have created a scramble within the CPU vendors to characterize the extent of the problem and figure out what, if anything, needs to be done about it. Lest one think this is an Intel-specific issue, Berrangé reported the ability to force high failure rates on some AMD processors as well. The outcome of those discussions may be some combination of documentation, microcode updates, and design changes in future processors.
Meanwhile, the prospect of random-number-generator exhaustion need not be a big worry for most users; it seems unlikely that it can be used to threaten real-world systems. For the confidential-computing crowd, it is just another one of what is sure to be an unending list of potential threats that need to be mitigated. Fixes will be put into place, and we can all put our pitchforks away and go back to watching the argument over whether confidential computing is achievable in the first place.
A look at dynamic linking
The dynamic linker is a critical component of modern Linux systems, being responsible for setting up the address space of most processes. While statically linked binaries have become more popular over time as the tradeoffs that originally led to dynamic linking become less relevant, dynamic linking is still the default. This article looks at what steps the dynamic linker takes to prepare a program for execution.
Invoking the linker
When the Linux kernel is asked to execute a program, it looks at the start of the file to determine what kind of program it is. The kernel then consults both its built-in rules for shell scripts and native binaries and the binfmt_misc settings (a feature of the kernel allowing users to register custom program interpreters) to determine how to handle the program. A file beginning with "#!" is identified as the input of the interpreter named on the rest of the line. This facility is what lets the kernel appear to execute shell scripts directly — in reality executing an interpreter and passing the script as an argument. A file starting with "\x7fELF", on the other hand, is recognized as an ELF file. The kernel first looks to see whether the file contains a PT_INTERP element in the program header. When present, this element indicates that the program is dynamically linked.
The PT_INTERP element also specifies which dynamic linker the program expects to be linked by — also called the interpreter, confusingly. On Linux, the dynamic linker is usually stored at an architecture-specific path such as /lib64/ld-linux-x86-64.so.2. It is not allowed to itself be dynamically linked, to avoid an infinite regress. Once the dynamic linker has been found, the kernel sets up an initial address space for it in the same fashion as any other statically linked executable and gives it an open file descriptor pointing to the program to be executed.
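The PT_INTERP entry is easy to inspect from user space; "readelf -l" will show it, and the simplified sketch below (64-bit only, assuming e_phentsize matches sizeof(Elf64_Phdr), with minimal error handling) does the same thing by hand.
/* Print the PT_INTERP (requested dynamic linker) of a 64-bit ELF binary.
 * Simplified sketch: no 32-bit, endianness, or validation handling. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    Elf64_Ehdr ehdr;
    if (fread(&ehdr, sizeof(ehdr), 1, f) != 1)
        return 1;

    for (int i = 0; i < ehdr.e_phnum; i++) {
        Elf64_Phdr phdr;
        fseek(f, ehdr.e_phoff + i * sizeof(phdr), SEEK_SET);
        if (fread(&phdr, sizeof(phdr), 1, f) != 1)
            break;
        if (phdr.p_type == PT_INTERP) {
            /* The segment contains a NUL-terminated interpreter path. */
            char *interp = malloc(phdr.p_filesz);
            fseek(f, phdr.p_offset, SEEK_SET);
            if (interp && fread(interp, 1, phdr.p_filesz, f) == phdr.p_filesz)
                printf("PT_INTERP: %s\n", interp);  /* e.g. /lib64/ld-linux-x86-64.so.2 */
            free(interp);
        }
    }
    fclose(f);
    return 0;
}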
Relocation
The dynamic linker's job is to arrange for the process's address space to contain both the main executable itself, and all of the libraries upon which it depends. For modern executables, that almost certainly means loading position-independent code, which is designed to be run from a non-fixed base address. Many parts of the code do still need to know where the different sections of memory are located, however, so position-independent executables contain a list of "relocations": specific places in the binary that the dynamic linker needs to patch with the actual address of various components in memory. In most programs, the majority of relocations are patches to the Global Offset Table (GOT), which contains pointers to global variables or loaded sections and link-time constants.
Giving the dynamic linker the flexibility to load different components at different addresses serves several purposes. The oldest form of position-independent code was on architectures where memory addresses were accessed relative to a base register, allowing for multiple copies of a single program to reside in memory at different addresses. The invention of dynamic address translation obviated that motivation. Modern toolchains use this flexibility for address-space layout randomization (ASLR), by allowing the dynamic linker to add an element of randomness to the chosen locations. Statically linked programs with position-independent code, including the dynamic linker itself, are loaded at a random address chosen by the kernel. Therefore, the first major task of the dynamic linker is to read its own program header and apply relocations to itself. This process includes patching the subsequent code with the location of global structures and functions — prior to applying these relocations, the linker is careful not to call any functions from other compilation units or access global variables.
Once the dynamic linker has applied relocations to itself, it can then perform OS-specific setup by calling the functions for its particular platform. On Linux, it calls brk() to set up the process's data segment, allocating some space there for its own state and directing malloc() to place allocations in the newly allocated space (which is separate from the program's eventual heap). At this point, malloc() refers to a temporary implementation, which cannot even free memory, that is used while the dynamic linker is identifying and applying relocations to necessary dependencies.
The first step in identifying dependencies is to set up the link map — the structure that records information for use by dlinfo(). Once that is done, the dynamic linker finds where the kernel mapped the vDSO, a shared object that includes code that can service some system calls without needing to switch to the kernel, a technique that is most often used for the gettimeofday() system call. The dynamic linker places the vDSO in the link map, so that the same code that handles linking other dependencies can handle shared objects that call into the vDSO.
By this point, it may seem as though the stage is set to actually read and link the shared objects upon which the program depends, but there is one more step to complete. Users can override functions at run time by specifying shared objects in the LD_PRELOAD environment variable. Overriding functions in this way can be used for many purposes, such as to debug an application, to use an alternate allocator such as the Boehm-Demers-Weiser conservative garbage collector or jemalloc, or to fake the time and date for the application using libfaketime, for example. The dynamic linker resolves preloaded libraries first, so that when it is linking later libraries it can direct them to the overridden function definitions in one pass.
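To make the mechanism concrete, here is a hedged example of a tiny preloadable object in the spirit of libfaketime; the interposed function and the one-hour offset are arbitrary choices for illustration. Any program that calls gettimeofday() through the usual dynamic-linking path will see the shifted time.
/* fakehour.c: a minimal LD_PRELOAD interposer, loosely in the spirit of
 * libfaketime.
 * Build: gcc -shared -fPIC -o fakehour.so fakehour.c (add -ldl on older glibc)
 * Run:   LD_PRELOAD=./fakehour.so some-program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <sys/time.h>

int gettimeofday(struct timeval *tv, struct timezone *tz)
{
    /* Look up the next definition of gettimeofday() (normally glibc's). */
    static int (*real)(struct timeval *, struct timezone *);
    if (!real)
        real = (int (*)(struct timeval *, struct timezone *))
                dlsym(RTLD_NEXT, "gettimeofday");

    int ret = real(tv, tz);
    if (ret == 0 && tv)
        tv->tv_sec += 3600;  /* report a time one hour in the future */
    return ret;
}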
Now the dynamic linker has everything that it needs to finish arranging the address space of the program. Starting with the program being loaded, the linker looks for DT_NEEDED declarations in the program's header, which indicate that the program depends on another shared object. The linker searches for these DT_NEEDED declarations recursively, building a list of all shared objects that will be required by any of the program's transitive dependencies. DT_NEEDED entries can include absolute paths, but they can also include relative paths that are resolved by consulting the directories in the LD_LIBRARY_PATH environment variable, or a default set of directories. The dynamic linker then traverses this list of dependencies backward, so that a shared object's dependencies will be loaded before it is.
For each shared object, the linker opens the resolved file, loads it into a newly allocated (and, with ASLR, random) location in the address space, and then performs the set of relocations listed in the shared object's header. Then it adds the shared object to the link map.
The exception is the dynamic linker itself. Programs are allowed to depend on the dynamic linker as a library, to supply functions such as dlinfo(), but the linker would be broken if it applied relocations to itself in the middle of the loop, so it excludes itself from the dependency list. As described above, it has already applied relocations to itself. But the linker occasionally makes use of functions that can be overridden by preloaded libraries. Therefore, once all of the program's dependencies have been slotted into the address space, the linker applies relocations to itself one final time, so that now it will refer to the overridden version of any functions it uses. Now the malloc() implementation that the dynamic linker uses switches from the simplified implementation used so far to the generic implementation used by the rest of the program.
With all the shared libraries loaded, the dynamic linker has completed most of its work. Before jumping to the main program, however, it sets up thread-local storage (TLS) and performs any initialization required by the C library. Then it restores the state of the program's arguments and environment that the kernel supplied and jumps to the entry point of the program.
User code
One might expect that the duties of the dynamic linker end where the main program begins, but this is not so. Not only must it service requests to dlopen() new shared objects that are required at run time, but there is one more part of the dynamic linker that runs during the lifetime of the loaded program: updating the Procedure Linkage Table (PLT).
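The dlopen() side of those duties is the familiar libdl interface; a minimal sketch follows, with libm and cos() used only to keep the example self-contained.
/* Loading a shared object at run time with dlopen()/dlsym().
 * Build: gcc -O2 dlopen-demo.c (add -ldl on older glibc) */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Ask the dynamic linker to resolve a symbol in the new object. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
The PLT, the other run-time duty mentioned above, takes a bit more explanation.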
While programs could simply use relocations to directly patch CALL instructions in the text of the program, "text relocations" like this cause two performance problems. Firstly, since the number of relocations required would depend on the number of calls to the given function (which may be large), the initial application of those relocations to a shared object can be slow. Secondly, since text relocations involve dirtying the pages of memory containing a program's executable code, different processes running the same program can no longer share the same underlying memory, increasing the memory usage of the program. These performance problems mean that text relocations are largely frowned on by the maintainers of dynamic linkers.
Modern compilers and linkers work around this problem by creating a PLT — a special standalone section that contains an indirection for each externally defined function. Calls to these functions from within the program are compiled to a call to the PLT, which then contains a jump to the real location of the external function. The PLT could also directly use text relocations, but most architectures instead use indirect jumps through function pointers stored in a second GOT (separate from the program's normal GOT, in its own section called ".got.plt"). Separating out the function pointers is useful on architectures that have no way to compactly encode a direct jump to an arbitrary part of the address space, but it is also useful for one final performance trick: lazy linking.
Unlike relocations that point to data, which need to be resolved before the program starts running because the dynamic linker cannot know when they will be accessed, the relocations in the PLT's GOT don't need to be applied immediately. The dynamic linker initially fills the PLT's GOT with the address of a function from the dynamic linker itself. This function consults the link map to determine where the external symbol in question is located, and then rewrites the corresponding entry of the PLT's GOT. Linking is thus only performed for external functions that are actually called, and only once the program has started up, which improves startup speed for most programs.
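The mechanism can be modeled, loosely, in plain C: treat the GOT slot as a function pointer that starts out aimed at a resolver, which patches the slot on first use. This is only a conceptual sketch of lazy binding, not what the toolchain actually emits.
/* Conceptual model of a lazily bound PLT entry; illustration only.
 * Build: gcc -O2 lazy.c -lm */
#include <math.h>
#include <stdio.h>

static double resolver(double x);

/* The "GOT slot" for cos(): initially points at the resolver stub. */
static double (*got_cos)(double) = resolver;

static double resolver(double x)
{
    got_cos = cos;   /* lazy binding: patch the slot on first call */
    return cos(x);   /* then complete the interrupted call */
}

int main(void)
{
    /* Every call site goes through the slot, like a call through the PLT. */
    printf("%f\n", got_cos(0.0));   /* first call: resolved lazily */
    printf("%f\n", got_cos(M_PI));  /* later calls: go straight to cos() */
    return 0;
}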
Lazy linking can be turned off using the LD_BIND_NOW environment variable, or by compiling the program with the "-z now" linker option. As of the new glibc release, version 2.39, glibc's dynamic linker also supports rewriting elements of the PLT to use direct jumps instead of indirect jumps, on systems where that would have a performance benefit. The article covering the announcement claimed that the introduction of PLT rewriting was security-motivated, but a commenter pointed out that the existence of the former two methods suggests that PLT rewriting is mostly useful as a performance-tuning setting. There is one security benefit of disabling lazy linking, though: it permits "Relocation Read-Only" (RELRO), a security mitigation where the dynamic linker remaps the GOT (and the PLT's GOT) as read-only once they have been filled out, preventing an attack from overwriting them to gain control of the process's control flow.
The dynamic linker is largely invisible to most programs, but it performs a critical role in establishing the address space of every process. Most programmers will never need to interact with the exact process by which it sets up a program to be run. However, the dynamic linker's apparent stability belies a surprising amount of complexity required to ensure that programs can be prepared for execution efficiently.
Gnuplot 6 comes with pie
Gnuplot 6.0 was released in December 2023, bringing a host of significant improvements and new capabilities to the open-source graphing tool. Here we survey the major new features, including filled contours in 3D, adaptive plotting resolution, watchpoints, clipping of surfaces, sector plots for making things like pie charts, and new syntax for conditionals in gnuplot's scripting language. In addition, there are detailed examples of the features described.
Gnuplot is a self-contained C program for creating scientific and technical visualizations. It remains popular with scientists and engineers because of its speed, power, and flexibility. It interoperates nicely with LaTeX, which makes it adept at preparing graphics for publication. While there are interface libraries for all popular languages, it can also be controlled over a socket, which means it can be integrated into programs written in nearly anything, and it contains its own scripting language.
Gnuplot uses sophisticated algorithms for 3D plotting that allow controlling attributes such as translucency, hidden-line removal, and lighting. Every element of its plots can be individually styled; for example, text for plot labels can use any installed font. Gnuplot can produce output for anything from antiquated graphics terminals to interactive web pages. Let's examine the major new features in gnuplot v6.0.
Filled 3D contours
Contour plots are a longstanding gnuplot feature, including the ability to place contour lines on a curved surface. A new plot style, contourfill (which can be combined with contour lines), delineates function values with areas of constant color. The next two commands show how to create a surface plot with filled contours, which is shown in the illustration. The first set command tells gnuplot to pick ten contour levels automatically. The second line uses gnuplot's surface plotting command, splot, giving the plot ranges in square brackets, then the function to be plotted and the plot style.
set contourfill auto 10
splot [-pi:pi] [-pi:pi] sin(x)+cos(y) with contourfill
The angle of view for a 3D plot can be changed with the "set view" command. It has a special form, "set view map", which draws a surface as it would appear looking down from the top. This can be combined with the contourfill style to create a 2D filled contour plot, as in the next figure. Gnuplot's "replot" command redraws the most recent plot using whatever settings are in effect.
set view map; replot
Sharpening
Gnuplot plots functions by sampling them at a set of equidistant points. The user can set the number of samples with the "set samples" command. When plotting a function with singularities or isolated regions of rapid change, one might try to use a large number of samples to improve the problem areas. However, this is not ideal. It tends to waste samples in quiescent regions of the function, using more total samples than required. More seriously, it usually doesn't add enough resolution where it's needed to solve the problem, unless the user resorts to a huge number of plotting points.
A new keyword, "sharpen", can follow a line plot command to automatically adjust the sampling of a function in regions of high gradients. Here is a normal plot of the tangent function, which includes singularities. It uses gnuplot's default sampling of 100 points:
plot [-2*pi:2*pi] tan(x)
The next command adds the new keyword; the figure shows the improvement:
plot [-2*pi:2*pi] tan(x) sharpen
Watchpoints
A line plot can now be followed by the "watch" keyword and a value. The plot command will then generate, along with the plot, an array of coordinates where the plotted function or data crosses the specified value. More than one watch value may be specified.
Exact crossings are found for function plots using iterative bisection, and values for data plots are estimated using linear interpolation. These processes are applied to the line segment that joins the two samples that bracket the requested value; therefore, if the user sets the number of samples too low, some hits may be lost.
Gnuplot stores the results in an array called WATCH_1. If more than one watchpoint is requested, results are stored in WATCH_2, etc. Each element of these arrays is a complex number, with the x-coordinate of each crossing stored in the real part and the y-value in the imaginary part. Yes, this is a bit of a kludge, but gnuplot's scripting language provides a limited collection of data types.
The next figure shows a plot of sin(1/x) with a watchpoint set to collect the locations of its zero crossings. We've used a large number of samples to capture the rapid oscillations near the origin. The second line sets some details for the plot legend; fonts in gnuplot are specified with the font name and size separated by a comma, but the name can be omitted to use a default.
set samples 10000
set key font ",18" center right
plot [pi/300:1] sin(1/x) watch y=0
The next command generates a plot of the locations where sin(1/x) crosses the x-axis (stored in the real parts) versus the index of the array entry; plotting an array of complex numbers plots the real parts versus the indexes. In the plot command, "pt 7" selects point type 7, which is a filled circle for this terminal type. Other point types include unfilled circles, stars, triangles, etc.
unset key
set title "Zero crossings of sin(1/x)"
plot WATCH_1 pt 7
In addition to watching for y values, we can set watchpoints at x values and for conditions described by any user-defined function of x and y. This last option permits, for example, solving numerically for the intersection of two functions. The next figure shows an example of this kind of calculation, finding the intersection of sin(1/x) and cos(x) within a defined domain. It also demonstrates how to plot labels for watchpoints.
c(x,y) = cos(x) - sin(1/x)
set style watch label font ",18" rotate by -45 offset 0,-1
unset key
plot [0.2:1] sin(1/x), cos(x) watch c(x, y) = 0
The label shows that the intersection occurs at x = 0.486 and y = 0.884.
The watchpoint feature is still considered experimental, which means some details of its use or behavior may change in future releases.
Zclipping
With version 6, gnuplot can limit the display of a surface to a specified interval of heights, or z-values, using a new keyword, "zclip", followed by a range. The following command plots the same surface as in the "Filled 3D contours" section above, but using the pm3d plot style for continuous coloring of the surface, and with a zclip specification:
splot [-pi:pi] [-pi:pi] sin(x)+cos(y) with pm3d zclip [0.2:1.6]
Combining z-clipping with set view map would allow us to examine slices of a 3D data set, which is a type of visualization often seen in medical imaging.
Kitty graphics
The high-performance, low-latency terminal emulator, kitty, comes with a useful bag of tricks, including its own graphics protocol that is an alternative to sixel, a venerable format for embedding images within terminal windows. With kitty one can ssh into a remote machine and look at an image on its filesystem embedded directly into the terminal session.
The new release added support for kitty graphics. Running a command in the gnuplot read-eval-print loop (REPL) will set up kitty output:
set term kittycairo scroll
After that, all plotting output is embedded in the terminal and scrolls with the rest of the content, rather than appearing in a new window. This creates a kind of notebook that allows the user to scroll back and see the commands that created each plot.
Another option is:
set term kittycairo anchor
That fixes the plot to the upper-left of the terminal window. Using this option affords some interactivity; for example, the arrow keys rotate 3D surface plots in place.
Spotlight
Version 5.2 of gnuplot arrived with a new feature for ambient lighting of surfaces, as we reported in 2017. Version 6.0 has added an additional light source, which the developers call a spotlight, to the existing primary and specular sources. The spotlight can have a color, be pointed, and even has a parameter that is used by the Phong shading algorithm; it's basically a measure of how shiny or diffuse the surface is modeled to be.
Establishing lighting that includes a spotlight requires two settings before the actual plotting command:
set pm3d lighting primary 0.2 spec 0.2 spec2 1
set pm3d spotlight rgb "blue" rot_x -45 rot_z 0 Phong 7
The first line sets the relative intensities of the primary (diffuse) lighting, a colorless specular light, and a second specular light that will be the spotlight. The second line establishes the characteristics and angles of the spotlight.
Next we choose a gray color palette to make the effect of the colored spotlight more obvious, and set the range of the color map. After that, we're ready to plot:
set palette gray; set cbrange [0:3]
splot [-pi/2:pi/2] [-pi/2:pi/2] cos(x)*cos(y) with pm3d
Sectors
Since gnuplot has always been focused on the needs of technical users, it has not provided certain types of visualizations popular in newspapers and the boardroom, such as pie charts. But people often ask how to make such things with gnuplot, and some of its users, including me, have met this need with a variety of scripts and programs to process tabular data and cajole the program into presenting it in the desired form.
Now all these efforts have been rendered obsolete. The latest release includes a new plotting style called sectors that's a solution for all types of pie charts and, in general, visualizations that map data onto regions of a circle. As it's designed to work with tabular data, rather than functions, we need to supply some. Using gnuplot's here-document syntax, we can define a data array in the REPL:
$data << END
0 0 20 4 0
20 0 45 4 1
65 0 30 2 2
65 2 30 2 3
95 0 90 4 4
185 0 100 4 5
285 1 75 3.3 6
END
Each row defines one wedge or sector of the plot. The columns are interpreted as starting angle, inner radius, angular extent, radial extent, and a value that can optionally be mapped to the color of the sector. Thus we can have overlapping sectors if we want, or a variety of complex charts. This graph has two overlapping sectors and one offset.
Before the plotting command we need some settings for the coordinate system and its extent; we also do not want a legend:
set polar; set angle degrees
set xrange [-5:5]; set yrange [-5:5]
set key off
set size square
The last command is useful for plots where circles need to look like circles, rather than ellipses; it tells gnuplot to scale the vertical and horizontal axes identically.
Here is the final plotting command and its result:
plot $data with sectors lc variable fill solid
The final phrase tells gnuplot to fill each sector with a solid color, and to pick the color (lc is the abbreviation for linecolor) from the fifth column of the data, which is the default for this plot type. If the data is arranged differently in the array, additional specifications would be needed to describe that to gnuplot.
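To see how a conventional pie chart might be assembled from these columns, here is a sketch that follows the column layout described above; the three slice sizes, the ranges, and the data-block name are invented for illustration:
# Each row: start angle, inner radius, angular extent, radial extent, color index
$pie << END
0   0  90 1 1
90  0 162 1 2
252 0 108 1 3
END
set polar; set angle degrees
set xrange [-1.5:1.5]; set yrange [-1.5:1.5]
set key off; set size square
plot $pie with sectors lc variable fill solid
Each slice starts where the previous one ends, and the inner radius of zero produces full wedges rather than a donut chart.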
Hulls
Given a collection of points, gnuplot 6 can draw a convex or concave hull around it. This is a polygon, or a smooth curve derived from a polygon, that encompasses all of the points. A convex hull is what its name suggests: it has no "indentations", since it is formed by connecting the outermost points, those on the edges of the collection. A concave hull follows the boundary of the collection more closely. Its shape is fine-tuned by the chi-length parameter; smaller values of this parameter produce a bumpier hull, while larger values make the hull more convex. The figures below should make these ideas clearer.
I created an array (rd) consisting of complex numbers with random real and imaginary parts ranging from 0 to 1. The real parts will be our x-values, and the imaginary parts will become our y-values, as in the kludge described in the "Watchpoints" section. When plotting an array of complex numbers, column 2 refers to the real parts and column 3 to the imaginary parts (column 1 is the element index), as used in the plotting commands below.
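One way such an array might be constructed, sketched here with an assumed length of 50 points and gnuplot's rand() function, is:
array rd[50]                                          # assumed size; any length works
do for [i=1:50] { rd[i] = rand(0) + rand(0)*{0,1} }   # {0,1} is the imaginary unit i
Here rand(0) returns a pseudo-random value between 0 and 1, so each element gets random real and imaginary parts in that range.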
plot rd using 2:3 pt 7 ps 2 convexhull smooth path with filledcurve,\
     rd using 2:3 with points pt 7 ps 2 lc "green"
The command makes two plots: first the convex hull, using smoothing and filling the interior with a solid color, then the points in rd, with pointsize (ps) 2, on top of that. The using 2:3 clause is what tells gnuplot to use the real parts for x and the imaginary parts for y.
The following image illustrates gnuplot's concave hull on the same data, with a particular value of the chi-length parameter.
chi_length = 0.3
plot rd using 2:3 pt 7 ps 2 concavehull smooth path with filledcurve,\
     rd using 2:3 with points pt 7 ps 2 lc "green"
If we increase chi-length to 0.5, we approach the convex hull, as shown in the next figure:
The most common use of plotting with hulls is to distinguish groups of data points in a scatter plot; this is easier to interpret than relying on point color or shape alone. The following command plots two arrays of random numbers, rd and sd, the second array shifted up and to the right by adding 0.5 to the real and imaginary components of its elements:
plot rd using 2:3 pt 7 ps 2 convexhull smooth path with filledcurve,\
     rd using 2:3 with points pt 7 ps 2 lc "green",\
     sd using 2:3 pt 7 ps 2 convexhull smooth path with filledcurve fc rgb "#553333AA",\
     sd using 2:3 with points pt 7 ps 2 lc "red"
In the hull for the sd array we specified a fillcolor (abbreviated fc) using one of gnuplot's several color-specification formats; this variety uses a hexadecimal number in the form #AARRGGBB, where the AA segment gives the opacity, and the other segments are for red, green, and blue. The following figure shows the result.
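The sd array used above could be built from the rd sketch in the same (assumed) way, shifting each element up and to the right:
array sd[50]                                          # same assumed length as rd
do for [i=1:50] { sd[i] = rd[i] + 0.5 + 0.5*{0,1} }   # add 0.5 to real and imaginary parts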
Conditionals
With each major release of gnuplot, the syntax of its scripting language inches closer to something like C. The current release improves conditional control flow by adding a block structure. We can now have a chain of if ... else clauses with a brace-delimited group of commands inside each part of the conditional:
if (a > 9) {
    print "a is too big"
    a = a - 1
} else if (a < 1) {
    print "a is too small"
    a = a + 1
} else {
    print "a is just right"
}
Installation
The recommended way to install gnuplot is to download the source and compile it. While binaries can be found for various platforms and distributions, compiling is preferable for several reasons. It's usually the only way to get the most recent release, for example. The source tarball comes with a configure script that allows selecting some features and excluding others, so gnuplot can be customized for a particular set of uses. Compiler error messages (some can be expected on the first attempt) provide information about which libraries might be missing in order to link in the desired features.
Consulting the included INSTALL file for further information is strongly recommended. For example, the Lua language must be installed to get full LaTeX capabilities, because gnuplot uses a Lua program for that, and the Qt 5 packages are needed for gnuplot's default interactive plotting window. Users also have a choice between GNU readline and a built-in version.
The process is not as daunting as the above might suggest. Actual compilation takes on the order of a minute, and any missing libraries can be supplied with a quick trip to your distribution's package manager.
How to learn more
Gnuplot's home page contains links to the official manual in online (HTML) form and as a 300-page PDF. The manual is full of deep detail, but, due to its organization and frequent use of terms and concepts before they are defined (if they are defined at all), it's hard to learn how to use the program from this resource. Much of what's in this article I have discovered through experimentation, as some new features are quite sparsely documented.
The gnuplot team maintains a page with links to various articles on the tool in nine different languages. Beyond that, gnuplot has detailed, useful help available from its REPL. This usually repeats material from the manual, but sometimes contains bits of information not found there.
The home page links to the two books about gnuplot: my own, Gnuplot 5: an interactive graphics cookbook, which takes the form of a collection of recipes for accomplishing specific tasks, and the excellent Gnuplot in Action by Philipp K. Janert, which takes a wider view of data analysis, using gnuplot for examples. Finally, over the decades a substantial body of web literature has grown up around the program. Although, necessarily, most of these resources refer to older versions, a focused web search has a good chance of surfacing a gnuplot script to do what you need.
Conclusion
Gnuplot has been publicly distributed since 1986; earlier versions were in use years before that. Despite the name, the program has no relationship with the GNU project. Gnuplot is an early specimen of free software, emerging before the establishment of any of the familiar licenses in wide use today. Its authors wanted the source and compiled versions to be freely modifiable and distributable, but also wanted some protection against misappropriation of their code. The result was a copyright statement that is somewhat eccentric by today's standards, but one that has served its purpose for almost 40 years.
This article describes only the major new features that I think are the most generally useful. The new version comes with a host of other additions and improvements, including many new special functions, more ways to use polar coordinates, the ability to name color palettes (and a new built-in palette), more functions to handle colors and strings, WebP graphics including animations, better control of multi-plots, and more.
When gnuplot was created there was almost nothing else available with its capabilities, and certainly nothing free—either as in beer or as in freedom. Today we have a wealth of free tools for every kind of plotting task, both standalone and in the form of libraries for many programming languages. I've used an assortment of them, but keep returning to gnuplot for my scientific-plotting needs. What is the special combination of properties that keeps me coming back?
Some of them I've mentioned above and in previous articles. Gnuplot is fast and unlikely to crash if you give it a large data set. Its language independence and communication through sockets, FIFOs, and files means that I can use it in any environment; the time I take to develop a visualization solution will be part of my permanent tool set. It's completely configurable; I don't have to settle for the program's idea of what a plot should look like. It can do anything, from embedding vector fields into surfaces to plotting live data streamed from a remote server. Like another tool I can't do without, TeX, it's old, open source, eccentric, and it can be cantankerous, but it does its particular job better than anything else.
Page editor: Jonathan Corbet