LWN.net Weekly Edition for August 27, 2020
Welcome to the LWN.net Weekly Edition for August 27, 2020
This edition contains the following feature content:
- Fedora IoT becomes an edition: a new variant of Fedora approaches first-class status.
- Fuzzing in Go: tools for fuzz-testing Go programs.
- Rethinking fsinfo(): simplifying a complex proposed system call may not be a simple task.
- CAELinux 2020: Linux for engineering: an Xubuntu derivative with a focus on specialized tools.
- The programmer's CAD: OpenSCAD: a programming language and tool set for creating real-world things.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Fedora IoT becomes an edition
The Fedora 33 release is currently scheduled for late October; as part of the process of designing this release, the deadline for system-wide change proposals was set for June 30. This release already has a substantial number of big changes in the works, so one might be forgiven for being surprised by a system-wide change proposal that appeared on August 4, which looks to be pre-approved. Not only that, but this proposal expands the small set of official Fedora "editions" by adding the relatively obscure Fedora Internet of Things Edition.

The Fedora distribution is released in a number of forms, including a fair number of "Fedora spins" that skew the distribution toward a specific use case. The flagship Fedora products, though, are the editions, of which there are currently only two: Fedora Workstation and Fedora Server. The former is obviously aimed at desktop deployments, while the latter is meant to be useful on back-end systems. This set of editions has been stable for some time.
There are a few "emerging editions" in the works, including Fedora CoreOS and Silverblue. Also on that list is Fedora IoT, which is now poised to become the third edition as part of the Fedora 33 release. The proposal notes that this is "largely a paperwork exercise at this point". While the remaining work may be confined to paperwork, the project may want to put some effort into documentation sooner or later; actual information about what Fedora IoT is and how to work with it is relatively hard to find.
Digging in
As one might expect, Fedora IoT is meant to be deployed on devices that make up the "Internet of Things". The vision statement behind Fedora IoT, as found in the associated "product requirement document", is:
The distribution is available for both the x86 and arm64 architectures. An attempt to run the Arm version on a Libre Computer "La Frite" single-board computer failed at boot time; that board is not on the list of supported platforms, so this outcome is arguably not surprising. The x86 version runs nicely enough in a virtual machine, though, so that is the version that your editor played around with.
Relative to its peers, the Fedora IoT edition is a small (but not tiny) distribution; installed, it requires a little under 3GB of disk space. It runs within 1GB of RAM with memory to spare — as long as one is not actually doing anything, anyway. The resource constraints of IoT systems force the set of installed packages to be relatively small; if one is expecting a GNOME desktop or a choice of relational database management systems out of the box, one will be disappointed. This is still Fedora, though, so those packages are readily available for installation later; Fedora IoT need not remain a small system.
Veteran Fedora administrators will have to adjust to a new way of managing those packages, though; one does not just run "dnf install" on a Fedora IoT system. Indeed, there is no dnf command available. This distribution, instead, is built using rpm-ostree; packages still show up in the traditional RPM format, but the way they are managed has changed considerably.
The rpm-ostree system is built around the concept of an immutable base image, perhaps augmented by one or more overlay images. The running system is never modified by package-management operations like installations or system upgrades. Instead, rpm-ostree maintains a set of parallel system images, two of them by default. The running image is immutable; any package-management operations will, instead, apply to the next image in the series (being simply the other image in a two-image setup). Thus, for example, the system administrator can install an essential package (Emacs, say), but that package will go into the alternative image and the action will not immediately yield a usable emacs command.
So, for example, the command to add Emacs is:
# rpm-ostree install emacs
This command will fetch the required packages from the repository in the usual way and install them. Afterward, one can query the state of the system with:
# rpm-ostree status
State: idle
Deployments:
  ostree://fedora-iot:fedora/stable/x86_64/iot
    Version: 32.20200817.0 (2020-08-17T15:28:05Z)
    BaseCommit: 4fa6233db3448c1ee5bb73b89fbddf54167108929d71248ece63ad6969ea308c
    GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
    Diff: 183 added
    LayeredPackages: emacs

* ostree://fedora-iot:fedora/stable/x86_64/iot
    Version: 32.20200817.0 (2020-08-17T15:28:05Z)
    Commit: 4fa6233db3448c1ee5bb73b89fbddf54167108929d71248ece63ad6969ea308c
    GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
This output shows the two "deployments" (system images) managed by rpm-ostree; the one marked with the "*" is the currently running image. We see the other image with an additional package (emacs) layered onto it. Actually running that image (and, thus, being able to use Emacs) requires a reboot; at that point, the system will switch to the new image. The image that was running previously will remain in its current form in case a rollback is needed. Once the administrator starts making more changes, though, the previous image will be set up as a copy of the (new) running image, so changes like the addition of Emacs will be preserved going forward.
The idea behind all this mechanism is to make system updates safer. Since an update is applied to an idle image, it cannot break the running system even if it is interrupted at an especially inopportune time; this is good, since updates in general seem to be more than adequately endowed with inopportune times. The updated image will only be set up to run on the next boot after the update completes successfully. If, for some reason, the update as a whole proves to be inopportune, rolling back to the previous (and working) image is a trivial operation; the bootloader can be configured to do so automatically if appropriate. The result is, hopefully, an unbrickable system.
It is also relatively easy to implement a "factory reset" operation that simply removes any layered packages, stripping the system back to the original image. Naturally, rpm-ostree is able to check and enforce package signatures if the intent is to lock the system down.
Rolling releases, containers, and more
While Fedora can be a nice system to work with, administrators tend to reach for something else when the time comes to deploy a production system. One of the main reasons for that is Fedora's update cycle; any given Fedora release is supported for a year, after which the system must be updated to a newer release. That can be inconvenient on any production system; a similar requirement on a distribution for embedded devices could be a serious impediment to the project's "multi-million device deployment" ambitions.
To avoid this problem, Fedora IoT apparently opts out of the normal release cycle; instead, the project has committed to producing a rolling release with monthly updates. With rpm-ostree configured to pull down updates automatically, IoT systems can be kept up to date with little effort. If an update goes bad, the rollback mechanism is there to help users recover.
One other significant difference with Fedora IoT is a relatively strong focus on the use of containers to install applications. The podman tool is provided for this purpose; it's meant to look a lot like Docker, but without the need for any background daemons. Podman comes configured to pull images from docker.io by default. Your editor attempted to use it to install a few versions of NetHack that must all surely be legitimate, but none of them consented to run correctly — thus saving your editor a considerable amount of wasted time.
Beyond those changes, though, Fedora IoT feels much like any other Fedora system. The commands work in the same way, and the usual packages are available. This makes for a relatively rich and comfortable environment for embedded-systems work.
One can't help but wonder about the ultimate objective, though. Fedora comes with no support guarantees, a fact that is sure to give pause to any companies thinking about which operating system to install in their million-device products. If Fedora is to have any chance of being deployed in such systems, some sort of commercial support option will have to materialize. When that happens, it may well go under the name of "Red Hat IoT" or some such. Fedora itself may not make it onto all of those devices, but Fedora users will have played with the technology first and helped to make it better.
Fuzzing in Go
Fuzzing is a testing technique with randomized inputs that is used to find problematic edge cases or security problems in code that accepts user input. Go package developers can use Dmitry Vyukov's popular go-fuzz tool for fuzz testing their code; it has found hundreds of obscure bugs in the Go standard library as well as in third-party packages. However, this tool is not built in, and is not as simple to use as it could be; to address this, Go team member Katie Hockman recently published a draft design that proposes adding fuzz testing as a first-class feature of the standard go test command.
Using random test inputs to find bugs has a history that goes back to the days of punch cards. Author and long-time programmer Gerald Weinberg recollects:
We didn't call it fuzzing back in the 1950s, but it was our standard practice to test programs by inputting decks of punch cards taken from the trash. We also used decks of random number punch cards. We weren't networked in those days, so we weren't much worried about security, but our random/trash decks often turned up undesirable behavior.
More recently, fuzz testing has been used to find countless bugs, and some notable security issues, in software from Bash and libjpeg to the Linux kernel, using tools such as american fuzzy lop (AFL) and Vyukov's Go-based syzkaller tool.
The basic idea of fuzz testing is to generate random inputs for a function to see if it crashes or raises an exception that is not part of the function's API. However, using a naive method to generate random inputs is extremely time-consuming, and doesn't find edge cases efficiently. That is why most modern fuzzing tools use "coverage-guided fuzzing" to drive the testing and determine whether newly-generated inputs are executing new code paths. Vyukov co-authored a proposal which has a succinct description of how this technique works:
start with some (potentially empty) corpus of inputs
for {
    choose a random input from the corpus
    mutate the input
    execute the mutated input and collect code coverage
    if the input gives new coverage, add it to the corpus
}
Collecting code coverage data and detecting when an input "gives new coverage" is not trivial; it requires a tool to instrument code with special calls to a coverage recorder. When the instrumented code runs, the fuzzing framework compares code coverage from previous test inputs with coverage from a new input, and if different code blocks have been executed, it adds that new input to the corpus. Obviously this glosses over a lot of details, such as how the input is mutated, how exactly the coverage instrumentation works, and so on. But the basic technique is effective: AFL has used it on many C and C++ programs, and has a section on its web page listing the huge number of bugs found and fixed.
The go-fuzz tool
AFL is an excellent tool, but it only works for programs written in C, C++, or Objective C, which need to be compiled with GCC or Clang. Vyukov's go-fuzz tool operates in a similar way to AFL, but is written specifically for Go. In order to add coverage recording to a Go program, a developer first runs the go-fuzz-build command (instead of go build), which uses the built-in ast package to add instrumentation to each block in the source code, and sends the result through the regular Go compiler. Once the instrumented binary has been built, the go-fuzz command runs it over and over on multiple CPU cores with randomly mutating inputs, recording any crashes (along with their stack traces and the inputs that caused them) as it goes.
Damian Gryski has written a tutorial showing how to use the go-fuzz tool in more detail. As mentioned, the go-fuzz README lists the many bugs it has found; however, there are almost certainly many more in third-party packages that have not been listed there. I personally used go-fuzz on GoAWK and it found several "crashers".
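To give a feel for what a go-fuzz test looks like, here is a minimal sketch. The Fuzz entry point with this exact signature is what go-fuzz looks for; the choice of url.ParseQuery as the function under test is just an illustration (in real use, Fuzz lives in the package being tested rather than in package main).

```go
package main

import (
	"fmt"
	"net/url"
)

// Fuzz has the entry-point signature that go-fuzz expects. The return value
// is a hint to the fuzzer: 1 asks it to favor this input during further
// mutation, -1 asks it not to add the input to the corpus, and 0 leaves the
// decision entirely to the fuzzer.
func Fuzz(data []byte) int {
	// url.ParseQuery stands in for whatever function is being fuzzed.
	if _, err := url.ParseQuery(string(data)); err != nil {
		return 0 // parse errors are expected; nothing interesting
	}
	return 1 // input parsed successfully; favor it for further mutation
}

func main() {
	fmt.Println(Fuzz([]byte("a=b&c=d")))
}
```

A crash inside the function under test (a panic, in Go terms) is what go-fuzz records, along with the offending input and stack trace.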
Journey to first class
Go has a built-in command, go test, that automatically finds and runs a project's tests (and, optionally, benchmarks). Fuzzing is a type of testing, but without built-in tool support it is somewhat cumbersome to set up. Back in February 2017, an issue was filed on the Go GitHub repository on behalf of Vyukov and Konstantin Serebryany, proposing that the go tool "support fuzzing natively, just like it does tests and benchmarks and race detection today". The issue notes that "go-fuzz exists but it's not as easy as writing tests and benchmarks and running go test -race". This issue has garnered a huge amount of support and many comments.

At some point Vyukov and others added a motivation document as well as the API and tooling proposal for what such an integration would look like. Go tech lead Russ Cox pressed for a prototype version of "exactly what you want the new go test fuzz mode to be". In January 2019 "thepudds" shared just that: a tool called fzgo that implements most of the original proposal in a separate tool. This was well-received at the time, but does not seem to have turned into anything official.
More recently, however, the Go team has picked this idea back up, with Hockman writing the recent draft design for first-class fuzzing. The goal is similar, to make it easy to run fuzz tests with the standard go test tool, but the proposed API is slightly more complex to allow seeding the initial corpus programmatically and to support input types other than byte strings ("slice of byte" or []byte in Go).
Currently, developers can write test functions with the signature TestFoo(t *testing.T) in a *_test.go source file, and go test will automatically run those functions as unit tests. The existing testing.T type is passed to test functions to control the test and record failures. The new draft design adds the ability to write FuzzFoo(f *testing.F) fuzz tests in a similar way and then run them using a simple command like go test -fuzz. The proposed testing.F type is used to add inputs to the seed corpus and implement the fuzz test itself (using a nested anonymous function). Here is an example that might be part of calc_test.go for a calculator library:
func FuzzEval(f *testing.F) {
    // Seed the initial corpus
    f.Add("1+2")
    f.Add("1+2*3")
    f.Add("(1+2)*3")

    // Run the fuzz test
    f.Fuzz(func(t *testing.T, expr string) {
        t.Parallel()      // allow parallel execution
        _, _ = Eval(expr) // function under test (discard result and error)
    })
}
Just these few lines of code form a basic fuzz test that will run the calculator library's Eval() function with randomized inputs and record any crashes ("panics" in Go terminology). Some examples of panics are out-of-bounds array access, dereferencing a nil pointer, or division by zero. A more involved fuzz test might compare the result against another library (called calclib in this example):
    ...
    // Run the fuzz test
    f.Fuzz(func(t *testing.T, expr string) {
        t.Parallel()
        r1, err := Eval(expr)
        if err != nil {
            t.Skip() // got parse error, skip rest of test
        }

        // Compare result against calclib
        r2, err := calclib.Eval(expr)
        if err != nil {
            t.Errorf("Eval succeeded but calclib had error: %v", err)
        }
        if r1 != r2 {
            t.Errorf("Eval got %d, calclib got %d", r1, r2)
        }
    })
}
In addition to describing fuzzing functions and the new testing.F type, Hockman's draft design proposes that a new coverage-guided fuzzing engine be built that "will be responsible for using compiler instrumentation to understand coverage information, generating test arguments with a mutator, and maintaining the corpus". Hockman makes it clear that this would be a new implementation, but would draw heavily from existing work (go-fuzz and fzgo). The mutator would generate new randomized inputs (the "generated corpus") from existing inputs, and would work automatically for built-in types or structs composed of built-in types. Other types would also be supported if they implemented the existing BinaryUnmarshaler or TextUnmarshaler interfaces.
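The BinaryUnmarshaler interface mentioned here is the existing one from the standard encoding package. A sketch of how a custom type might opt in: the Point type and its wire format are invented for illustration, and how the proposed engine would actually feed bytes to such a type is up to the eventual implementation.

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// Point is a hypothetical custom type that a fuzz test might take as an
// argument. By implementing encoding.BinaryUnmarshaler, it tells a fuzzing
// engine how to construct a Point from raw mutator-generated bytes.
type Point struct {
	X, Y int32
}

// UnmarshalBinary decodes two little-endian int32 values, satisfying the
// encoding.BinaryUnmarshaler interface.
func (p *Point) UnmarshalBinary(data []byte) error {
	if len(data) < 8 {
		return errors.New("point: need at least 8 bytes")
	}
	p.X = int32(binary.LittleEndian.Uint32(data[0:4]))
	p.Y = int32(binary.LittleEndian.Uint32(data[4:8]))
	return nil
}

func main() {
	var p Point
	// The proposed engine would perform this decoding itself; here we just
	// exercise the method directly with a hand-built byte slice.
	if err := p.UnmarshalBinary([]byte{1, 0, 0, 0, 2, 0, 0, 0}); err != nil {
		panic(err)
	}
	fmt.Println(p.X, p.Y)
}
```

Short or malformed inputs simply produce an error, which the engine could treat as an uninteresting input rather than a crash.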
By default, the engine would run fuzz tests indefinitely, stopping a particular test run when the first crash is found. Users will be able to tell it to run for a certain duration with the -fuzztime command line flag (for use in continuous integration scripts), and tell it to keep running after crashes with the -keepfuzzing flag. Crash reports will be written to files in a testdata directory, and will contain the inputs that caused the crash as well as the error message or stack trace.
Discussion and what's next
As with the recent draft design on filesystems and file embedding, official discussion for this design was done using a Reddit thread; overall, the feedback was positive.
There was some discussion about the testing.F interface. David Crawshaw suggested that it should implement the existing testing.TB interface for consistency with testing.T and testing.B (used for benchmarking); Hockman agreed, updating the design to reflect that. Based on a suggestion by "etherealflaim", Hockman also updated the design to avoid reusing testing.F in both the top level and the fuzz function. There was also some bikeshedding over whether the command should be spelled go test -fuzz or go fuzz; etherealflaim suggested that reusing go test would be a bad idea because it "has history and lots of folks have configured timeouts for it and such".
Jeremy Bowers recommended that the mutation engine should be pluggable:
I think the fuzz engine needs to be pluggable. Certainly a default one can be shipped, and pluggability can even be pushed to a "version 2", but I think it ought to be in the plan. Fuzzing can be one-size-fits-most but there's always going to be the need for more specialized stuff.
Hockman, however, responded that pluggability is not required in order to add the feature, but might be "considered later in the design phase".

The draft design states up front that "the goal of circulating this draft design is to collect feedback to shape an intended eventual proposal", so it's hard to say exactly what the next steps will be and when they will happen. However, it is good to see some official energy being put behind this from the Go team. Based on Cox's feedback on Vyukov's original proposal, my guess is that we'll see a prototype of the updated proposal being developed on a branch, or in a separate tool that developers can run, similar to fzgo.
Discussion on the Reddit thread is ongoing, so it seems unlikely that a formal proposal and an implementation for a feature this large would be ready when the Go 1.16 release freeze hits in November 2020. Inclusion in Go 1.17, due out in August 2021, would be more likely.
Rethinking fsinfo()
The proposed fsinfo() system call, which returns extended information about mounted filesystems, was first covered here just over one year ago. The form of fsinfo() has not changed much in that year, but the debate over merging it continues. To some, fsinfo() is needed to efficiently obtain information about filesystems; to others, it is an unnecessary and over-engineered mechanism. Changes will probably be necessary if this feature is ever to make it into the mainline kernel.

Linux has long supported the statfs() system call (usually seen from user space as statvfs()) as a way of obtaining information about mounted filesystems. As has happened so often, though, the designers of statfs() made a list of all the filesystem attributes they thought might be interesting and limited the call to those attributes; there is no way to extend it with new attributes. Filesystem designers, though, have stubbornly refused to stop designing new features in the decades since statfs() was set in stone, so there is now a lot of relevant information that cannot be obtained from statfs(). Such details include mount options, timestamp granularity, associated labels and UUIDs, and whether the filesystem supports features like extended attributes, access-control lists, and case-insensitive lookups.
As it happens, the kernel does make much of that information available now by way of the /proc/mounts virtual file. The problem with /proc/mounts, beyond the fact that some information is still missing, is that it is inefficient to access. Reading the contents of that file requires the kernel to query every mounted filesystem for the relevant information; on systems with a lot of mounted filesystems, that can get expensive. Systems running containerized workloads, in particular, can have vast numbers of mounts — thousands in some cases — so reading /proc/mounts can be painful indeed. For extra fun, the only way to know about newly mounted filesystems with current kernels is to poll /proc/mounts and look for new entries.
David Howells proposes to solve the polling problem with a new notification mechanism, but that mechanism, in turn, relies on fsinfo(), the 21st revision of which was posted on August 3. Howells requested that both notifications and fsinfo() be pulled during the 5.9 merge window, but that did not happen. Instead, the request resulted in yet another discussion about whether fsinfo() makes sense in its current form.
fsinfo()
The API for fsinfo() is comprehensive and extensible; there should never be a need for an fsinfo2() to add new attributes in the future. But it is also complex. On the surface, the interface looks like this:
int fsinfo(int dfd, const char *pathname,
           const struct fsinfo_params *params, size_t params_size,
           void *result_buffer, size_t result_buf_size);
Where the params structure is defined as:
struct fsinfo_params {
    __u64 resolve_flags;  /* RESOLVE_* flags */
    __u32 at_flags;       /* AT_* flags */
    __u32 flags;          /* Flags controlling fsinfo() specifically */
    __u32 request;        /* ID of requested attribute */
    __u32 Nth;            /* Instance of it (some may have multiple) */
    __u32 Mth;            /* Subinstance of Nth instance */
};
There are four different ways to use dfd, pathname, and params->at_flags to specify which filesystem should be queried; see this patch changelog for details. The rest of the params structure describes the actual information request; the results end up in result_buffer.
There are numerous possibilities for params->request, including:
- FSINFO_ATTR_STATFS returns more-or-less the same information that would be obtained from statfs().
- FSINFO_ATTR_LIMITS returns various limits of the filesystem, including maximum file size, inode number, user ID number, hard links to a file, file-name length, etc. These are returned in an fsinfo_limits structure.
- FSINFO_ATTR_TIMESTAMP_INFO yields information about timestamps on files as a set of binary structures; this information includes the maximum values and granularity of timestamps expressed in a unique (to the kernel) mantissa-and-exponent format.
- FSINFO_ATTR_MOUNT_POINT generates a string showing where the filesystem is mounted.
- FSINFO_ATTR_MOUNT_CHILDREN gives an array of structures identifying the filesystems mounted below the filesystem being queried.
The full list of possible requests is rather longer than the above. Each returns data in a different format, usually a specific binary structure for the information requested. For some attributes, a query might return an arbitrary number of elements; in this case, the Nth and Mth fields in the fsinfo_params structure can be used to identify which should be returned. This patch contains a sample program that exercises a number of fsinfo() features to produce a listing showing the mount topography of the current system.
Complaints and alternatives
There are a couple of points of resistance to the fsinfo() proposal, starting with whether it is needed at all. Linus Torvalds called it "engineering for its own sake, rather than responding to actual user concerns" and wondered why it was needed now after Linux has done without it for so many years. Torvalds tends to worry about adding system calls that end up being used by nobody, so it is not unusual for him to push for justification for the addition of new interfaces. It didn't take long for potential users to make their needs clear; Steven Whitehouse described it this way:
Karel Zak, maintainer of the util-linux package, described the needs of systems with thousands of mount points. Lennart Poettering provided a long list of attributes he would like to learn about filesystems and why they would be useful. The end result of all this discussion is that the need for some sort of filesystem-information system call is not really in doubt.
The complexity of fsinfo() still gives some developers something to worry about, though; to them, it looks like yet another multiplexer system call that tries to do a large number of things. But it's not entirely clear what an alternative would look like. There was a brief digression in which Torvalds suggested an API where attributes of a file could be opened as if that file were actually a directory; so, for example, opening (with a special flag) foo/max_file_size would allow the reading of the maximum file size supported by the filesystem hosting the plain file foo. This idea strongly resembles the controversial approach to metadata implemented by the reiser4 filesystem back in 2004, though nobody seemed to think it was politic to point that out in the discussion.
What was pointed out was that there are numerous practical difficulties associated with implementing this sort of mechanism. Even precisely defining its semantics turns out to be hard. So this idea was put aside; it will languish until somebody else surely suggests it again several years from now.
That leaves open the question of what a new API for obtaining filesystem information should look like. Torvalds called fsinfo() "confusing and over-engineered" and asked: "Can we just make a simple extended statfs() and be done with it, instead of this hugely complex thing that does five different things with the same interface and makes it really odd as a result?"

He further suggested that a number of the binary structures used by fsinfo() could be replaced by ASCII data. He pointed out that a number of filesystem interfaces use ASCII for the more complex attributes already and expressed hope that a kernel interface exporting information in ASCII would make life easier for code that is parsing that information out of /proc/mounts now.
So the end result of this discussion is likely to be an attempt to redesign fsinfo() along those lines. There is a problem here, though: the information needed is, like the systems it is representing, inherently complex. By the time a statfs()-like API that can represent all of this information and which can be extended in the future is designed, chances are that this design will start to look a lot like what fsinfo() is now. Replacing a few binary structures with ASCII seems unlikely to change the picture significantly. The end result of this whole exercise may be something that strongly resembles the current design.
CAELinux 2020: Linux for engineering
CAELinux is a distribution focused on computer-aided engineering (CAE) maintained by Joël Cugnoni. Designed with students and academics in mind, the distribution is loaded with open-source software that can be used to model everything from pig livers to airfoils. Cugnoni's latest release, CAELinux 2020, was made on August 11; readers with engineering interests may want to take a look.
CAELinux's first stable version was released in 2007 and was based on PCLinuxOS 2007. The distribution was created to make the GPL-licensed finite element analysis tool Salome-Meca easier to obtain. CAELinux 2020 is now the eighth release of the distribution, which is based on Xubuntu 18.04 LTS, and has expanded its focus over the years into an impressive array of open-source CAE-related tools.
The minimum requirements for CAELinux 2020 are an x86-64 platform with 4GB of RAM for "simple analysis". For professional use, the project recommends 8GB of RAM or more with a "modern AMD/NVidia graphic card". The entire distribution can be run from an 8GB USB memory drive, with the option to install it to disk (35GB minimum). For those users (like me) who wanted to run the distribution as a virtual machine, the project recommends the commercial VMware Player over the open-source VirtualBox project due to "some graphical limitations" of VirtualBox.
There are too many different software packages unique to the CAELinux distribution to cover them all in a single article. Since the distribution is built on top of Xubuntu, CAELinux comes with all of the standard tools available in the base distribution. In addition to the standard packages, however, CAELinux bundles CAE pre/post processors, CAD and CAM software, finite element solvers, computational fluid dynamics applications, circuit board design tools, biomedical image processing software, and a large array of programming language packages. A review of the release announcement provides a full list of the specific open-source projects available, including a few web-based tools that merely launch the included browser to the appropriate URL.
It would be impossible for me to claim familiarity with the full range of tools provided, but I was familiar with many. For example, FreeCAD has been written about at LWN, and CAMLab was used in our article on open-source CNC manufacturing. I have personally used other bundled packages like FlatCAM for isolation routing of homemade circuit boards and Cura to slice 3D models for printing. What was particularly neat about exploring the distribution was getting introduced to new open-source software that matched my interests. I discovered KiCad EDA's PCB Calculator utility (simple, but handy), and I am looking forward to checking out CAMotics as another CAM alternative for my CNC router.
Others are doing interesting things with CAELinux, as described in this research paper [PDF] by Kirana Kumara that was published in the International Journal of Advancements in Technology. The paper, entitled "Demonstrating the Usefulness of CAELinux for Computer Aided Engineering using an Example of the Three Dimensional Reconstruction of a Pig Liver", describes the process of taking a collection of computed tomography scan (CT scan) cross-section images of a pig liver and converting it into a three-dimensional model using tools in CAELinux 2018. The paper indicates Kumara used ImageJ to mirror and concatenate the original image collection. That data was then imported into ITK-SNAP, which is "used to segment structures in 3D medical images"; it produces a 3D model of the images provided from ImageJ. Unfortunately, CAELinux 2020 appears to have removed ITK-SNAP from its distribution, and attempts to install binaries from the project didn't work due to an incompatibility with libpng. Still, interested readers may have more success building the tool from source code — or using CAELinux 2018 instead.
The testing of the distribution prior to release appears to be limited to the capabilities of Cugnoni alone, who wrote for the 2020 release:
I have checked that it works properly on all my available machines (8 PCs & laptops), from AMD Phenom X4, Intel Core2Quad to Intel Xeon E5 and AMD Threadripper servers as well as latest gen Intel I5/I7 laptops, all this with a mix of AMD, Intel or NVidia GPU.
In my review of CAELinux, it is worth mentioning that a handful of the bundled software packages did not work properly in testing. For example, the KiCad EDA bitmap2component tool, which converts an image into a component for placement on a printed circuit board, would not run for me at all. Other packages, like JavaFoil, started with warnings about an incompatible version of Java but otherwise appeared functional. The issues seem mostly limited to the less mature, highly specialized projects; more mature projects like FreeCAD or LibreCAD ran without issue. Regardless, it would have been nice to be able to report that everything worked as expected, especially in a distribution specifically focused on these tools. In general, the distribution relies on Xubuntu packages for the provided software, though that is not true for everything; from a security perspective, the distribution depends on Xubuntu for updates. I was able to find a bug tracker as part of the SourceForge hosting for the project, but it did not appear to be in active use.
To try out CAELinux 2020, the ISO can be obtained
from the web site, though it is oddly broken into three separate segments
compressed using 7-Zip.
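Since 7-Zip handles multi-volume archives transparently when given the first segment, reassembling the ISO is a one-step operation. The file name below is hypothetical; the actual names on the download site may differ:

```
# Extracting the first volume automatically pulls in the remaining segments
7z x caelinux2020.7z.001
```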
According to Cugnoni, a challenge for the 2020 release was that the base OS and software "have increased massively in size over the year", making it "nearly impossible to do something really complete within the 4Gb file size limit for regular ISO images." Overcoming this limitation was a significant effort, according to Cugnoni, and adds complexity to creating the image. Due to these complexities, the project recommends Ventoy to create a live USB drive from the image. If problems are encountered, the project has a forum available to help solve them.
This article doesn't explore every CAE-related project included in the release; readers are encouraged to try the distribution themselves. CAELinux, while somewhat flawed, is a pretty interesting Linux distribution that provides a great example of the wide variety of engineering-related open-source projects out there.
The programmer's CAD: OpenSCAD
OpenSCAD is a GPLv2-licensed 3D computer-aided design (CAD) program best described as a "programmer's CAD"; it is available for Linux, Windows, several flavors of BSD, and macOS. Unlike the majority of 3D-modeling software packages, which are point-and-click, the OpenSCAD website describes the project as "something like a 3D compiler", where models are generated using a scripting language. It is a unique way of approaching CAD and has many real-world applications that may be of interest.
Like the FreeCAD project we have previously looked at, OpenSCAD can be used to build 3D models suitable for everything from 3D printing to CNC machining. Unlike FreeCAD, however, the only way to create models is by programming them in the OpenSCAD scripting language. Once programmed, models produced by OpenSCAD can be exported in a variety of formats, notably including STL, SVG, and PNG.
The Qt-based interface provided by OpenSCAD is fairly simple: one side provides a code editor for writing scripts, while the other provides a view of the generated model and a console for messages. Making a model starts with coding modules that generate primitives like cylinders and cubes; those primitives are then manipulated and combined in code to build more complicated objects. Notably, OpenSCAD is a unit-less CAD program; it leaves the units to be decided once the model is exported. This article will focus on the process of creating one of the three components of the following model:
The image above shows OpenSCAD rendering a model for a 3D-printable blast gate, which is a device that connects inline to a pipe for a vacuum system to help focus vacuum pressure where it is needed. It is a reproduction I created from scratch of this design by the user Jimbobareno on Thingiverse. It consists of three parts: the flanges (grey), the spacer between them (orange), and the movable gate (green). The model is viable for several different pipe sizes with only minor modifications, but those modifications are more complicated than merely scaling the model; doing so would change not only the pipe size, but also the holes for the bolts that ultimately hold the parts together. To make this model adaptable, it needs to be able to scale certain aspects while leaving others alone; this makes it a great candidate for OpenSCAD. In fact, the entire model can be built using an OpenSCAD script based on three variables (outer pipe diameter, screw hole size, and wall thickness) to produce a 3D-printable model of any size blast gate.
Writing OpenSCAD code is often done using the editor provided in the interface, but those who want to use an external editor can do so. As an example of how to program a model using OpenSCAD, we will focus on the flange of the blast gate. Here is the code to generate one:
module flange(pipeDiameter, pipeWallsize, fastenerSize) {
    $_outerPipeRadius = (pipeDiameter / 2) + pipeWallsize;
    $_flangeSize = ($_outerPipeRadius + 20) * 2;
    difference() {
        union() {
            cube([$_flangeSize, $_flangeSize, 4], center = true);
            translate([0, 0, 2 - 0.001])
                cylinder(r = $_outerPipeRadius, h = 26);
        }
        translate([0, 0, -4])
            cylinder(r = $pipeDiameter / 2, h = 32);
        translate([-($_flangeSize / 2) + 8, -($_flangeSize / 2) + 8, -3])
            cylinder(r = fastenerSize / 2, h = 6);
        translate([($_flangeSize / 2) - 8, ($_flangeSize / 2) - 8, -3])
            cylinder(r = fastenerSize / 2, h = 6);
        translate([($_flangeSize / 2) - 8, -($_flangeSize / 2) + 8, -3])
            cylinder(r = fastenerSize / 2, h = 6);
        translate([-($_flangeSize / 2) + 8, ($_flangeSize / 2) - 8, -3])
            cylinder(r = fastenerSize / 2, h = 6);
    }
}
The flange module accepts three parameters: pipeDiameter, which is the outer diameter of the pipe to be inserted; pipeWallsize, which is the wall thickness of the pipe fitting; and fastenerSize, which is the diameter of the hole for the bolt. When reading OpenSCAD code, it is easiest to read the program "inside out." Consider this segment of the script from above:
union() {
    cube([$_flangeSize, $_flangeSize, 4], center = true);
    translate([0, 0, 2 - 0.001])
        cylinder(r = $_outerPipeRadius, h = 26);
}
This snippet defines a cube (rectangular solid) by calling the cube function with a width and height defined by the $_flangeSize variable calculated at the start of the module, and a fixed depth of four. The last parameter, center, indicates that the cube should render centered at the origin of the workspace. Following the cube function, we have a translate statement and its target, the cylinder statement; these operations can be considered a single action of creating a cylinder with a radius of $_outerPipeRadius and a height of 26, then moving that cylinder to the x,y,z coordinates of (0, 0, 1.999) in the workspace. $_outerPipeRadius is the radius of the desired pipe (pipeDiameter), plus the thickness of the wall specified by pipeWallsize. These statements place the cylinder centered on top of the cube (and 0.001 units "inside" of it). Shown below is a rendering of the model for a pipeDiameter of 61:
At this point in the process, the cylinder and cube are independent, overlapping solids to OpenSCAD. To join them together, we wrap the code with a union operation, which takes all of the solids defined within it and combines them into a single solid in the model. The union operation is why the cylinder was placed 0.001 units "inside" of the cube: without an overlap, OpenSCAD's union operation won't join them.
The process of creating primitive solids programmatically, placing them with other solids, and then joining them together into a new solid can also happen in reverse. Looking at the full code for the flange module, readers will note the difference operation, which works in the opposite fashion to union by subtracting one solid from another. For example, the difference operation contains the following immediately after the union:
translate([0, 0, -4])
    cylinder(r = $pipeDiameter / 2, h = 32);
This operation creates a second cylinder at the origin that is the exact size of the desired pipe, which will always be smaller in diameter than the one generated in the union (by pipeWallsize units). This new cylinder's height is arbitrary; what is essential is that it be taller than the model's height. Combining the union and the new cylinder within the difference operation causes the new cylinder to be subtracted from the union, creating a hole. An animated example of this process for a pipeWallsize of 1.5 units may help visualize the operation.
The same difference process is then repeated for each of the bolt holes created on the flange: create a cylinder of correct diameter and sufficient height, position it where the hole should be, and then the difference operation subtracts it from the solid created by the union code block.
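Since the four bolt holes differ only in the sign of their x and y offsets, they could also be generated with a loop rather than four nearly identical translate/cylinder pairs. This is a sketch, not the article's code; it assumes it is placed inside the difference() block of the flange module above, where $_flangeSize and fastenerSize are in scope:

```
// One hole per corner: iterate over the sign of the x and y offsets.
// OpenSCAD's for() with two assignments loops over the cross product.
for (x = [-1, 1], y = [-1, 1])
    translate([x * (($_flangeSize / 2) - 8),
               y * (($_flangeSize / 2) - 8), -3])
        cylinder(r = fastenerSize / 2, h = 6);
```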
The full script for the blast gate example used in this article is available to readers under an MIT license. It includes the code used to construct the flange shown here, the spacer, and the gate. It additionally provides examples of scripting language features not covered in this article.
One of the most interesting abilities of OpenSCAD is a feature called the Customizer, which parses an OpenSCAD script and generates a "customization panel" that allows others to customize the model without having to do any programming. OpenSCAD provides a special syntax for variables placed at the top of a script, which is then translated into various form elements used by the Customizer. For the blast gate model, our customizable values consist of four variables at the top of the script:
// The outer diameter of the pipe being used.
$pipeDiameter = 61;
// The wall thickness of the pipe portion of the flange
$pipeWallsize = 1.5; // [1.5:0.1:10]
// The diameter of the fastener holes
$fastenerSize = 3.7; // [1:0.1:14]
// The part of the blast gate to render
$render = "all"; // [all, flange, spacer, gate]
By using the Customizer tool's comment-based syntax, our model can be quickly (and correctly) customized via the user interface. Above, the comments directly before each variable provide a description, and the inline ones define the acceptable values (i.e. any value from 1.5 to 10 with a step of 0.1 for $pipeWallsize). For variables that do not provide such an indication, a generic form element is created based on the data type of the variable. Here is the interface the Customizer tool generates based on this metadata:
Note that while OpenSCAD provides a graphical user interface, it can also be used from the command line for several tasks. For example, the Customizer can be used in an automated fashion from the command line to export a model based on a text file containing the customization values.
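For instance, a model can be exported non-interactively. The file names and parameter-set name below are invented for illustration, and the exact flags may vary between OpenSCAD versions:

```
# Override a variable directly and export an STL (file names are examples)
openscad -o blastgate.stl -D '$pipeDiameter=50' blastgate.scad

# Or apply a saved Customizer parameter set from a JSON file
openscad -o blastgate.stl -p blastgate.json -P "50mm-pipe" blastgate.scad
```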
There are many different OpenSCAD libraries of open-source parametric objects available online. The OpenSCAD MCAD Library is one example of a community-contributed collection of SCAD scripts to generate various useful components like gears, nuts, bolts, and other standard constructions. Cloning this repository into the OpenSCAD library directory makes them available for import:
use <MCAD/boxes.scad>
roundedBox(size = [10, 20, 30], radius = 3, sidesonly = false);
The above uses the MCAD roundedBox module to generate a box with rounded edges:
OpenSCAD was initially released in 2010, with 19 releases since; the latest stable version was released in May 2019. There does not appear to be a steady cadence of releases by the project's 136 contributors, but a review of the OpenSCAD GitHub page hints at a new release in 2020. In addition to the MCAD library we discussed, various other user-contributed open-source libraries of models for OpenSCAD are also available.
The examples we have looked at in this article have been trivial when compared to what is possible. In addition to the simple operations, OpenSCAD supports conditionals, loops, trigonometric functions, and more. The language reference provides complete documentation of the scripting language's capabilities, and a helpful cheat sheet is also available. If additional assistance is needed, the project provides a mailing list (that is bidirectionally connected to its web-based forum) and the #openscad IRC channel on irc.freenode.net. For programmers looking for a 3D-modeling tool for their next project, OpenSCAD is worth a look.
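As a small illustration of those language features (the module name and dimensions here are invented for the example, not taken from the blast gate model), a circular bolt pattern can be generated with a loop and OpenSCAD's degree-based trigonometric functions:

```
// Illustrative sketch: punch `count` holes evenly spaced around a
// circle of radius `circleRadius`.
module bolt_circle(count = 6, circleRadius = 20, holeRadius = 2) {
    for (i = [0 : count - 1])
        // cos() and sin() in OpenSCAD take degrees, not radians
        translate([circleRadius * cos(i * 360 / count),
                   circleRadius * sin(i * 360 / count), -1])
            cylinder(r = holeRadius, h = 6);
}

difference() {
    cylinder(r = 30, h = 4);   // a simple disc...
    bolt_circle();             // ...minus six evenly spaced holes
}
```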
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Briefs: X.Org server security update; Kernel entry code fuzzing; Quotes; ...
- Announcements: Newsletters; conferences; security updates; kernel patches; ...