Leading items
Welcome to the LWN.net Weekly Edition for August 23, 2018
This edition contains the following feature content:
- Redis modules and the Commons Clause: Redis Labs responds to "predatory" use of its software by going non-free.
- The sidechannel LSM: a proposed Linux Security Module aimed at efficiently closing off cache-based side channels.
- Batch processing of network packets: a (with hindsight) obvious change with impressive performance benefits.
- The first half of the 4.19 merge window: what has been merged for the next kernel release.
- 3D printing with Atelier: a report from Akademy on a free system for the control of 3D printers.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Redis modules and the Commons Clause
The "Commons Clause", which is a condition that can be added to an open-source license, has been around for a few months, but its adoption by Redis Labs has some parts of the community in something of an uproar. At its core, using the clause is meant to ensure that those who are "selling" Redis modules (or simply selling access to them in the cloud) are prohibited from doing so—at least without a separate, presumably costly, license from Redis Labs. The clause effectively tries to implement a "no commercial use" restriction, though it is a bit more complicated than that. No commercial use licenses are not new—the "open core" business model is a more recent cousin, for example—but they have generally run aground on a simple question: "what is commercial use?"
Redis is a popular in-memory database cache that is often used by web applications. Various pieces of it are licensed differently; the "Redis core" is under the BSD license, some modules are under either Apache v2.0 or MIT, and a handful of modules that Redis Labs created are under Apache v2.0, now with Commons Clause attached. Cloud services (e.g. Amazon AWS, Microsoft Azure, Google Compute Engine, and other smaller players) provide Redis and its modules to their customers and, naturally, charge for doing so. The "charge" part is what the adoption of the clause is trying to stamp out—at least without paying Redis Labs.
The clause itself is admirably brief, just three paragraphs that are meant to be tacked on as an additional restriction to a permissive license, such as the Apache License 2.0. It overrides the license text to prohibit selling the software and defines what it means by "sell":
One can immediately see some "wiggle room" that will have to be evaluated by lawyers (and, eventually, judges) to define various pieces of that sentence. "Value derives", "entirely or substantially", and even "from the functionality" are all open to interpretation. The Redis Labs announcement tries to make it clear what is being targeted:
Redis is an example of this paradigm. Today, most cloud providers offer Redis as a managed service over their infrastructure and enjoy huge income from software that was not developed by them. Redis' permissive BSD open source license allows them to do so legally, but this must be changed. Redis Labs is leading and financing the development of open source Redis and deserves to enjoy the fruits of these efforts. Consequently, we decided to add Commons Clause to certain components of open source Redis. Cloud providers will no longer be able to use these components as part of their Redis-as-a-Service offerings, but all other users will be unaffected by this change.
That provides some of the reasoning behind the move, but it may make others who are outside of the target zone leery of using the Redis modules that are now covered by the clause. The "this must be changed" wording about the BSD license may also make some worry about the license for the Redis core (which remains under the BSD without the addition of the clause) down the road. There is a contributor license agreement [PDF] for at least some contributions to the project, which might allow relicensing if Redis Labs—or some company that buys it—decides that is in its interest. It should be noted that the agreement allows Redis Labs to make money on any contributions made under it, which is the norm for such things but might be seen as a tad hypocritical.
The Redis Labs page clearly disclaims the possibility of changing the license, though that assurance may not be ironclad:
So it is not quite a "no commercial use" clause, at least as interpreted by Redis Labs, but that brings problems of its own and may provide ways for the cloud providers to evade the clause entirely. As both the Commons Clause and Redis Labs pages clearly note, adding the clause to an open-source license does not result in something that falls under the Open Source Definition. That means that the Redis modules in question are no longer open source, thus Linux distributions and others may not be willing or able to distribute them any longer. The issue has already been raised for Fedora; Debian is looking into it as well and others will likely follow. That alone shows some of the collateral damage that can occur when licenses are changed this way.
Redis Labs is not alone in using the clause for licenses; other projects are adopting it as well. Neo4j Enterprise has added the clause to the AGPLv3 and Dgraph has switched from AGPLv3 to Apache v2.0 with the Commons Clause, which it called a move to a "liberal license". The clause is addressing a real problem, but the cure could be worse than the disease.
Permissively licensed code (e.g. BSD or Apache v2.0) is subject to the "abuse" that is being claimed—in fact, that is much of the point of those licenses. Permissive licensing means that the code can be changed and distributed without making any of those changes public. But copyleft-style licensing wouldn't necessarily help the problem that Redis Labs is complaining about. Large (or small) cloud providers probably do not make substantive changes to the Redis modules in question—if they do, it seems likely they would be unfazed by having to release the changes if they were required to. The AGPL was meant to help with this "as a service" loophole in the GPL, but it is not meant to stop people from running the software any way they want—quite the opposite.
And that is really the crux of the matter. Being a part of the open-source world means accepting some things, including that code you release under those terms might be used in ways you don't like. It may also be used in ways that make money for someone else. It is part and parcel of what open source is all about.
There was a time when licenses with no commercial use clauses were relatively common. Before the turn of the century or thereabouts, lots of software was distributed under those terms (e.g. Linux 0.01, the Majordomo mailing-list manager). That was an accepted practice, mostly because people weren't really paying that much attention to the terms under which all of this free (as in beer) software was being made available. That changed along the way; as it did, the perils of no commercial use clauses became more apparent.
Redis Labs has tried to clarify what it means by selling the modules and others have tried to do so with their licenses as well (e.g. the NonCommercial interpretation from Creative Commons). But a restriction of that sort, with all of its various gray areas, rarely actually hits the target sought. It is the smaller cloud providers that will be affected by this move more than Amazon, Google, or Microsoft will be. It will also split off distributions and users that are not willing to get involved with non-open-source software. Restricting the use cases for a piece of software just makes it harder to actually use that software because no one truly knows which uses are blessed and which aren't.
One of the reasons that Redis is open source is presumably for attracting a community of users, developers, and others who will help broaden the reach of the project. Redis Labs appears to want to have its cake and eat it too. Perhaps this move will give the company some time to find a way to appease its investors, but it is not a community move—and the community has noticed. A look at threads on Hacker News or Reddit will show that many are not pleased with this change. Not surprisingly, longtime free-software advocate Bradley M. Kuhn has also criticized the clause.
The clause was written by Heather Meeker, who has been involved in multiple open-source disputes (on both sides) along the way. It is being pushed by FOSSA, which is a company that provides license-compliance tools. While the problem of financially supporting open-source development is real, and that is what FOSSA/Commons Clause are trying to promote, doing so with a clause restricting the scope of open-source licenses is a non-starter.
Buried in the text of the FAQ at the Commons Clause site may be a clue to what the real goal is. In two places, it mentions conversations that it is hoping will start:
[...]
The Commons Clause was drafted by a group of developers behind many of the world's most popular open source projects who feel pressure from rapidly-developing projects and ecosystems. Honestly, we're not entirely sure what the best long-term solution is. However, we need to start a conversation on what we can do to meet the financial needs of commercial open source projects and the communities behind them.
These conversations may truly be the goal, though it may be more difficult to have that first conversation with those that have been labeled as "predatory".
The latter conversation is welcome, though it is hard to see licensing as much of a tool to use in pursuit of that goal. Smaller open-source projects (or even critical infrastructure projects like OpenSSL prior to Heartbleed) often struggle to make ends meet, which is a shame. Finding better ways to fund open-source development (for projects small and large, company-backed or not) would be fabulous; changing licenses in a way that violates one of the core tenets of open-source software seems like the wrong way to go about it.
The sidechannel LSM
Side-channel attacks are a reasonably well-known technique to exfiltrate information across security boundaries. Until relatively recently, concerns about these types of attacks were mostly confined to cryptographic operations, where the target was to extract secrets by observing some side channel. But with the advent of Spectre, speculative execution provides a new way to exploit side channels. A new Linux Security Module (LSM) is meant to help determine where a side channel might provide secrets to an attacker, so that a speculative-execution barrier operation can be performed.
In current kernels, a context switch from one process to another often necessitates a flush of the translation lookaside buffer (TLB) contents, which is done in switch_mm_irqs_off(). For x86, after the Spectre v2 mitigations, that function calls indirect_branch_prediction_barrier() when switching away from a process that is not allowed to core dump (i.e. does not have SUID_DUMP_USER set). The barrier (which is known as IBPB) is an expensive operation, so it is only done for "sensitive" processes that have turned off core dumps (e.g. GPG). Core dumps of a process can contain secrets of various sorts, such as keys or passwords.
However, there may be other sensitive processes that do not turn off core dumps but are still susceptible to this side channel, so a patch set from Casey Schaufler would allow LSMs to offer an opinion on whether the IBPB should be done. It adds a new LSM hook (task_safe_sidechannel()) that will return zero if there are no known side-channel worries or -EACCES if the LSM considers the context switch to be potentially sensitive. The patch set provides an LSM to check some security attributes of tasks and also adds checking to the SELinux and Smack LSMs so that they can report whether the security attributes they maintain indicate a potential side-channel concern.
The SELinux and Smack changes add an entry for the new hook. Each looks at the current task and the task to be switched to and renders a verdict on the side-channel safety of the switch. The SELinux hook considers the switch to be safe against side channels if the current task has FILE__READ access to the new task. For Smack, it is similar: "Smack considers its private task data safe if the current task has read access to the passed task."
The bulk of the patch set, though, is the new "sidechannel" LSM. It is enabled with the SECURITY_SIDECHANNEL kernel configuration option, but requires other options in order to actually do any checking. One of them assumes that all task switches are subject to side channels (SECURITY_SIDECHANNEL_ALWAYS), so it simply always returns -EACCES. The other three enable various checks:
- SECURITY_SIDECHANNEL_UIDS: checks if the tasks have different effective UIDs and reports side-channel susceptibility if so; this could have a high performance impact since most context switches are between tasks with different effective UIDs.
- SECURITY_SIDECHANNEL_CAPABILITIES: checks if the tasks have different sets of capabilities, which may mean the new task would be subject to side-channel attacks.
- SECURITY_SIDECHANNEL_NAMESPACES: checks if the tasks live in different user, PID, or control-group namespaces and returns -EACCES if so.
The comments on the patch set have been relatively light. Jann Horn has made several suggestions, most of which Schaufler has adopted; the patch set is now up to v3. One comment that has not been addressed in the patch set is Horn's request that the security checks look at the previous non-kernel task when switching away from the kernel. He went into more detail in a posting on v2 of the patch set:
I very much dislike the idea of adding a mitigation with a known bypass technique to the kernel.
The test in switch_mm_irqs_off() to decide whether to do the IBPB looks at the task structure; if it is a kernel thread, thus does not have an mm pointer to a process address space, the rest of the checks are shorted out. Schaufler didn't change that, though he did "touch" it by adding the new LSM hook call, so Horn's complaint is really about the existing test. Horn suggested keeping a copy of the metadata for the most recent non-kernel task in order to do that test, but Schaufler has not made that change; his argument was that those who are concerned about that kind of attack should probably simply enable the "always" option.
Schaufler was also concerned with finding a good mechanism to save the task metadata. Horn offered some suggestions, but noted that the obvious way to do so might not be favored in a hot path like context switching: "The obvious solution would be to take a refcounted reference on the old task's objective creds, but you probably want to avoid the resulting cache line bouncing..."
It certainly seems reasonable for the LSMs to get involved in the decision on whether a process might be susceptible to a side-channel attack from another process. The current "dumpable" test is a simple one, but likely ignores many sensitive processes. But context switching is an important function of the kernel and one that should be done as quickly as possible. Adding complexity there may not be particularly welcome, but there have been no complaints so far. Speculative execution is done as a performance optimization but clearly we are having to give some of that improvement back to work around the shortcomings of its implementation in some CPUs.
Batch processing of network packets
It has been understood for years that kernel performance can be improved by doing things in batches. Whether the task is freeing memory pages, initializing data structures, or performing I/O, things go faster if the work is done on many objects at once; many kernel subsystems have been reworked to take advantage of the efficiency of batching. It turns out, though, that there was a piece of relatively low-hanging fruit at the core of the kernel's network stack. The 4.19 kernel will feature some work increasing the batching of packet processing, resulting in some impressive performance improvements.
Once upon a time, network interfaces would interrupt the processor every time a packet was received. That may have worked well with the kind of network interfaces we had in the 1990s, but an interface that worked that way now would be generating many thousands of interrupts per second. That, in turn, would swamp the CPU and prevent any work from getting done. The response to this problem in network circles was the adoption of an API called "NAPI" (for "new API") during the long 2.5 development series.
Old-timers on the net — like your editor — used to have their computers beep at them every time an email arrived. Most of us stopped doing that long ago; the beeps were nonstop, and things reached a point where we simply knew there would be email waiting anytime we got over our dread and opened a mail client. NAPI follows a similar approach: rather than poke the processor when packets arrive, the interface just lets them accumulate. The kernel will then poll the interface occasionally, secure in the knowledge that there will always be packets waiting to be processed. Those packets are then processed in a batch, with the batch size limited by the "weight" assigned to the interface.
At this level, we can see that batching of packet processing was added some fifteen years ago. But that is where the batching stops; when the NAPI poll happens, the device driver will pass each packet into the network stack with a call to netif_receive_skb(). From that point on, each packet is handled independently, with no further batching. In retrospect, with all of the effort that has gone into streamlining packet processing, one might wonder why that old API was never revisited, but that is how things often go in the real world.
Eventually, though, somebody usually notices an issue like that; in this case, that somebody was Edward Cree, who put together a patch set changing how low-level packet reception works. The first step was to supplement netif_receive_skb() with a batched version that reads, in its entirety:
void netif_receive_skb_list(struct list_head *head)
{
	struct sk_buff *skb, *next;

	list_for_each_entry_safe(skb, next, head, list)
		netif_receive_skb(skb);
}
Now, rather than calling netif_receive_skb() for every incoming packet, a driver can make a list out of a batch of packets and pass them upward with a single call. Not much has changed at this point, but even this tweak improves performance by quite a bit, as it turns out.
The rest of the patch series is occupied with pushing the batching further up the network stack, so that packets can be passed in lists as far as possible. That gets a little trickier at the higher levels, since some packets have to be handled in fundamentally different ways. For example, some may have been allocated from the system's memory reserves (part of a mechanism to avoid deadlocks on network block devices); those require special handling. When such situations are encountered, the list of packets must be split into smaller lists, but the batching is preserved as far as possible.
The benchmark results (included in this merge commit) are interesting. In one test case, using a single receive queue, a kernel with these patches (and a suitably patched driver) showed a 4% improvement in packet-processing speed. That would certainly justify the addition of this bit of infrastructure, but it turns out that this number is the worst case that Cree could find. In general, just adding and using netif_receive_skb_list() improves performance by 10%, and the performance improvement with the entire patch series centers around 25%. One test showed a 35% speed improvement. In an era where developers have sweated mightily for much smaller gains, this is an impressive performance improvement.
One might well wonder why even the simplest batching shown above can improve things by so much. It mostly comes down to cache behavior. As Cree notes in the patch introduction, the processor's instruction cache is not large enough to hold the entire packet-processing pipeline. A device driver will warm the cache with its own code, but then the processing of a single packet pushes that code out of cache, and the driver must start cold with the next one. Just eliminating that bit of cache contention by putting the packets into a list before handing them to the network stack thus improves things considerably; creating the same sort of cache efficiency through the network stack improves things even more.
Networking also uses a lot of indirect function calls. These calls were never cheap, but the addition of retpolines for Spectre mitigation has made things worse. Batching replaces a bunch of per-packet indirect calls with single per-list calls, reducing that overhead.
There is a problem that often comes with throughput-oriented optimizations, and which can often be seen with batching: an increase in latencies. In the networking case, though, that cost was already paid years ago when NAPI was added. The new batching works on bunches of packets that have already been accumulated at the NAPI poll time and doesn't really add any further delays. So it's an almost free improvement from that point of view.
This code has been merged for the 4.19 kernel, so it will be generally available when the release happens. As of this writing, only the Solarflare network interfaces use the new netif_receive_skb_list() API. The necessary changes at the driver level are quite small, though, so it would be surprising if other drivers were not updated in the relatively near future, possibly even before the 4.19 release. This particular fruit is hanging too low to go unpicked for long.
The first half of the 4.19 merge window
As of this writing, Linus Torvalds has pulled just over 7,600 non-merge changesets into the mainline repository for the 4.19 development cycle. 4.19 thus seems to be off to a faster-than-usual start, perhaps because the one-week delay in the opening of the merge window gave subsystem maintainers a bit more time to get ready. There is, as usual, a lot of interesting new code finding its way into the kernel, along with the usual stream of fixes and cleanups.
Core kernel
- The scheduler's load-tracking subsystem has been enhanced with an improved awareness of the amount of time taken by realtime processes, deadline processes, and interrupt handling; this information is used to select more appropriate operating frequencies for the system's processors.
- The "jprobes" tracing mechanism has been removed from the kernel; it has long been superseded by the ftrace infrastructure. Those who are curious about what jprobes did can find a description in this 2005 article.
- The asynchronous I/O polling interface has been added again, after having been reverted out of 4.18. The internal implementation has changed into a more Linus-friendly form, so this feature should actually make it into the release this time around.
Architecture-specific
- Support for Intel's "cache pseudo locking" feature has been added. With this feature, a portion of a processor's memory cache can be populated with data of interest, then locked against further changes. The result is consistent low-latency read access to the locked memory range. See this commit for documentation on this feature.
- 32-bit x86 systems finally have kernel page-table isolation support.
- A large set of mitigations for the recently disclosed L1TF vulnerability has been merged.
- The arm64 architecture has gained support for restartable sequences and the "stackleak" GCC plugin.
Filesystems and block layer
- The XFS filesystem has removed the barrier and nobarrier mount options. Those options have not actually done anything for years; hopefully everybody has removed them from their fstab files by now.
- The block I/O latency controller has been added; it allows administrators to provide minimum I/O latency guarantees to specific control groups.
- The asynchronous bsg (SCSI generic) interface has been removed due to persistent and unfixable design issues.
Hardware support
- Audio: Realtek RT5682 codecs, Everest ES7241 codecs, Amlogic AXG sound cards, and Qualcomm WCD9335 codecs.
- Clock: Renesas R9A06G032 clock controllers, Maxim 9485 programmable clock generators, Meson AXG audio clock controllers, Actions Semi S700 SoC clock controllers, and Qualcomm SDM845 display clock controllers.
- Graphics: Ilitek ILI9881C-based panels, Ilitek ILI9341 display panels, and Qualcomm SDM845 display processing units.
- Hardware monitoring: Mellanox fan controllers, Maxim MAX34451 voltage/current monitors, and Nuvoton NPCM750 PWM and fan controllers.
- Media: Dongwoon DW9807 lens voice coils, Asahi Kasei Microdevices AK7375 lens voice coils, and Socionext MN88443x demodulators.
- Network: Vitesse VSC7385/7388/7395/7398 switches, Realtek SMI Ethernet switches, and Theobroma Systems UCAN interfaces.
- Pin control: Intel Ice Lake pin controllers, NXP IMX8MQ pin controllers, and Synaptics as370 pin controllers.
- Miscellaneous: NVIDIA Tegra NAND flash controllers, Socionext UniPhier SPI controllers, Qualcomm last-level cache controllers, Qualcomm RPMh regulators, Hisilicon SEC crypto block cipher accelerators, Mediatek MT7621 GPIO controllers, and MediaTek CMDQ mailbox controllers.
Networking
- The time-based packet transmission patch set has been merged. This feature allows a program to schedule data for transmission at some future time.
- The CAKE queuing discipline, which works to overcome bufferbloat and other problems associated with home network links, has been merged.
- The new "skbprio" queuing discipline can schedule packets according to an internal priority field. This feature is naturally undocumented; in the commit adding it the author says: "Skbprio was conceived as a solution for denial-of-service defenses that need to route packets with different priorities as a means to overcome DoS attacks".
- Devices that can offload the receive-side processing of TLS-encrypted connections are now supported.
Security-related
- There is now a kernel configuration option that can be used to make the system fully initialize the entropy pool from the hardware random-number generator at boot time. This should allow for better early-boot random-number generation at the cost of placing a bit of trust in the CPU manufacturer's hardware.
Internal kernel changes
- The simple wait queue API has been changed by renaming a number of functions to reflect the fact that it only implements exclusive waits. So prepare_to_swait() becomes prepare_to_swait_exclusive(), swake_up() becomes swake_up_one(), and so on.
- There is a new initiative to translate kernel documentation into Italian, with an initial set of translations merged for 4.19.
If the usual schedule holds, the 4.19 merge window can be expected to remain open until August 26. There are still quite a few trees to be pulled, so one can expect a number of interesting changes will still find their way into this merge window. The final 4.19 release can be expected in mid-October.
3D printing with Atelier
During this year's Akademy conference, Lays Rodrigues introduced Atelier, a cross-platform, open-source system that allows users to control their 3D printers. As she stated in her talk abstract, it is "a project with a goal to make the 3D printing world a better place".
Akademy is the KDE community's annual
conference. This year it took place in Vienna and the program included a
number of hardware-related talks as part of the conference portion held
during the weekend of August 11 and 12.
When you get a 3D printer, she began, the first interface you can access is the set of menus on the printer's own screen; see, for example, the screen on the left (taken from Rodrigues's slides [PDF]). They can be used to perform basic operations and to check how the printing operation is going, but there are better ways to control the device, Rodrigues explained.
Most of the technology related to 3D printing is open source. It starts with G-Code files that describe the movements and actions of the printer using a kind of programming language; examples include where to move the head, at what speed, and what temperature to use. Another important part of the ecosystem is the firmware running in the printer itself, most of which is open source too. There are printing host solutions, but "the most popular is not open source", she said. This referred to Repetier-Host from the RepRap project, which started the 3D-printing movement. Repetier-Host started as an open-source system, but became closed source in 2014.
The goal set by Rodrigues and her team was to fill the gap of missing open-source 3D printer host software. Their work consists of two modules: AtCore is the core library and Atelier is the user interface. Both of them are open source and can be downloaded, compiled, and tested right now.
The AtCore library's function is to provide an abstraction for the serial communication with the printer and control of it. It provides a generic layer that is independent from the user interface; AtCore can thus work with any interface, "including QML", she added. AtCore uses pure C++ with Qt for performance reasons. Rodrigues gave memory usage when printing as an example: Atelier requires 200MB of memory, while other, similar programs may require 2GB. AtCore supports most open-source 3D-printer firmware, using a plugin architecture to handle the differences between firmware implementations. At one point, Rodrigues showed the list of supported printer firmware, which corresponds to the list of supported printer models.
The second part of the team's work is the "test client": Atelier. However, it is a full 3D host system, not just a test program. It uses the KDE libraries in addition to Qt — and the AtCore library, of course. Rodrigues ran a demonstration of a number of Atelier features. The configuration she used included a laptop running Atelier and a small embedded system with the printer firmware. The demo included all stages of the printing process.
Working with a 3D printer starts with connecting to the printer itself.
Rodrigues highlighted that Atelier is the first printer host that can
connect to multiple printers at the same time.
Atelier includes a preview mode that displays, in 3D, the object that is to be printed; the design can be seen in detail. Rodrigues said that this view requires more work, without listing the specific improvements her team plans to make. This feature is based on Qt 3D, she said in response to an audience question.
Monitoring the printing process is the second important feature. There are, for example, profiles for different materials. Temperature control is essential "to not burn your house", Rodrigues explained; a badly controlled 3D printer can cause damage because the temperatures involved are high. Atelier shows graphs of the main parameters over time.
Today, the basic controls of the printer are done. If a user has custom firmware, "we can support it too", because the host software is open source. She suggested that the team could support other printers as well if it had access to the hardware.
While Atelier can already be used successfully to control a 3D printer, it still requires some tweaks, Rodrigues said. The team had wanted to launch it officially at Akademy; now she hopes to do so later this year, without giving a specific date. The Atelier team is currently in contact with companies in Brazil that do not want to pay a license fee for each 3D printer they ship. Their feedback is that they want to control multiple printers from the same host, and to do so remotely. That is the work her team will focus on next.
The project is two years old. The team started it to develop an open-source solution for 3D printing; Atelier currently runs on Linux, Mac OS X, and Windows. The Windows port was possible thanks to help from users, Rodrigues added. Atelier adapts its look to the platform it is running on. Most people use it on Windows, Rodrigues said. "And I can't force them" to change systems, she added. For Linux hosts, Atelier is distributed as an AppImage to allow easy installation. Source code is available from the KDE Git repositories and from the GitHub mirrors of Atelier and AtCore, for those who prefer to compile on their own.
The project currently has more than 100 binary downloads, she concluded, mostly on Windows, followed by the AppImage, with OS X last. The AtCore and Atelier source repositories each count several hundred commits. The team working on Atelier currently consists of Rodrigues and three other developers.
A long session of questions followed in the nearly full room. The first question asked about how many printers are supported. Rodrigues explained that most 3D printers have open-source firmware. In practice, that means that they are all supported. Printers with proprietary, closed-source firmware do exist, but they are rare — and those are currently not supported. She added that they could be supported if the vendors donated the printer or paid them to add the support. Then, to clarify, she said that most common printers "you buy in China" are open source and will work with Atelier.
The next person was curious about the camera mode that she enabled for a moment during the demo session. Rodrigues explained that its intended usage is to watch the printer remotely as it prints; it allows you to be sure that everything is working correctly. She also explained that the industry does not care about a desktop version of the printer host software; instead, it wants to drive printers remotely from small, embedded systems.
The last question was about competition to Atelier. Rodrigues explained that the main competing program was open source before, but it is closed source now. She said that she is "not making too much fuss" about Atelier right now. However, her team has contacts with industry and they hope to see Atelier used in the industry in Brazil.
Page editor: Jonathan Corbet
