Leading items
Finding inactive openSUSE members
Projects organize their governance in different ways; often that governance depends on a definition, or formal recognition, of a "member" of the project. Members can generally vote on the membership of boards and committees, sometimes on technical or policy questions, and on changes to the governance itself. Typically, membership is granted to those who are active in the project, but members can become less active (or completely inactive) over time. What to do about inactive members is a question that the openSUSE project has been struggling with recently.
The openSUSE Members wiki page provides information on how to become a member of the project and what the benefits are. Continued and substantial contributions to the project are the criteria for membership; the benefits include an "@opensuse.org" email address, an IRC cloak, some blogging privileges, eligibility for the board, and voting rights. There is a proposed entry on that page about how to maintain membership, but the only way listed to lose membership status is to resign or be kicked out by the board for repeated violations of the guiding principles.
Some would like to establish a way to remove inactive members from the project. It has been discussed on the opensuse-project mailing list for some time—starting back in June in this iteration—but there have been earlier discussions as well. As a result of those mid-year discussions, Jean-Daniel Dodin (also known as "jdd") proposed a rather complicated manual method to scan for project activity, narrowing down which openSUSE members are active and which might be contacted to better determine their status. In response, there were suggestions of ways to better automate measurement of the activity level of members, but there were also complaints about the whole idea.
Cornelius Schumacher took exception to expending any real effort in trying to find active versus inactive members. He called it "very creepy" to scan for members' activity, which "has more potential to destroy community than to build community". One of the attributes of a volunteer community is that people can drift in and out of active participation without being removed from the community, he said.
Several followed up with support for Schumacher's statement, but others are concerned that having a large pool of voters that do not vote makes it appear that there is less support for proposals that pass than there really is. Richard Brown, who is the chair of the openSUSE Board, noted that any changes to governance or membership rules would require a vote of the members:
We don't want a situation, as we've had before, where the results are cast doubt upon due to low turnout.
But Schumacher remained unconvinced. "Inactive people don't vote and, in general, aren't affected by the outcome of the votes," he said, so the number of inactive members doesn't really matter. Board member Robert Schweikert called that "a bit too simplistic"; since the inactive members could vote, they might have an undue influence given their status. In addition, without knowing how many inactive members there are, there is no way to distinguish between active members who choose not to vote and those who are inactive and didn't vote.
Schumacher thought the idea of inactive members voting was purely a theoretical concern. He reiterated his belief that it is much better to spend time and energy on the active people and delivering openSUSE to its users. But Dodin pointed out that it would be useful to know why members become inactive in case their inactivity points to problems that the project could try to address. Schumacher agreed with that point.
The project board exists to "provide guidance and support existing governance structures, but shouldn't direct or control development", Schumacher said, quoting the guiding principles. The board, in other words, does not really "influence the direction of the project", so voting for the board is not as consequential as some would have it. Both Brown and Schweikert, who are on the board, disagreed with that, however.
Brown stated: "The Board today is involved in far more decisions and influence of the Project than the Boards when those Principles were laid out". He also noted that the boards of KDE e.V. and the GNOME Foundation are both elected from the members of those projects; both organizations require a quorum of member votes and have requirements for members to maintain their status. Those are all things that might make sense for openSUSE as well, he said, but for now the focus should be on getting a level set on where things stand.
Schweikert also raised another concern: a quorum of sorts is required for calling early elections for the board. He put that figure at 20% of the membership (correcting the number he had first cited in a followup post), but whatever the exact threshold, the point is still valid. At some point, the number of inactive members may reach a level where it is impossible to change the board in an early election, which certainly seems suboptimal.
The thrust of Schumacher's argument is that the project should not spend time and energy on more formal governance structures and the like, but should instead focus on delivering openSUSE Leap and Tumbleweed. Others have a somewhat different, but not entirely incompatible, view. Overall, the project has gone through some changes lately, so it is not really surprising that there might be some differences of opinion on some of the steps moving forward. The good news is that those differences have been discussed openly and without rancor—which bodes well for everything resolving amicably. So far, at least, what that resolution might be is up in the air.
Developing an inexpensive, visible-light network
Most of us are accustomed to networks that run over radio links or via electrical signals on a physical wire, but those are not the only options. At Embedded Linux Conference Europe 2015 in Dublin, Ireland, Stefan Schmid presented a new networking technology, developed at Disney Research, that operates in the visible light spectrum and uses commodity hardware. The effort is still considered a research project, but once it matures, it may open the door to a variety of new short-range networking uses that will be well-served by staying out of the increasingly crowded unlicensed radio spectrum bands.
For background, Schmid reminded the audience that there are already other optical network technologies, including fiber optics and the direct laser signaling used between some satellites. There are even IP-based network stacks using visible or infrared light, such as Li-Fi. All have advantages—such as requiring no spectrum licensing and allowing users to see when an attacker attempts to intercept or man-in-the-middle a transmission. Disney's new technique is more modest than the others in what it attempts; it uses a bit rate that is slow by wired or radio-based signaling standards and focuses on point-to-point links. On the up side, it uses inexpensive components and can be deployed anywhere there is a standard light socket.
Disney first became interested in visible-light networking for use in toys, where it can provide advantages over Bluetooth or WiFi for communications. LEDs are inexpensive compared to radio modules, and many manufacturers (as well as retailers) remain concerned about the health effects that radio exposure may have on young children.
Called visible-light communication (which Disney abbreviates to VLC; no relation to the media player), the technique encodes information in light pulses that are too rapid for the human eye to detect—indeed, the pulses are shorter than the pulse-width modulation (PWM) cycles typically used to set the perceived brightness level of an LED, so the information can be inserted into the PWM pattern without affecting brightness. So far, two hardware options have been explored in the project. The first uses the same directly attached LED to both send and (with some clever trickery) receive signals. The second uses separate LEDs and photosensors. Initially, the team used Arduino boards to control the LEDs, and accessed the Arduino from a Linux box over USB. Later, it began working with ARM-based single-board computers (SBCs) running Linux instead to simplify deployment.
The single-LED method requires the LED to be connected to an analog I/O pin. For transmitting data, the LED is simply pulsed on or off. But most people do not realize that LEDs can be used to detect light as well, Schmid said. They have the same properties as a photosensing diode, but with lower sensitivity. To read from an LED, he explained, one charges the LED and then measures the subsequent voltage drop-off. If light strikes the LED, the discharge is accelerated. By polling regularly during the LED's "off" cycle, one can determine whether a bit has been received.
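That charge-and-measure cycle is easier to see in code. Below is a minimal Arduino-style C sketch of the technique; the pin assignments, discharge interval, and detection threshold are illustrative assumptions, not values from Disney's implementation.

    /* One LED as both transmitter and light sensor. Illustrative only. */
    #define LED_ANODE    2    /* digital pin driving the LED's anode    */
    #define LED_CATHODE  A0   /* analog-capable pin on the cathode side */

    void setup(void) {
      pinMode(LED_ANODE, OUTPUT);
    }

    /* Transmit: drive the LED like any other digital output. */
    void led_send(int bit) {
      pinMode(LED_CATHODE, OUTPUT);
      digitalWrite(LED_CATHODE, LOW);
      digitalWrite(LED_ANODE, bit ? HIGH : LOW);
    }

    /* Receive: reverse-charge the LED's junction capacitance, let it
     * discharge briefly, then sample what is left. Incident light
     * accelerates the discharge, so a low reading means "light seen". */
    int led_read(void) {
      digitalWrite(LED_ANODE, LOW);
      pinMode(LED_CATHODE, OUTPUT);
      digitalWrite(LED_CATHODE, HIGH);       /* reverse-bias charge    */

      pinMode(LED_CATHODE, INPUT);           /* float and discharge    */
      delayMicroseconds(100);                /* assumed interval       */

      return analogRead(LED_CATHODE) < 300;  /* assumed threshold      */
    }

    void loop(void) { }                      /* protocol logic goes here */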
Disney's VLC system is based on a physical network layer that can transmit at about 1 Kbps, using differential encoding to send each bit in a separate frame. These frames are transmitted during the PWM "off" cycle, during which the LED would normally be inactive. Each VLC frame is split into seven intervals:
- A sync interval, S1. The sync intervals are used to keep multiple devices in clock synchronization. Both devices are off and measuring light during the sync intervals. If both intervals match the ambient-light measurement, then the devices are in sync. If either interval reads too high, then too much light is arriving at one end of the frame, meaning the two devices have drifted out of synchronization. Re-synchronizing the devices is handled higher up in the network stack, at the MAC layer.
- A guard interval, G, during which the LED is off, in order to avoid light leakage between slots.
- The first half of the differential bit pair, D1.
- A second guard interval.
- The second half of the bit pair, D2.
- Another guard interval.
- The final sync interval, S2.
To send a zero, a transmitter turns on the LED during D1 and turns it off during D2. To send a one, it turns the LED on in D2 and off in D1. This differential encoding allows the receiver to detect the bit regardless of what other ambient light is also present in the room; if D1 is greater than D2, the bit is 0. If the reverse is true, the bit is 1.
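Here is how that slot sequence might look in C, reusing the led_send() helper from the earlier sketch and assuming a hypothetical wait_us() delay routine; dividing the frame into seven equal slots is a rough assumption, not Disney's published timing.

    #include <stdbool.h>

    #define FRAME_US 500             /* nominal frame size            */
    #define SLOT_US  (FRAME_US / 7)  /* assumed: seven equal slots    */

    /* Transmit one bit: S1 G D1 G D2 G S2. A zero puts light in D1,
     * a one puts light in D2; every other slot stays dark. */
    void vlc_send_bit(bool bit) {
      led_send(0);           wait_us(SLOT_US);  /* S1: dark, sync     */
      led_send(0);           wait_us(SLOT_US);  /* guard              */
      led_send(bit ? 0 : 1); wait_us(SLOT_US);  /* D1                 */
      led_send(0);           wait_us(SLOT_US);  /* guard              */
      led_send(bit ? 1 : 0); wait_us(SLOT_US);  /* D2                 */
      led_send(0);           wait_us(SLOT_US);  /* guard              */
      led_send(0);           wait_us(SLOT_US);  /* S2: dark, sync     */
    }

    /* Decode one bit from the two data-slot light levels. Ambient
     * light adds equally to D1 and D2, so only the difference
     * between the two slots matters. */
    bool vlc_decode_bit(int level_d1, int level_d2) {
      return level_d2 > level_d1;   /* brighter D2 means a one */
    }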
The last piece of the puzzle is how to account for the extra illumination produced by the VLC signaling when the LED is supposed to be in its PWM "off" cycle. The solution is to compensate by dimming the LED by the equivalent amount during the subsequent PWM "on" cycle. With the total amount of light produced thus equalized, the LED should appear just as bright, to the unaided eye, as a normal LED.
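In code form, the compensation is a simple subtraction; the function below is an illustrative sketch with made-up names.

    /* Shorten the next PWM "on" phase by however much light the
     * signaling added during the "off" phase, keeping the total
     * light per PWM period constant. */
    unsigned compensated_on_us(unsigned nominal_on_us,
                               unsigned signaling_light_us) {
      if (signaling_light_us >= nominal_on_us)
        return 0;   /* clamp: cannot emit negative light */
      return nominal_on_us - signaling_light_us;
    }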
The Disney team has designed this protocol with a nominal frame size of 500 microseconds. The range is on the order of a few meters, although it depends quite a bit on the equipment used. Both the physical-layer VLC protocol and the MAC layer were implemented entirely in software, though commercial products would presumably call for dedicated hardware NICs far cheaper than Arduinos and Linux SBCs. The MAC layer reuses many of the same techniques used in WiFi (such as Request to Send / Clear to Send) for basic features like collision avoidance. Although VLC-enabled toys do not require a full IP stack, the team used the standard Linux networking stack to develop the protocols and to explore other use cases.
Schmid played video demonstrations of several toys using the single-LED-and-microcontroller implementation, such as a toy car that receives commands from a light bulb and an LED-equipped magic wand that can activate other toys with VLC signals.
But Disney believes the fundamental idea has applications beyond entertainment, Schmid said. The team's more recent work involved modifying standard socket-sized LED light bulbs to embed a Linux-based control board, so that the light bulbs can communicate directly to one another. Such light bulbs could obviously be used in Internet-of-Things "smart home" systems, Schmid said, but they could have other uses, too. For example, they could permit geolocation indoors where GPS is unusable. Each bulb can transmit an ID that can be mapped to a location in the building, with finer granularity than WiFi-based location.
The light bulb implementation necessitates using separate LEDs and photosensors, for two reasons. First, "white" LED bulbs actually use blue LEDs that excite a phosphorescent coating, and that coating interferes with the ability to measure the LED's discharge voltage. Second, the bulbs are usually coated with a diffusion material that shortens the effective range of the VLC signal significantly. For the bulb project, the team initially implemented VLC as an Ethernet device, but Schmid said they were now exploring 6LoWPAN and other ideas. He showed a demonstration video of a series of light bulbs relaying a "turn off" message as one might use in a smart-home system.
In response to the first audience question, Schmid said that he did not expect Disney to release its code implementation as open-source software, since it is an internal research project. That elicited quite a bit of grumbling from the attendees in the rather packed room. But Schmid added that Disney is interested in turning the VLC protocol into an ISO or IEEE standard if there is sufficient interest, and pointed audience members to the VLC project site, which includes all of the group's published papers and extensive documentation.
Hearing that no code was available ended the presentation on a distinctly down note; up until that point, it had engaged the ELCE crowd. On the other hand, if the scheme—or something like it—becomes an open standard, there may be plenty of Linux and free-software developers eager to work with it.
[The author would like to thank the Linux Foundation for travel assistance to attend Embedded Linux Conference Europe.]
3D video and device mediation with GStreamer
When GStreamer 1.6 was released in September, the list of new features was lengthy enough that it could be a bit overwhelming at first. Such is an unfortunate side effect of a busy project coupled with a lengthy development cycle. Fortunately, the 2015 GStreamer Conference provided an opportunity to hear about several of the new additions in detail. Among the key features highlighted at the event were 3D video support and a GStreamer-based service to mediate access to video hardware on desktop Linux systems.
Entering the third dimension
Jan Schmidt of Centricular presented a session about the new stereoscopic 3D support in GStreamer 1.6. The term "stereoscopic," he said, encompasses any 3D encoding that sends separate signals to each eye and relies on the user's brain to interpret the depth information. That leaves out exotic techniques like volumetric displays, but it still includes a wide array of ways that the two video signals can be arranged in the container file.
There could be a single video signal that is simply divided in half, so that left and right images are in every frame; this is called "frame-packed" video. Or the stream could alternate left and right images with every frame, which is called "frame-by-frame" video. There could also be two separate video streams—which may not be as simple as it sounds. Schmidt noted that 3D TV broadcasts often use an MPEG-2 stream for one eye and an H.264 stream for the other. Finally, so-called "multi-view" video also needs to be supported. This is a scheme that, like 3D, sends two video signals together—but multi-view streams are not meant to be combined; they contain distinct streams such as alternate camera angles.
GStreamer 1.6 supports all of the 3D and multi-view video modes in a single API, which handles 3D input, output, and format conversion. That means it can separate streams for playback on 3D-capable display hardware, combine two video streams into a 3D format, and convert content from one format to another. Schmidt demonstrated this by converting 3D video found on YouTube between a variety of formats, and by converting a short homemade video captured with two webcams into a stereoscopic 3D stream.
GStreamer does its 3D processing using OpenGL, so it is fast on modern hardware. There are three new elements provided: gstglviewconvert rewrites content between the formats, gstglstereosplit splits the two signals into separate streams, and gstglstereomix combines two input streams into a single 3D stream. For display purposes, 3D support was also added to the existing gstglimagesink element. In response to an audience question, Schmidt said the overhead of doing 3D conversion was negligible: one extra copy is performed at the OpenGL level, which is not noticeable.
Most of the video processing involved is backward-compatible with existing GStreamer video pipelines (although a filter not intended for 3D streams may not have the desired effect). The metadata needed to handle the 3D stream—such as which arrangement (left/right, top/bottom, interleaved, etc.) is used in a frame-packed video—is provided in capabilities, Schmidt said. GStreamer's most-used encoder, decoder, and multiplexing elements are already 3D-aware; most other elements just need to pass the capabilities through unaltered for a pipeline to work correctly. And one of the supported output formats is red-green anaglyph format, which may be the easiest for users to test since the equipment needed (i.e., plastic 3D glasses) is cheap.
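As a concrete example, a short C program can exercise the anaglyph path with a parse-launch pipeline. This is a sketch rather than code from the talk: the launch-line element and property names (glviewconvert, glimagesink, input-mode-override, output-mode-override, downmix-mode) are taken from upstream GStreamer documentation and may differ slightly from the 1.6-era spellings.

    #include <gst/gst.h>

    int main(int argc, char *argv[]) {
      gst_init(&argc, &argv);

      /* Treat the test pattern as side-by-side stereo, then downmix
       * it to a single anaglyph view for viewing with cheap glasses. */
      GError *err = NULL;
      GstElement *pipeline = gst_parse_launch(
          "videotestsrc ! glupload ! "
          "glviewconvert input-mode-override=side-by-side "
          "output-mode-override=mono downmix-mode=green-magenta-dubois ! "
          "glimagesink", &err);
      if (!pipeline) {
        g_printerr("Pipeline creation failed: %s\n", err->message);
        return 1;
      }

      gst_element_set_state(pipeline, GST_STATE_PLAYING);

      /* Run until an error or end-of-stream message arrives. */
      GstBus *bus = gst_element_get_bus(pipeline);
      GstMessage *msg = gst_bus_timed_pop_filtered(
          bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
      if (msg)
        gst_message_unref(msg);

      gst_element_set_state(pipeline, GST_STATE_NULL);
      gst_object_unref(bus);
      gst_object_unref(pipeline);
      return 0;
    }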
Multi-view support is not as well-developed as 3D support, he said; it works fine for two-stream multi-view, but there are few test cases to work with. The technique has some interesting possibilities, he added, such as the potential for encoded multi-view streams to share inter-frame prediction data, but so far there is not much work in that area.
Don't call it PulseVideo
GStreamer founder Wim Taymans, now working at Red Hat, introduced his work on Pinos, a new Linux system service designed to mediate and multiplex access to Video4Linux2 (V4L2) hardware. The concept is akin to what PulseAudio does for sound cards, he said, although the developers chose to avoid the name "PulseVideo" for the new project since it might incorrectly lead to users assuming there was a connection between the projects.
The initial planning for Pinos began in 2014, when developer William Manley needed a way to share access to V4L2 hardware between a GStreamer testing framework and the application being tested. Around the same time, the GNOME app-sandboxing project was exploring a way to mediate access to V4L2 devices (specifically, webcams) from sandboxed apps. The ideas were combined, and the first implementation was written by Taymans and several other GStreamer and GNOME developers in April 2015.
Pinos runs as a daemon and uses D-Bus to communicate with client applications. Using D-Bus, the clients can request access to camera hardware, negotiate the video format they need, and start or stop streams. GStreamer provides the media transport. Initially, Taymans said, they tried using sockets to transfer the video frames themselves, but that proved too slow to be useful. Ultimately, they settled on exchanging the media with file descriptors, since GStreamer, V4L2 hardware, and OpenGL could all already use file descriptors. A socket is still used to send each client its file descriptor, as well as timestamps and other metadata.
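The descriptor handoff relies on the standard Unix mechanism for passing an open file descriptor across a socket: SCM_RIGHTS ancillary data. A generic sketch of that mechanism, not Pinos's actual code, looks like this:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send an open file descriptor to the peer on a Unix socket. */
    int send_fd(int sock, int fd) {
      char byte = 0;                  /* must carry at least one byte */
      struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

      union {                         /* properly aligned control buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
      } u;

      struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
      };

      struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type = SCM_RIGHTS;   /* we are passing open fds */
      cmsg->cmsg_len = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

      return sendmsg(sock, &msg, 0);  /* negative on error */
    }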
The implementation uses several new elements. The most important are pinossrc and pinossink, which capture and send Pinos video, respectively; gstmultisocketsink, which is used by the daemon to pass data to clients; and gstpinospay, which converts a video stream into the Pinos format. Taymans said he tried to make the client-side API as simple as possible, then rewrote it to be even simpler. A client only needs to send a connection request to the Pinos daemon, wait to receive a file descriptor in return, then open a file-descriptor source element with the file descriptor handed back by the Pinos daemon. At that point, the client can send the start command and begin reading video frames from the file descriptor, and send the pause or stop commands as needed. Frame rates, supported formats, and other details can be negotiated with the V4L2 device through the daemon.
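From the application side, that entire sequence can be hidden behind the pinossrc element named above. Here is a hypothetical sketch; the launch line is a guess based on the talk's description, not code from the Pinos project:

    #include <gst/gst.h>

    int main(int argc, char *argv[]) {
      gst_init(&argc, &argv);

      /* pinossrc wraps the connect / receive-fd / start dance with
       * the daemon and delivers the frames into the pipeline. */
      GError *err = NULL;
      GstElement *pipeline = gst_parse_launch(
          "pinossrc ! videoconvert ! autovideosink", &err);
      if (!pipeline) {
        g_printerr("Failed to create pipeline: %s\n", err->message);
        return 1;
      }

      gst_element_set_state(pipeline, GST_STATE_PLAYING);

      GstBus *bus = gst_element_get_bus(pipeline);
      GstMessage *msg = gst_bus_timed_pop_filtered(
          bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
      if (msg)
        gst_message_unref(msg);

      gst_element_set_state(pipeline, GST_STATE_NULL);
      gst_object_unref(bus);
      gst_object_unref(pipeline);
      return 0;
    }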
Basic camera access is already working; as a test case, the desktop webcam application Cheese was rewritten to use Pinos, and the new elements worked "out of the box." The Pinos branch is expected to be the default in the next Cheese release. At that point, Pinos will need to be packaged and shipped by distributions. The sandboxed-application use case, however, still requires more work, since the security policy needed by the sandbox has not been defined yet. It has also not yet been decided how best to handle microphone access—which may fall under Pinos's purview because many webcams have built-in microphones. And there are other ideas still worth exploring, Taymans said, such as allowing Pinos clients to send video as well as receive it. He speculated that the service could also be used to take screenshots of a Wayland desktop, which is a feature that has been tricky to handle in the Wayland security model.
Looking even further out, Taymans noted that because GStreamer handles audio and video, Pinos could even replace PulseAudio for many application use cases. It may make sense, after all, to only worry about managing one connection for both the audio and video. He quickly added, however, that this concept did not mean that Pinos was going to replace PulseAudio as a standard system component.
Pinos support and stereoscopic 3D support are both available in GStreamer 1.6. In both cases, it may still be some time before the new features are accessible to end users. Taymans noted at the end of his talk that packaging Pinos for Fedora was on the short list of to-do items. Experimenting with 3D video requires 3D content and hardware, which can be pricey and hard to locate. But, as Schmidt demonstrated, GStreamer's ability to combine two camera feeds into a single 3D video is easy to use—perhaps easy enough that some users will begin working with it as soon as they install GStreamer 1.6.
[The author would like to thank the Linux Foundation for travel assistance to attend the GStreamer Conference.]