
LWN.net Weekly Edition for October 27, 2011

LCE2011: Kernel developer panel

By Jake Edge
October 26, 2011

In what is becoming a tradition at Linux Foundation conferences, a handful of kernel developers participated in an hour-long discussion of various topics at the first LinuxCon Europe (LCE), which is being held in Prague on October 26-28. The format typically doesn't change, with four kernel hackers and a moderator, but the participants do regularly change. The LCE edition featured Linus Torvalds, Paul McKenney, Alan Cox, Thomas Gleixner, and Lennart Poettering as the moderator. A number of timely topics were discussed (security, ARM, control groups), as well as a few timeless ones (user-space interfaces, aging kernel hackers, future challenges).

[Kernel panel]

While Torvalds really doesn't need to introduce himself, he did so by saying that he writes very little code anymore and just acts as a central point to gather up the work of the other kernel hackers. He is "sadly, mainly just a manager these days", he said. McKenney maintains the read-copy-update (RCU) code in the kernel, Cox has done many things over the years but mostly works on System-on-Chip (SoC) support for Intel these days, while Gleixner works on "parts of the kernel nobody wants to touch" and maintains the realtime patch set.

User-space interfaces

Poettering said that he wanted to start things off with something a bit controversial, so he asked about user space compatibility. Clearly trying to evoke a response, he asked if it was hypocritical that certain user space interfaces have been broken in the past without being reverted. One of his examples was the recent version number change (from 2.6.x to 3.y) which broke a number of user space tools yet wasn't reverted.

The version number change is a perfect example, Torvalds said. Because it broke some (badly written) user-space programs, code was added to the kernel to optionally report the version as 2.6.x, rather than 3.y (where x = 40 + y). It is "stupid of us to add extra code to lie about our version", but it is important to do so because breaking the user experience is the biggest mistake that a program can make, he said.
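
The mapping itself is trivial; what follows is a minimal user-space sketch of the remapping described above (x = 40 + y), purely for illustration. The kernel's actual compatibility code lives in its uname handling and differs in detail:

    #include <stdio.h>

    /* Report a 3.y kernel as 2.6.x, where x = 40 + y (so 3.1 becomes
     * 2.6.41). An illustration of the formula quoted above, not the
     * kernel's actual code. */
    static void report_as_2_6(int major, int minor, char *buf, size_t len)
    {
        if (major == 3)
            snprintf(buf, len, "2.6.%d", 40 + minor);
        else
            snprintf(buf, len, "%d.%d", major, minor);
    }

    int main(void)
    {
        char buf[16];
        report_as_2_6(3, 1, buf, sizeof(buf));
        printf("%s\n", buf);    /* prints "2.6.41" */
        return 0;
    }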

From the very beginning, Torvalds's mantra has been that "breaking the user experience is a big no-no". The most important aspect of a program is how useful it is to users and "no project is more important than the user". Sometimes user space does get broken by mistake because someone writes code they think is making an improvement but breaks something. It is unavoidable and he feels bad about it when it happens.

Not only does Torvalds feel bad, but he makes sure that the other kernel developers feel bad about it as well, Gleixner said. Torvalds readily agreed that he is quite vocal about telling other kernel developers that they messed up when there is breakage of user space. "There is a saying that 'on the internet, no one can hear you being subtle'", so he doesn't even try, he said. Cox noted that the current unhappiness with GNOME 3 is a demonstration of why you don't suddenly change how things work for users.

There is pressure from hardware and software makers to constantly upgrade, but Cox said that if you ask users, they just want what worked before to continue working. Torvalds noted that he used to have some old a.out binaries that he would periodically test on new kernels to ensure they worked. One of them was an old shell that used deprecated system calls, but he wanted to ensure that even though he had moved on to newer shells, the old stuff would still work. It's fine to add new things, he said, as long as you "make sure that the old ways of working still work".

The sysfs interface has been changed many times in ways that could have broken user-space programs, according to Poettering, and he wondered why that was allowed to happen. Cox said that it comes down to a question of whether users care. If no one reports the problem, then no one really sees it as one.

McKenney noted that his approach is to "work on things so far from user space that I can't even see it", but that he was concerned about adding tracepoints to the RCU code. Maintaining the exact same tracepoints could make it difficult to change RCU down the road.

An audience member asked if it was worth it to continue maintaining backward compatibility "forever", and wondered if it doesn't lead to more kernel complexity. Torvalds was unconcerned about that and said that the complexity problems in the kernel rarely arise because of backward compatibility. Open source means that the code can be fixed, he said. Cox echoed that, saying that if the kernel developers give user space better interfaces, the programs will eventually switch, and any compatibility code can be deprecated and eventually removed.

Aging hackers

After noting that many of the Kernel Summit (KS) participants were in bed by 9 or 10, Poettering asked whether the kernel community is getting too old or, as he put it, whether it has become an old man's club. Cox said that he saw a lot of fresh blood in the community, more than enough to sustain it. Torvalds noted, though, that the average age at the summit has risen by one year every year. It's not quite that bad, he said, but he also believes that the KS attendees are not an accurate reflection of the community as a whole. It tends to be maintainers that attend the summit, while many of the younger developers have not yet become maintainers.

Part of the problem is one of perception, according to Torvalds. The Linux kernel crowd used to be notably young, because the older people ignored what those crazy Linux folks were doing, he said. The kernel hackers had a reputation of being ridiculously young, but many of those same people are still around, and are just older now. Cox noted that the kernel is now a stable project and that it may be that some younger folks are gravitating to other projects that are more exciting. Those projects will eventually suffer the same fate, he said.

ARM

In response to an audience question, Torvalds wanted to clear something up: "everyone seems to think I hate ARM", but he doesn't, he said. He likes the architecture and instruction set of the ARM processor, but he doesn't like to see the fragmentation that happens because there is no standard platform for ARM like there is for x86. There are no standard interrupt controllers or timers, which is a mistake, he said. Because of the lack of a standard platform, it takes ten times the code to support ARM vs. x86.

ARM is clearly the most important architecture other than x86, he said, and some would argue that the order should be reversed. The good news is that ARM Linux is getting better, and the ARM community seems to be making progress, so he is much happier with ARM today than he was six months ago. It's not perfect, and he would like to see more standardization, but things are much better. Torvalds said that he doesn't necessarily think that the PC platform is wonderful, but "supporting only a few ways to handle timers rather than hundreds is wonderful".

Security

Poettering asked if the panel would feel comfortable putting their private key on a public Linux machine that had other users. In general, the clear consensus was that it would be "stupid" (in Torvalds's words) to put private keys on a system like that. What you want, Torvalds said, is layers of security, and the kernel does a reasonable job there. "But there will always be problems", he said, so it is prudent to have multiple layers and realize that some of the layers will fail. He noted that there are three firewalls between his development machine and the internet, none of which accept incoming ssh.

Cox agreed, saying that in security, depth is important. You might have really good locks on the doors of your house, but you still keep your money in the bank, he said. His primary worry is other local users on the system, rather than remote attackers.

Challenges

Torvalds and Cox disagreed a bit about the biggest challenge facing the kernel in the coming years; Torvalds worries about complexity, while Cox is concerned about code quality. The kernel has always been complicated, Torvalds said, but it is not getting less complicated over time. There is a balancing act between the right level of complexity to do the job right and over-complicating things.

Cox believes that the complexity problem is self-correcting for the most part. Once a subsystem becomes too complicated, it eventually gets replaced with something simpler. Code quality, though, is a problem that the whole industry, not just open source, has struggled with for sixty years. If better solutions are found, they may well require language changes, which would have an enormous impact on Linux.

Control groups

Poettering next asked about control groups: are they irretrievably broken or can they be fixed? Cox thought that the concept behind cgroups was good, but that some of the controller implementations are terrible. The question in his mind is whether or not the controllers can be fixed and still continue to manage the resource they are meant to. To Torvalds, controllers are a necessary evil because they provide a needed way to manage resources like memory, CPU, and networking across processes. But it is a "fundamentally hard problem to dole out resources when everyone wants everything", he said.
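
To make concrete what a controller actually manages, here is a hypothetical sketch of driving the (2011-era, v1) memory controller from user space; the mount point, group name, and limit are all assumptions for illustration:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* Assumes the memory controller is mounted at
         * /sys/fs/cgroup/memory; creating a directory creates a group. */
        mkdir("/sys/fs/cgroup/memory/demo", 0755);

        /* Cap the group at 64 MB... */
        FILE *f = fopen("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "w");
        if (f) { fprintf(f, "%d", 64 * 1024 * 1024); fclose(f); }

        /* ...and move the current process into it. */
        f = fopen("/sys/fs/cgroup/memory/demo/tasks", "w");
        if (f) { fprintf(f, "%d", getpid()); fclose(f); }
        return 0;
    }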

Cox agreed, noting that managing resources at a large scale will introduce inefficiencies, and that's the overhead that some are complaining about. Those who don't use the controllers hate the overhead, but those who need them don't care about the impact, Torvalds said. Gleixner agreed that some kind of resource management is needed.

According to Torvalds, part of the problem is that the more heavily used a feature is, the more its impact is noticed. At one point, people said that caches were bad because they impose some costs, but we got past that, he said. Cox pointed to SMP as another area where the changes needed added costs that some were not interested in paying. Torvalds noted that when Cox started working on SMP for Linux, he thought it was an interesting project but didn't have any personal interest as he couldn't even buy the machines that Cox was using to develop the feature.

Conclusion

It is always nice to get a glimpse inside of the thinking of Torvalds and some of his lieutenants, and these kernel developer panels give people just that opportunity. It is one of the few forums where folks outside of the kernel community can interact with Torvalds and the others in a more or less face-to-face way.

[ I would like to thank the Linux Foundation for supporting my travel to Prague. ]


An update on UEFI secure boot

October 26, 2011

This article was contributed by Joe 'Zonker' Brockmeier.

The "secure boot" feature that will soon be appearing in PC firmware—partly due to a mandate from Microsoft—has garnered a number of different reactions. On one hand, you have the Free Software Foundation (FSF) calling for signatures to "stand up for your freedom to install free software". On the other hand, you have pundits accusing "Linux fanatics" of wanting to make Windows 8 less secure. The truth of the unified extensible firmware interface (UEFI) secure boot situation lies somewhere in between those two extremes, as might be guessed.

The problem started to come into focus earlier this year. The UEFI specification provides for an optional "secure boot" feature that makes use of a "platform key" (PK) and "key exchange keys" (KEKs) stored in a database in firmware. The corresponding private keys would be used to sign bootloaders, drivers, and operating systems loaded from disk, removable media, or over the network. The database could also contain blacklisted keys, in the event previously valid keys are revoked because they've fallen into malicious hands.

The idea is that only signed drivers and operating systems could be loaded. This could prove a useful feature, given that it could prevent malware from infecting the signed components — but it has also been seen as a threat against open source operating systems (like Linux) and could thwart attempts to jailbreak those systems.
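
To make the decision logic concrete, here is a toy model of the allow/deny check, with image digests standing in for real signature verification; actual UEFI firmware verifies Authenticode signatures against keys in the authenticated-variable databases (commonly called db and dbx), so treat this purely as a sketch of the control flow:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define DIGEST_LEN 32   /* e.g., a SHA-256 digest */

    static bool in_list(unsigned char list[][DIGEST_LEN], size_t n,
                        const unsigned char *digest)
    {
        for (size_t i = 0; i < n; i++)
            if (memcmp(list[i], digest, DIGEST_LEN) == 0)
                return true;
        return false;
    }

    static bool may_boot(unsigned char db[][DIGEST_LEN], size_t n_db,
                         unsigned char dbx[][DIGEST_LEN], size_t n_dbx,
                         const unsigned char *digest)
    {
        if (in_list(dbx, n_dbx, digest))
            return false;                  /* revoked entries always lose */
        return in_list(db, n_db, digest);  /* boot only if known-good */
    }

    int main(void)
    {
        unsigned char good[1][DIGEST_LEN]    = { { 0xaa } };
        unsigned char revoked[1][DIGEST_LEN] = { { 0xbb } };
        unsigned char image[DIGEST_LEN]      = { 0xaa };

        printf("boot allowed: %s\n",
               may_boot(good, 1, revoked, 1, image) ? "yes" : "no");
        return 0;
    }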

When LWN covered the feature in June, there was concern that "a fair amount of pressure" would be applied by proprietary OS makers to enable this feature. That concern was realized when Microsoft announced that it intended to require secure boot to award the Windows 8 logo to OEMs. Since most OEMs are going to want to qualify for the logo program, and the marketing funds that likely accompany it, Microsoft's requirement makes secure boot more or less mandatory.

Problems

The first and most obvious problem with secure boot is that it has the potential to only allow Microsoft operating systems to boot. As Matthew Garrett wrote in September, "A system that ships with Microsoft's signing keys and no others will be unable to perform secure boot of any operating system other than Microsoft's. No other vendor has the same position of power over the hardware vendors."

But it's more complex than simply preventing non-Microsoft operating systems from booting on hardware with UEFI secure boot enabled. In all likelihood, users will be able to boot the OS of their choice for the foreseeable future. But will they actually control their own hardware?

On October 19th, Garrett provided a follow-up to his earlier posts on secure boot, and said that the problem is whether the end user will have the ability to manage their own keys:

Every approach that involves us signing with a trusted key comes with associated problems. There's also a common problem with all of them — you only get to play if you have your own trusted key. Corporate Linux vendors can probably find a way to manage this. Everyone else (volunteer or hobbyist vendors, end users who want to run their own OS, anyone who wants to use a modified bootloader or kernel) is shut out.

As Garrett explains, the workaround is to turn off secure boot, but that's not very palatable:

It benefits nobody for Linux installation to require disabling a legitimate security feature. It's also not likely to be in a standard location on all systems and may have different naming. It's a support nightmare.

He suggested that, rather than requiring secure boot to be disabled, it would be better to work on how the feature could be properly supported for Linux installations.

Solutions

The solution? Garrett says that a proposal has been put forward to the UEFI Forum that would allow users to install their own keys from removable media. According to Garrett, this avoids problems that would have been associated with booting untrusted binaries. Because it requires removable media, malware won't be able to trigger the key installation. Instead, the system with secure boot would simply refuse to boot and fall back to system recovery procedures.

The only way it could get on the system would involve the user explicitly booting off removable media, which would be a significant hurdle. If you're at that stage then you can also convince the user to disable secure boot entirely.

It is certainly possible that malware could infect USB keys or other removable media, but part of allowing users control is accepting some risk.

Meanwhile at the UEFI Forum

Mark Doran, president of the UEFI Forum, says that he's "trying to work with some folks in the Linux community, the Linux Foundation board for example, to see how Linux and other operating systems can participate in using this facility." He says that there are "a couple of ways that we could do that," but didn't offer specifics. Doran also said that, despite Microsoft being out front pushing for adoption, it was "absolutely intended for it to be implemented and implementable with any operating system."

Doran was also emphatic that he thought it unlikely that "many" systems would ship without the ability to turn off secure boot.

I would challenge you to find one of the theoretical systems that has no disable [feature]. I've taken a pretty good temperature, and I can't find anybody that's planning to build a system with no enable/disable feature, with the exception of kiosks.

The reason is pretty simple, says Doran. Too many vendors have to support legacy Windows. At least for the foreseeable future, none of the OEMs are likely to ship systems that can't boot older versions of Windows.

This echoes what Ed Bott reported when slinging mud at "Linux fanatics". Bott asked "a spokesperson for AMI" and the response was "AMI will advise OEMs to provide a default configuration that allows users to enable / disable secure boot, but it remains the choice of the OEM to do (or not do) so." So everyone reports that it's "unlikely" that systems will ship that don't have a disable feature — but that's not quite the same as guaranteeing that users will be able to control their systems. Being able to disable secure boot is better than nothing, but that feature, alone, does not enable Linux to make use of the secure boot capability.

History Repeating Itself?

The UEFI secure boot situation is not unlike the once-feared trusted platform module (TPM) or "trusted computing" features. When first introduced, the concept for TPM was that it could provide additional system security by verifying that only authorized code could run on a system. At first glance, this seemed like a good idea — except that it could be misused to secure a system against its owner.

Microsoft and a number of other companies argued strongly in favor of TPM, while digital rights advocates argued against it. The EFF's Seth Schoen wrote in 2003:

The TCPA [Trusted Computing Platform Alliance] security design allows software publishers to [...] create encrypted network protocols and file formats — with keys tucked away in hardware rather than accessible in memory. Thus, the same security features that protect your computer and its software against intruders can be turned against you.

As Schoen noted then, the EFF and others lobbied for user override, but it was initially resisted by the Trusted Computing Group (TCG). This time around, Microsoft and the UEFI Forum are saying that it's in the hands of the vendors to decide whether to implement a bypass to UEFI secure boot, and trying to downplay concerns about the implementations and the effects on users of other platforms.

Ultimately, trusted computing did not turn out to be the nightmare that many feared. We can probably thank the EFF and the other organizations that lobbied against TPM for the fact that we aren't seeing widespread misuse of it against users.

Don't panic... yet

The worst-case scenario — a flood of "restricted boot" systems hitting the market that are incapable of booting Linux or anything other than signed Windows 8 — does seem unlikely. But we're a long way from Garrett's proposal as well. Users interested in complete control of their systems will need to keep an eye on this process, and make sure that OEMs are aware that having the ability to disable secure boot is not enough. In order to truly control your system, you must have a way to install your own trusted keys as well.


GStreamer 1.0 and 0.10

By Jake Edge
October 26, 2011

GStreamer has come a long way in the ten-plus years it has been in existence. The push is on to finally put out a 1.0 release, possibly before the end of this year, while work has not stopped on the existing 0.10.x code base. In two keynotes from the second-ever GStreamer conference, which was held in Prague October 24-25, Wim Taymans outlined the future, while Tim-Philipp Müller looked at developments in 0.10.x.

History and background

[Wim Taymans]

Taymans started his talk with a bit of history and background. GStreamer is a library that makes it easy to create multimedia applications. It does so by providing a pipeline architecture with multiple components that can be plugged together in many different ways, which is what provides GStreamer with its "power and flexibility", he said.
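
That plug-together model is easy to see in code. Here is a minimal, illustrative pipeline against the 0.10-era C API: a test tone run through a converter to the default audio sink (compile flags would come from pkg-config's gstreamer-0.10 package):

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        /* Elements plugged together into a pipeline -- GStreamer's
         * central idea. num-buffers makes the source stop by itself. */
        GstElement *pipeline = gst_parse_launch(
            "audiotestsrc num-buffers=100 ! audioconvert ! autoaudiosink",
            NULL);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Wait for end-of-stream or an error on the pipeline's bus. */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(bus,
            GST_CLOCK_TIME_NONE, GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
        if (msg)
            gst_message_unref(msg);
        gst_object_unref(bus);

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }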

Applications that use GStreamer have generally set the tone for the direction of the framework. GStreamer came into the GNOME project in 2002 with the Rhythmbox music player, and the audio capabilities worked well at that time. Video was a different story, but by 2004, the Totem video player ensured that the video in GStreamer worked, Taymans said.

Since GStreamer is a library, it can be integrated with many different applications, frameworks, toolkits, development environments, and so on. Whatever gets integrated with GStreamer gains "many new features". That has led to browsers that use GStreamer for playing content in <video> and <audio> tags. In addition, GStreamer is integrated with (and ported to) Android to bring its capabilities to that mobile platform. Integration is the "strong point of GStreamer", he said.

Taymans then went through all of the different types of applications that GStreamer is being used for, some where it shines and other areas where it still needs some work. For example, transcoding is in good shape; he pointed to Transmageddon as one example. Communication tools that provide voice and video calls, such as Empathy, are another area where GStreamer works well.

On the other hand, complicated audio production, like what Buzztard is doing, is an area that needs more work. Streaming and media distribution are areas for improvement as well. Various companies are using parts of GStreamer to do media distribution, but it is "not quite there yet", he said. Overall, GStreamer is "everywhere" in the multimedia space, but there is plenty more to be done.

Getting to 1.0

Taymans then turned to the future and wanted to "look at what needs to be done to get to 1.0". A 1.0 release has been discussed since 2007 or 2008, he said, but there has been a problem: 0.10 (and earlier) has been "doing things very well", so there was a great temptation to just keep extending that code.

But there are "new challenges", Taymans said. There are things that can be done with 0.10 but aren't easy or optimal, partly because they weren't envisioned back when the project started. Embedded processors aren't really a new challenge, but GStreamer could be better on today's embedded hardware, especially in the areas of power consumption and offloading work to other specialized processors (like DSPs) that are often available.

Video cards with graphical processing units (GPUs) are another resource that could be used better by the framework. Moving video decoding to GPUs when possible would help with power consumption as well. It can be done with 0.10, but "it's hard to do", Taymans said. Other things like video effects could be off-loaded to the GPU. Effects can be done on the main CPU, and are today, but doing so consumes more power and cannot handle higher-resolution video.

Memory management is another area where improvements are needed. GPUs have their own memory, so the challenge is to avoid copying memory "from the CPU to GPU and back again and again". It is important for performance and GStreamer doesn't currently handle the situation well, Taymans said.

The formats of video in memory can't be specified flexibly enough to match what the hardware is producing or consuming. That means that GStreamer elements have to copy the data to massage it into the right format. Better memory management leads to better integration with the hardware and increased performance, all of which leads to better battery life, he said.

Dynamic pipelines, where the elements in a multimedia pipeline are removed or changed, are another feature that is possible with 0.10, but difficult to do correctly. Examples that Taymans gave include the Cheese webcam application, which can apply various effects to the video data and effects can be added or changed on the fly. Another example is the PulseAudio pass-through element, which may be decoding mp3 data in software when a Bluetooth headset is plugged in. The headset can do the decoding in hardware, so the pipeline should be changed to reflect that. There are many more examples of dynamic pipelines, he said, but making them work right has been difficult for application developers—something that should change in 1.0.

The 0.11 branch (which will eventually become 1.0) opened after last year's GStreamer conference (in Cambridge, UK in October 2010). By June it had "everything needed" to start porting elements (aka plugins) and applications to use the new features. A 0.11 release was made in August, followed by 0.11.1 in September.

There are "many many cleanups" in the 0.11 branch, Taymans said, "too many to list". Over the years, the API has accumulated lots of "crazy" things that needed changing. Methods were removed, signatures changed, parameters removed, and so on. In fact, Taymans said that they "implemented everything we said we would do at the last GStreamer conference—more actually". Part of the "more" was the memory management changes which were not fully thought out a year ago.

The core, base, and FFmpeg parts of GStreamer are 100% working and passing the unit tests in 0.11. The "good", "bad", and "ugly" sets of plugins are respectively 68%, 76%, and 11% ported to the new API. The video mixer is not yet ported as the team "focused first on [video] playback so we can start shipping something". Lots of other applications have been ported, but there is still plenty to do. Many of the changes, both for plugins and applications, can be done semi-automatically, Taymans said.

All of that puts the project on track for a possible 1.0 release later this year, though that will all be worked out at the conference, he said. Not all of the elements will be ported in that time frame, as doing one per day would finish the job "sometime in June". So there will need to be a prioritization of the plugins and applications that get ported before the release. "People who want their application to work are welcome to help porting", Taymans said.

The plan is for 0.10 and 1.0 to co-exist; both can be installed and applications will choose the right one for their needs. That will allow for a gradual transition to 1.0 over "half a year or maybe a year". Taymans would like to see 1.0 make it into the next releases of the major distributions along with some applications ported to 1.0, specifically mentioning Cheese and Totem.

Back on the 0.10 branch

[Tim-Philipp Müller]

On day two of the conference, Tim-Philipp Müller updated attendees on what has been going on in the stable branch and noted that "almost everything applies to the new branch" because it will all be merged over. By way of introduction, Müller said that his work for the project has been "managing releases — or not — trying anyway". He also likened GStreamer to Lego, noting that it allows applications to connect up all kinds of complicated "multimedia handling" pieces to provide the right experience for its users.

The 0.10 series has been API stable for five or six years, he said, but 0.11 changes the API to fix some longstanding problems. Both can be installed together with some applications using the old and some the new, which "works fine". There have been "some possibly interesting changes" in the 0.10 series and, since not everyone can follow bugzilla, mailing lists, and IRC, Müller's talk was meant to fill them in.

He presented some statistics on the development of 0.10 since last year's conference. There were 962 bugzilla bugs fixed and 2747 files changed. That resulted in 350K lines inserted and 100K deleted in a code base of 1.4 million lines of C and C++ code. Müller said that he was surprised to find that there had been 201 unique patch contributors in that time.

Since Taymans has been working on the new development branch, development in the core has slowed down, Müller said, but there are some significant new core library features. The gst_pad_push() function (which pushes out a buffer to the next stage in the pipeline) has been made lockless by using atomic operations. Previously, multiple locks were taken.
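
By way of illustration only (this is not GStreamer's code), the general lockless pattern replaces mutex-protected updates with an atomic compare-and-exchange loop. A toy version using GLib's atomics on a hypothetical cached pointer:

    #include <glib.h>

    typedef struct { int refcount; /* ... */ } Cache;

    static gpointer current_cache = NULL;

    /* Readers take no lock at all, just an atomic load. */
    static Cache *get_cache(void)
    {
        return g_atomic_pointer_get(&current_cache);
    }

    /* Writers retry until their swap wins the race. */
    static void set_cache(Cache *new_cache)
    {
        Cache *old;
        do {
            old = g_atomic_pointer_get(&current_cache);
        } while (!g_atomic_pointer_compare_and_exchange(&current_cache,
                                                        old, new_cache));
        /* "old" may be freed only once no reader can still hold it,
         * which is what reference counting would be for. */
    }

    int main(void)
    {
        Cache c = { 1 };
        set_cache(&c);
        g_assert(get_cache() == &c);
        return 0;
    }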

Progress and quality of service (QoS) messages have been added so that elements can post status information to applications. Progress messages could be the status of a data transfer or a mounting operation. QoS messages allow the internal QoS events (which indicate if a frame was dropped, rendered too late, or too early) to propagate out to applications, which will allow them to display statistics.

There have been new base classes added, for the obvious advantage of code reuse, he said. There is a need for APIs that work for all of the different use cases and are reasonably CPU-efficient, rather than having each plugin or element write its own. The reason it makes sense to add new base classes so close to the end of life for 0.10 is that it makes porting to 1.0 easier: some of the new code can be used early, then you can "go to 1.0 for free", he said.

GstBaseParse is a new class to move the code that parses the "caps" (capabilities of elements), which has been repeated in many different places. The quality of the parsers has increased dramatically because of the change, but they have already seen some drawbacks in the new base class. That may result in a change to the class or adding a new GstBaseParse2.

GstVideoEncoder and GstVideoDecoder are new base classes to move some of the repeated code into one place. Right now, they live in the "bad" plugins, and need some API changes before moving into "good". Likewise, GstCollectPads2 (for notification of buffer availability) and GstAudioVisualizer are new base classes.

There are some base classes that may be added in the future including GstMuxer and GstDemuxer to encapsulate multiplexing (and de-multiplexing) functionality. Some "proper filter base classes" and an N-to-1 synchronization base class (for syncing various streams like video and a text overlay) are also possibly on the horizon.

There is also a "new and improved high-level API", Müller said. The playbin2 high-level playback API has some additional features including progressive download, new buffering methods for network streams, and support for non-raw streams. camerabin2 replaces camerabin as the camera input high-level API. The latter "wasn't bad, but was tricky to use". camerabin2 is a new design with new features including support for things like viewfinder streams, snapshotting, and other new hardware capabilities.
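
As a sketch of how small the high-level API makes a player (the URI here is just a placeholder), a playbin2-based program could look something like this:

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        /* playbin2 builds the whole decode pipeline internally. */
        GstElement *play = gst_element_factory_make("playbin2", NULL);
        g_object_set(play, "uri", "file:///tmp/song.ogg", NULL);
        gst_element_set_state(play, GST_STATE_PLAYING);

        /* Block until end-of-stream or an error. */
        GstBus *bus = gst_element_get_bus(play);
        GstMessage *msg = gst_bus_poll(bus,
            GST_MESSAGE_EOS | GST_MESSAGE_ERROR, -1);
        if (msg)
            gst_message_unref(msg);
        gst_object_unref(bus);

        gst_element_set_state(play, GST_STATE_NULL);
        gst_object_unref(play);
        return 0;
    }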

GstDiscoverer is a new "fast(ish)" metadata extraction API. It allows queueing files of interest and will give the stream topology, which allows better decisions regarding decoding, transcoding, and transmuxing. There are still some issues with it, he said, including getting an accurate duration for a stream without decoding it. There is also no way yet to tell an application whether the duration reported is an estimate and how good of an estimate it is.
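
A short synchronous use of the API might look like the following sketch (the URI is a placeholder and error handling is trimmed):

    #include <gst/gst.h>
    #include <gst/pbutils/pbutils.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        GError *err = NULL;
        /* The timeout is expressed in nanoseconds. */
        GstDiscoverer *disc = gst_discoverer_new(5 * GST_SECOND, &err);
        GstDiscovererInfo *info = gst_discoverer_discover_uri(disc,
            "file:///tmp/test.ogg", &err);

        if (info) {
            /* As noted above, the duration may only be an estimate. */
            GstClockTime dur = gst_discoverer_info_get_duration(info);
            g_print("duration: %" GST_TIME_FORMAT "\n",
                    GST_TIME_ARGS(dur));
            gst_discoverer_info_unref(info);
        }
        g_object_unref(disc);
        return 0;
    }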

Müller also mentioned the GStreamer Editing Services (g-e-s), which is a "very nice API" that is used by the PiTiVi video editor. The GStreamer RTSP server (which is really a library, he said) allows you to "share your video camera or stream your files". There are Python and Vala bindings for the library and it comes with an example program that means you can do those things "really easily".

The "middleware" layer (which is the "plumbing" of GStreamer) has also seen improvements, especially in the areas of parsing (using the new base class) and moving parsing out of decoders and into parsers that get plugged in automatically to handle metadata extraction. The PulseAudio pass-through was mentioned again as part of the middleware changes. In addition, Orc code has been added to produce "just in time" machine code for things like video scaling and conversion.

Replacing the FFmpeg color space code is currently a work in progress. It was forked from the FFmpeg code six years ago and GStreamer should probably apologize to the project because the fork gives FFmpeg "a bit of a bad name" because of how old it is, he said. Supporting the VA API and VDPAU video acceleration APIs is also under development; elements using them will eventually be plugged in automatically on systems with video cards that support those APIs. Lastly, the wrapper for OpenMAX (which is a multimedia framework for embedded devices, including Android) is being worked on. Müller said that no matter what GStreamer folks might think of OpenMAX, "it's there and people want to use it".

There were a number of other future features that he listed: playlist handling (so that each application doesn't need to do its own), proper chapter and track handling, and device discovery and probing. 3D video is something that can currently be done with 0.10, but it doesn't integrate well with hardware and other components and will likely be worked on for 1.0.

Müller wrapped up his talk with a vote of confidence in both the 0.10 and 0.11 branches. He thinks that people will fairly quickly switch away from 0.10 once 1.0 is out, which is in keeping with what was seen at the last API-changing release (from 0.8 to 0.10). He said that Taymans had "undersold 1.0 yesterday" in his talk: "it's gonna rock". All in all, it seems that the GStreamer project is very healthy and making strides in lots of interesting directions.

[ I would like to thank the Linux Foundation for travel assistance to attend "conference week in Prague". ]

