LWN: Comments on "GStreamer: Past, present, and future" https://lwn.net/Articles/411761/ This is a special feed containing comments posted to the individual LWN article titled "GStreamer: Past, present, and future". en-us Mon, 13 Oct 2025 16:51:54 +0000 Mon, 13 Oct 2025 16:51:54 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net GStreamer: Past, present, and future https://lwn.net/Articles/413680/ https://lwn.net/Articles/413680/ alankila <div class="FormattedComment"> Perhaps I should clarify that when I say sample format, I mean the particular way to encode the value of a sample. What I was proposing was the use of a single format when processing, for implementation simplicity and guaranteed high-quality output.<br> <p> I do not have an objection in principle to using a different sampling rate or number of channels. It's just that there are useful gains to be had from limiting the number of sample formats. As an example, processing 16-bit integer audio with the volume plugin will currently cause quantization, because the volume plugin does not do dithering.<br> <p> And when I mentioned 44.1 kHz and 16 bits, I was talking about the mobile context; I admit Android flashed through my mind. Did you know that it does not even support any other output format at all? For a mobile device, it is an entirely reasonable output format, and given its other constraints it should be extremely well supported, because it's simply the most important input and output format. As we learnt in this thread, the N900 people made a ridiculous mistake in selecting audio hardware that apparently uses a native sample rate of 48 kHz, because that forces them to do resampling for the vast majority of the world's music. It is possible to do, but it doesn't really strike me as an especially smart thing to have done.<br> </div> Sat, 06 Nov 2010 11:08:30 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/413679/ https://lwn.net/Articles/413679/ alankila <div class="FormattedComment"> True, true.<br> </div> Sat, 06 Nov 2010 10:55:26 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/413416/ https://lwn.net/Articles/413416/ tpm <div class="FormattedComment"> There's already an SVG version of the logo at <a href="http://gstreamer.freedesktop.org/artwork/">http://gstreamer.freedesktop.org/artwork/</a><br> </div> Fri, 05 Nov 2010 08:17:01 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/413414/ https://lwn.net/Articles/413414/ tpm <div class="FormattedComment"> The video of the keynote is up now on the UbiCast GStreamer Conference Video Portal: <a href="http://gstconf.ubicast.tv/categories/conferences/">http://gstconf.ubicast.tv/categories/conferences/</a><br> <p> See <a href="http://gstreamer.freedesktop.org/wiki/GStreamerConference2010">http://gstreamer.freedesktop.org/wiki/GStreamerConference...</a> for slides and other links.<br> </div> Fri, 05 Nov 2010 08:09:49 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/413344/ https://lwn.net/Articles/413344/ bazzargh <div class="FormattedComment"> Optima Bold Italic with a hand-tweaked "g". Or something very like that.<br> </div> Thu, 04 Nov 2010 23:01:15 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/413193/ https://lwn.net/Articles/413193/ frazier <div class="FormattedComment"> I really need to get you guys a vector build of the GStreamer logo. I gave one to Erik years ago (about a decade ago now!) and it apparently disappeared in the mists of time.
I'll probably have to recreate it, but provided I can figure out what the core font is, it won't be that difficult.<br> <p> -Brock<br> </div> Thu, 04 Nov 2010 07:58:21 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/413128/ https://lwn.net/Articles/413128/ paulj <div class="FormattedComment"> Gah, yeah.. And even at the cinema - at least, I've suffered through uncomfortably loud movies at Cineworld in the UK a few times, and blocked my ears with fingers and/or a shoulder.<br> </div> Wed, 03 Nov 2010 21:03:09 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412966/ https://lwn.net/Articles/412966/ cmccabe <div class="FormattedComment"> I built an RC oscillator, chained it with an op-amp, and used it to drive a speaker. Then I cranked it up to the 20 kHz range. So I can tell you that I can hear above 22 kHz. We did "double-blind tests" where someone else was turning the sound on and off. I could always tell.<br> <p> Some people can hear it, some people can't. Unfortunately, the "can't" people designed the Red Book audio format, apparently. I forget the exact frequency at which it became inaudible.<br> <p> P.S. A lot of people have hearing damage because they listen to music at a volume which is too loud. You need earplugs at most concerts to avoid this.<br> </div> Wed, 03 Nov 2010 02:42:17 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412755/ https://lwn.net/Articles/412755/ Spudd86 <div class="FormattedComment"> Err, generally not sinc; it's usually windowed so as to have better PSNR.<br> </div> Tue, 02 Nov 2010 04:02:58 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412753/ https://lwn.net/Articles/412753/ Spudd86 <div class="FormattedComment"> Nope, you're still implying we can just pick one format and use it; we can't. VoIP apps frequently use lower rates and fewer bits per sample. Some people WILL want their system to do 7.1@96KHz and there's no real reason to stop them. Once you do all the other stuff that GStreamer and pulseaudio have to do ANYWAY just to support common use cases, you might as well support all the other stuff.<br> <p> You CAN'T just say 'all audio is 16bit@44.1KHz' because it simply is not the case: 48KHz audio exists, as does 24 bit audio; some people buy expensive sound cards to get these sorts of things, and you want to tell them they can't have it?<br> <p> All I was objecting to is the first bit. <br> <p> Getting to the rest of your post:<br> <p> Of COURSE nobody expects their mobile phone to spit out 24bit 192KHz 7.1 channel audio, but some people DO expect it from their desktops. GStreamer is used in a very wide variety of places, and some of them need things your phone doesn't; some of them need things you don't ever need, but that's not a reason for GStreamer not to support them. <br> <p> Certainly 32 bit float is as much (more, in fact) sample resolution as you'll ever need in a storage format... but GStreamer is sometimes used in a processing pipeline, so it MAY at some point have a use for doubles... probably not, though.<br> <p> ORC is a perfectly reasonable thing to use for a simple volume scaler, especially on something like a mobile phone where CPU time might be at a premium.<br>
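<p> The quantization point made elsewhere in this thread (a bare volume element on 16-bit samples quantizes because it doesn't dither) is easy to make concrete. A minimal sketch in plain C -- illustrative only, not the actual GStreamer/ORC volume element, and the function names are made up:
<pre>
#include &lt;math.h&gt;
#include &lt;stdint.h&gt;
#include &lt;stdlib.h&gt;

/* Triangular (TPDF) dither: sum of two uniform variables in [-0.5, 0.5]. */
static double tpdf_dither(void)
{
    return rand() / (double)RAND_MAX - rand() / (double)RAND_MAX;
}

/* Scale 16-bit samples by `gain`, adding dither before the final
 * quantization so the rounding error is decorrelated from the signal
 * (plain noise) instead of signal-correlated distortion. */
static void volume_scale_dithered(int16_t *samples, size_t n, double gain)
{
    for (size_t i = 0; i &lt; n; i++) {
        double v = samples[i] * gain + tpdf_dither();
        if (v &gt; 32767.0)  v = 32767.0;    /* clamp to the int16 range */
        if (v &lt; -32768.0) v = -32768.0;
        samples[i] = (int16_t)lrint(v);
    }
}
</pre>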
<p> I think part of the redesign was to make the format negotiation better and more automatic; however, avoiding conversions is always a good idea. (Hey, large chunks of pulseaudio code are dedicated to doing as few conversions as possible, because of phones and embedded stuff. Even on a desktop, rate conversions add error every time you do one, since the bandlimiter isn't perfect, so it introduces aliasing and noise every time it's run; good ones don't introduce much, but they are expensive to compute even on a desktop.)<br> </div> Tue, 02 Nov 2010 03:57:34 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412585/ https://lwn.net/Articles/412585/ nix <div class="FormattedComment"> Ah! So the dolphins have been manipulating our video format work!<br> <p> Mice might need it too, for their supersonic squeaks of delight.<br> <p> Perhaps... Douglas Adams was right?<br> <p> </div> Sun, 31 Oct 2010 13:10:40 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412574/ https://lwn.net/Articles/412574/ alankila <div class="FormattedComment"> It is true that the resampling is typically done by convolving the signal with a sinc, but the effect of this convolution is as if the interpolation had occurred with sin waveforms fit through the sampled data points.<br> </div> Sun, 31 Oct 2010 11:27:03 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412540/ https://lwn.net/Articles/412540/ magnus <blockquote>Given unlimited precision samples a signal which has no energy above the system Nyquist is _perfectly_ re-constructable, not just "good". </blockquote> Theoretically, you don't only need unlimited precision on each sample, you also need to have an infinite number of samples, from time -&#8734; to +&#8734;, to perfectly reconstruct the original signal. <p> In practice, though, audio signals will have some information (harmonics etc.) at higher frequencies, and no filters (not even digital ones) can be perfectly brick-wall shaped, so some aliasing will occur, plus you will have some attenuation below the Nyquist frequency. Sampling at 96 kHz might (if well designed) give you a lot more headroom for these effects. <p> I have no experience with 96 kHz audio so I don't know if this is actually audible or just theory+marketing. <p> Since human hearing is non-linear it's also possible that people can pick up harmonics at higher frequencies even if they can't hear beeps at these frequencies. The only way to know is double-blind testing, I guess... Sat, 30 Oct 2010 16:27:39 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412538/ https://lwn.net/Articles/412538/ corbet Sinc waveforms, actually (sin(&theta;)/&theta;) :) <p> I knew all those signal processing classes would come in useful eventually... Sat, 30 Oct 2010 15:04:33 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412537/ https://lwn.net/Articles/412537/ corbet For all of our experience with audio, there was a small subset of us who were driven absolutely nuts by the weird high-pitched chirper things that the Japanese seem to like to put into doorways for whatever reason. Everybody else wondered what we were griping about. Some people hear higher than others. 
<p> The other thing that nobody has pointed out: if you're sampling at 44KHz, you need a pretty severe low-pass filter if you want to let a 20KHz signal through. That will cause significant audio distortion at the upper end of the frequency range; there's no way to avoid it. A higher sampling rate lets you move the poles up much higher, where you don't mess with stuff in the audio range. <p> That said, I'm not such an audiophile that I'm not entirely happy with CD-quality audio. Sat, 30 Oct 2010 15:01:26 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412536/ https://lwn.net/Articles/412536/ alankila <div class="FormattedComment"> Let's just say that I remain skeptical.<br> <p> Your specific example "20 kHz signal playing with 44 kHz samples, and played at 96 kHz samples" is a particularly poor example. I assume you meant a pure tone signal? Such a tone can be represented by any sampling with a sampling rate &gt; 40 kHz. So, 44 kHz and 96 kHz are equally good with respect to representing that signal. If there is any difference at all favoring the 96 kHz system, it arises from relatively worse engineering in the 44 kHz system -- poorer handling of frequencies around 20 kHz, perhaps -- and not from any intrinsic difference between the representations of the two signals themselves.<br> <p> Many people seem to think---and I am not implying you are one---that digital signals are converted to analog output waveforms as if linear interpolation between sample points were used. From this reasoning, it looks as if higher sampling rates were better, because the linearly interpolated version of the 96 kHz signal would look considerably closer to the "original analog waveform" than its 44 kHz sampling interpolated the same way. But that's not how it works. Digital systems are not interpolated by fitting line segments, but by fitting sin waveforms through the sample points. So in both cases, the original 20 kHz sin() could be equally well reconstructed.<br> </div> Sat, 30 Oct 2010 14:42:28 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412535/ https://lwn.net/Articles/412535/ alankila <div class="FormattedComment"> <font class="QuotedText">&gt; I disagree with this statement. something can be reproduced, but not necessarily _perfectly_</font><br> <p> This may be confusing two ways to look at it: as a mathematical issue, or as an engineering problem. Mathematically the discrete representation and the analog waveform are interchangeable: you can get from one to the other. The quality of the conversion between the two can be made arbitrarily high -- typically, design targets are set beyond the assumed limits of human perception.<br> <p> <font class="QuotedText">&gt;also, any time you have more than one frequency involved, they are going to mix in your sensor, and so you are going to have energy above this frequency.</font><br> <p> Intermodulation distortion can generate extra tones, and depending on how strong the effect is, they may even matter. Such nonlinearities do not need more than one frequency, though.<br> <p> This is normally an undesirable artifact, and our ADC/DACs have evolved to a point where they are essentially perfect with respect to this problem. 
In any case, from the viewpoint of a digital system, artifacts that occurred in the analog realm are part of the signal, and are processed perfectly once captured.<br> <p> <font class="QuotedText">&gt; I am in no way saying that people hear in the ultrasonic directly. However, I am saying that some people listening to a 15KHz sine wave vs a 15KHz square wave will be able to hear a difference.</font><br> <p> The amusing thing is that a 44.1 kHz representation of a 15 kHz square wave will look identical to a 15 kHz sin wave, because none of the pulse's harmonics are within the passband of the system. Do you happen to have a reference where a system such as this was tested with test subjects, so that it would be possible to understand how such a test was conducted?<br> </div> Sat, 30 Oct 2010 14:21:56 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412515/ https://lwn.net/Articles/412515/ dlang <div class="FormattedComment"> Quote: Given unlimited precision samples a signal which has no energy above the system Nyquist is _perfectly_ re-constructable, not just "good". <br> <p> I disagree with this statement. something can be reproduced, but not necessarily _perfectly_<br> <p> also, any time you have more than one frequency involved, they are going to mix in your sensor, and so you are going to have energy above this frequency.<br> <p> sampling faster may not be the most efficient way to get better SNR, but it's actually much easier to sample faster than to sample with more precision.<br> <p> using your example, setting something up to sample 1 bit @ 3MHz may be far cheaper than setting up something to sample 20 bits @ 48KHz. In addition, the low-precision bitstream may end up being more amenable to compression than the high-precision bitstream. with something as extreme as the 1-bit example, simple run-length encoding probably will gain you much more than a 3x compression ratio. That's not to say that a more sophisticated, lossy compression algorithm couldn't do better with the 20 bit samples, but again, which is simpler?<br> <p> I am in no way saying that people hear in the ultrasonic directly. However, I am saying that some people listening to a 15KHz sine wave vs a 15KHz square wave will be able to hear a difference.<br> </div> Sat, 30 Oct 2010 00:36:10 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412513/ https://lwn.net/Articles/412513/ jspaleta <div class="FormattedComment"> I think the existence of inaudible dog whistles is a serious blow against your hypothesis. We've had a much longer experience with audio frequencies near the edge of human perception than you would perhaps realize at first blush. Much of that history pre-dates any attempt at digital sampling. If 99.9% of people can't perceive dog whistles at 22 kHz, they aren't going to hear it played on their Alpine speakers in their car either.<br> <p> Video framing, on the other hand, is relatively quite new... unless you count thumb-powered flipbooks and pen-and-paper animations.<br> <p> -jef<br> <p> <p> <p> </div> Sat, 30 Oct 2010 00:12:38 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412512/ https://lwn.net/Articles/412512/ gmaxwell There is lots of misinformation on this subject out there. <p> Given unlimited precision samples a signal which has no energy above the system Nyquist is _perfectly_ re-constructable, not just "good". 
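<p> A quick numerical check of that claim -- a minimal C sketch using a truncated Whittaker-Shannon sum in place of the infinite ideal one, so the residual error merely shrinks as the window grows rather than vanishing outright:
<pre>
#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

static double sinc(double x)
{
    return x == 0.0 ? 1.0 : sin(M_PI * x) / (M_PI * x);
}

int main(void)
{
    const double fs = 44100.0, f = 15000.0;  /* tone well below Nyquist */
    double max_err = 0.0;

    /* Reconstruct the waveform midway between samples, using 1024
     * neighbouring samples instead of the theoretical infinite sum. */
    for (int k = 1000; k &lt; 1100; k++) {
        double t = (k + 0.5) / fs;           /* an off-grid instant */
        double y = 0.0;
        for (int n = k - 512; n &lt;= k + 512; n++)
            y += sin(2 * M_PI * f * n / fs) * sinc(t * fs - n);
        double err = fabs(y - sin(2 * M_PI * f * t));
        if (err &gt; max_err) max_err = err;
    }
    printf("max reconstruction error: %g\n", max_err);
    return 0;
}
</pre>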
<p> If the signal does have energy above the Nyquist then it's not "no hope": the system is under-determined and there are a number of possible reconstructions. <p> Of course, we don't sample with infinite precision — but increasing the sampling rate is a fairly poor way of increasing the SNR for lower frequencies, if that's your goal. For example, a 1-bit precision 3MHz process can give as much SNR in the 0-20kHz range as a 20-bit 48kHz process, but it takes about 3x the bitrate to do so. <p> 24bit converters with >110dB SNR are readily and cheaply available. These systems can represent audio as loud as 'dangerously loud' with the total noise still dwarfed by the thermal noise in your ear and the room around you. It's effectively infinite precision. Heck, given reasonable assumptions (that you don't need enough dynamic range to cover everything from hearing-damage levels to the faintest discernible sounds), well-mastered CDDA is nearly so too. <p> There has been extensive study of frequency extension into the ultrasonic, and none of the studies I've seen which weren't obviously flawed could support that hypothesis. If this perception exists it is so weak as to be unmeasurable even in ideal settings (much less your common listening environment, which is awash in reflections, distortions, and background noise). There also is no real physiological basis to argue for the existence of significant ultrasonic perception— Heck, if you're posting here you're probably old enough that hearing is mostly insignificant even at 18kHz (HF extension falls off dramatically in the early twenties for pretty much everyone), much less higher. <p> But hey— if you want to _believe_ I've got some dandy homeopathics to sell you. Sat, 30 Oct 2010 00:09:11 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412511/ https://lwn.net/Articles/412511/ dlang <div class="FormattedComment"> I really question the 'common knowledge' and 'studies show' statements that say that people can't tell the difference between a 20KHz signal played with 44KHz samples and one played at 96KHz samples.<br> <p> I remember when the same statements were being made about video, how anything over a 24Hz refresh rate was a waste of time because we had decades of studies that showed that people couldn't tell the difference.<br> <p> Well, they found out that they were wrong there: at 24Hz people stopped seeing things as separate pictures and saw motion instead, but there are still benefits to higher refresh rates.<br> <p> I think the same thing is in play on the audio side.<br> <p> not everyone will be able to tell the difference, and it may even be that the mythical 'average man' cannot, but that doesn't mean that it's not worthwhile for some people. 
It also doesn't mean that people who don't report a difference in a test won't see a difference over a longer timeframe of usage (for example, going from 30Hz to 80Hz refresh rates appears to decrease eye strain and headaches over long time periods, even for people who can't tell the difference between the two when they sit down in front of them side by side).<br> </div> Fri, 29 Oct 2010 23:32:13 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412508/ https://lwn.net/Articles/412508/ dlang <div class="FormattedComment"> by the way, the Nyquist limit isn't the limit for where things sound good, it's the limit beyond which there is no hope of getting anything resembling the correct result.<br> </div> Fri, 29 Oct 2010 23:13:38 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412507/ https://lwn.net/Articles/412507/ dlang <div class="FormattedComment"> it's not that the audio frequencies are &gt; 48KHz (the audio frequencies are almost certainly below 20KHz)<br> <p> It's that using more samples to represent the data makes the resulting audio cleaner.<br> <p> remember that you aren't recording the frequency, you are recording the amplitude at specific instants. the more samples you have, the cleaner the result.<br> </div> Fri, 29 Oct 2010 23:12:47 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412505/ https://lwn.net/Articles/412505/ Uraeus <div class="FormattedComment"> I assume that when you say stuff you mean media files. Well, the reason is that a media file is a collection of things. For instance, most media frameworks and players do their own demuxers (as using library versions makes things like trick modes hard to do), and it is more often the demuxer than the decoder which has to battle with weird files. The second differentiating factor is crash policy. The more broken files your player tries to play, the easier it is for said player to encounter something that makes it crash. This is a security risk. So as a player developer one is forced to make a decision on how strict to be, with more strict meaning fewer crashes but also fewer files being playable.<br> </div> Fri, 29 Oct 2010 22:21:54 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412488/ https://lwn.net/Articles/412488/ nicooo <div class="FormattedComment"> A bottlenose dolphin would need 300 kHz samples<br> </div> Fri, 29 Oct 2010 19:20:37 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412481/ https://lwn.net/Articles/412481/ alankila <div class="FormattedComment"> While that is important to folks that do sampling in the analog domain, once you have actually captured the signal, digital techniques can easily do the rest and represent artifact-free 44.1 kHz audio with the frequency response cut at 20 kHz. There are other reasons to prefer high sampling rates during processing, such as the reduction of artifacts due to the bilinear transform and having free spectrum available for spectral expansion before aliasing occurs due to nonlinear effects. Not all applications need those things, though.<br> <p> However, the idea of consumer-level 96 kHz audio (as opposed to 44.1 kHz audio) is pointless. 
It may sell some specialized, expensive equipment at high markup for people who are into that sort of thing, but there appear to be no practical improvements in the actual sound quality.<br> </div> Fri, 29 Oct 2010 18:34:59 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412480/ https://lwn.net/Articles/412480/ alankila <div class="FormattedComment"> Sorry, but we can. Almost all the media in the world is in this format, your examples being special cases or practically irrelevant. In particular, your example of DVD audio is a special case, as it is actually in the AC3 codec and it is desirable to pass it through the system as-is, because Dolby wants money for encoder implementations. The 96/192/5.1/7.1 figures are not really even relevant to the point I was making.<br> <p> I was talking about performance. Nobody expects a mobile phone to spit out a 7.1 stream, ac3 or not, or whatever. I believe my point was that I wanted to argue the case for a simplified internal pipeline in gstreamer, where special-case formats could be removed and replaced with more general ones. Your 7.1 192 kHz streams could be just 8 32-bit floating point channels for the purposes of the discussion, but I predict that you'd have severe difficulties transmitting those channels to an amplifier.<br> <p> See? This is not a point that is really worth discussing.<br> </div> Fri, 29 Oct 2010 18:14:23 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412461/ https://lwn.net/Articles/412461/ paulj <div class="FormattedComment"> I've wondered that too. Then recently I saw Monty from Xiph explain it in <a href="http://xiph.org/video/vid1.shtml">http://xiph.org/video/vid1.shtml</a> - from about 11min in. Basically, it's because you don't want any freqs &gt; the Nyquist frequency for your sample rate to remain in the signal, or they'll cause aliasing. The ultra-high sample rates basically give you more margin for your low-pass filter, making it easier/cheaper to build.<br> </div> Fri, 29 Oct 2010 16:04:54 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412422/ https://lwn.net/Articles/412422/ nix <div class="FormattedComment"> A foolish question, perhaps, but what are all these far-past-48kHz audio samples targeted at? Not even dogs can hear that high. Bats, perhaps?<br> <p> What hardware would you play them back on?<br> </div> Fri, 29 Oct 2010 14:28:10 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412412/ https://lwn.net/Articles/412412/ wookey <div class="FormattedComment"> Right. I have never understood why each different media player supports a different subset of stuff. As a naive geek who knows very little about multimedia, it seems to me that once I have libx264 and libogg and libquicktime installed, then every media player I have should be able to support those formats. But clearly that's not the case, and there must be something else going on. Do I understand from what you say that VLC, mplayer, totem etc. don't actually use the same codec libraries but each implements its own? But if that's true, what _does_ use these libraries? (I see them on my system.)<br> <p> There seem to be complex interactions between players, lower-level media frameworks and individual codec libraries that I clearly don't understand. Can someone explain (or point to docs that explain)?<br> <p> <p> </div> Fri, 29 Oct 2010 13:19:34 +0000
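<p> The codec libraries wookey asks about do converge on roughly one shape -- open a decoder, feed it compressed bytes, pull PCM back out -- which is the "almost identical APIs" point wahern makes further down this feed; everything else (demuxing the container, clocking, output, crash policy) lives in the player or framework above them. A hypothetical composite of that shared shape; none of these identifiers belong to any real library:
<pre>
#include &lt;stddef.h&gt;
#include &lt;stdint.h&gt;

/* The rough API shape shared by standalone audio codec libraries.
 * Illustrative only: all of these names are invented for this sketch. */
typedef struct decoder decoder_t;

decoder_t *decoder_open(void);                      /* set up codec state    */
int        decoder_feed(decoder_t *d,               /* push compressed bytes */
                        const uint8_t *buf, size_t len);
long       decoder_read(decoder_t *d,               /* pull decoded PCM out  */
                        int16_t *pcm, size_t max_samples);
void       decoder_close(decoder_t *d);

/* What players add on top of this -- their own demuxers, clocking and
 * crash policy -- is where the per-player differences come from. */
</pre>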
GStreamer: Past, present, and future https://lwn.net/Articles/412388/ https://lwn.net/Articles/412388/ dgm <div class="FormattedComment"> Is that in an open source project?<br> </div> Fri, 29 Oct 2010 10:36:59 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412358/ https://lwn.net/Articles/412358/ Spudd86 <div class="FormattedComment"> "We can make the reasonable assumption that a media system is dealing with 16-bit 44.1 kHz stereo audio."<br> <p> No, we cannot. Want to watch a DVD? Then you're dealing with 5.1 48KHz audio. DVD-Audio? Could be up to 192KHz. Blu-Ray? 7.1/5.1, and IIRC could be 96KHz. DVD-Video also allows 96KHz stereo. <br> <p> And that's not even getting into stuff that's slightly less common than things almost everybody does at some point. (OK, DVD-Audio doesn't really come up, since there are no software players for it, and pulseaudio currently caps its sample rate at 96KHz so it has something to use as a maximum sample rate.)<br> </div> Fri, 29 Oct 2010 00:41:21 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412357/ https://lwn.net/Articles/412357/ Spudd86 <div class="FormattedComment"> Then pulse doing resampling seems pretty likely, since so much audio is 44.1KHz<br> </div> Fri, 29 Oct 2010 00:35:14 +0000 -ENOTGST https://lwn.net/Articles/412356/ https://lwn.net/Articles/412356/ Spudd86 <div class="FormattedComment"> Or likely even pulseaudio, it's probably alsa<br> </div> Fri, 29 Oct 2010 00:31:39 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412308/ https://lwn.net/Articles/412308/ oak <div class="FormattedComment"> When playing just music on the N900, dynamic frequency scaling brings the CPU speed down to 250MHz (easiest to see from the /proc/cpuinfo Bogomips value changes). Pulseaudio needs to do more complex/heavier stuff on the N900 than it normally does on the desktop (sound volume increase, speaker protection...).<br> <p> Oprofile says that about a third of the CPU goes to pulseaudio's internal workings; the rest is sound data manipulation, which is accelerated with NEON SIMD instructions (as you can see by objdump'ing the related libraries' code).<br> <p> The N900 uses a TI Omap3 (ARM v7), i.e. it has HW floating point support. The sound HW is AFAIK 48kHz natively.<br> <p> </div> Thu, 28 Oct 2010 20:22:11 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412143/ https://lwn.net/Articles/412143/ wahern <div class="FormattedComment"> It's because none of the frameworks have the proper design. Most of them shoe-horn plugins into a particular producer-consumer model on the premise that it makes them easier to write, but ultimately it just results in balkanization of efforts.<br> <p> The very low-level codec implementations--LAME, mpg123, FAAC, FAAD, etc--all share almost identical APIs, even though there was zero cooperation. Given the evident success of that API scheme, why do all these other frameworks depart from that precedent? They try to bake in all sorts of bells and whistles long before the best API for doing so becomes evident, and the end result is crappy performance and nightmarish interfaces.<br> <p> FFmpeg comes the closest to a good API, and it lies at the heart of many "frameworks", but it has several defects and shortcomings, such as enforcing a threaded pull scheme, and not providing a simple tagged data format which would aid in timing and synchronization. (For my projects I repurpose RTP for this purpose, because IMO it's more valuable to define an interface at the data level than at the function level.)<br> <p> </div> Wed, 27 Oct 2010 21:12:27 +0000
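<p> A sketch of the sort of tagged per-buffer format being described there -- modelled loosely on the RTP fixed header (RFC 3550), but trimmed and renamed for illustration rather than matching the actual RTP wire format:
<pre>
#include &lt;stdint.h&gt;

/* A small tag travelling in front of every media buffer: enough for
 * ordering, timing and stream identity.  The field set is illustrative. */
struct media_tag {
    uint16_t seq;        /* detects loss and reordering                 */
    uint32_t timestamp;  /* in stream clock ticks, e.g. 1/44100 s       */
    uint32_t ssrc;       /* which stream this buffer belongs to         */
    uint8_t  payload;    /* codec / payload type                        */
    uint8_t  marker;     /* e.g. flags the last packet of a video frame */
};

/* Audio/video sync then reduces to comparing timestamps mapped onto a
 * common clock, independent of any particular callback chain. */
</pre>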
GStreamer: Past, present, and future https://lwn.net/Articles/412141/ https://lwn.net/Articles/412141/ wahern <div class="FormattedComment"> Oops. I meant VLC, not Vorbis.<br> <p> </div> Wed, 27 Oct 2010 20:55:29 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412111/ https://lwn.net/Articles/412111/ wahern <div class="FormattedComment"> That's how my audio pipeline works. I have a system written from scratch--i.e. no FFmpeg--that pulls Internet radio (Windows Media, Shoutcast, Flash, or 3GPP; over HTTP or RTSP; in MP3 or AAC) and transcodes the codec and format to a requested format (same combinations as before), resampling and possibly splicing in a third stream.<br> <p> If the decoder produces audio which isn't in a raw format that the encoder can handle (wrong number of channels, sample rate, etc) then the controller transforms it before passing it to the encoder. Of course, ideally both the encoder and decoder can handle the widest possible range of formats, because interleaving and resampling is incredibly slow, mostly because it takes up memory bandwidth, not because the CPU is overloaded in doing the conversions. But sometimes you have to resample or change the channels because that's how it's wanted downstream, no matter that the encoder can handle it.<br> <p> The server design can handle close to 20 unique transcoded streams per CPU on something like a Core2 (averaging 3-4% CPU time per stream)--the server doesn't use threads at all; each process is fully non-blocking with an event loop. (It can also reflect internally, which significantly increases the number of output streams possible.)<br> <p> Systems which spin on gettimeofday--or rely on some other tight loop with fine-grained timeouts--are wasteful, too. There are various ways to optimize clocking by being smart about how you poll and buffer I/O; you can usually readily gauge the relationship between I/O and samples. For example, a single AAC frame will always produce 1024 samples*. So even if the size of a particular frame isn't fixed, you can at least queue up so many frames in a big gulp, knowing how many seconds of audio you have, sleep longer, and then do a spurt of activity, letting the output encoder buffer on its end if necessary. If you need a tight timed loop to feed a device, it should be in its own process or thread, separate from the other components, so it isn't hindering optimal buffering.<br> <p> [*AAC can also produce 960 samples per frame, but I've never seen it in practice; in any event it's in the metadata. MP3 encodes 384 or 1152 samples per frame. So if you know the sample rate and the number of samples, you know exactly how many seconds of compressed audio you have.]<br> <p> My pipeline can do double or triple the work that FFmpeg, Vorbis, and others can handle, even though it's passing frames over a socket pair (the backend process decodes protocols, formats, and codecs, but encodes only to a specific codec; the front-end encodes to a particular format and protocol; I did this for simplicity and security). It's a shame because I'm no audiophile, and many of the engineers on those teams are much more knowledgeable about the underlying coding algorithms.<br>
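<p> The samples-per-frame arithmetic in the footnote above is all the clocking math a buffering loop needs -- a minimal sketch:
<pre>
#include &lt;stdio.h&gt;

/* Seconds of audio represented by `frames` compressed frames.  AAC
 * normally carries 1024 samples per frame, MPEG-1 Layer III 1152. */
static double buffered_seconds(long frames, int samples_per_frame, int rate)
{
    return frames * (double)samples_per_frame / rate;
}

int main(void)
{
    /* e.g. 200 AAC frames at 44.1 kHz buffer ~4.6 s of audio: enough
     * to sleep on, instead of spinning on gettimeofday(). */
    printf("%.2f s\n", buffered_seconds(200, 1024, 44100));
    return 0;
}
</pre>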
<p> Adding video into the mix does add complexity, but you can be smart about it. All the same optimization possibilities apply; and synchronization between the audio and video streams isn't computationally complex by itself; it's all about being smart about managing I/O. Like I said earlier, pipelines should be separated completely from the player (which might need to drop or add filler to synchronize playback). It wouldn't be a bad idea at all to write a player which only knows how to play back RTSP, and then write a back-end pipeline which produces RTSP channels. That's a useful type of abstraction missing entirely from all the players I've seen. RTSP gives you only rough synchronization, so the back-end can be highly optimized. The client can then handle the fine-grained synchronization. Overall you're using your resources far better than by trying to hack everything into one large callback chain.<br> <p> </div> Wed, 27 Oct 2010 19:23:05 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412108/ https://lwn.net/Articles/412108/ drag <div class="FormattedComment"> Another example is if you're using it for something like VoIP. <br> <p> If you want to be able to do voicemail and conferences, transfer phones, put people on hold, work with multiple protocols, hook into a T1 or POTS and all that, then your VoIP system is going to require a backend that can handle manipulating audio and transcoding between different formats. <br> <p> Sure, most of the audio formats used in VoIP are uncomplicated, to say the least, but if you're handling a call center with 100 phones, multiple voice bridges and all that stuff, then it adds up pretty quickly.<br> <p> Then another issue is the sound card itself. Sound cards only support certain audio formats, and you're going to have to support a multitude of them if you're going to have an efficient way of outputting to the real world.<br> <p> <p> </div> Wed, 27 Oct 2010 18:09:40 +0000 GStreamer: Past, present, and future https://lwn.net/Articles/412097/ https://lwn.net/Articles/412097/ alankila <div class="FormattedComment"> We can make the reasonable assumption that a media system is dealing with 16-bit 44.1 kHz stereo audio.<br> <p> I disagree on the need for 64 bits. 32-bit floats already have 23 bits of precision in the mantissa, and plenty of range in the exponent. Given the typical quantization to just 16 bits, it is hard to argue for the need of more intermediate precision.<br> <p> I agree it's not necessarily the extra code that hurts me (although I do find gstreamer's modules to be pretty gory, and the use of ORC for a trivial volume scaler was astonishing); what I perceive as poor architecture hurts me. Especially the audio pipeline looks pretty strange to me as an audio person. The need to specify conversions explicitly is baffling. How does one even know that some format won't in the future be added to or removed from a module, thus either requiring the addition of a conversion step or making the specified conversion unnecessary and potentially even harmful?<br> <p> I am convinced that a more ideal audio pipeline would automatically convert between buffer types where necessary and possible, and that audio processing would always imply promotion to some general high-fidelity type, be it integer or float (might be a compile-time switch), so that at most there is one promotion and one demotion within any reasonable pipeline.<br> </div> Wed, 27 Oct 2010 18:07:34 +0000
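<p> A structural sketch of that last idea -- promote whatever arrives to one high-fidelity working type at the head of the pipeline, process only in that type, and demote exactly once at the sink. Illustrative C only, not GStreamer's actual API:
<pre>
#include &lt;stddef.h&gt;
#include &lt;stdint.h&gt;

/* Every processing element sees one working format: 32-bit float. */
typedef void (*process_fn)(float *buf, size_t n);

/* Promotion happens exactly once, at the pipeline head. */
static void promote_s16(const int16_t *in, float *out, size_t n)
{
    for (size_t i = 0; i &lt; n; i++)
        out[i] = in[i] / 32768.0f;
}

/* Demotion happens exactly once, at the sink (dither omitted here). */
static void demote_s16(const float *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i &lt; n; i++) {
        float v = in[i] * 32768.0f;
        if (v &gt; 32767.0f)  v = 32767.0f;   /* clamp to the int16 range */
        if (v &lt; -32768.0f) v = -32768.0f;
        out[i] = (int16_t)v;
    }
}

/* Elements in between never negotiate formats; they just run in order. */
static void run_chain(process_fn *chain, size_t stages, float *buf, size_t n)
{
    for (size_t s = 0; s &lt; stages; s++)
        chain[s](buf, n);
}
</pre>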