LWN: Comments on "LinuxCon: Kernel roundtable covers more than just bloat" https://lwn.net/Articles/354835/ This is a special feed containing comments posted to the individual LWN article titled "LinuxCon: Kernel roundtable covers more than just bloat". en-us Thu, 23 Oct 2025 08:48:29 +0000 Thu, 23 Oct 2025 08:48:29 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net SSD https://lwn.net/Articles/359479/ https://lwn.net/Articles/359479/ wookey <div class="FormattedComment"> Well said David. I thought Linus was talking rot there too. It's bad enough all the manufacturers thinking the black-box approach is a sensible one - we really don't want kernel people who don't know any better getting that idea as well. <br> </div> Thu, 29 Oct 2009 18:11:17 +0000 macro overhead https://lwn.net/Articles/356856/ https://lwn.net/Articles/356856/ gvy <div class="FormattedComment"> <font class="QuotedText">&gt; make linux a micro kernel :-)</font><br> Oh yeah, that would make those pesky 12% a non-issue. :]<br> </div> Wed, 14 Oct 2009 10:03:44 +0000 pride comes before a fall https://lwn.net/Articles/356854/ https://lwn.net/Articles/356854/ gvy <div class="FormattedComment"> <font class="QuotedText">&gt; And pride is what built this kernel.</font><br> I hope *not*.<br> </div> Wed, 14 Oct 2009 09:52:39 +0000 Features no excuse https://lwn.net/Articles/355994/ https://lwn.net/Articles/355994/ bersl2 <blockquote>But if you don't use the features, you should not need to pay the price! If I understand correctly, the slowdown has been seen in repeatable benchmarks that can be run on both old and new kernel versions. Therefore the benchmarked code certainly isn't using any new features, but it still gets slowed down. Not justifiable.</blockquote><p>Then configure out what you don't want already.</p> <p>Really, you think going with your distro's generic kernel is efficient? It doesn't take very long to find /proc/config* and take out some of the above-mentioned features that can't be modular.</p> <p>That, or yell at your distro, for the little good that will do.</p> Thu, 08 Oct 2009 09:10:28 +0000 Features no excuse https://lwn.net/Articles/355995/ https://lwn.net/Articles/355995/ dlang <div class="FormattedComment"> adding SELinux defiantly slows things down, and if you run a benchmark on a system running a kernel compiled with SELinux you will get lower results than if you run the same benchmark on the same kernel without SELinux<br> <p> so to not use the feature of SELinux you would compile a kernel without it.<br> <p> the same thing goes for many fetures, turning them on at compile time increases the cache footprint and therefor slows the system, even if you don't use the feature. but you (usually) do have the option to not compile the code into the kernel to really avoid the runtime cost of them.<br> </div> Thu, 08 Oct 2009 09:03:48 +0000 Features no excuse https://lwn.net/Articles/355991/ https://lwn.net/Articles/355991/ renox <div class="FormattedComment"> [[But if you don't use the features, you should not need to pay the price!<br> If I understand correctly, the slowdown has been seen in repeatable benchmarks that can be run on both old and new kernel versions. Therefore the benchmarked code certainly isn't using any new features, but it still gets slowed down. 
Features no excuse - https://lwn.net/Articles/355991/ - renox - Thu, 08 Oct 2009 08:20:43 +0000

> But if you don't use the features, you should not need to pay the price! If I understand correctly, the slowdown has been seen in repeatable benchmarks that can be run on both old and new kernel versions. Therefore the benchmarked code certainly isn't using any new features, but it still gets slowed down. Not justifiable.

Linus referred to the icache footprint (size) of the kernel: if you add features, even unused ones, they increase the size of the generated code, so they reduce performance.
Sure, if you have an option to remove the code from the kernel at compilation time, then this issue shouldn't happen. So which configuration did Intel benchmark?

Without specific figures it's difficult to know where the issue is. I wouldn't be surprised if SELinux or virtualisation were the culprit: these features seem quite invasive.

The causes of bloat? - https://lwn.net/Articles/355985/ - kragil - Thu, 08 Oct 2009 07:01:46 +0000

Yeah, it was clear that Linus wasn't happy that the "huge and bloated" thing was the only part nearly all the media concentrated on (leaving out the "unacceptable but unavoidable" part).

In the interview he said that "at least Linux isn't this fat ugly pig that should have been shot 15 years ago".

I'd like to think that Linus is so bright that the bloat statement was intentional, to get the kernel community working on a solution (don't tell me there isn't one; that would be way too easy), but he probably does not have these mad Sun Tzu communication skillz.

Maybe next time add the pig comment to put things into perspective for the media?

SSD - https://lwn.net/Articles/355586/ - dwmw2 - Tue, 06 Oct 2009 06:23:08 +0000

We should probably take this discussion elsewhere. Your input would be welcome on the MTD list, where I've started a thread (http://lists.infradead.org/pipermail/linux-mtd/2009-October/027469.html) about what we want the hardware to look like, if we could have it our way.

But just a brief response...

> - Interoperability without losing flexibility.

This is still possible with a more flexible hardware design - you just implement the translation layer inside your driver, for legacy systems. M-Systems were doing this years ago with the DiskOnChip. More recently, take a look at the Moorestown NAND flash driver. You can happily use FAT on top of those. But of course you do have the opportunity to do a whole lot *better*, too. And also you have the opportunity to fix the translation layer if/when it goes wrong. And to recover your data.

> - Performance.

But this isn't being done in hardware. It's being done in software, on an extra microcontroller.

Yes, we do need to look carefully at the interface we ask for, and make sure it can perform well. But there's no performance-based reason for the SSD model.

> - Fast development.

You jest, surely? We had TRIM support for FTL in Linux last year, developed in order to test the core TRIM code. When do we get it on "real hardware"? This year? Next?

Being "wedged between stable interfaces" isn't a boon, in this case. Because it's wedged under an *inappropriate* stable interface, we are severely hampered in what we can do with it.

> People that look at SSDs and see them just as disks and don't think about the future will think it's best if the hardware does as much as possible. But if you forget the classic disk model and look at what's really going on, it seems obvious that the classic disk model isn't that simple anyway and doesn't fit flash or how the hardware looks and could be used.

Agreed. I think it's OK for the hardware to do the same kind of thing that disk hardware does for us - ECC, and some block remapping to hide bad blocks. But that's all; we don't want it implementing a whole file system of its own just so it can pretend to be spinning rust. In particular, perpetuating the myth of 512-byte sectors is just silly.
SSD - https://lwn.net/Articles/355440/ - i3839 - Mon, 05 Oct 2009 13:09:43 +0000

I agree with you, but I understand the other side too.

> But I see absolutely no reason why we should put up with the "hardware" actually doing that kind of thing for us, behind our back. And badly.

Three reasons:

- Interoperability without losing flexibility. For a "dumb" block driver to work, the translation table must be fixed. And when running in legacy mode the drive will be very slow. There is a heap of problems waiting when it gives write access too. It's very hard to get people to switch filesystems. They mostly use the default one from their OS, or FAT. As long as disks are sold as units and not as part of a system, this will be true.

- Performance. Having dedicated hardware doing everything is faster and uses less power. Not talking about an ARM microcontroller, but a custom ASIC. (Just compare Intel's SSD idle/active power usage to others'.)

- Fast development. Currently the flash chip interface is standardized with ONFI and the other end with SATA. All the interesting development happens in between. Because it's wedged between stable interfaces it can change a lot without impacting anything else (except when it's done badly ;-).

So short term, the situation is quite hopeless for direct hardware access. The best hope is probably SSDs with free specifications and open source firmware.

Long term, I think we should get rid of the notion of a disk and move to a model resembling RAM. You could buy arrays of flash and plug them in, instead of whole disks (they could look like "disks" to the user, but that's another matter). This decouples the flash from the controller and makes data recovery easier in case the controller dies.

What is needed is a flash-specific interface which replaces SATA and implements the features that all good flash file systems need: things like multi-chip support, ECC handling, background erases and scatter-gather DMA. Perhaps throw in enough support to implement RAID fast in software, if it makes enough sense. Basically a standardized small, simple and fast flash controller, preferably accessible via some kind of host memory interface (PCIe on x86). Make sure to make it flexible enough to handle things like MRAM/PRAM/FeRAM too.

Maybe AHCI is good enough after a few adaptations, but it can probably be a lot better. Or perhaps some other interface from the embedded world fits the description.

This will make embedding the controller on a SoC easier too, without the need for special support in the OS. SATA is really redundant in such cases.

I'm pretty sure most flash controllers already have such a chip, but don't expose it. Instead they hide it behind an embedded microcontroller which handles SATA and implements the FTL. They should standardize it and get rid of the power-hungry, complexity-adding bloat.

I also think it's crucial to get optimal performance, as flash gets faster and faster. It seems pointless to get faster and faster SATA when PCIe is already fast enough. It's also silly to fake SATA by just implementing an AHCI controller with direct flash access. Keep SATA around for real external storage, not something that doesn't take much space anyway.

People that look at SSDs and see them just as disks and don't think about the future will think it's best if the hardware does as much as possible. But if you forget the classic disk model and look at what's really going on, it seems obvious that the classic disk model isn't that simple anyway and doesn't fit flash or how the hardware looks and could be used.
How much checking do you need to do? - https://lwn.net/Articles/355438/ - alex - Mon, 05 Oct 2009 10:43:48 +0000

I'm all for increasing the security of the kernel. However, I feel the ideal* case the kernel should be striving for is a compare/branch for the check. Does SELinux do any caching of its authentication results?

For example, once you have validated that a process can read a given file descriptor, do you need to re-run the whole capability-checking logic for every sys_read()?

Of course, any such caching probably introduces another attack vector, so care would have to be taken with the implementation.

* ideal being a target, even if you may never actually reach that goal.
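SELinux does cache its decisions, in the access vector cache (AVC), so a repeated check is close to the compare-and-branch asked for here. A toy, single-threaded sketch of the general idea follows; all names, the hash, and compute_from_policy() are invented, and a real cache must also be invalidated on policy reload, which is exactly the implementation care the comment worries about.

    #include <stdbool.h>
    #include <stdint.h>

    #define CACHE_SLOTS 512

    /* Toy stand-in for the expensive full policy walk (hypothetical). */
    static uint32_t compute_from_policy(uint32_t ssid, uint32_t tsid,
                                        uint16_t tclass)
    {
        (void)ssid; (void)tsid; (void)tclass;
        return 0x3;  /* pretend the policy allows operations 0 and 1 */
    }

    struct avc_entry {
        uint32_t ssid, tsid;   /* source and target security IDs */
        uint16_t tclass;       /* object class, e.g. "file" */
        uint32_t allowed;      /* bitmask of permitted operations */
        bool     valid;
    };

    static struct avc_entry cache[CACHE_SLOTS];

    static unsigned slot(uint32_t ssid, uint32_t tsid, uint16_t tclass)
    {
        return (ssid ^ (tsid << 2) ^ ((uint32_t)tclass << 9)) % CACHE_SLOTS;
    }

    /* Fast path on a hit: one hash, a few compares, one branch. */
    bool has_perm(uint32_t ssid, uint32_t tsid, uint16_t tclass,
                  uint32_t requested)
    {
        struct avc_entry *e = &cache[slot(ssid, tsid, tclass)];

        if (e->valid && e->ssid == ssid && e->tsid == tsid &&
            e->tclass == tclass)
            return (e->allowed & requested) == requested;

        /* Miss: do the policy computation once, then cache it.  A real
         * cache must also be flushed when the policy changes. */
        e->ssid = ssid; e->tsid = tsid; e->tclass = tclass;
        e->allowed = compute_from_policy(ssid, tsid, tclass);
        e->valid = true;
        return (e->allowed & requested) == requested;
    }

On a hit there is no policy walk at all, which is what makes rechecking on every syscall affordable.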
Features no excuse - https://lwn.net/Articles/355435/ - eru - Mon, 05 Oct 2009 10:34:03 +0000

> Features cost performance.

But if you don't use the features, you should not need to pay the price! If I understand correctly, the slowdown has been seen in repeatable benchmarks that can be run on both old and new kernel versions. Therefore the benchmarked code certainly isn't using any new features, but it still gets slowed down. Not justifiable.

> You could have a LIGHTNING FAST DOS box today. What good would it do you?

Bad comparison. MS-DOS always had severe problems that really did not have much to do with its small footprint. It was brain-damaged already on day one. An OS that does more or less what MS-DOS did, but in a sensible and stable way, might still be useful.

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355432/ - cmccabe - Mon, 05 Oct 2009 08:11:20 +0000

> The Moore's law reference stinks. It is a typical commercial vendor apologist view. Sorry, our software sucks, we know, but just buy faster hw and make up for our crap.
>
> When you argue that way, you take no pride in your code. And pride is what built this kernel.

Well, maybe. You can have clean code that runs 12% slower, or you can have code that's #ifdef'ed to hell but runs at the old speed. In that case, which would you rather have?

Keep in mind, if you choose route #2, people in 2015 might use your name as a curse...

Obviously this is an oversimplification. But still, the point remains: don't criticize the code until you've seen it and understand the tradeoffs.

The causes of bloat? - https://lwn.net/Articles/355401/ - nevets - Sun, 04 Oct 2009 17:26:01 +0000

I've seen this with ftrace. Running the function graph tracer shows that a good amount of time is spent in the SELinux code. The price you pay for security.

One might argue that we've become 12% slower, but > 12% more secure.
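For anyone wanting to reproduce that observation, the function graph tracer is driven through the debugfs tracing files. A minimal user-space sketch, assuming a kernel built with CONFIG_FUNCTION_GRAPH_TRACER, debugfs mounted at the usual /sys/kernel/debug, and root privileges; look for selinux_* and avc_* entries in the duration column of the output.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Write a string to one of the tracing control files. */
    static void write_file(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(1); }
        fputs(val, f);
        fclose(f);
    }

    int main(void)
    {
        char line[1024];
        FILE *trace;
        int i;

        write_file("/sys/kernel/debug/tracing/current_tracer", "function_graph");
        write_file("/sys/kernel/debug/tracing/tracing_on", "1");

        sleep(2);  /* let some kernel activity accumulate */

        write_file("/sys/kernel/debug/tracing/tracing_on", "0");

        /* Dump the first chunk of the captured call graph. */
        trace = fopen("/sys/kernel/debug/tracing/trace", "r");
        if (!trace) { perror("trace"); return 1; }
        for (i = 0; i < 40 && fgets(line, sizeof(line), trace); i++)
            fputs(line, stdout);
        fclose(trace);
        return 0;
    }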
LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355385/ - fuhchee - Sun, 04 Oct 2009 12:04:54 +0000

> ... Too focused on solving customers' immediate needs, and having too little time to go hunt down whatever catches their attention ...

I suspect it's the other way around. Many customers care deeply about performance, and it is their vendors who must perform code-karate against this kind of "bloat" (slowdown). To justify each new thing, LKML rarely carries data beyond microbenchmarks.

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355382/ - Los__D - Sun, 04 Oct 2009 09:41:15 +0000

Features cost performance. Drops for no reason are, of course, a problem, but tradeoffs will need to be made sometimes.

You could have a LIGHTNING FAST DOS box today. What good would it do you?

SSD - https://lwn.net/Articles/355307/ - job - Sat, 03 Oct 2009 07:45:04 +0000

Did Ted not read Val's articles on LWN, or does he just not agree?

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355228/ - simonl - Fri, 02 Oct 2009 18:09:26 +0000

The Moore's law reference stinks. It is a typical commercial vendor apologist view. Sorry, our software sucks, we know, but just buy faster hw and make up for our crap.

When you argue that way, you take no pride in your code. And pride is what built this kernel.

Maybe the kernel devs have become too employed. Too focused on solving customers' immediate needs, and having too little time to go hunt down whatever catches their attention, big or small, with no prospects to please a project manager.

But look what Apple just did in their latest release: nothing spectacular, except cleaning up. Someone has had the guts to nack new features and focus on removal.

SSD - https://lwn.net/Articles/355149/ - dwmw2 - Fri, 02 Oct 2009 09:31:43 +0000

> The flash hardware itself is better placed to know about and handle failures of its cells, so that is likely to be the place where it is done, he said.

I was biting my tongue when he said that, so I didn't get up and heckle.

I think it's the wrong approach. It was all very well letting "intelligent" drives remap individual sectors underneath us so that we didn't have to worry about bad sectors or C-H-S and interleaving. But what the flash drives have to do to present a "disk" interface is *much* more than that; it's wrong to think that the same lessons apply here.

What the SSD does internally is a file system all of its own, commonly called a "translation layer". We then end up putting our own file system (ext4, btrfs, etc.) on top of that underlying file system.

Do you want to trust your data to a closed source file system implementation which you can't debug, can't improve and - most scarily - can't even fsck when it goes wrong, because you don't have direct access to the underlying medium?

I don't, certainly. The last two times I tried to install Linux to a SATA SSD, the disk was corrupted by the time I booted into the new system for the first time. The "black box" model meant that there was no chance to recover - all I could do with the dead devices was throw them away, along with their entire contents.

File systems take a long time to get to maturity. And these translation layers aren't any different. We've been seeing for a long time that they are completely unreliable, although newer models are *supposed* to be somewhat better. But still, shipping them in a black box with no way for users to fix them or recover lost data is a *bad idea*.

That's just the reliability angle; there are also efficiency concerns with the filesystem-on-filesystem model. Flash is divided into "eraseblocks" of typically 128KiB or so, getting larger as devices get larger. You can write in smaller chunks (typically 512 bytes or 2KiB, but also getting larger), but you can't just overwrite things as you desire. Each eraseblock is a bit like an Etch-A-Sketch. Once you've done your drawing, you can't just change bits of it; you have to wipe the whole block.

Our flash will fill up as we use it, and some of the data on the flash will still be relevant. Other parts will have been rendered obsolete: replaced by other data, or just deleted files that aren't relevant any more. Before our flash fills up completely, we need to recover some of the space taken by obsolete data. We pick an eraseblock, write out new copies of the data which are still *valid*, and then we can erase the selected block and re-use it. This process is called garbage collection.

One of the *biggest* disadvantages of the "pretend to be disk" approach is addressed by the recent TRIM work (http://lwn.net/Articles/293658/). The problem was that the disk didn't even *know* that certain data blocks were obsolete and could just be discarded. So it was faithfully copying those sectors around from eraseblock to eraseblock during its garbage collection, even though the contents of those sectors were not at all *relevant* - according to the file system, they were free space!

Once TRIM gets deployed for real, that'll help a lot.
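(Aside: on Linux this discard machinery is reachable from user space as the BLKDISCARD ioctl on a block device; a minimal sketch follows. It is destructive - the discarded range's contents are gone afterwards - so point it only at a scratch device.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        /* byte offset and length of the range to discard: first 1MiB */
        uint64_t range[2] = { 0, 1 << 20 };
        int fd;

        if (argc < 2) {
            fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, BLKDISCARD, range) < 0)
            perror("BLKDISCARD");  /* device or driver lacks discard support */

        close(fd);
        return 0;
    }

The kernel maps this onto the driver's discard support, where the driver has any, so the device learns that the range is free space.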
But when the "disk" is doing this for us behind our back in its own internal file system, we doesn't get the opportunity to do so.<P> I don't think Ted is right that the flash hardware is in the best place to handle <I>"failures of its cells"</i>. In the SSD model, the flash hardware doesn't do that anyway &mdash; it's done by the file system on the embedded microcontroller sitting <em>next</em> next to the flash.<P> I am certain that we can do better than that in our <em>own</em> file system code. All we need is a small amount of information from the flash. Telling us about ECC corrections is a first step, of course &mdash; when we had to correct a bunch of flipped bits using ECC, it's getting on for time to GC the eraseblock in question, writing out a clean copy of the data elsewhere. And there are technical reasons why we'll also want the flash to be able to say <I>"please can you GC eraseblock #XX soon"</I>. <P> But I see absolutely no reason why we should put up with the "hardware" actually doing that kind of thing for us, behind our back. And badly.<P> Admittedly, the need to support legacy environments like DOS and to provide <TT>INT 13h</TT> "DISK BIOS" calls or at least a "block device" driver will never really go away. But that's not a problem. There are plenty of examples of translation layers done in <em>software</em>, where the OS really does have access to the real flash but still presents a block device driver to the OS. Linux has about 5 of them already. The corresponding "dumb" devices (like the M-Systems DiskOnChip which used to be extremely popular) are great for Linux, because we can use real file systems on them directly.<P> At the very least, we want the "intelligent" SSD devices to have a pass-through mode, so that we can talk directly to the underlying flash medium. That would <em>also</em> allow us to try to recover our data when the internal "file system" screws up, as well as allowing us to do things properly from our own OS file system code. Fri, 02 Oct 2009 09:31:43 +0000 LinuxCon: Kernel roundtable covers more than just bloat https://lwn.net/Articles/355137/ https://lwn.net/Articles/355137/ karthik_s1 <div class="FormattedComment"> make linux a micro kernel :-)<br> </div> Fri, 02 Oct 2009 04:48:33 +0000 LinuxCon: Kernel roundtable covers more than just bloat https://lwn.net/Articles/355127/ https://lwn.net/Articles/355127/ DOT <div class="FormattedComment"> You'd think Linux-related sites would already be providing Theora videos (self-hosted, or through dailymotion or blip.tv). It's not like their target demographic is likely to be running Internet Explorer. ;)<br> </div> Fri, 02 Oct 2009 02:16:41 +0000 LinuxCon: Kernel roundtable covers more than just bloat https://lwn.net/Articles/355120/ https://lwn.net/Articles/355120/ giraffedata Linus was talking about some kind of bloat that makes the kernel slower. I don't think removing device driver or filesystem driver modules from the disk or configuring out most of the configurable features speeds things up. Fri, 02 Oct 2009 00:25:30 +0000 LinuxCon: Kernel roundtable covers more than just bloat https://lwn.net/Articles/355048/ https://lwn.net/Articles/355048/ flewellyn <div class="FormattedComment"> That's a good point: there's no rule that says you have to run a distro-built kernel, even in a production environment. A customized kernel with unneeded features trimmed may help matters. 
LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355137/ - karthik_s1 - Fri, 02 Oct 2009 04:48:33 +0000

make linux a micro kernel :-)

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355127/ - DOT - Fri, 02 Oct 2009 02:16:41 +0000

You'd think Linux-related sites would already be providing Theora videos (self-hosted, or through dailymotion or blip.tv). It's not like their target demographic is likely to be running Internet Explorer. ;)

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355120/ - giraffedata - Fri, 02 Oct 2009 00:25:30 +0000

Linus was talking about some kind of bloat that makes the kernel slower. I don't think removing device driver or filesystem driver modules from the disk, or configuring out most of the configurable features, speeds things up.

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355048/ - flewellyn - Thu, 01 Oct 2009 16:43:50 +0000

That's a good point: there's no rule that says you have to run a distro-built kernel, even in a production environment. A customized kernel with unneeded features trimmed may help matters. (For instance, I have no need for most of the drivers or filesystems.)

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355047/ - josh - Thu, 01 Oct 2009 16:39:48 +0000

Direct link to the Linux kernel roundtable video: http://www.techcast.com/events/linuxcon/roundtable/roundtable.flv

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355044/ - smoogen - Thu, 01 Oct 2009 16:20:21 +0000

I think the issue is defining "bloat". Most of the features that the unnamed priestess of Delphi has are also features that the various people running it have wanted, if for no other reason than all the business case rules that require them to be audited. Now for ye-old hacker in the basement... those are things that aren't needed or wanted. It's going to be up to the ye-old hackers, in the end, to dive in and fix/remove the bloat to see if the performance can be regained. My guess though is that for mainstream Linux it will mostly be there for some time.

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/355026/ - Velmont - Thu, 01 Oct 2009 15:28:10 +0000

It's surprising that it doesn't work in Gnash. Many video players do; I should think Linux Magazine thought about that!

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/354989/ - job - Thu, 01 Oct 2009 10:26:32 +0000

Considering what 5% extra performance costs in hardware in the mainstream segment, running a year-old kernel would give you the same benefit (if you don't care about the newest features). That's not a good situation!

Regarding the video, you can wget http://www.techcast.com/events/linuxcon/roundtable/roundtable.flv and feed it to mplayer/xine if you have a recent ffmpeg with VP6 installed.

The causes of bloat? - https://lwn.net/Articles/354985/ - alex - Thu, 01 Oct 2009 09:46:03 +0000

Listening to this week's FLOSS Weekly, which interviewed Linus, I noted that a lot of the "bloat" comes from features like auditing and security checking. I don't know if it's possible to build a stripped-down kernel without these things and see if the performance comes back. Not that I'd want to run such a kernel on a production site, though...

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/354982/ - gevaerts - Thu, 01 Oct 2009 09:32:14 +0000

If you want to download the video, just visit the page and look at the source. The flv file is easily downloadable.
LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/354955/ - dowdle - Thu, 01 Oct 2009 04:31:52 +0000

Over the last 10 releases (3 months x 10 = 30 months) the kernel has gotten 12% slower... but it has a ton more features and supports a lot more hardware. Doesn't sound like much of a loss to me. I mean, how much better have processors gotten in the same 30 months? I'm guessing they have gotten more than 12% faster, so it is obviously a net gain.

That isn't to say I want the kernel to keep getting slower.

If you want the video in a different format, I'd recommend visiting the page, pausing the video and waiting until it is completely buffered. Then you can copy /tmp/Flash{random-characters} to ~ and convert it to whatever format you want. Of course it would be nice to have a higher-quality source to convert from, but it isn't too bad.

LinuxCon: Kernel roundtable covers more than just bloat - https://lwn.net/Articles/354943/ - flewellyn - Thu, 01 Oct 2009 01:02:46 +0000

No plan yet on how to deal with the bloat problem? Well, I hope that doesn't remain the case for long.