Please, bojan and tytso alike, cease and desist from saying applications are broken, when users have given a clear requirement for a new filesystem: that it not lose data as a matter of course, when the status quo would preserve it.
Know your place
Posted Mar 18, 2009 8:55 UTC (Wed) by bojan (subscriber, #14302)
I don't know. I think the person who created the file system may know a thing or two about POSIX. I did actually go and check, and he did appear to be right. But that's obviously not good enough for you (or you may know of some interpretation we cannot grasp - it is possible). I'm OK with that.
> Please, bojan and tytso alike, cease and desist from saying applications are broken, when users have given a clear requirement for a new filesystem: that it not lose data as a matter of course, when the status quo would preserve it.
I have no intention of doing that (unless LWN editors throw me out). Likewise, you can say what you please.
Ted, being a pragmatic person, has already put workarounds in place, so users will be happy.
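The application-side pattern those workarounds detect is write-to-temp, fsync, then rename. Here is a minimal sketch in Python; the helper name `atomic_replace` is ours for illustration, not from ext4 or any library mentioned in the thread:

```python
import os
import tempfile

def atomic_replace(path, data):
    """Replace `path` with `data` so that a crash leaves either the
    complete old contents or the complete new contents on disk,
    never a zero-length or truncated file."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # force the data to stable storage first
        os.replace(tmp, path)      # atomic rename over the target (POSIX)
    except BaseException:
        os.unlink(tmp)             # clean up the temp file on failure
        raise
```

Because the rename is atomic and the data was fsync'd before it, a crash at any point leaves either the old file or the new one intact, which is exactly the guarantee the delayed-allocation debate is about.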
Posted Mar 30, 2009 12:37 UTC (Mon) by forthy (guest, #1525)
> I think the person who created the file system may know a thing or two about POSIX.
It's not, and I repeat in bold: NOT about POSIX. It is about
reasonable behavior. Ordered data has been implemented in ReiserFS and XFS, both of which previously had a reputation for being unstable and prone to eating files. This is a quality-of-implementation issue, not a standards issue. Maybe we would need a better standard for file systems, so that quality of implementation is reasonable by default, but that's a different topic. If you insist that your way-below-average quality of implementation is "perfectly valid", you are anal-retentive.
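For reference, the data journaling behavior being discussed is a mount option on ext3/ext4; the device and mount point below are examples only, and this is a config fragment rather than a runnable script:

```shell
# Data journaling modes for ext3/ext4:
#   data=journal   - journal both data and metadata (safest, slowest)
#   data=ordered   - write data blocks before committing metadata (ext3 default)
#   data=writeback - no data/metadata ordering (fastest, can expose stale data)
mount -o remount,data=ordered /dev/sda1 /home
```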
I think Ted Ts'o should read the GNU Coding Standards. What is written there is mandatory for a core component of the GNU project (which the Linux kernel is, regardless of whether it's officially part of the GNU project). The point in question here is:

The GNU Project regards standards published by other organizations as suggestions, not orders. We consider those standards, but we do not "obey" them. In developing a GNU package, you should implement an outside standard's specifications when that makes the GNU system better overall in an objective sense. When it doesn't, you shouldn't.
What Ted has implemented is behavior that is standard, but it makes his file system worse, because it has inconvenient side effects on robustness in case of a crash. In shorter words: it sucks. And the GNU Coding Standards clearly say: if the standard sucks, don't follow it.
it's about the crashes!!!
Posted Mar 18, 2009 17:24 UTC (Wed) by pflugstad (subscriber, #224)
> that it not lose data as a matter of course
Honestly, has anyone here, NOT running binary closed source drivers, experienced a crash in a distro provided kernel in what, the last 12 months or longer? Heck, even a bleeding edge (but not -RC) kernel.
Didn't think so. Now, please refrain from hyperbolic statements like that.
I realize that Ted pointed this out in his initial emails, and while it's still not good for system-level behavior to change like this, this is a case of an ultra-bleeding-edge kernel, an ALPHA distro release, etc. These are not common users in any sense of the word "common".
Posted Mar 18, 2009 20:02 UTC (Wed) by zeekec (subscriber, #2414)
> Honestly, has anyone here, NOT running binary closed source drivers, experienced a crash in a distro provided kernel in what, the last 12 months or longer? Heck, even a bleeding edge (but not -RC) kernel.
Actually, yes I have. I run Gentoo unstable at home, and I am currently having issues with the 2.6.28 kernel and Xorg's intel drivers. All open source. So it does happen. (But I'm running Gentoo unstable and expect it!)
Posted Mar 18, 2009 22:45 UTC (Wed) by xoddam (subscriber, #2322)
The purpose of a journaling filesystem is *only* to ease and speed the task of recovery after an unclean shutdown. I can't emphasise this point strongly enough.
Posted Mar 19, 2009 0:35 UTC (Thu) by butlerm (subscriber, #13312)
That is not quite correct. The primary purpose of journaling in typical journaling filesystems is to preserve metadata integrity. Filesystem repair tools cannot repair metadata that has never been written.

The secondary purpose of journaling is to loosen ordering restrictions on metadata updates. Assuming you want your filesystem to be there after an unclean shutdown, that is a major advantage.

Finally, journaling filesystems are not metaphysically prohibited from using their journals to do other useful things, such as storing metadata undo information.
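The metadata-integrity point can be made concrete with a toy write-ahead journal: log the intended update and force it to disk, then apply it; after a crash, replaying committed records reproduces every update. This is an illustrative sketch only and bears no resemblance to how ext3/ext4 actually format their journals:

```python
import json
import os

class ToyJournal:
    """Toy write-ahead journal: an update is durable in the log before it
    touches the real state, so the real state never holds a half-write."""

    def __init__(self, log_path, state_path):
        self.log_path = log_path
        self.state_path = state_path

    def update(self, state, key, value):
        record = json.dumps({"key": key, "value": value})
        with open(self.log_path, "a") as log:
            log.write(record + "\n")   # 1. journal the intent
            log.flush()
            os.fsync(log.fileno())     # 2. commit: record is now durable
        state[key] = value
        with open(self.state_path, "w") as f:
            json.dump(state, f)        # 3. apply to the real state
            f.flush()
            os.fsync(f.fileno())

    def recover(self):
        """Rebuild state after a crash by replaying the journal."""
        state = {}
        if os.path.exists(self.state_path):
            with open(self.state_path) as f:
                state = json.load(f)
        if os.path.exists(self.log_path):
            with open(self.log_path) as log:
                for line in log:
                    line = line.strip()
                    if line:                     # replay committed records;
                        rec = json.loads(line)   # replay is idempotent here
                        state[rec["key"]] = rec["value"]
        return state
```

The key ordering is steps 1-2 before step 3: a record that was fsync'd into the log survives a crash even if it was never applied, which is the "cannot repair metadata that has never been written" point above.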
Posted Mar 19, 2009 23:25 UTC (Thu) by jschrod (subscriber, #1646)
That said, yes, I had many kernel crashes at the start of this year, using SUSE and no proprietary modules. It took a long time to identify the piece of hardware that caused it. (It was the video card.) I have another system where use of ionice causes hard lockups of the whole system, reproducibly - e.g., running updatedb with ionice. I never identified the culprit there and finally put it in the closet; my time was worth more than the price of a new system.
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds