From: Andi Kleen <andi-AT-firstfloor.org>
To: Andrew Morton <akpm-AT-linux-foundation.org>
Subject: Re: [22.214.171.124] Kernel coredump to a pipe is failing
Date: Wed, 27 May 2009 09:31:36 +0200
Cc: Andi Kleen <andi-AT-firstfloor.org>, paul-AT-mad-scientist.net,
On Tue, May 26, 2009 at 05:29:35PM -0700, Andrew Morton wrote:
> On Wed, 27 May 2009 02:11:04 +0200 Andi Kleen <firstname.lastname@example.org> wrote:
> > > I dunno. Is this true of all linux filesystems in all cases? Maybe.
> > Assuming one of them does not, would you rather fix that file system
> > or 10 zillion user programs (including the kernel core dumper) that
> > get it wrong? :)
> I think that removing one bug is better than adding one.
> Many filesystems will return a short write if they hit a memory
> allocation failure, for example. pipe_write() sure will. Retrying
> is appropriate in such a case.
Sorry, but are you really suggesting that every program in the world
that uses write() anywhere should wrap it in a loop? That seems like
really bad API design to me, requiring such contortions around a
fundamental system call just to work around kernel deficiencies.
I can just imagine the programmers putting nasty comments
about the Linux kernel on top of those loops and they would
be fully deserved.
And the same applies to in-kernel users really.
The memory allocation case sounds more like a bug in those filesystems;
the network stack, for example, sleeps waiting for memory, and perhaps
these file systems should too.

Or they should just always return -ENOMEM. Typically, when the system
is badly out of memory you're going to lose anyway, because a lot of
things start failing.
email@example.com -- Speaking for myself only.