
Whack-a-droid

Posted Aug 5, 2010 13:26 UTC (Thu) by etienne (guest, #25256)
In reply to: Whack-a-droid by xav
Parent article: Whack-a-droid

Not being a specialist, I would say that managing 1,048,576 pages (a desktop with 4 Gbytes of memory on i386) inevitably takes time.
Switching all pages to 4 Mbytes (the only other page size available on ia32) is not really possible (most filesystems use blocks of 4 Kbytes).
Is there another OS which manages this number of pages better (on the same hardware), or is it a microprocessor problem, and are other processors with 64 Kbyte pages (or variable-size pages) the solution?
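For a sense of scale, here is a back-of-the-envelope C sketch of the arithmetic behind that figure (an illustration only, not how the kernel does its bookkeeping; 4 Mbytes is the ia32 PSE large page size):

    /* Page-count arithmetic for 4 Gbytes of RAM at the two ia32 page sizes. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long mem   = 4ULL << 30;   /* 4 Gbytes of RAM   */
        unsigned long long small = 4ULL << 10;   /* 4 Kbyte base page */
        unsigned long long large = 4ULL << 20;   /* 4 Mbyte PSE page  */

        printf("4 Kbyte pages: %llu\n", mem / small);   /* 1,048,576 */
        printf("4 Mbyte pages: %llu\n", mem / large);   /*     1,024 */
        return 0;
    }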



Whack-a-droid

Posted Aug 6, 2010 10:11 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

The problem is not that there are a lot of pages to manage. The problem is that application developers, no longer being forced to care about using memory efficiently, have stopped caring about using memory efficiently.

It's further exacerbated by all those exciting, easy-to-learn-to-use dynamic languages that sweep all memory management complexity under someone else's carpet, more or less entirely destroying programmers' ability to intuit or reason about the memory usage of their program.

Whack-a-droid

Posted Aug 6, 2010 10:44 UTC (Fri) by modernjazz (guest, #4185) [Link]

Anyone who lived with the classic Mac environment for a while can tell you that memory protection is nice. In practice, the segfaults never all get fixed.

It was especially awful when the segfaults were my own, because I'd have to reboot the computer 50 times a day when writing and testing my own code. Development became oh-so-much-nicer when I switched to Linux, and I just got the "Segfault" on the command line rather than "beep-reboot." Either way I fixed the problem, but with Linux I didn't spend half the day waiting for the computer to come up.

It's a bit different with power management. In a way, I think policy might be even more important in this case: the developer may not notice or immediately suffer from power management problems in the same way that the developer him/herself suffers from memory errors. So the incentive of someone who has just developed some mobile app to fix the problem may not be as high as you'd like. Do you really want to spend your whole life acting as a cop and beating on people to fix their code? There's not enough time for code review as it is.

Whack-a-droid

Posted Aug 6, 2010 13:01 UTC (Fri) by etienne (guest, #25256) [Link]

Well, I was replying to xav, who seems to have a test case which is a lot quicker on other Unix systems.
I can imagine that a database with 2 Gbytes of data to manage would need quite a bit of virtual memory even if it is correctly written.
If that application, recompiled for the other Unix, is definitely quicker, it may be due to the too-small granularity of pages on ia32, forcing the same treatment (for instance, copying a copy-on-write page on the first write) to be done hundreds of thousands of times.
Even if the code is optimised, it will be slow.
The nice thing about 4 Kbyte memory pages is that the filesystem code (which uses 4 Kbyte blocks) is simpler (for things like paging in memory-mapped files on read).
It would be nice to know whether another Unix on an ia32 processor is quicker, or whether simply another processor is quicker on the test case.
But as I said, I am not a specialist of virtual memory.
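For anyone who wants to try the large-page idea, here is a minimal sketch, assuming a Linux kernel with MAP_HUGETLB (2.6.32 or later) and huge pages reserved by the administrator (for instance via /proc/sys/vm/nr_hugepages); the 2 Gbyte figure is just taken from the database example above:

    /* Back a large anonymous region with huge pages so the kernel has far
     * fewer per-page operations (faults, copy-on-write copies) to perform. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define REGION (1ULL << 31)   /* 2 Gbyte working set */

    int main(void)
    {
        void *p = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");   /* likely: no huge pages reserved */
            return 1;
        }
        memset(p, 0, REGION);              /* touch it: one fault per huge page */
        munmap(p, REGION);
        return 0;
    }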

Whack-a-droid

Posted Aug 13, 2010 15:19 UTC (Fri) by renox (subscriber, #23785) [Link]

I would add that it is *also* partly the kernel's fault for not providing a way for the virtual memory manager to work with the garbage collector.

Otherwise, when a programmer fails to release memory correctly, only swap would be consumed instead of memory (as usually happens in languages without a garbage collector: old, unused pages are swapped out to disk); see:
http://lambda-the-ultimate.org/node/2391
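For what it is worth, one half of this already exists: madvise(MADV_DONTNEED) lets a runtime tell the kernel that a range of pages is garbage, so they are simply dropped instead of being written out to swap. A minimal sketch (the heap layout here is invented for illustration; the VM/GC cooperation discussed at the link goes further than this):

    /* After a collection pass, tell the kernel the dead half of the heap
     * no longer needs to be kept in RAM or swapped out. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t heap_size = 64 * 1024 * 1024;   /* 64 Mbyte "heap" */
        char *heap = mmap(NULL, heap_size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (heap == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... imagine the collector has decided the top half is dead ... */
        if (madvise(heap + heap_size / 2, heap_size / 2, MADV_DONTNEED) != 0)
            perror("madvise");

        /* Touching those pages again yields fresh zero-filled pages rather
         * than data paged back in from swap. */
        munmap(heap, heap_size);
        return 0;
    }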

