just another example of why kernel programming sometimes looks so hard to outsiders.
This is a relatively recent addition to the things performance programmers have to worry about. It used to be that electronic memory was considered fast, and it was always the same speed — typically at most 4 cycles to access. Now there are 4 levels of electronic memory (counting the registers), and the main pool is around 200 cycles away. We think about accessing memory now the way we used to think about loading a file from disk.
I'm not ashamed to say my personal leaning these days is toward single-CPU single-core machines tied together at the network adapter. My brain can handle only so much complexity.
At least it would spare the "expensive locking operations" mentioned in the main article, wouldn't it?
That locking operation is for updating a counter. For accessing one, the only cost is moving the cache line from one CPU to another — and on modern machines the coherency protocol can hand a line directly between caches without going out on the memory bus, so it's cheaper than a full round trip through main memory. Still more than folks are willing to pay, though.
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds