
NFS as a cache

Posted Feb 18, 2009 6:16 UTC (Wed) by pjm (subscriber, #2080)
In reply to: A look at package repository proxies by yokem_55
Parent article: A look at package repository proxies

One issue is handling the case where multiple machines try to install something at the same time: ideally you'd allow multiple machines to upgrade simultaneously without downloading the same file twice. I believe none of apt/yum/... do the per-file locking in the NFS-shared directory that this would require, whereas most of the other suggestions here do have the desired property. (See also other people's comments on locking in this thread.)
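
For illustration, here's a minimal sketch (not something apt or yum actually implement) of what that per-file locking could look like, assuming a hypothetical fetch() download helper and an NFS-mounted cache directory. POSIX (fcntl) locks are used because they are forwarded to the NFS server, whereas flock() historically was not:

    # A minimal sketch (not taken from apt or yum) of per-file locking in an
    # NFS-shared package cache: each client takes a POSIX lock on a per-package
    # lock file before downloading, so a given file is fetched only once even
    # when several machines upgrade at the same time.  The cache path and the
    # fetch() helper are hypothetical.
    import fcntl
    import os

    CACHE_DIR = "/var/cache/shared-packages"   # assumed NFS mount

    def fetch_once(filename, fetch):
        """Download filename into the shared cache unless another host already did."""
        target = os.path.join(CACHE_DIR, filename)
        lock_path = target + ".lock"
        with open(lock_path, "w") as lock:
            # POSIX (fcntl) locks are handled by the NFS server, so all
            # clients contend on the same per-file lock.
            fcntl.lockf(lock, fcntl.LOCK_EX)
            try:
                if not os.path.exists(target):
                    fetch(filename, target)    # hypothetical download helper
            finally:
                fcntl.lockf(lock, fcntl.LOCK_UN)
        return target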

Deletion is another issue: if some machines are configured to use bleeding-edge versions of things while others take the "better the bugs you know about" approach, then they'll have different ideas of when it's OK to delete a package from the cache. For that matter, apt will by default delete package lists that aren't referenced by its sources.list configuration file, which would be bad if different machines have different sources.list contents; so you'd want to add an APT::Get::List-Cleanup configuration entry on all your client machines to prevent this, and you'd then have to remove stale package-list files manually.
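
On an apt system that entry could go in a file under /etc/apt/apt.conf.d/ on each client; the file name below is arbitrary, and this is just the single setting being described, not a complete configuration:

    // /etc/apt/apt.conf.d/99keep-lists  (hypothetical file name)
    // Keep package lists that this machine's sources.list no longer
    // references, instead of letting "apt-get update" delete them.
    APT::Get::List-Cleanup "false";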

A very minor issue is that a per-machine cache is occasionally useful when the network is down (for the same reasons that apt/yum/... keep a local cache at all); conversely, there are some benefits (disk usage, administration) to avoiding multiple caches.

I'd expect NFS to be slightly less efficient than the alternatives, but this shouldn't be noticeable.



