
A look at package repository proxies

Posted Feb 13, 2009 23:19 UTC (Fri) by yokem_55 (subscriber, #10498)
Parent article: A look at package repository proxies

For my Gentoo machines, my "proxy" is nothing more than exporting my distfiles and Portage tree via NFS from one master machine, which saves a lot of disk space as well as bandwidth. Is there something about Fedora that makes this kind of "proxy" impractical?
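For anyone wanting to replicate this, the whole arrangement can be a single NFS export plus pointing each client's Portage at the mount. A minimal sketch; the hostname, paths, and network range are illustrative, not from my actual setup:

```
# /etc/exports on the master machine (network range illustrative)
/usr/portage  192.168.1.0/24(rw,no_subtree_check)

# /etc/fstab on each client (hostname illustrative)
master:/usr/portage  /usr/portage  nfs  defaults  0 0

# /etc/make.conf on each client
PORTDIR="/usr/portage"
DISTDIR="/usr/portage/distfiles"
```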


rpm/yum package repos via NFS

Posted Feb 13, 2009 23:28 UTC (Fri) by dowdle (subscriber, #659)

/etc/yum.conf specifies which directory to look in for packages, and although I haven't done it myself, I'm sure NFS-mounting an updates repo onto that directory would make each client machine happy. They'd still have to contact the repo servers for the metadata, but when they checked what needed to be downloaded, they'd find everything already sitting there.

Another way would be to mount things over NFS somewhere and then use file:// references in the .repo definitions rather than http://. In that case the NFS mount would be used for both packages and repo metadata.
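As a sketch, a client-side repo definition pointing at such an NFS mount might look like the following; the file name, repo id, and mount point are illustrative:

```
# /etc/yum.repos.d/local-updates.repo (names and paths illustrative)
[local-updates]
name=Updates via shared NFS mount
baseurl=file:///mnt/repo/updates
enabled=1
gpgcheck=1
```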

rpm/yum package repos via NFS

Posted Feb 14, 2009 0:59 UTC (Sat) by JoeBuck (guest, #2330)

I don't think that yum's locking works correctly with a shared NFS mount for the package archive. Checking whether the process holding the lock is still alive won't work right.

On the other hand, if yum commands are run in such a way that no two machines are running yum at the same time, things should be fine.

A look at package repository proxies

Posted Feb 14, 2009 7:38 UTC (Sat) by tzafrir (subscriber, #11501)

One obvious issue (at least for apt/dpkg) is that this will not produce a signed repository.

A look at package repository proxies

Posted Feb 14, 2009 19:07 UTC (Sat) by jwb (guest, #15467)

I don't understand. If I mount /var/cache/apt/archives from a remote system, I see no problems with the package signatures.

A look at package repository proxies

Posted Feb 14, 2009 22:15 UTC (Sat) by drag (subscriber, #31333)


With Debian, I believe the package list is signed, and the package list contains checksums of all the packages. So as long as the checksums match the packages, it should not matter.
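The checksum step in that chain is easy to illustrate. Below is a minimal sketch (not apt's actual code) of comparing a package file's SHA-256 against the value a signed Packages index would list; the function names are my own:

```python
import hashlib

def sha256_of(path, chunk=1 << 16):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def verify(path, expected_hex):
    """True if the file's digest matches the one from the signed index."""
    return sha256_of(path) == expected_hex
```

Because the index carrying the expected digest is itself signed, a package fetched from any untrusted mirror or cache can still be trusted if it verifies.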


With Debian I just used approx. A caching proxy seems the obvious way to go, and it does not involve setting up any network shares or anything like that.

I frequently do temporary installs and VMs on various pieces of hardware for various reasons. When doing a network install, having the ability to simply direct the installer at a local proxy is a HUGE time saver. On my work's corporate network everything goes through a proxy which is either somewhat broken or gives very low priority to large downloads... so it can take an hour or two to download a single 30 meg package or whatnot, depending on how busy the network is. Having a nice, easy-to-use proxy that doesn't require anything special is a big deal for me.

This is one of the things I really miss when using Fedora.

NFS as a cache

Posted Feb 18, 2009 6:16 UTC (Wed) by pjm (subscriber, #2080)

One issue is handling the case where multiple machines try to install something at the same time: ideally you'd allow multiple machines to upgrade simultaneously without downloading the same file twice. I believe none of apt/yum/... does the per-file locking in the NFS-shared directory that this would require, whereas most of the other suggestions here do have the desired property. (See also other people's comments on locking in this thread.)
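For what it's worth, the per-file locking this would require is not hard to sketch; the catch is that fcntl-style locks are exactly what is unreliable over NFS. Here is a hypothetical helper (my own invention, not anything apt or yum actually does) showing the take-lock, re-check, fetch, atomic-rename pattern:

```python
import fcntl
import os

def download_once(cache_dir, name, fetch):
    """Fetch `name` into the shared cache at most once, even if several
    machines race. `fetch` is a callable returning the file's bytes."""
    path = os.path.join(cache_dir, name)
    lock_path = path + ".lock"
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # blocks until we hold the lock
        if not os.path.exists(path):       # another host may have fetched it
            data = fetch(name)
            tmp = path + ".tmp"
            with open(tmp, "wb") as f:
                f.write(data)
            os.rename(tmp, path)           # atomic publish into the cache
        # lock released automatically when the file is closed
    return path
```

On a local filesystem this gives the desired "upgrade in parallel, download once" behavior; over NFS it only works if the lock daemon is functioning, which is the very assumption the tools decline to make.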

Deletion is another issue: if some machines are configured to use bleeding-edge versions of things while others take the "better the bugs you know about" approach, then they'll have different ideas of when it's OK to delete a package from the cache. For that matter, apt will by default delete package lists that aren't referenced by its sources.list configuration file, which would be bad if different machines have different sources.list contents, so you'd want to add an APT::Get::List-Cleanup configuration entry on all your client machines to prevent this, and then remove stale package-list files manually.
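Concretely, disabling that cleanup is a one-line apt configuration drop-in; the file name here is illustrative:

```
// /etc/apt/apt.conf.d/99-no-list-cleanup (file name illustrative)
APT::Get::List-Cleanup "false";
```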

A very minor issue is that a per-machine cache is occasionally useful when the network is down (for the same reasons that apt/yum/... keep a local cache at all); though conversely there are some benefits (disk usage, administration) to avoiding multiple caches.

I'd expect NFS to be slightly less efficient than the alternatives, but this shouldn't be noticeable.

Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds