Fedora Weekly News Issue 24
Posted Dec 7, 2005 18:54 UTC (Wed) by drag
In reply to: Fedora Weekly News Issue 24
Parent article: Fedora Weekly News Issue 24
Definitely take everything I am about to say with a grain of salt and double-check everything. This is just me telling you interesting things I've found based on my own research; it's very much not definitive and not meant as advice.
Keeping that in mind...
Coda is dead, from what I understand.
OpenAFS is actively developed and is quite nice.
The only problem I have with it is that I have to use Debian Testing versions because the Debian Stable version sucks in a few different ways.
Also, the AFS volume size limit is 8 gigs, which took me a while to figure out. It'll let you copy more than 8 gigs to a volume, and people have had upwards of 150 gigs on one, but it begins to crap out in unusual ways, and things like volume management get funky.
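For what it's worth, you can keep an eye on how big a volume is getting with the standard AFS tools before it drifts near that limit. A sketch; the cell, path, and volume names here are made up:

```
# from any client: quota and current usage for a path
$ fs listquota /afs/mycell/home/drag

# from an admin machine: the volume's size as the server sees it
$ vos examine home.drag
```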
But OpenAFS is not stateless. It has aggressive local caching and handles version tracking so that your cache doesn't serve stale entries. This basically means it can handle temporary disconnects with no ill effects.
I use a Debian-based domain with OpenLDAP and Kerberos 5. (OpenAFS has its own Kerberos 4 stuff, but there are compatibility packages for Debian to integrate OpenAFS with a Kerberos 5 realm.)
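Day to day, the Kerberos 5 integration just means one extra step to turn the realm ticket into an AFS token. A sketch; the principal and cell names are made up:

```
$ kinit drag@EXAMPLE.COM    # get a Kerberos 5 TGT from the realm
$ aklog -c example.com      # convert it into an AFS token for the cell
$ tokens                    # verify the token is there
```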
But Lustre isn't stateless either, from what I understand.
What is nice about Lustre is that it's modern.
I look at it like 'generational' file systems...
First-generation file systems are things like NFS and CIFS/SMB. These are relatively simple setups with a single file server for a PC desktop LAN environment. Not very dynamic, not very scalable (by themselves), but it's what everybody uses.
Second-generation technology is things like OpenAFS (even though AFS itself is very old). Not very common outside academic setups. It's designed specifically for the case where your local (LAN or hard drive) storage is much faster than your links to the OpenAFS server, kind of a WAN or campus-wide setup. These systems are secure, and offer advanced volume administration, global namespaces (so that users and systems don't have to know the names of servers or how their file systems relate to each other), failover, and things like that.
Now we are starting to see more 'third generation' type stuff.
Right now that's things like Red Hat's GFS and Lustre, sort of bridging the gap between 'first gen' LAN file server stuff and SANs.
With GFS you can do fancy stuff like bridging between your SAN and your servers. Or you can do PC-turned-into-DASD-unit type stuff: use entire computers as sophisticated disk controllers with GNBD (GFS network block daemon, I believe) and serve out direct block access to your regular servers, where it shows up like a /dev/sda device, which you can then manage with CLVM (lvm2 plus extra cluster stuff).
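A GNBD setup looks roughly like this, if I have the tools right; the device and host names are made up, so double-check the man pages before trusting the flags:

```
# on the PC acting as a disk controller: export a local disk
server$ gnbd_export -e disk1 -d /dev/sdb

# on the server consuming it: import everything that host exports
client$ gnbd_import -i storagebox1

# the block device then shows up under /dev/gnbd/,
# ready to hand to CLVM like any other physical volume
client$ pvcreate /dev/gnbd/disk1
```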
So you can have PC storage clusters with GFS (even though in the enterprise you'd mostly use it now for extending SAN capabilities). You'd have separate, as-fast-as-you-can-afford networks where you trust the hosts completely (read: your hosts take care of things like UIDs and file system restrictions). People are working on extending CLVM, and on things like ddraid, so you can do software RAID with these storage clusters and mirror volumes and whatnot just as if they were local storage.
Ddraid and such isn't production-ready, I believe, but people are using GFS and CLVM now.
OpenAFS is designed for a world where local storage is faster than remote storage; Lustre is designed for one where remote storage is faster than local storage. So you get things like parallel file system access over multi-gigabit networks that can aggregate storage access speeds and such.
Whereas a modern local drive can get 50-80 MB/s (for the nicest SATA stuff), and you can probably RAID them or get SCSI arrays and such and see 100-200 MB/s (just guessing).
With the very fastest (commercially available) Lustre networks, using proprietary interconnect stuff like Quadrics and whatnot, you can get file transfer speeds of up to 10 Gb/s. That's not available bandwidth, that is actual two-way file transfer. In hard drive terms that would be something like 1200+ MB/s.
The way I figure it, if you had an efficient, well-designed gigabit Ethernet network with nice switches, and 10-gigabit connections from the switches to multiple file servers, you'd be able to get file server access that is equivalent to a local hard drive.
This should be ok for either using GFS or Lustre...
So for workstations that aren't going anywhere, I figure you'd be able to use that plus Lustre to host your operating systems and home directories directly off the file servers and get actual performance improvements for the desktops, not to mention lower administration costs and such.
Then, for security, you'd have to have a separate network for normal network services (internet access, email, IM, etc.).
This is because, in my limited understanding, GFS- and Lustre-type stuff have no security to speak of, other than the normal NFS-style 'your computer decides' method. Both should support normal POSIX permissions, and I believe both support ext3-style extended ACLs and such, or will in the near future. (Both GFS and Lustre are tied closely to ext3 development, from what I understand.)
For slower, more normal networks, where you have to worry about users plugging stuff into your network, I would use GFS- or GNBD-style stuff just between the file servers.
Then for the desktops I'd use the stateless-Linux-style stuff, where it's like booting off a Knoppix CD with a read-only root.
For home directories, I'd host them on OpenAFS with a nice big 10+ gig local cache. (Currently I host stuff like Doom 3 and UT2004 on an OpenAFS server on a gigabit switched network, to save disk space on my desktop. They start slow the first time you load the game, but after that you're working from the local cache and it's just as fast as anything else.)
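Setting a big cache is just one line in the client's cacheinfo file (on Debian it lives at /etc/openafs/cacheinfo, if I remember right); the third field is the cache size in 1K blocks, so 10 gigs looks like:

```
/afs:/var/cache/openafs:10485760
```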
OpenAFS has some funky, not-quite-POSIX-compatible file system ACLs. They are more flexible than POSIX permissions for some things, but you get odd situations. For instance, normal Unix file system commands work with the owner's read-write-execute permissions, but group and world permissions are ignored.
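The ACLs are per-directory and managed with the `fs` command rather than chmod. A quick sketch; the user and directory names are made up:

```
$ fs listacl ~/public                      # show the directory's ACL
$ fs setacl ~/public system:anyuser rl     # anyone may read and list
$ fs setacl ~/private drag rlidwka         # full rights for one user
```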
For home directories they work out OK... but OpenAFS doesn't support some special files like named pipes and such, which breaks odd programs like Totem that want to create pipes and sockets in your home directory. Also, when using GNOME with OpenAFS you have the FAM daemon, which will take a crap, cause 100% CPU usage in a seemingly random fashion, and make Nautilus puke.
However, with newer kernels with dnotify support and gamin instead of famd, Nautilus and OpenAFS get along quite well.
I used OpenAFS for home directories myself for a little bit, just for my personal desktop, but now it seems better to use a local home directory with a symbolic link to my OpenAFS share. (If I had lots of clients, where I could get away with network-based home directories, then I think it would be worth the trouble; it would take some experimentation.)
Nautilus and gamin seem to like this, and I get previews and file change notifications working fine with that setup.
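The symlink arrangement itself is trivial. A sketch with stand-in /tmp paths; in a real setup the link target would be something like your /afs/cell/user directory:

```shell
# stand-ins for the local home dir and the AFS share (hypothetical paths)
mkdir -p /tmp/afsdemo/share /tmp/afsdemo/home

# keep the home directory local, link the share into it
ln -sfn /tmp/afsdemo/share /tmp/afsdemo/home/afs

# confirm where the link points
readlink /tmp/afsdemo/home/afs
```

GNOME then treats the home directory as ordinary local storage, and only what lives behind the link hits the network.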
There is an issue with gamin and OpenAFS. Since OpenAFS gets its strong security through Kerberos, authentication expires after a set time. So if you leave yourself logged in past eight hours or so, and then quickly log out and back in, you can end up talking to a stale gamin instance from your previous session, one that lacks the file system permissions on the OpenAFS server that your current session has. This causes Nautilus to open windows and then quickly close them.
Logging out and waiting a while, or logging out and making sure that all GNOME-related processes and gamin for your user are dead before logging back in, solves this.
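The cleanup amounts to something like this from a console, right before logging back in (the exact process names may vary by version; gam_server is what gamin's daemon is called on my box):

```
# as the user who just logged out:
$ pkill -u drag gam_server
$ pkill -u drag gconfd-2     # and any other lingering GNOME daemons
```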
It was basically a non-issue once I figured out what was going on; OpenAFS is like that. (A similar experience to discovering the 8 gig limit.)
Newer versions of OpenAFS seem very fast and are very reliable, at least in my experience.
Also it's nice for remote users with fast internet access.
Since OpenAFS uses Kerberos as its basis for authentication and has (not too secure, but secure enough) file transfer encryption by default, it is usually safe enough, and with its aggressive caching fast enough, to use over the internet! Which is much nicer than things like NFS or CIFS.
However, Windows' AFS support really sucks (and I think OS X's support is still subpar too). Those clients are slow and tend to hammer the OpenAFS server more (on Linux the cache is preserved between users and reboots, but on Windows it isn't; also, the Windows client does a weird AFS-to-CIFS translation that adds overhead), so you'd still want to use CIFS for a mixed environment. It's not as nice as AFS, but it has almost ubiquitous client support, and Samba rocks.