LWN: Comments on "POHMELFS returns" https://lwn.net/Articles/480095/ This is a special feed containing comments posted to the individual LWN article titled "POHMELFS returns". en-us Mon, 22 Sep 2025 13:55:58 +0000 Mon, 22 Sep 2025 13:55:58 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net POHMELFS returns https://lwn.net/Articles/492327/ https://lwn.net/Articles/492327/ nix <div class="FormattedComment"> No good. glusterfs cannot support inotify, because it is based on FUSE, and FUSE doesn't support inotify. Currently, it seems, only in-kernel distributed filesystems can support inotify -- and, as far as I can see, none of them do.<br> <p> </div> Fri, 13 Apr 2012 19:15:52 +0000 POHMELFS versus Ceph https://lwn.net/Articles/490931/ https://lwn.net/Articles/490931/ Cyberax <div class="FormattedComment"> Well, the newest Russian transliteration rules (I hate them) state that it should be "Evgeni" :)<br> </div> Fri, 06 Apr 2012 07:34:23 +0000 POHMELFS versus Ceph https://lwn.net/Articles/490923/ https://lwn.net/Articles/490923/ bradfitz <div class="FormattedComment"> It's complicated.<br> <a href="http://en.wikipedia.org/wiki/Romanization_of_Russian">http://en.wikipedia.org/wiki/Romanization_of_Russian</a><br> <p> </div> Fri, 06 Apr 2012 05:46:36 +0000 POHMELFS returns https://lwn.net/Articles/485317/ https://lwn.net/Articles/485317/ nix <div class="FormattedComment"> I haven't looked at it, though I've heard of it. I'll give it a look: at first sight it looks really rather nice.<br> </div> Mon, 05 Mar 2012 23:46:35 +0000 POHMELFS returns https://lwn.net/Articles/485008/ https://lwn.net/Articles/485008/ TRS-80 Have you looked at GlusterFS? It stores files on a regular filesystem, and then makes them network accessible, optionally clustering, striping and mirroring them. Sat, 03 Mar 2012 10:01:43 +0000 POHMELFS versus Ceph https://lwn.net/Articles/481946/ https://lwn.net/Articles/481946/ raalkml <div class="FormattedComment"> <font class="QuotedText">&gt; P.S. Evigeny, if anything I've said here is wrong</font><br> <p> Er, yes. It is "Evgeniy" :)<br> </div> Thu, 16 Feb 2012 17:13:04 +0000 POHMELFS versus Ceph https://lwn.net/Articles/480487/ https://lwn.net/Articles/480487/ cmccabe <div class="FormattedComment"> Ceph has a three tier architecture-- monitors, metadata servers, and object storage daemons.<br> <p> The object storage layer of Ceph seems vaguely similar to Evgeniy's "elliptics network." However, there are some very important differences.<br> <p> In Ceph, there is only ever one object storage daemon that has the authority to write to an object. In contrast, the elliptics network is based on the Chord paper from MIT.[1] So potentially there could be multiple writers to a single distributed hash table (DHT) object at once. In his 2010 paper [2], Evigeny describes the DHT model as "write-always-succeed and eventual consistency."<br> <p> One of the biggest questions about any distributed system is how it handles "split-brain syndrome." In other words, what happens when we cut the network into two halves which cannot talk to one another? In Ceph, only one of those halves would be able to continue functioning. This is accomplished by the monitors, who use Paxos [3] to decide on changes in cluster topology. In contrast, in the elliptics network, it looks like both halves would continue functioning. Then later, if they were reunified, we would make some attempt to "merge the histories."<br> <p> Merging histories sounds good in theory, but in practice it's kind of a quagmire. 
One of the biggest questions about any distributed system is how it handles "split-brain syndrome." In other words, what happens when we cut the network into two halves which cannot talk to one another? In Ceph, only one of those halves would be able to continue functioning. This is accomplished by the monitors, who use Paxos [3] to decide on changes in cluster topology. In contrast, in the elliptics network, it looks like both halves would continue functioning. Then later, if they were reunified, we would make some attempt to "merge the histories."

Merging histories sounds good in theory, but in practice it's kind of a quagmire. What happens if one side of the brain deletes all the files in /foo and removes the directory, while the other side adds files to that directory? Who should "win"? When the parallel universes implode into one, there are potentially going to be some unhappy users. To be fair, some users seem willing to accept this.

Another issue is caching. Ceph has true POSIX read-after-write semantics. If you're on one computer and you write a byte to a file at offset 0, and then immediately afterwards someone on another computer reads a byte from offset 0, he'll see exactly what you wrote. In CAP terms [4], Ceph is a highly consistent system. In contrast, in his commit message, Evigeny says that POHMELFS will "only sync (or close with sync_on_close mount option) or writeback will flush data to remote nodes."

That actually seems to run counter to what I would call "strict" POSIX semantics. However, I've never seen a formal definition of POSIX filesystem semantics, and my usage is kind of informal. If anyone has a document which clarifies it succinctly, I'd love to see it.
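To illustrate the property at stake, here is a hypothetical probe (the path is invented, and this is not from any project's test suite): run write_byte() on one client and then read_byte() on another, against the same file on the shared mount. Under strict POSIX semantics the reader sees the new byte as soon as pwrite() has returned on the writer; under a writeback design it may see stale data until a sync or close.

```c
/* A hypothetical read-after-write probe (the path is invented; this is
 * not from any project's test suite). Run write_byte() on one client,
 * then read_byte() on another, against the same file on the shared
 * mount. Under strict POSIX semantics the reader sees the new byte as
 * soon as pwrite() has returned on the writer. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int write_byte(const char *path, char c)
{
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = pwrite(fd, &c, 1, 0);   /* one byte at offset 0 */
    close(fd);
    return n == 1 ? 0 : -1;
}

static int read_byte(const char *path, char *c)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = pread(fd, c, 1, 0);     /* read it back from offset 0 */
    close(fd);
    return n == 1 ? 0 : -1;
}

int main(void)
{
    char c;
    if (write_byte("/mnt/shared/probe", 'X') == 0 &&
        read_byte("/mnt/shared/probe", &c) == 0)
        printf("read back '%c' (expected 'X')\n", c);
    return 0;
}
```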
Full disclosure: I worked on the Ceph project for a while.

P.S. Evigeny, if anything I've said here is wrong, please let me know. Elliptics and POHMELFS seem like interesting projects, and I'm always curious to see what you'll come up with in the future.

P.P.S. Evigeny, if you're reading this, do you have any ideas about avoiding replication storms?

[1] http://www.pdos.lcs.mit.edu/chord
[2] http://www.ioremap.net/tmp/lk2010-elliptics-text.pdf
[3] http://the-paper-trail.org/blog/?p=173
[4] http://en.wikipedia.org/wiki/CAP_theorem

POHMELFS returns
https://lwn.net/Articles/480485/
Posted by cmccabe on Fri, 10 Feb 2012 04:14:03 +0000

If POSIX compliance were the path to victory, we'd all be using AT&T's RFS now. (I tried to find a link, but apparently RFS doesn't even have a Wikipedia entry... sigh.)

Personally, I think AFS was a great system that should have been more widely adopted. It didn't get open-sourced until much later, though, and the usual VHS vs. Betamax thing happened.

POHMELFS returns
https://lwn.net/Articles/480418/
Posted by nix on Thu, 09 Feb 2012 21:18:10 +0000

Yeah. AFS made NFS look like the soul of POSIX compliance: no cross-directory hardlinks, close() with extra magic effects (IIRC), and its own very non-Unixlike permissions system. (Ironically, it would look more Unixlike today than when it was originally written, because ACLs are fairly similar to the AFS permission model.)

POHMELFS returns
https://lwn.net/Articles/480239/
Posted by jackb on Thu, 09 Feb 2012 13:24:43 +0000

Some people have great experiences with NFSv4. My experience has been that it's easy to encounter obscure bugs.

About a month ago I went through a period in which running emerge on any client on a network in which /home and /usr/portage are hosted on an NFS server would randomly trigger lockd errors on all the other clients, requiring a hard reboot to resolve.

Then after a few weeks the problem went away. I'm not sure which update (kernel, nfs-utils, portage, or some other dependency) resolved it; I didn't change any configuration during that time. That basically describes my experience with NFS: it's good when it works, but it's also prone to mysterious and inexplicable problems from time to time.

POHMELFS returns
https://lwn.net/Articles/480228/
Posted by epa on Thu, 09 Feb 2012 12:09:01 +0000

That was also the flaw with the Andrew File System (AFS), I believe.

POHMELFS versus Ceph
https://lwn.net/Articles/480226/
Posted by abacus on Thu, 09 Feb 2012 12:02:57 +0000

Does anyone know how POHMELFS compares to Ceph (https://lwn.net/Articles/258516/)?

POHMELFS returns
https://lwn.net/Articles/480215/
Posted by nix on Thu, 09 Feb 2012 11:06:55 +0000

So, in place of the old POHMELFS, which had no use case other than "just like NFS, only better: look, you can take an existing filesystem and distribute it across the network instantly!" (and I'm not sure I can think of a more common use case than that), we have... this, which requires you to shift all your FS data onto new storage which cannot be accessed without pohmelfs.

Not an improvement, sorry.

I think I'll try out NFSv4 one of these days. Maybe it's got inotify support now. I'd really like something NFSish ("take an FS and make it distributed; you don't need a cluster or special fabrics or anything like that") but that is closer to POSIX and also preferably supports inotify, so that modern KDE and GNOME versions have a chance of working properly if your home directory is exported over it.
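For reference, the interface being asked for here is small: a minimal inotify watcher using the standard Linux API (the watched path is just an example). The kernel generates these events from local VFS activity, which is exactly why a network filesystem needs explicit protocol support to deliver events for changes made on other clients.

```c
/* A minimal inotify watcher (standard Linux API; the watched path is
 * just an example). The kernel generates these events from local VFS
 * activity, so on a network filesystem nothing fires for changes made
 * by other clients unless the protocol forwards notifications. */
#include <inttypes.h>
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    /* Event buffer, aligned as struct inotify_event requires. */
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));

    int fd = inotify_init1(0);
    if (fd < 0) {
        perror("inotify_init1");
        return 1;
    }
    /* Watch a directory for creations, modifications, and deletions. */
    if (inotify_add_watch(fd, "/home/user",
                          IN_CREATE | IN_MODIFY | IN_DELETE) < 0) {
        perror("inotify_add_watch");
        return 1;
    }
    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));
        if (len <= 0)
            break;
        /* Each read() returns one or more variable-length events. */
        for (char *p = buf; p < buf + len; ) {
            const struct inotify_event *ev = (const struct inotify_event *)p;
            printf("mask 0x%" PRIx32 " on %s\n", ev->mask,
                   ev->len ? ev->name : "(watched dir)");
            p += sizeof(*ev) + ev->len;
        }
    }
    close(fd);
    return 0;
}
```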