LWN: Comments on "Supporting filesystems in persistent memory" https://lwn.net/Articles/610174/ This is a special feed containing comments posted to the individual LWN article titled "Supporting filesystems in persistent memory". en-us Fri, 29 Aug 2025 09:47:28 +0000 Fri, 29 Aug 2025 09:47:28 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Supporting filesystems in persistent memory https://lwn.net/Articles/675459/ https://lwn.net/Articles/675459/ dlang <div class="FormattedComment"> mount it on something that has a larger than 4K block size. IIRC this can be done with powerpc, sparc, and I think even AMD64 systems<br> </div> Sat, 13 Feb 2016 02:19:40 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/675302/ https://lwn.net/Articles/675302/ roblucid <div class="FormattedComment"> And as one of the links in the article explains, core memory was not only slower than RAM, but its persistence was only 80% reliable; frequently a colder-style boot was required.<br> <p> Fast memory has needed to be closer to the core, for example modern L1 cache; technologies which include RAM inside the CPU package (EDRAM/HBM2) are intended for high bandwidth whilst providing low latency. A device on the end of some bus, that's plugged in, can NEVER compete with the short physical connections of on-chip RAM, so there's very good reason to believe that the miracle people want to believe in will prove to have engineering tradeoffs (for example, the EROS persistent RAM store was disk-backed, therefore its log write throughput is FAR lower than RAM's, and its performance relied on the cache and the hot checkpoint area of the disk reducing seeks for "85%" of accesses).<br> </div> Fri, 12 Feb 2016 09:24:43 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/675187/ https://lwn.net/Articles/675187/ smurf <div class="FormattedComment"> What does one do in that situation? 
<br> <p> This is not an idle question; I have inherited an apparently-Linux-based hard disk video recorder (now deceased) which seems to have created an ext3 file system with 8k blocks.<br> </div> Thu, 11 Feb 2016 14:15:21 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/645014/ https://lwn.net/Articles/645014/ Aissen <div class="FormattedComment"> This is changing. PCIe and NVMe SSDs are already on the market without all the cruft of backward compatibility. <br> </div> Tue, 19 May 2015 14:22:00 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/612833/ https://lwn.net/Articles/612833/ zlynx <div class="FormattedComment"> <font class="QuotedText">&gt; The security community isn't even happy with keeping keys in RAM as it is.</font><br> <p> And then they invent things like TPM chips and "secure element" chips. But then they make them so hard to use that programmers have no choice except to store encryption keys in RAM.<br> </div> Mon, 22 Sep 2014 20:09:19 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/612680/ https://lwn.net/Articles/612680/ Lennie <div class="FormattedComment"> "If NVM would be cheaper than RAM .., you could simply wipe the device .. upon boot to make it behave like RAM."<br> <p> The security community is going to laugh at that if you don't have some kind of policy around not writing private keys to NVM. Which will probably not work: existing applications don't have any concept of different kinds of memory for holding keys, so you'd need to modify all of them?<br> <p> The security community isn't even happy with keeping keys in RAM as it is. 
After all, a simple can of compressed air can already prevent your RAM from being wiped:<br> <p> <a href="http://en.wikipedia.org/wiki/Cold_boot_attack">http://en.wikipedia.org/wiki/Cold_boot_attack</a><br> </div> Sat, 20 Sep 2014 22:13:51 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611902/ https://lwn.net/Articles/611902/ zlynx <div class="FormattedComment"> It's possible. But it's also possible for the kernel to write to a random position on your SATA drive. Granted, this isn't as likely to happen by accident. Although it could drop random bytes into the file cache, which would result in corrupted data...<br> <p> Also, I was under the impression that the kernel switches memory maps in kernel mode, locking itself into a 1 or 2 gigabyte range. Writes outside that range that don't go through copy_to_user result in a BUG. Although maybe that was only done in 32-bit 4G/4G mode... Hmm.<br> <p> If it doesn't do that currently, it certainly could do it in the future.<br> </div> Sun, 14 Sep 2014 21:40:06 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611761/ https://lwn.net/Articles/611761/ vedantk <div class="FormattedComment"> Btrfs: Btrfs keeps track of the space it allocates in its extent tree [1]. Holes between extents correspond to free space. There's a free space cache but no explicit free space tracking [2].<br> <p> ZFS: ZFS appends allocation/deallocation metadata to a log, which it replays into an in-memory AVL tree. 
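The log-replay idea can be sketched in a few lines (a simplified illustration, not actual ZFS code; a plain sorted list of segments stands in for the AVL tree):

```python
# Simplified sketch of ZFS-style space maps: allocations and frees are
# appended to a log, which is replayed into an in-memory picture of the
# free space (ZFS uses an AVL tree; a sorted list stands in here).

def replay(log, device_size):
    """Replay (op, start, length) log entries into free [start, end) segments."""
    free = [(0, device_size)]              # everything starts free
    for op, start, length in log:
        end = start + length
        if op == "alloc":                  # carve the range out of a free segment
            new = []
            for s, e in free:
                if s <= start and end <= e:
                    if s < start:
                        new.append((s, start))
                    if end < e:
                        new.append((end, e))
                else:
                    new.append((s, e))
            free = new
        else:                              # "free": reinsert, then coalesce
            merged = []
            for seg in sorted(free + [(start, end)]):
                if merged and seg[0] <= merged[-1][1]:
                    merged[-1] = (merged[-1][0], max(merged[-1][1], seg[1]))
                else:
                    merged.append(seg)
            free = merged
    return free

log = [("alloc", 0, 4096), ("alloc", 8192, 4096), ("free", 0, 4096)]
print(replay(log, 16384))   # [(0, 8192), (12288, 16384)]
```

The appeal of the log is that persisting an allocation or a free is a single append; the expensive coalescing work happens only in memory at replay time.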
You can read about ZFS-style space maps in [3].<br> <p> Those are the only two `modern' (read: non-bitmappy) approaches I know of, but I could be missing others.<br> <p> [1]: <a rel="nofollow" href="https://btrfs.wiki.kernel.org/index.php/Btrfs_design#Extent_Trees_and_DM_integration">https://btrfs.wiki.kernel.org/index.php/Btrfs_design#Exte...</a><br> <p> [2]: <a rel="nofollow" href="https://btrfs.wiki.kernel.org/index.php/Glossary">https://btrfs.wiki.kernel.org/index.php/Glossary</a><br> <p> [3]: <a rel="nofollow" href="https://blogs.oracle.com/bonwick/en/entry/space_maps">https://blogs.oracle.com/bonwick/en/entry/space_maps</a><br> </div> Fri, 12 Sep 2014 19:40:02 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611679/ https://lwn.net/Articles/611679/ kevinm <div class="FormattedComment"> Thinking a bit about NVM, it seems like NVM that is directly addressable by the entire kernel vastly increases the probability that a write through a wild pointer by $RANDOM_DODGY_DRIVER will corrupt data on your "disk".<br> </div> Fri, 12 Sep 2014 03:37:54 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611601/ https://lwn.net/Articles/611601/ rrcsraghu <div class="FormattedComment"> "RAM-based filesystems are designed for RAM; it turns out that they are not all that well suited to the NVM case"<br> <p> In this context, I'd like to point out some of the work that folks at Lawrence Livermore National Laboratory, and we at The Ohio State University, have done in designing a highly-scalable in-memory file system designed to handle persistent memory devices.<br> <p> <a rel="nofollow" href="http://go.osu.edu/cruise-paper">http://go.osu.edu/cruise-paper</a><br> <p> Some of you might be familiar with the IBM BlueGene/Q supercomputer's persistent memory capability. We used that as our playground when prototyping CRUISE. 
While the file system was designed for checkpoint-restart workloads, the design allows extension for typical file system workloads as well.<br> <p> Our prototype is up on github for those who want to look around:<br> <a rel="nofollow" href="https://github.com/hpc/cruise">https://github.com/hpc/cruise</a><br> </div> Thu, 11 Sep 2014 15:02:11 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611599/ https://lwn.net/Articles/611599/ kevinm <div class="FormattedComment"> <font class="QuotedText">&gt; However, if the NVM device can write individual cachelines (and assuming that's not a lot less efficient than 4K block writes, as I would expect from something I connect to the CPU's memory bus), then a filesystem that assumes it needs to write a sector at a time doesn't feel like the right thing to me.</font><br> <p> That's the point of this DAX work, though - when a filesystem supports direct access, the NVM page *is* the page cache page, and at least for file data there's no writing of sectors at all. A write() of 64 bytes will just result in a 64-byte copy_from_user() directly into the NVM.<br> <p> (The sticking point is that there will still be an in-memory inode structure and an "on-disk" inode structure stored in the NVM, which will probably mean that the entire NVM copy of the inode gets rewritten whenever an attribute like mtime changes.)<br> </div> Thu, 11 Sep 2014 14:45:07 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611545/ https://lwn.net/Articles/611545/ dakas <blockquote>Volatile storage has always been considerably faster, smaller in capacity and far more expensive[...]</blockquote> "Always" is such a strong word. There were literally decades where core memory was persistent because, well, it consisted of magnetic cores. <p/> HP at the current point in time appears to be betting a significant part of its assets on memristor technology. 
It does not look feasible to make it "much faster and more volatile": they want to move the CPUs to that technology as well, so there will be no issue of "better fit/worse fit" as there is with static RAM vs dynamic RAM vs block devices. <p/> I'd like to see them pull this off and make computing more exciting again than the "more of the same, just more so" developments of the last 30+ years. Thu, 11 Sep 2014 09:01:23 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611406/ https://lwn.net/Articles/611406/ reubenhwk <div class="FormattedComment"> I suspect this is going to be more vaporware. Even if somebody does produce a storage device which is as fast as RAM, somebody else will build on that technology by making it much faster and more volatile.<br> <p> Volatile storage has always been considerably faster, smaller in capacity and far more expensive, and we've seen how a tiny bit of cache (compared to the size of RAM) can massively improve performance. A tiny bit of RAM (compared to HDD storage space) can greatly improve the performance of disk access (using the page cache). I see no reason why that will change now. If we suddenly have 100 TB of NVM which costs maybe $100, the MOBO will undoubtedly have a slot for 8-16 TB of even faster volatile memory which is going to do exactly what RAM does today and will also cost about $100. In other words, nothing will change, but everything will be faster and larger.<br> </div> Wed, 10 Sep 2014 14:28:43 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611285/ https://lwn.net/Articles/611285/ compudj <div class="FormattedComment"> About PRAMFS: we tried using it to map the LTTng tracer ring buffer, so we could recover user-space and kernel traces after a crash. 
Unfortunately, PRAMFS does not support a very useful and obvious use-case: memory mapping a file backed by persistent memory with MAP_SHARED; it only supports MAP_PRIVATE, which actually keeps the written pages in the page cache, which pretty much defeats the purpose of non-volatile memory.<br> <p> In the case of tracing, we really want to bypass the page cache. Moreover, we don't want the performance hit of going through a system call for each event.<br> <p> So this DAX patchset, assuming it supports MAP_SHARED memory maps, is really welcome from a tracing POV. Having the ability to associate struct page to those memory regions will be useful for kernel tracing too.<br> <p> Thanks!<br> <p> Mathieu<br> </div> Tue, 09 Sep 2014 11:41:49 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611138/ https://lwn.net/Articles/611138/ etienne <div class="FormattedComment"> <font class="QuotedText">&gt; &gt; I am not sure memory caching is easy to enable on PCIe mapped memory, there is plenty of small details there...</font><br> &gt;<br> <font class="QuotedText">&gt; Like for instance? 
I naively thought it was MTRR and done.</font><br> <p> If your processor supports MTRRs (most of them do, but maybe do not crash if not), then you can use them - the problem is that their number is limited and their size/alignment is a bit strict.<br> On newer processors you should use PATs (<a href="http://en.wikipedia.org/wiki/Page_attribute_table">http://en.wikipedia.org/wiki/Page_attribute_table</a>).<br> But the problem I was thinking of is that the PCIe connection to your persistent memory device may include PCIe bridges, and that bridge may have a different "Cache Line Size", its memory base address may not include the "Prefetchable" bit (see the PCI specs), and a few other things like that (error recovery, for instance) due to PCI history...<br> </div> Mon, 08 Sep 2014 10:43:41 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611113/ https://lwn.net/Articles/611113/ mithro <div class="FormattedComment"> Anyone know where you can actually buy an NVM device as a home consumer? It seems like a really cool thing to experiment with, even if it doesn't deliver on the 100TB of fast storage for less than DDR cost. <br> </div> Mon, 08 Sep 2014 01:18:08 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/611029/ https://lwn.net/Articles/611029/ pedrocr <div class="FormattedComment"> <font class="QuotedText">&gt; the OS needs to be designed from the ground up to be stateless and to *never corrupt itself*. When everything is persistent, rebooting does not fix problems...</font><br> <p> You'd just need to clearly mark the parts of the NVRAM that are persistent filesystem structures and the parts that are kernel structures. The filesystem parts are read just as today, and the kernel structure parts are ignored and created from scratch. 
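One hypothetical way to express that split (the region table, its names, and its layout are invented here purely for illustration, not any real kernel interface):

```python
# Hypothetical sketch of partitioning NVRAM into marked regions: only
# regions flagged persistent survive a boot; everything else is wiped
# and rebuilt from scratch. The region table is invented for illustration.

REGIONS = [
    {"name": "fs",     "offset": 0,    "size": 4096, "persistent": True},
    {"name": "kernel", "offset": 4096, "size": 4096, "persistent": False},
]

def boot_wipe(nvram: bytearray) -> None:
    """Zero every non-persistent region; leave filesystem regions untouched."""
    for r in REGIONS:
        if not r["persistent"]:
            nvram[r["offset"]:r["offset"] + r["size"]] = bytes(r["size"])

nvram = bytearray(b"\xff" * 8192)     # pretend the whole device held data
boot_wipe(nvram)
print(nvram[:4096] == b"\xff" * 4096, nvram[4096:] == bytes(4096))   # True True
```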
That doesn't seem too hard.<br> </div> Sat, 06 Sep 2014 22:04:44 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610997/ https://lwn.net/Articles/610997/ willy <div class="FormattedComment"> Yes, the ext2-XIP design was for that purpose, but the infrastructure that it put in looked suitable as a basis for supporting persistent memory with a modern filesystem like ext4/XFS. It turned out that the -&gt;direct_access() block device API was close, needing minor improvements, but the -&gt;get_xip_mem() filesystem API was all wrong and needed to be replaced. We ought not have support for two direct access APIs in Linux, so my patchset migrates ext2 from XIP to DAX as the appropriate pieces of DAX are added.<br> <p> One of the ext* maintainers/developers told me that having XIP support in ext2 and not in ext4 was one of the few remaining reasons not to remove ext2 from the Linux codebase, so we may see ext2-XIP deprecated in favour of ext4-DAX soon. That wasn't particularly my intent, but if it happens to help others, then that's just gravy.<br> <p> Speaking of huge pages, I'm currently working on support for those. I haven't finished debugging them yet, but I've definitely seen them inserted into the user's address space and removed again.<br> </div> Sat, 06 Sep 2014 17:20:34 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610902/ https://lwn.net/Articles/610902/ pkern <div class="FormattedComment"> XIP was designed for readonly blobs of memory on mainframes shared by multiple VMs. It needed a persistable structure, so ext2 was picked. I'm not sure why you'd try to port it to ext4 instead. 
You do not need the journaling of ext4, and extents are probably also not all that useful if you are mapping in page-sized hunks anyway (huge pages aside).<br> </div> Fri, 05 Sep 2014 14:32:32 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610896/ https://lwn.net/Articles/610896/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; I am not sure memory caching is easy to enable on PCIe mapped memory, there is plenty of small details there...</font><br> <p> Like for instance? I naively thought it was MTRR and done.<br> <p> </div> Fri, 05 Sep 2014 13:32:45 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610895/ https://lwn.net/Articles/610895/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; As for the question of "why do we need RAM if we have tens of TB of NVRAM" I'll just say this: most of the information the kernel keeps for running the machine is volatile. It's not designed to be persistent. If you just want a machine to have persistent memory, the OS needs to be designed from the ground up to be stateless and to *never corrupt itself*. When everything is persistent, rebooting does not fix problems...</font><br> <p> Well, you can always do less with more. Name a partition of your NVRAM as volatile and ignore its former content when you boot, job done?<br> <p> </div> Fri, 05 Sep 2014 13:29:10 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610894/ https://lwn.net/Articles/610894/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; Like flash based SSDs before NVRAM, the architectural structure is not very different to storage we've been using for 50 years.</font><br> <p> ... except SSDs and flash memory generally speaking are all elaborate smoke and mirrors, specifically designed to maintain backward compatibility. Very fast smoke and mirrors in the case of SSDs, but still. 
So, probably not a great example :-)<br> <p> <a href="http://arstechnica.com/information-technology/2012/06/inside-the-ssd-revolution-how-solid-state-disks-really-work/">http://arstechnica.com/information-technology/2012/06/ins...</a><br> <p> <a href="http://www.bunniestudios.com/blog/?p=3554">http://www.bunniestudios.com/blog/?p=3554</a> (On Hacking MicroSD Cards)<br> <p> </div> Fri, 05 Sep 2014 13:24:49 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610874/ https://lwn.net/Articles/610874/ etienne <div class="FormattedComment"> <font class="QuotedText">&gt; because the linux kernel does not support block size &gt; page size</font><br> <p> FAT filesystems can have a "cluster size" bigger than the page size; Linux probably only rejects "device minimum access block size" &gt; page size<br> </div> Fri, 05 Sep 2014 09:47:46 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610843/ https://lwn.net/Articles/610843/ dgc <div class="FormattedComment"> Yup, nothing new there. We already have to deal with that with filesystems made on a 64k page architecture with a 64k block size. They can't be mounted and used on an architecture with a 4k page size, because the linux kernel does not support block size &gt; page size.<br> <p> -Dave.<br> </div> Fri, 05 Sep 2014 03:00:56 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610758/ https://lwn.net/Articles/610758/ markusw <div class="FormattedComment"> Granted, I omitted the cacheline size. However, the CPU/MMU happily writes individual cachelines to RAM, despite the VM mapping. I'd also argue that the VM page size barely has an influence on the block size of block devices (otherwise, moving from 512B to 4K sectors would have a noticeable impact; let alone the 2M pages).<br> <p> To me, the "architectural structure" of a NAND based SSD looks very different from magnetic tapes or disks. 
It's not that we're lucky to not have to modify our designs because the architecture didn't change. It did. A lot. It's our existing designs that force new technology to imitate something we already have, know, and have designed for.<br> <p> I certainly agree that it makes sense to use a file-system that was designed with block devices in mind on anything that resembles a block device. It also seems logical that a filesystem not designed for persistency is an utterly bad fit for NVM. And I can imagine getting quick results with a block device based filesystem on NVM. However, if the NVM device can write individual cachelines (and assuming that's not a lot less efficient than 4K block writes, as I would expect from something I connect to the CPU's memory bus), then a filesystem that assumes it needs to write a sector at a time doesn't feel like the right thing to me.<br> <p> I don't follow your last argument, either. By definition, volatile information doesn't need to persist across a reboot (unlike an FS on NVM, where your argument holds). If NVM would be cheaper than RAM (at the same capacity and latency), you could simply wipe the device (or relevant portions) upon boot to make it behave like RAM.<br> <p> However, persistence clearly is an additional feature (when compared to RAM). And I bet we continue to have to pay for that. 
Either with higher latency or a higher price.<br> <p> Markus<br> <p> </div> Thu, 04 Sep 2014 18:35:12 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610753/ https://lwn.net/Articles/610753/ etienne <div class="FormattedComment"> <font class="QuotedText">&gt; That means PCIe based devices such as NVMe storage as well as NVDIMMs that you plug into normal ddr3/ddr4 slots.</font><br> <p> I am not sure both types can be treated the same way.<br> - PCIe devices are hot-plug with sufficiently good hardware.<br> - PCIe devices are better accessed with DMA, because then the length of the transfer is clearly known by the hardware before the start of the PCIe transaction (one PCIe transaction to transfer 4096 bytes versus a lot of transactions, one per cacheline or one per assembly instruction).<br> - I am not sure memory caching is easy to enable on PCIe mapped memory, there is plenty of small details there...<br> <p> Also, are there conditions where a modification of the page cache is not sent to the media, for instance a modified memory-mapped file receiving a revoke system call?<br> </div> Thu, 04 Sep 2014 14:26:26 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610719/ https://lwn.net/Articles/610719/ ballombe <div class="FormattedComment"> <font class="QuotedText">&gt; SO, really, when you look at persistent memory, it's really just a very fast block device where the block size is determined by the CPU TLB architecture rather than the sector size used by traditional storage.</font><br> <p> Does that mean that a particular device will only be readable on some hardware and not on others?<br> </div> Thu, 04 Sep 2014 13:23:25 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610716/ https://lwn.net/Articles/610716/ martin.langhoff <div class="FormattedComment"> The first use I can think of is speeding up writes to traditional rotational media. 
An outsize, kernel-controlled write-back cache; or a "data-writeback" journal.<br> <p> Can these NVM devices be used for that workload easily today?<br> </div> Thu, 04 Sep 2014 12:59:01 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610685/ https://lwn.net/Articles/610685/ dgc <div class="FormattedComment"> persistent memory == cpu addressable memory that is persistent.<br> <p> That means PCIe based devices such as NVMe storage as well as NVDIMMs that you plug into normal ddr3/ddr4 slots. All "byte addressable", but the actual minimum read/write resolution is that of a CPU cacheline. i.e. any sub-cacheline size modification requires a RMW cycle by the CPU on that cacheline. So really there is no difference between NVMe devices and NVDIMMs apart from protocols and access latency.<br> <p> However, all this NVRAM is still *page based* due to requiring the physical memory to be virtually mapped. This means the "block sizes" for storage are the page sizes the hardware TLB can support. i.e. 4k or 2M for x86-64. IOWs, to optimise storage layout to minimise page faults we really need to allocate in page aligned chunks and not share pages between files. Which is exactly what a block based filesystem does ;).<br> <p> So, really, when you look at persistent memory, it's really just a very fast block device where the block size is determined by the CPU TLB architecture rather than the sector size used by traditional storage.<br> <p> Like flash based SSDs before NVRAM, the architectural structure is not very different to storage we've been using for 50 years. The speeds and feeds are faster, but the same access and optimisation algorithms apply to the problem. 
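The sub-cacheline RMW rule described above is easy to make concrete; a small sketch (assuming 64-byte cachelines) of which lines a given write touches, and which of them force a read-modify-write:

```python
# Sketch of the RMW rule: for a write of `length` bytes at byte `offset`,
# list which 64-byte cachelines are touched and which of them need a
# read-modify-write because the write only partially covers them.

CACHELINE = 64

def touched_lines(offset, length):
    first = offset // CACHELINE
    last = (offset + length - 1) // CACHELINE
    lines = []
    for n in range(first, last + 1):
        start, end = n * CACHELINE, (n + 1) * CACHELINE
        fully_covered = offset <= start and offset + length >= end
        lines.append((n, "write" if fully_covered else "rmw"))
    return lines

print(touched_lines(8, 120))   # [(0, 'rmw'), (1, 'write')]
```

A cacheline-aligned, cacheline-sized write avoids the RMW entirely; any misaligned edge pays for it, which is the same argument that has always favoured aligned, block-sized I/O.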
We don't need to completely redesign the wheel to take advantage of the new technology - we only need to do that if we want to eke every last bit of performance out of it.<br> <p> As for the question of "why do we need RAM if we have tens of TB of NVRAM" I'll just say this: most of the information the kernel keeps for running the machine is volatile. It's not designed to be persistent. If you just want a machine to have persistent memory, the OS needs to be designed from the ground up to be stateless and to *never corrupt itself*. When everything is persistent, rebooting does not fix problems...<br> <p> -Dave.<br> </div> Thu, 04 Sep 2014 04:55:27 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610671/ https://lwn.net/Articles/610671/ nix <div class="FormattedComment"> Pick the previous host. It's quite plausible that a filesystem could be split into zones of some kind and have some identifier somewhere relating to the current zone...<br> <p> </div> Wed, 03 Sep 2014 23:56:04 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610637/ https://lwn.net/Articles/610637/ ssl <div class="FormattedComment"> <font class="QuotedText">&gt; free block bitmap</font><br> <p> By the way, is there a comparison of how modern filesystems keep tabs on free space?<br> </div> Wed, 03 Sep 2014 21:48:30 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610564/ https://lwn.net/Articles/610564/ busterb <div class="FormattedComment"> The article mentions NVDIMMs, which are byte-addressable (well, as byte-addressable as DDR3 can be, if we're being pedantic). Current implementations are usually RAM + FLASH + some sort of backup power, but I assume there's something greater on the horizon.<br> <p> I think it'd be great to have a form of hibernation that would allow the DRAM refresh to be completely disabled. 
I've worked on embedded server systems where whether or not to enable the memory controllers when the CPU was not in use was a real bone of contention between the software developers (who wanted to use more memory for the management processor cores) and the hardware guys (who wanted the lowest power consumption when the main processor cores were powered off).<br> </div> Wed, 03 Sep 2014 16:01:16 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610547/ https://lwn.net/Articles/610547/ markusw <div class="FormattedComment"> Wait a second. When talking about "persistent memory", are we looking at (block) devices (speaking ATA or SCSI) or something closer to byte-addressable RAM?<br> <p> In the former case, I'm wondering if the page cache really is inefficient, as I think RAM is likely to keep some latency margin compared to any kind of sufficiently big NVM device. (Why else would you keep the RAM if NVM was available in "tens of TB" and at equal latency?) In any case: sure, use a block based filesystem for what basically is a block device.<br> <p> In the latter case, I don't quite think a block based filesystem is a good fit. Nor a non-persistent one.<br> <p> (And no, with all its queue management and 64-byte-long commands, I don't consider NVMe to be byte-addressable. Not efficiently. It may well "ensur[e] efficient small random I/O operation" [1] as long as you consider 4K a small I/O operation.)<br> <p> Markus<br> <p> [1]: <a href="http://www.nvmexpress.org/about/nvm-express-overview/">http://www.nvmexpress.org/about/nvm-express-overview/</a><br> <p> </div> Wed, 03 Sep 2014 14:48:57 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610532/ https://lwn.net/Articles/610532/ lpremoli <div class="FormattedComment"> Thanks Dave. 
I googled your quote and found the village person you mention and his comments ;)<br> <p> <p> </div> Wed, 03 Sep 2014 11:50:41 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610523/ https://lwn.net/Articles/610523/ dgc <div class="FormattedComment"> <font class="QuotedText">&gt; Does anybody have comments about the fitting of PRAMFS with mmappable NVMs</font><br> <p> I've got a quote for you:<br> <br> | That's where we started about two years ago with that<br> | horrible pramfs trainwreck.<br> <p> Bonus points for working out which village idiot said that and where it fits in the context of the discussion. ;)<br> <br> -Dave.<br> <p> </div> Wed, 03 Sep 2014 11:33:50 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610513/ https://lwn.net/Articles/610513/ smurf Looking at PRAMFS's tech page <a href="http://pramfs.sourceforge.net/tech.html">on SourceForge</a>, I seriously doubt its suitability for multi-terabyte file systems. A free-block <i>bitmap</i>, linear directories, and no hard links? No way. Wed, 03 Sep 2014 10:11:14 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610506/ https://lwn.net/Articles/610506/ lpremoli <div class="FormattedComment"> For some time I have been studying PRAMFS and I think that PRAMFS pretty much satisfies most requirements an NVM has (persistency, memory mappable contents, simple structures and small code base).<br> <p> Does anybody have comments about the fitting of PRAMFS with mmappable NVMs?<br> </div> Wed, 03 Sep 2014 08:41:12 +0000 Supporting filesystems in persistent memory https://lwn.net/Articles/610492/ https://lwn.net/Articles/610492/ flewellyn <div class="FormattedComment"> I was going to ask essentially the same question as Andrew Morton, along the lines of "Why not just treat it as persistent RAM, aka a RAM disk?" 
But the point about persistence requiring consistency checks and the other features that a full-on filesystem provides is a good one.<br> <p> Now I just find myself hoping that the DAX feature, somewhere, has something called "Jadzia"...<br> </div> Wed, 03 Sep 2014 02:36:49 +0000
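To make the DAX discussion in the comments above concrete: the user-visible model is just a shared, persistent mmap. A minimal userspace sketch follows; note it runs unchanged on any Linux filesystem, and only on a DAX mount do the stores actually bypass the page cache.

```python
# Minimal sketch of the access model DAX exposes to userspace: mmap a file
# MAP_SHARED and store bytes through the mapping. On a DAX mount the stores
# go straight to persistent memory; on an ordinary filesystem they go
# through the page cache, but the code is identical. (Unix-only: the
# `flags` argument assumes a POSIX mmap.)
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem-file")
with open(path, "wb") as f:
    f.truncate(4096)                  # size the "persistent" region

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096, flags=mmap.MAP_SHARED)
    m[0:5] = b"hello"                 # a plain store, not a write() syscall
    m.flush()                         # msync() the mapping back to the media
    m.close()

with open(path, "rb") as f:
    data = f.read(5)
print(data)                           # b'hello'
```

With MAP_PRIVATE instead, the stores would land in anonymous copy-on-write pages and never reach the file, which is exactly the PRAMFS limitation described in the tracing comment above.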