LWN: Comments on "Rethinking multi-grain timestamps" https://lwn.net/Articles/946394/ This is a special feed containing comments posted to the individual LWN article titled "Rethinking multi-grain timestamps". en-us Fri, 29 Aug 2025 01:43:47 +0000 Fri, 29 Aug 2025 01:43:47 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Rethinking multi-grain timestamps https://lwn.net/Articles/948161/ https://lwn.net/Articles/948161/ LyonJE <div class="FormattedComment"> Talking about a performance trade-off has me thinking that simply using a low-res timestamp works fine? It looks like we are only talking about fail-to-use-cache for a millisecond. (On the occasions that happens.) I suppose media is faster these days, but then also why isn't a layer closer to the media doing that work for the millisecond in the cases where it matters?<br> <p> Maybe I'm missing something, but nanosecond caching at a higher FS level maybe isn't the right place to do that, especially if it means introducing a swathe of finer-grained changes and adaptations, as has clearly been seen to be problematic?<br> </div> Thu, 19 Oct 2023 09:49:41 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947485/ https://lwn.net/Articles/947485/ jlayton <div class="FormattedComment"> When we talk about a fine grained timestamp, what we mean is one that comes directly from the kernel's internal high-resolution timekeeping. That generally has very fine resolution (~100ns or better) and monotonically increases. We grab that value, calculate and fix up the wallclock time from it and give it out. The coarse grained timestamp is updated approximately every jiffy (once per timer tick) and is just a snapshot of the fine grained timestamp at that time.<br> <p> So to answer your question, there should be no problem. The idea is to update the apparent coarse-grained timestamp _source_ before returning any fine-grained timestamp. 
Any later fetch from the timestamp source (coarse or fine) will always be later than or equal to the last fine grained timestamp handed out. That should be good enough to keep "make" happy.<br> <p> (Note that it's a bit more complex with the way that times are tracked in the timekeeping code, so I'm glossing over some details here.)<br> </div> Wed, 11 Oct 2023 23:18:14 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947483/ https://lwn.net/Articles/947483/ nijhof <div class="FormattedComment"> How would that work with multiple coarse - fine - coarse - fine... updates in close succession? If each would have to be later than the previous one, then each coarse timestamp would have to be advanced. And so you could end up with timestamps in the future?<br> </div> Wed, 11 Oct 2023 22:55:16 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947341/ https://lwn.net/Articles/947341/ mathstuf <div class="FormattedComment"> There's still no causality broken in QM with entanglement. You can observe some measurement of an entangled entity and know what result would occur if measured somewhere else at the same moment (and outside the light cone), but causality is not broken because to *use* the information, you must actually communicate with the other side (as you cannot influence the result without breaking entanglement; you're just learning things at the same time as elsewhere).<br> Note that the "interpretations" (e.g., Copenhagen, many worlds, etc.) are about *how* entangled particles do this.<br> <p> QM doesn't have anything to say about black holes as it does not have a model for gravity at all. The problem is that black holes represent a situation where gravity is strong enough to matter (heh) on the QM scales.<br> <p> And yes, there are gaps in the theories for what happens here. 
We don't know what it is.<br> <p> PBS Space Time is a good source of information on these topics: <a href="https://www.youtube.com/c/pbsspacetime/videos">https://www.youtube.com/c/pbsspacetime/videos</a><br> </div> Wed, 11 Oct 2023 11:07:16 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947340/ https://lwn.net/Articles/947340/ jlayton <div class="FormattedComment"> I probably didn't explain this very well. When I say "handed out" I meant the clock value being stamped onto the inode, not given out via stat() and friends.<br> <p> Basically, when we go to update any of the inode's timestamps we'll always grab the coarse-grained timestamp in the kernel for the update, unless someone has viewed it recently, in which case we'll grab a fine-grained timestamp instead. The idea is to update the coarse grained timestamp whenever we hand out a fine-grained one. That avoids the problem described in the article where the timestamps on files updated later appear to be earlier than the fine grained update.<br> <p> That does make issuing a fine-grained timestamp a bit more expensive though, so some of my current effort is in trying to improve that, and minimizing the number of fine-grained updates that are needed.<br> </div> Wed, 11 Oct 2023 10:25:42 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947339/ https://lwn.net/Articles/947339/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; The events are closer together in time than they are in space such that it is not possible for light to travel from one event to the other, and there is no causal connection between the events (i.e. 
it is not possible to say that event 1 caused event 2, or vice versa).</span><br> <p> Just to throw a spanner into the works, quantum mechanics would beg to differ :-) That was Einstein's "Spooky action at a distance", which appears to be a real thing.<br> <p> Just like (if I've got it right) quantum mechanics says black holes can't exist.<br> <p> The latest I knew, we have some evidence that says relativity is correct, we have some evidence that says quantum mechanics is correct, and we have loads of evidence that they can't both be right. Where do we go from here :-) Has somebody found the GUT? Or the TOE?<br> <p> Cheers,<br> Wol<br> </div> Wed, 11 Oct 2023 09:59:12 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947334/ https://lwn.net/Articles/947334/ nim-nim <div class="FormattedComment"> That’s very nice to learn, thank you for the info!<br> <p> But won’t that make any file whose timestamp is read after the fine-grained timestamp newer than files whose timestamp was read before, even though in the coarse timestamp world they would have the same timestamp, and may even have been written in a different order?<br> </div> Wed, 11 Oct 2023 07:52:16 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947312/ https://lwn.net/Articles/947312/ jlayton <div class="FormattedComment"> I think it will work.<br> <p> Whenever we stamp a file with a fine-grained timestamp, that time will now become the floor for any further timestamp that is handed out. 
The revised draft I have of this series works, and it now passes the testcase that was failing before, but it's still quite rough and needs further testing.<br> </div> Tue, 10 Oct 2023 22:24:14 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947295/ https://lwn.net/Articles/947295/ ianmcc <p>If two events are separated in space by a distance that is more than <i>cΔt</i>, where <i>Δt</i> is the difference in time between the events and <i>c</i> is the speed of light, then it is known as a "space-like interval". The events are closer together in time than they are in space such that it is not possible for light to travel from one event to the other, and there is no causal connection between the events (i.e. it is not possible to say that event 1 caused event 2, or vice versa). <p> It is a theorem in special relativity that if two events are space-like separated, then there exists a (possibly moving) reference frame where the two events are simultaneous. Moreover there are also reference frames where event 1 occurs before event 2, and reference frames where event 2 occurs before event 1. <p> Although different observers will genuinely disagree about the order of events, since there is no causal connection between them there is ultimately no ambiguity in observable effects. I.e. both observers would be able to calculate and agree that event 1 could not have caused event 2, and vice versa. So although there will be a reference frame where you sip your tea before the supernova explodes, you can rest assured that you didn't cause it. 
Tue, 10 Oct 2023 20:30:11 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947273/ https://lwn.net/Articles/947273/ wittenberg <div class="FormattedComment"> At this point, you need some old guy to point out the definitive discussion of this: Time, Clocks, and the Ordering of Events in a Distributed System <br> <a href="https://lamport.azurewebsites.net/pubs/time-clocks.pdf">https://lamport.azurewebsites.net/pubs/time-clocks.pdf</a> (1978) by Leslie Lamport. As you would expect from him, it's beautifully written. Everybody concerned with this issue should read it.<br> <p> --David<br> </div> Tue, 10 Oct 2023 16:11:54 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947268/ https://lwn.net/Articles/947268/ nim-nim <div class="FormattedComment"> I don’t think advancing the apparent coarse grained time will work, that will just replace one set of time comparison errors with another.<br> </div> Tue, 10 Oct 2023 15:47:07 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947260/ https://lwn.net/Articles/947260/ Baughn <div class="FormattedComment"> First from your frame of reference, sure, but there'll be a frame of reference in which the events are reversed.<br> <p> In the supernova case those are all far away from you in phase space, but for high-frequency networking there's a lot more chance of ambiguity.<br> </div> Tue, 10 Oct 2023 14:27:42 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947237/ https://lwn.net/Articles/947237/ spacefrogg <div class="FormattedComment"> This could only be a problem under the assumption that the two modifications are separated by less time than the lower time resolution (the less precise timestamp). 
In such cases (the last I know of is FAT), timestamps must be considered unreliable and disregarded or treated in an application-specific way.<br> <p> Without knowing any specifics, I don't think that this is an issue here.<br> </div> Tue, 10 Oct 2023 13:54:33 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947196/ https://lwn.net/Articles/947196/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; you can and should just pick your favorite reference frame, and use Lorentz transformations to correct all observations in other frames to match it.</span><br> <p> I'm thinking humans here. And stock markets. Where $billions could hang on the precise ordering of events. :-) <br> <p> And yes, I know that in our macro world all reference frames are - to all intents and purposes - the same. But as soon as you say "pick your favourite frame", you're going to get people fighting for the one that is to their personal advantage.<br> <p> Which is my point. As clocks get faster (the point of this article) and distances get greater (we're talking about a network), the greater the importance of the chosen reference frame, which is a matter of politics not maths. 
Which means we cannot appeal to logic for a solution.<br> <p> Cheers,<br> Wol<br> </div> Tue, 10 Oct 2023 12:33:51 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947195/ https://lwn.net/Articles/947195/ Wol <div class="FormattedComment"> But as far as the photon is concerned, you sipped the tea before the supernova happened.<br> <p> From its reference frame, no time elapsed between the supernova exploding, and it arriving at yours.<br> <p> So you must have sipped the tea before the star exploded.<br> <p> Cheers,<br> Wol<br> </div> Tue, 10 Oct 2023 12:21:37 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947189/ https://lwn.net/Articles/947189/ NYKevin <div class="FormattedComment"> <span class="QuotedText">&gt; But what I was trying to get at, is that if that distance is greater than your thirty kilometers, either you don't actually need to know the order, or any attempt to assign an order is essentially throwing dice at random.</span><br> <p> No, that is not what relativity says. Relativity says that the order is *arbitrary*, not that it is random. There is no randomness introduced by events separated by spacelike intervals - you can and should just pick your favorite reference frame, and use Lorentz transformations to correct all observations in other frames to match it. This is an entirely deterministic mathematical process which will produce a total ordering of all events (unless your chosen reference frame says they are exactly simultaneous, which can be disregarded since your measurements are not perfectly precise anyway).<br> <p> <span class="QuotedText">&gt; At the end of the day, humans don't like it when the people who know say "it's unknowable". And in the example we appear to be discussing here, "make" running across a distributed file system, I find it hard to grasp how you can make the required sequential determinism work over the randomness of parallel file saves. 
If the system is running fast enough, or the network is large enough, the results will by the laws of physics be random, and any attempt to solve the problem is doomed to failure.</span><br> <p> Yes, but this is not about relativity. This is about "I don't know how fast my network/SSD/whatever runs," or "I don't know how wrong my clock is." Those are much older problems, which have been well-understood in the world of distributed systems for decades. The most common approach is to use something like Paxos, Raft, or CRDTs, all of which explicitly establish "happens-before" relationships as a natural part of their consensus/convergence algorithms. Or, to put it in even simpler terms: The way you make sure X happens before Y is to have the computers responsible for X and Y talk to each other and arrange for that to be the case.[1]<br> <p> [1]: It should be acknowledged that this is harder than it sounds. If you only have two computers, it may well be completely intractable, for some definitions of "talk to each other" - see the "two generals problem." But there are versions of this problem which are more tractable, and modern distributed systems are built around solving those versions of the problem.<br> </div> Tue, 10 Oct 2023 07:43:35 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947187/ https://lwn.net/Articles/947187/ Wol <div class="FormattedComment"> <span class="QuotedText">&gt; &gt; At which point, don't you now get caught by relativity? :-)</span><br> <p> <span class="QuotedText">&gt; Not yet. The next logical step down from 1 ms resolution is 100 µs resolution, but as long as both machines are within thirty kilometers[1] of each other, the events in question are separated by a timelike interval, and so all observers will agree about the order in which they happen.</span><br> <p> Fascinating! Yes really. But I think maybe I should have used the word "causality" rather than "relativity". 
My bad ...<br> <p> But what I was trying to get at is that if that distance is greater than your thirty kilometers, either you don't actually need to know the order, or any attempt to assign an order is essentially throwing dice at random. (I think about that with regard to distributed databases, and I'd certainly try to localise the problem to avoid those network effects ...)<br> <p> At the end of the day, humans don't like it when the people who know say "it's unknowable". And in the example we appear to be discussing here, "make" running across a distributed file system, I find it hard to grasp how you can make the required sequential determinism work over the randomness of parallel file saves. If the system is running fast enough, or the network is large enough, the results will by the laws of physics be random, and any attempt to solve the problem is doomed to failure.<br> <p> From what you're saying, we're nowhere near that limit yet, but we might get better results if we planned for hitting it, rather than pretending it's not there.<br> <p> Cheers,<br> Wol<br> </div> Tue, 10 Oct 2023 07:24:21 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947182/ https://lwn.net/Articles/947182/ joib <div class="FormattedComment"> <span class="QuotedText">&gt; At which point, don't you now get caught by relativity? :-)</span><br> <p> "Relativity" is not some pixie dust you can sprinkle over your argument to handwave away the need to think, unfortunately.<br> <p> To actually answer the question, yes, at some point you need to take relativistic effects into account if you need really accurate time synchronization. Gravitational time dilation, meaning that your clock ticks faster or slower depending on the altitude (strength of the gravitational field), is a thing. 
Likewise, if two clocks are moving at significant velocity with respect to each other (say, GPS satellites) you start seeing relativistic effects.<br> <p> But signal propagation between fixed locations A and B at finite speed does not need any relativity. If you can measure the propagation delay between the two locations, you can agree on a common reference time. That's how e.g. TAI (<a href="https://en.wikipedia.org/wiki/International_Atomic_Time">https://en.wikipedia.org/wiki/International_Atomic_Time</a> ) works, with super accurate atomic clocks spread out all over the world agreeing on a common reference time scale. (Just to clarify, the atomic clocks participating in TAI do account for gravitational time dilation; my point is that fixed clocks separated by some distance are not some unsolvable relativistic mystery.)<br> <p> <span class="QuotedText">&gt; Two events, happening separated by space, you just can NOT always tell which happened first. End of. Tough.</span><br> <p> From your, no doubt, extensive studies of relativity you should know that is an ill-posed statement. What relativity actually tells us is that there is no absolute time scale in the universe; it's all, drumroll, relative. However, for any particular observer, the order in which the observer sees events IS well defined. And thus two observers, knowing their distance and velocity with respect to each other, can agree on a common time scale and can calculate in which order, and when, the other sees events (which might not be the same order in which they see them themselves).<br> <p> <span class="QuotedText">&gt; I think as soon as you have events happening to the same file system, from different computers, you just have to accept that knowing for sure which one happened first is a fool's errand. 
Some times you just have to accept that the Universe says NO!</span><br> <p> Practically speaking, the problem is not so much that relativity is some mysterious force that prevents us from knowing, but rather that computers themselves, as well as signal propagation in computer networks, are subject to a lot of timing variation. Time synchronization protocols like NTP and PTP do a lot of clever filtering etc. to reduce that noise, but obviously can't reduce it to zero.<br> <p> Another practical problem wrt ordering events is that if you have a bunch of timestamped events (which, as mentioned above, we can agree to a common timescale to a relatively high accuracy) coming in from a number of sources, one must wait for at least the propagation delay before one can be certain about the relative ordering of the events. Well, there are a number of approaches to agreeing upon a common event ordering in a distributed system, like the Google Spanner mentioned in a sibling comment, two-phase commit, and whatnot. They all tend to have drawbacks compared to a purely local system that doesn't need to care about such issues.<br> <p> </div> Tue, 10 Oct 2023 06:07:17 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947178/ https://lwn.net/Articles/947178/ NYKevin <div class="FormattedComment"> <span class="QuotedText">&gt; At which point, don't you now get caught by relativity? :-)</span><br> <p> Not yet. 
The next logical step down from 1 ms resolution is 100 µs resolution, but as long as both machines are within thirty kilometers[1] of each other, the events in question are separated by a timelike interval, and so all observers will agree about the order in which they happen.<br> <p> That's not to say this never becomes a problem (obviously there are pairs of computers that are separated by more than thirty kilometers), but there are several objections to the "relativity" argument:<br> <p> * The average LAN is way too small for this to be a problem, so LAN users can disregard relativity altogether unless we want to go to hundreds-of-nanoseconds precision or better.<br> * Even when relativity is a problem, you always have the option of selecting an arbitrary reference frame (like, say, the ITRF[2]), and declaring that to be the "right" reference frame, applying local corrections as needed, so you can still have a total ordering on events. Some observers will disagree with that ordering, but...<br> * ...in practice, the observers who disagree with your chosen ordering are either moving relative to your chosen reference frame, or they are experiencing a different level of gravity (because they're in space and your reference frame is not, or something along those lines). Data centers on Earth are not really moving relative to each other at significant speed, and surface variations in the Earth's gravity are quite small as well, so data centers will generally agree on the order in which events happen, even if they are separated by large distances. The "local corrections" that we need to do are entirely trivial, and amount to backsolving for the light-speed delay. Admittedly, this is a much harder problem if you want to build a data center on the Moon, or Mars, but we're not doing that yet.<br> * If all else fails, you adopt the TrueTime[3] data model and report time intervals instead of time stamps (i.e. 
instead of saying "it is exactly 14:00:00.000," you report something like "it is no earlier than 14:00:00.000 and no later than 14:00:00.007"). You can then account for all relativity of simultaneity by including it as part of the uncertainty (and always reporting relative to some arbitrary fixed reference frame, regardless of what the local reference frame looks like). This probably does make performance somewhat worse in some deployment scenarios (e.g. on Mars), but it has already been widely deployed as part of Spanner, so we know that it correctly solves the general "I don't know exactly what time it is" problem, regardless of whether that problem comes from relativity, clock skew, or some combination of the two.<br> <p> [1]: <a href="https://www.wolframalpha.com/input?i=distance+light+travels+in+100+microseconds">https://www.wolframalpha.com/input?i=distance+light+trave...</a><br> [2]: <a href="https://en.wikipedia.org/wiki/International_Terrestrial_Reference_System_and_Frame">https://en.wikipedia.org/wiki/International_Terrestrial_R...</a><br> [3]: <a href="https://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf">https://static.googleusercontent.com/media/research.googl...</a><br> <p> Disclaimer: I work for Google, and the service I manage uses Spanner as a backend.<br> </div> Tue, 10 Oct 2023 02:45:34 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947170/ https://lwn.net/Articles/947170/ mjg59 <div class="FormattedComment"> <span class="QuotedText">&gt; To add, if the time between two events is less than the time for a photon to travel between the locations, then the question "which came first" does not make sense.</span><br> <p> If the light from a distant supernova reaches me shortly after I've taken a sip of tea, I can pretty confidently assert that the supernova happened first even though the time between the two events was less than the time for a photon to travel between the locations.<br> 
</div> Mon, 09 Oct 2023 22:13:30 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947168/ https://lwn.net/Articles/947168/ iabervon <div class="FormattedComment"> It seems like what NFS wants is an mtime value such that if you do:<br> <p> mtime1 = mtime<br> content1 = content<br> mtime2 = mtime<br> <p> and see that mtime1 == mtime2, then later, if mtime == mtime1, content was still content1 when you last looked at mtime. This doesn't work at millisecond granularity, because there could be another modification in the same millisecond after the one that led to mtime2, and there could have been a modification leading to mtime1 in the same millisecond before a second one between reading the content and mtime2. Of course, the additional precision beyond a millisecond doesn't have to reflect when in the millisecond the modifications happened; it just has to increase with each different content. The excessive precision is really just ensuring that multiple modifications can't happen without getting a different mtime, but you also need to deal with still having this property if the file gets evicted from the cache at various points, which is the tricky part.<br> </div> Mon, 09 Oct 2023 21:37:09 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947166/ https://lwn.net/Articles/947166/ Cyberax <div class="FormattedComment"> One light-nanosecond is just 30 centimeters. At this point, it makes no sense to talk about precision. 
Even 10ns is just 3 meters.<br> </div> Mon, 09 Oct 2023 21:01:40 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947165/ https://lwn.net/Articles/947165/ Wol <div class="FormattedComment"> To add, if the time between two events is less than the time for a photon to travel between the locations, then the question "which came first" does not make sense.<br> <p> Surely, if the latency of a message passing between two computers is greater than the time between two events, one happening on one computer, and the other event on the other computer, it's exactly the same. Asking "which came first" is a stupid question, even if the speed of light does mean that an answer is possible (which is not guaranteed).<br> <p> Cheers,<br> Wol<br> </div> Mon, 09 Oct 2023 20:39:31 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947163/ https://lwn.net/Articles/947163/ Wol <div class="FormattedComment"> At which point, don't you now get caught by relativity? :-)<br> <p> Two events, happening separated by space, you just can NOT always tell which happened first. End of. Tough.<br> <p> I think as soon as you have events happening to the same file system, from different computers, you just have to accept that knowing for sure which one happened first is a fool's errand. Some times you just have to accept that the Universe says NO!<br> <p> Cheers,<br> Wol<br> </div> Mon, 09 Oct 2023 20:30:08 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947154/ https://lwn.net/Articles/947154/ smoogen <div class="FormattedComment"> Wouldn't caching it in VFS work only for non-networked filesystems? If you have a cluster going, each one is going to have different cached VFS timestamps but it might be A which writes to file1 and B which writes to file2.. 
the cached high res in B would not get to C (the nfs server)<br> </div> Mon, 09 Oct 2023 18:12:04 +0000 Timestamp should be a range https://lwn.net/Articles/947153/ https://lwn.net/Articles/947153/ epa <div class="FormattedComment"> The right interface would be to provide the timestamp as a range from the earliest possible value to the latest possible. Then make(1) could work out conservatively what it needs to do.<br> </div> Mon, 09 Oct 2023 18:07:47 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947150/ https://lwn.net/Articles/947150/ jlayton <div class="FormattedComment"> <span class="QuotedText">&gt; The shape of what comes next might be seen in this series from Jeff Layton, the author of the multi-grain timestamp work.</span><br> <p> This set is probably also defunct, as it means that you could use utimensat() to set a timestamp and then not get the same value back when you fetched it. My current approach is to try to advance the apparent coarse grained time whenever a fine grained time is handed out. That should mitigate the problem of seeing out-of-order timestamps that Jon described. This is a major rework though, and probably won't be ready for v6.7.<br> </div> Mon, 09 Oct 2023 17:38:22 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947142/ https://lwn.net/Articles/947142/ Wol <div class="FormattedComment"> This may be a daft comment, but if the problem is that file1's modification data has been stored in low-res, surely the fix is to at least cache it in hi-res?<br> <p> If you're caching all modifications in hi-res in the VFS, would that help? Then you do the usual thing of dropping cache on an LRU basis, quite possibly bunching files on an "equal low-res modification time" to drop. 
You could always specify when to drop based on an aging basis rather than a cache full basis, so a system with loads of space to cache that can stay on top of it for a while (or will that make huge holes in kernel ram?).<br> <p> Cheers,<br> Wol<br> </div> Mon, 09 Oct 2023 16:29:58 +0000 Rethinking multi-grain timestamps https://lwn.net/Articles/947140/ https://lwn.net/Articles/947140/ tux3 <div class="FormattedComment"> <span class="QuotedText">&gt;Timestamps are carefully truncated before being reported to user space, though, so that the higher resolution is not visible outside of the virtual filesystem layer. That should prevent problems like the one described above. </span><br> <p> I may be confused, but is it really enough?<br> <p> Say I write file 1 from userspace locally, then do steps 2, 3, 4 on file 2 from NFS.<br> Now, a local program watches file 2, sees that it has been written, and responds by updating file 1 (locally).<br> <p> On the other side of the NFS, maybe I am waiting to see a file 1 update, because I expect the watcher program to respond.<br> Can it happen that I see file 1 written before file 2, because file 1 got a low-res timestamp, but NFS still returns me a high-resolution file 2 timestamp, and so I wait forever?<br> <p> </div> Mon, 09 Oct 2023 15:55:53 +0000
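The mechanism jlayton describes in several of the comments above — stamp inodes with the coarse-grained clock by default, fall back to a fine-grained stamp when the timestamp has recently been queried, and advance the apparent coarse-grained time so that later queries never look earlier — can be sketched roughly as follows. This is a toy model with hypothetical names, not the kernel's actual timekeeping code:

```python
import itertools

# Rough model of the multi-grain scheme discussed above (hypothetical
# names, not the kernel's API): the coarse clock is a snapshot updated
# only on timer ticks, but handing out a fine-grained stamp raises a
# floor that every later read (coarse or fine) must respect, so a
# timestamp handed out later can never appear earlier.

fine_counter = itertools.count(1000)   # stand-in for a high-resolution clock


class MultiGrainClock:
    def __init__(self):
        self.coarse = 0   # snapshot, refreshed once per timer tick
        self.floor = 0    # raised whenever a fine-grained stamp is issued

    def timer_tick(self, now_fine):
        """Periodic tick: snapshot the fine clock into the coarse one."""
        self.coarse = now_fine

    def coarse_stamp(self):
        """A coarse read may never fall behind a fine stamp already issued."""
        return max(self.coarse, self.floor)

    def fine_stamp(self):
        """Issue a fine-grained stamp and advance the apparent coarse time."""
        now = next(fine_counter)
        self.floor = max(self.floor, now)
        return now


clk = MultiGrainClock()
clk.timer_tick(next(fine_counter))   # one timer tick has occurred
f = clk.fine_stamp()                 # someone observed a fine-grained mtime
assert clk.coarse_stamp() >= f       # later coarse stamps are not "earlier"
```

On nijhof's question about the apparent time running ahead: in this model the floor only ever advances to a fine-grained value that was actually read from the high-resolution clock, so the apparent coarse time can lead the next timer tick but can never pass the true fine-grained clock.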
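epa's range suggestion and the TrueTime model NYKevin describes share one idea: report a timestamp as an interval [earliest, latest] rather than a point, and trust a comparison only when the intervals do not overlap. A minimal sketch (the helper name is hypothetical, not any real API):

```python
# Sketch of interval timestamps in the spirit of epa's range idea and
# Spanner's TrueTime: each timestamp is an (earliest, latest) pair, and
# an ordering is asserted only when the intervals are disjoint.

def definitely_before(a, b):
    """True only if interval a certainly precedes interval b."""
    a_earliest, a_latest = a
    b_earliest, b_latest = b
    return a_latest < b_earliest


source = (100, 107)   # source mtime, with 7 units of clock uncertainty
target = (110, 117)   # target built afterwards

assert definitely_before(source, target)              # disjoint: order is certain
assert not definitely_before((100, 112), (110, 117))  # overlap: order is unknown
```

A conservative make(1) built on this would rebuild whenever `definitely_before(source_mtime, target_mtime)` is false, trading occasional spurious rebuilds for correctness under clock uncertainty.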