Dentry negativity
A "dentry" in the Linux kernel is the in-memory representation of a directory entry; it is a way of remembering the resolution of a given file or directory name without having to search through the filesystem to find it. The dentry cache speeds lookups considerably; keeping dentries for frequently accessed names like /tmp, /dev/null, or /usr/bin/tetris saves a lot of filesystem I/O.
A negative dentry is a little different, though: it is a memory of a filesystem lookup that failed. If a user types "more cowbell" and no file named cowbell exists, the kernel will create a negative dentry recording that fact. Should our hypothetical user, being a stubborn type, repeat that command, the kernel will encounter the negative dentry and reward said user — who is unlikely to be grateful, users are like that — with an even quicker "no such file or directory" error.
Optimized error messages for fat-fingered commands are a nice benefit of negative dentries, but their real value lies elsewhere. It turns out that lookups on nonexistent files happen frequently, and it's often the same files that are being looked for. Shared-library lookups are one example; it can be instructive to type something like this:
$ strace -eopenat /usr/bin/echo 'Subscribe to LWN'
On your editor's system, the output looks like:
openat(AT_FDCWD, "/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/lib/locale/en_US.UTF-8/LC_IDENTIFICATION", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/locale/en_US.utf8/LC_IDENTIFICATION", O_RDONLY|O_CLOEXEC) = 3
[...]
That simple echo command generates 13 failed lookups on a Fedora 31 system; launching oowriter creates 68 of them, and launching gnucash generates 277. For applications like these, optimizing failed lookups can yield a perceptible improvement in startup time. Compilers and language runtimes can also generate a lot of failed lookups; consider, for example, the handling of C #include or Python import statements. A quick "allmodconfig" kernel build run on your editor's system caused 52,799,262 failed lookups; that is worth optimizing.
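Readers who want to reproduce that sort of measurement on their own systems can get a rough count with strace; this is just a sketch, the kernel-tree path and trace-file location are arbitrary examples, and (as a commenter notes below) tracing an entire build is anything but quick:
$ cd ~/src/linux && make allmodconfig
# Trace file-related system calls for the whole build, children included.
$ strace -f -e trace=%file -o /tmp/build.trace make -j$(nproc)
# Count the lookups that failed with "no such file or directory".
$ grep -c ENOENT /tmp/build.trace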
There is one little problem with negative dentries, though: they require memory. All of those failed lookups can generate a lot of negative dentries, to the point that they start to crowd out more useful data. This is not a new problem; LWN reported on a complaint about negative dentries from memory-management developer Andrea Arcangeli — in 2002. For the most part, though, the normal shrinker mechanisms that keep the dentry cache as a whole under control have also sufficed to keep the negative variety from taking over.
Waiman Long has been working on the cases where normal shrinking doesn't work, though; he posted a new version of his patch set toward the end of February. As he points out, the number of positive dentries is limited by the number of files in the system, but there is no practical limit to the number of files that don't exist. As an illustration of what this can mean, Eric Sandeen pointed out some code in the NSS library that deliberately tries to open 10,000 nonexistent files — every time it starts up — as a timing exercise. Even without such pathological examples, though, the number of negative dentries has the potential to grow without bound.
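Curious readers can watch this growth happen; on kernels recent enough to report it, the fifth field of /proc/sys/fs/dentry-state is the number of negative dentries. A quick demonstration, using obviously nonexistent names:
$ cat /proc/sys/fs/dentry-state
# Generate 1,000 failed lookups in /tmp.
$ for i in $(seq 1000); do stat /tmp/no-such-file-$i >/dev/null 2>&1; done
# The fifth number should have grown by roughly 1,000.
$ cat /proc/sys/fs/dentry-state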
Long's patch set adds a new sysctl knob, /proc/sys/fs/dentry-dir-max; if its value is zero (the default), the system's behavior is unchanged. If, instead, it is set to a positive value, the number of negative dentries associated with any given directory will not be allowed to exceed that value. The limit on negative dentries can be no lower than 256 to avoid excessive trimming of dentries. When the time comes to clean up excess dentries, the code tries to pick those that have not been referenced recently, and will reduce the number to 7/8 of the limit. A static key is used to prevent this mechanism from slowing down the system if it is not being used.
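On a kernel with the patch set applied, exercising the knob would presumably look like any other sysctl; the limit of 10,000 used here is an arbitrary example:
# Read the current setting (0 means no limit).
$ sysctl fs.dentry-dir-max
# Impose a per-directory limit of 10,000 negative dentries.
$ sudo sysctl -w fs.dentry-dir-max=10000
# Make it persistent across reboots.
$ echo 'fs.dentry-dir-max = 10000' | sudo tee /etc/sysctl.d/90-dentry.conf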
There seems to be no disagreement with the idea of putting firmer limits on how many negative dentries can exist. The specific solution chosen here, though, is a bit more controversial. Adding new sysctl knobs is always a bit of a hard sell; as Matthew Wilcox put it: "A sysctl is just a way of blaming the sysadmin for us not being very good at programming". In general, such knobs are difficult for administrators to discover in the first place, and even harder for them to set correctly. How should an administrator know what an appropriate number of negative dentries for any given directory should be for their systems and workloads?
Thus, Wilcox and others argued for some sort of dynamic limit calculated (and adjusted) by the kernel itself. Long responded with a suggestion that the administrator could control the total amount of memory used by negative dentries instead of setting a per-directory maximum count; Wilcox didn't care how the mechanism worked internally, but insisted that it had to be self-tuning.
Dave Chinner, instead, wondered about the need for this kind of mechanism at all. He suggested that the offending applications should just be confined to a memory control group; when memory gets tight within the group, the system will reclaim memory inside that group, including negative dentries. There is, he said, already an effective mechanism for limiting the amount of memory used by a specific application, so there should be no need to add another.
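One way to try that suggestion on a typical systemd/cgroup-v2 system is sketched below; the 256MB limit is an arbitrary example, and controller-delegation details vary by distribution:
# Run the offending application in a transient scope with a memory cap;
# the slab memory behind its dentries is charged to that group.
$ systemd-run --user --scope -p MemoryMax=256M gnucash
# Or, by hand on a cgroup-v2 hierarchy:
$ sudo mkdir /sys/fs/cgroup/negdentry
$ echo 256M | sudo tee /sys/fs/cgroup/negdentry/memory.max
$ echo $$ | sudo tee /sys/fs/cgroup/negdentry/cgroup.procs   # move this shell in
$ gnucash                                                    # children inherit the group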
Long answered that, while control groups can help, they don't solve the entire problem. Large numbers of negative dentries can impact the performance of the program generating them, even if a control group isolates the rest of the system from the problem. He also pointed out that daemons often run in the root control group, where they cannot be constrained in this manner.
As has happened every time that this patch set has been posted, the discussion wound down without any sort of conclusion on how things should proceed. This patch set seems no closer to the mainline than it was years ago; a search for control over negative dentries in the kernel will return a negative result.
| Index entries for this article | |
|---|---|
| Kernel | Dentry cache |
Posted Mar 12, 2020 22:39 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (4 responses)
One of these things is not like the others...
(Should it be /usr/games/tetris, or is there an actual utility called "tetris" which I have not heard of?)
Posted Mar 13, 2020 0:20 UTC (Fri)
by Nahor (subscriber, #51583)
[Link]
Posted Mar 13, 2020 10:52 UTC (Fri)
by Sesse (subscriber, #53779)
[Link] (2 responses)
(Sadly, gtetrinet seems to be abandoned.)
Posted Mar 13, 2020 11:36 UTC (Fri)
by bahner (guest, #35608)
[Link] (1 responses)
1. -rw-r--r-- 1 bahner bahner 78K okt. 19 2011 /home/bahner/Dokumenter/xtris_1.15-9_amd64.deb
Posted Mar 13, 2020 20:13 UTC (Fri)
by Sesse (subscriber, #53779)
[Link]
Posted Mar 12, 2020 23:54 UTC (Thu)
by josh (subscriber, #17465)
[Link] (24 responses)
Having a systemwide number of negative dentries seems like it doesn't adapt well to the directory.
It seems like we need a couple of heuristics:
1) Negative dentries that have never been used (meaning a second failed lookup never occurred for that entry) are less valuable, and should be the first thing reclaimed.
2) Negative dentries should perhaps be limited to some proportion of the *positive* dentries in a directory, rather than an absolute number.
Also, perhaps we could offer userspace a mechanism to say "this file I'm opening, it would really benefit from fast lookup failures if it doesn't exist", versus "I'm opening a file that's unlikely to be checked for often, any negative dentry is of low value", versus the default of not hinting to the kernel. GCC could hint on its include-file lookups, and execvp could hint on its binary lookups, and glibc could hint on its locale lookups.
Posted Mar 13, 2020 9:31 UTC (Fri)
by cladisch (✭ supporter ✭, #50193)
[Link] (11 responses)
This would break down for mostly-empty directories that can override other directories later in the search path, such as /usr/local/{bin,lib}.
Posted Mar 13, 2020 10:15 UTC (Fri)
by josh (subscriber, #17465)
[Link] (10 responses)
Posted Mar 13, 2020 11:32 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (9 responses)
Cheers,
Wol
Posted Mar 13, 2020 11:54 UTC (Fri)
by josh (subscriber, #17465)
[Link] (2 responses)
Posted Mar 13, 2020 13:53 UTC (Fri)
by willy (subscriber, #9762)
[Link] (1 responses)
Posted Mar 13, 2020 19:44 UTC (Fri)
by iabervon (subscriber, #722)
[Link]
Posted Mar 14, 2020 6:26 UTC (Sat)
by viro (subscriber, #7872)
[Link] (5 responses)
Posted Mar 14, 2020 11:47 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (4 responses)
But does that mean - like someone else suggested - that dentries are path-sensitive? Or are they cached by path-name rather than object-name?
I was thinking rather simplistically, but it sounds like the problem is somewhat horrid :-)
Cheers,
Wol
Posted Mar 14, 2020 20:45 UTC (Sat)
by neilbrown (subscriber, #359)
[Link] (1 responses)
lwn.net probably has an article describing this in excessive detail, however...
We *cannot* decide to disallow negative dentries for some directories as that would interfere with file creation. To create a new file, we first allocate a dentry and try to populate it via a lookup. If that fails, then we have a negative dentry which acts as a place holder. A lock on that dentry will prevent anyone else from acting on the same name (I don't think we currently use that fact as much as we could).
We do have directories where we *know* that all entries are in the dcache - tmpfs for example and anything else that uses "simple_lookup". We could set a very short time-to-live for negative dentries in those directories (though we would still want the ttl to be an order of magnitude more than that cost of allocating and freeing a dentry), and we might be able to let a filesystem mark a directory as 'fully-cached'. I doubt there are many workloads that would really benefit though.
Posted Mar 16, 2020 23:59 UTC (Mon)
by luto (guest, #39314)
[Link]
Posted Mar 15, 2020 15:33 UTC (Sun)
by viro (subscriber, #7872)
[Link] (1 responses)
Posted Mar 15, 2020 18:06 UTC (Sun)
by Wol (subscriber, #4433)
[Link]
Or the dentry list was "per directory". If you've got more entries in the negative dentry list than there are in the directory, then it makes sense to ditch the negative dentry list completely because it costs more to search the negatives than the positives - and the other responses I've had here imply that that's not true ...
Cheers,
Wol
Posted Mar 13, 2020 14:28 UTC (Fri)
by willy (subscriber, #9762)
[Link] (10 responses)
If your search path is /home/josh/lib:/usr/local/lib:/usr/lib:/lib, you're going to want to cache the fact that libiberty.so is not found in the first two, but is in the third. So for this specific lookup case you need twice as many negative as positive dentries.
If your workload involves a lot of looking for libraries, then this ratio will dominate. If your workload is almost anything else, then some other ratio would be better.
Right now, our number of negative dentries is really only constrained by the amount of memory in the machine, which is ridiculous. I want to use that memory for caching useful things, not that Tetris is not present in /home/willy/bin.
This is why I say we need to autotune the number of negative dentries. And once we're autotuning the number of negative dentries, it doesn't matter whether we express that as a fraction of the number of positive dentries, or as an absolute number. Either way, we have a target number of negative dentries and need to adjust the target up or down, depending on how the workload is behaving.
Posted Mar 13, 2020 15:07 UTC (Fri)
by nybble41 (subscriber, #55106)
[Link] (1 responses)
Posted Mar 14, 2020 13:13 UTC (Sat)
by jlayton (subscriber, #31672)
[Link]
Posted Mar 13, 2020 20:46 UTC (Fri)
by neilbrown (subscriber, #359)
[Link] (6 responses)
Is this just a case of having a separate LRU for negative dentries, and then setting a different 'seeks' count for the relevant shrinker?
(I'm reminded that cache invalidation is one of the three hard problems of computer science, along with naming things).
Posted Mar 13, 2020 20:52 UTC (Fri)
by willy (subscriber, #9762)
[Link] (5 responses)
I think we need more than a simple LRU list. If I have my cache sized nicely for compiling the kernel and someone else comes along and uses that program that deliberately opens 50k files that don't exist, I don't want it to compete against the negative dentries that have proven useful in the past. Perhaps we need an LFU. I'm not conversant with the latest techniques in preventing cache pollution.
Posted Mar 14, 2020 10:06 UTC (Sat)
by neilbrown (subscriber, #359)
[Link] (4 responses)
not "almost".
> that have proven useful in the past.
Past performance is not indicative of future results.
This is *exactly* the cache invalidation problem and there is no general solution.
But there is very little else that can be done.
We have a few interfaces to allow programs to voluntarily be less demanding on the page cache (so we can apply behaviour modification to code). Maybe O_PONIES ... sorry, I mean O_NO_NOCACHE for filename lookup that limits the caching of negative entries created for that lookup.
Posted Mar 14, 2020 11:13 UTC (Sat)
by willy (subscriber, #9762)
[Link] (3 responses)
My understanding is that "the cache invalidation problem" refers to the difficulty of making sure that nobody has a stale entry (eg TLB invalidation) rather than the difficulty of knowing what to cache. Maybe that just reflects my biases as someone who has no control over what the CPU decides to cache.
The primary difficulty here is deciding how many negative dentries to allow. How to detect thrashing (a signal to increase the number of dentries allowed) and how to detect underutilisation of the cache (an opportunity to shrink the dcache and give the memory back).
Do you have any insight into how we might measure the effectiveness of the current cache?
Posted Mar 14, 2020 20:23 UTC (Sat)
by neilbrown (subscriber, #359)
[Link]
The fact that these two problems might have the same name is certainly delightful.
> The primary difficulty here is deciding how many negative dentries to allow.
I think that is a narrow formulation. If you look at the email that introduced the recent patches you will see two problems:
i.e. the problem is speed. Reducing the number of negative dentries might help - certainly with 1, probably with 2 - but it is a secondary issue, not a primary one.
Problem 2 could be addressed by optimizing the code (maybe - it's probably quite light weight already ... though I wonder if a negative dentry needs to be linked into the list from its parent - probably not if an alternate way could be provided to invalidate the ->parent pointer when the parent is freed) or by pruning negative dentries earlier (hence my comment about a shrinker with a different 'seeks' value) or by pruning some other cache - because when you need memory, you don't much care exactly what is sacrificed to provide it.
> Do you have any insight into how we might measure the effectiveness of the current cache?
Disable it and try to get some work done.
A really big problem with this stuff is that you cannot improve estimates of need without accounting of some sort, and any accounting has a cost - particularly global accounting on a multi-core multi-cache machine. As the primary problem is speed, you want to make sure that cost is (nearly) invisible.
Posted Mar 15, 2020 7:17 UTC (Sun)
by NYKevin (subscriber, #129325)
[Link]
A cache with a poor invalidation strategy is more succinctly known as a "memory leak."
Posted Mar 17, 2020 20:25 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Just track the hit rate, hits versus misses versus additions for lookups across the entire cache? If there are many more additions than hits, shrink the cache; if there are many more misses than hits, grow it. (Or something like that.)
(Downside: this sort of global counter is a cache nightmare, but the figures need only be approximate, so the usual trick of having per-CPU counters occasionally spilled into a global counter should work fine.)
Posted Mar 14, 2020 7:23 UTC (Sat)
by mm7323 (subscriber, #87386)
[Link]
Since adding system calls for specific cases seems popular these days, how about adding something to resolve path search with a positive cache in the kernel e.g. int pathsearch(const char *filename, char *const path[] /* NULL terminated list of paths */); which returns either the index of the first path where filename exists or -ENOENT etc...
The cache structure may be complex, requiring the path list and filename to be combined into a key, but this could then implement a positive cache with one entry per (successful) lookup, as a single system call from userspace (rather than repeated calls to access(), open() or exec() as is typically done for path search now). Other file system operations would also need to invalidate the cache as needed, possibly in vast swathes depending on how fast vs fine grained invalidation would want to be.
Such a cache could be made in userspace, using inotify() for invalidation and some sort of daemon with IPC to share the cache between processes (e.g. repeated invocations of gcc looking for includes), but it would probably be a bit racy and not give the full performance benefit.
Posted Mar 14, 2020 13:36 UTC (Sat)
by jlayton (subscriber, #31672)
[Link]
This idea completely breaks down once you have to deal with (most) network filesystems. NFS (e.g.) won't tell you anything about the number of entries in a directory when you do a LOOKUP, and in any case that info might be invalid the moment the LOOKUP reply leaves the server.
I just don't see how the number of positive dentries is at all connected to how many negative dentries we should cache per directory.
Posted Mar 12, 2020 23:59 UTC (Thu)
by josh (subscriber, #17465)
[Link] (3 responses)
Tip for faster program startup: set LANG and possibly LC_MESSAGES to your preferred UTF-8 locale, and set most of the other LC_* variables to C. Some of them are useless and should never be used (LC_NAME, LC_TELEPHONE), and some are things you might not want anyway (LC_COLLATE).
echo for me generates only 6 openat calls, of which only 2 are failed lookups.
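Something along the following lines, with the specific locale and variable list as arbitrary examples, reproduces that experiment:
$ export LANG=en_US.UTF-8
$ export LC_COLLATE=C LC_NAME=C LC_TELEPHONE=C LC_ADDRESS=C LC_MEASUREMENT=C
# Count how many openat() calls fail after the change.
$ strace -e trace=openat /usr/bin/echo hello 2>&1 | grep -c ENOENT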
Posted Mar 13, 2020 16:32 UTC (Fri)
by nivedita76 (subscriber, #121790)
[Link]
The only failed lookup is
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
Posted Mar 28, 2020 2:29 UTC (Sat)
by Hello71 (subscriber, #103412)
[Link] (1 responses)
I get 21427 ENOENTs for allmodconfig with a fresh O=, and 8161 with an existing one. Less with in-tree builds.
Posted Apr 1, 2020 12:19 UTC (Wed)
by mgedmin (subscriber, #34497)
[Link]
Posted Mar 13, 2020 2:18 UTC (Fri)
by marcH (subscriber, #57642)
[Link] (2 responses)
Wait, if one application/control group populates this negative dentry:
openat(AT_FDCWD, "/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
... then no other application/control group would benefit anymore because negative dentries are confined and not shared across control groups? That sounds like a bit too much isolation.
I guess you could split each dentry and its cost into "shares" for each control group but that could become very complicated very quickly. The eviction policies could be... fun. "Least recently used" - by whom and when?
Posted Mar 13, 2020 13:49 UTC (Fri)
by Paf (subscriber, #91811)
[Link] (1 responses)
Posted Mar 14, 2020 6:48 UTC (Sat)
by marcH (subscriber, #57642)
[Link]
for which "specific" group? A read-only page can be used by any number of groups.
Apparently there was an attempt to "charge" each memory page to only one group: https://lwn.net/Articles/443241/ Did this get merged and again: charged to which group?
Posted Mar 13, 2020 2:19 UTC (Fri)
by gus3 (guest, #61103)
[Link]
So why not specify the dentry cache as a ratio of positive-to-negative? The dentry cache can be split 2:1 or 3:1 or 50:1, giving positive dentries N positions for every 1 negative dentry position. Calculate the dentry cache size, then split the cache and move forward, with effectively two caches. As lookups get called, the caches get refreshed, both successes and failures. Eventually, the stale entries in each cache become invalidated and ejected, while the survivors continue to speed up performance.
On a basic desktop system, a 3- or 4-to-1 ratio might suffice for performance. On a particular server like FreeDB, the ratio would probably be very different. But that seems to me to be a basic, yet powerful sysctl tuning knob.
Posted Mar 13, 2020 6:49 UTC (Fri)
by smurf (subscriber, #17840)
[Link]
Heh. "quick" and "strace -f make allmodconfig" are still mutually exclusive for most people out there.
Posted Mar 13, 2020 8:00 UTC (Fri)
by bokr (guest, #58369)
[Link] (5 responses)
It sounds like the local file version of going through a fixed mirror list and chewing doggedly through it in order a to z for every file to be downloaded.
IME locale data files provide a consistently bad example.
Couldn't systemd run something once per bootup and store a "mirror list" e.g. in ~/.cache/.../locale_files.d/ as links to do what positive dentry does (IIUC) and avoid hitting need for negative dentry at all?
Maybe glibc could provide a generic jit-able wrapper for abis to self-tune by re-ordering a vtable of files, so as to learn to get a hit on the first try? (it would need to store its state changes somewhere like ~/.cache or /var/run/ or?)
Anyway, IMO the file system should not be an enabler for bad app habits.
Posted Mar 13, 2020 10:41 UTC (Fri)
by hkario (subscriber, #94864)
[Link] (3 responses)
Posted Mar 13, 2020 19:32 UTC (Fri)
by bokr (guest, #58369)
[Link] (2 responses)
It's great to optimize file operations and make bad apps usable, but it doesn't "fix" them :)
Posted Mar 13, 2020 20:33 UTC (Fri)
by NYKevin (subscriber, #129325)
[Link] (1 responses)
Posted Mar 16, 2020 12:21 UTC (Mon)
by hkario (subscriber, #94864)
[Link]
If only we did find some time in the past 50 years to teach them, we would eliminate them completely! /s
It's not only a libc problem; as the article pointed out, there are others that do the same, like NSS, and thus Firefox.
Posted Mar 14, 2020 10:39 UTC (Sat)
by dezgeg (subscriber, #92243)
[Link]
Glibc can already do that, by the means of the locale-archive database. But as is apparent from the listing in the article, whichever distro is used has not enabled it.
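Whether a given system uses the archive is easy to check, and newly compiled locales normally land in it by default (some distributions build locales with --no-archive instead); the paths and locale name below are just examples:
# A multi-megabyte file here means the archive is in use.
$ ls -lh /usr/lib/locale/locale-archive
# Compile a locale; by default localedef adds it to the archive.
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8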
Posted Mar 14, 2020 13:00 UTC (Sat)
by pr1268 (guest, #24648)
[Link] (2 responses)
How expensive is the management of negative dentries (or dentries in general)? For example: Couldn't this get a little expensive sometime? (I'm unsure what order 4 and 5 would happen.) And I'm referring to how the kernel does things _now_. Also, couldn't dentries be set for directory names? (I'm referring to the part about "... the negative dentries were created by deleting a large directory full of files" blurb of the 2002 LWN article linked above.)
Posted Mar 14, 2020 15:39 UTC (Sat)
by dezgeg (subscriber, #92243)
[Link] (1 responses)
Posted Mar 15, 2020 7:34 UTC (Sun)
by NYKevin (subscriber, #129325)
[Link]
Just to clarify, are you saying that the kernel is doing this?
(Obviously, I have omitted numerous failure modes which we don't need to care about for the purposes of this discussion.)
Posted Mar 17, 2020 15:02 UTC (Tue)
by jezuch (subscriber, #52988)
[Link]
Posted Mar 19, 2020 3:08 UTC (Thu)
by faramir (subscriber, #2327)
[Link] (1 responses)
Posted Mar 21, 2020 8:25 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
(Over)Simply put, the *path* name is hashed, and used to look up the table. (I guess that's over-simplistic, but the idea is close enough.) The lookup never goes anywhere near the directory itself.
Cheers,
Wol
Posted Mar 26, 2020 15:25 UTC (Thu)
by mtmmtm (guest, #137939)
[Link] (2 responses)
Posted Mar 26, 2020 15:38 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
> Another thing is that lib-c or similar should not look for non-existing files
How would any libc know they don't exist without searching? If you mean for internal things, PATH (and other search mechanisms) are going to be pretty broken if every search "should" hit…
Posted Mar 31, 2020 19:03 UTC (Tue)
by mtmmtm (guest, #137939)
[Link]
There is a single global hash table for dentry lookup. The lookup key is the memory address of the dentry of the containing directory (the parent) and the name within that directory (which might be treated as case-insensitive). So the actual name lookup process does not notice how many names are currently cached for a particular directory. But that isn't really the issue.
(Here's a crazy idea. Any directory that hasn't been changed for N seconds gets a 'bloom-filter' attached when a readdir happens. The bloom filter is given the names of all existing objects in the directory. When a negative dentry is added that is *not* in the bloom filter, the dentry goes on a separate LRU with a short lifetime - the justification being that repopulating such negative dentries is cheap, requiring only a re-check of the bloom filter. Would this complexity be worth the benefit.... I doubt it).
Sometimes you can add RAM so your cache becomes bigger than your working set.
Sometimes you can apply behaviour modification so your working set becomes smaller than your cache.
Sometimes you can slip in a heuristic that helps your use case today, and no-one else notices.
The dcache already has a "referenced" flag which results in never-referenced dentries having shorter life expectancy. That sometimes helps, but I suspect it sometimes hurts too.
I would describe "the cache invalidation problem" as "choosing whether to invalidate something, and what, and when".
1/ speed of cleaning up negative dentries at unmount
2/ speed of freeing up memory used by negative dentries when memory is needed for something else.
Problem 1 could be addressed with some sort of lazy-unmount protocol. We don't *need* to release all cache memory before reporting success for unmounting a filesystem. We just need to be sure it can be released (not currently in use) and that it cannot be accessed again.
Any cache provides diminishing returns as it grows beyond the size of the working set, and as the "working set" changes over time, you can only hope for an approximate size. We have that by applying modest vm pressure.
The best heuristic for "should I provide more pressure on caches" is "am I being asked to apply lots of pressure". So the more often we need to find free memory, the more memory we should find each time (do we already do that?)
But that needs to be balanced against the risk of creating unnecessary pressure by pruning things that are needed again soon (thrashing). Maybe there is value in keeping a bloom filter recording identities of recently pruned objects, and if the hit rate on that suggests we are re-populating objects that were recently removed, we could slow down the pruning. I suspect there are some circumstances where such a filter would help, but I doubt it would really be useful in general.
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
https://bugs.debian.org/636694, https://bugs.debian.org/638173).
Program launch seems to be a key factor
[...] slows down the system, or delivery of its result, the answer is not for the file system to make the dumb thing faster, but rather to eliminate the calls on the file system to do the dumb things (i.e., look for files known not to be there).
How does that "fix" provide a path to better next versions
of those "thousands upon thousands of apps"? It doesn't.
It lets app developers continue with bad design, perhaps
even unwittingly, because the clever file system implementers
have hidden the worst effects of the bad app (or lib used) design.
doesn't "fix" them :)
There is a hash (or rehash) command for updating the hashtable in most shells. I think bash searches every directory in PATH if it fails to find the executable in its hashtable. Sounds like a bad idea to me. Not sure where the hashtable is located, but having ONE hashtable would be most efficient.
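For reference, the table being described is bash's per-shell command hash, which remembers only successful lookups; it can be inspected and flushed from the shell:
$ ls >/dev/null   # running a command records where in $PATH it was found
$ hash            # show the remembered locations (and hit counts)
$ hash -r         # forget them all, e.g. after installing a new version elsewhere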
