TALPA strides forward

By Jake Edge
August 27, 2008

When last we left TALPA, it was still floundering around without a solid threat model, but over the last several weeks that part has changed. Eric Paris proposed a fairly straightforward—though still somewhat controversial—model for the threats that TALPA is supposed to handle. With that in place, there is at least a framework for kernel hackers to evaluate different ways to solve the problem, while also keeping in mind other potential uses.

It seems almost tautological, but anti-virus scanners are supposed to, well, scan files. In particular, they scan for known malware and block access to files that are found to be infected. For better or worse, scanning files is seen as an essential security mechanism by many, so TALPA is trying to provide a means to that end. Paris describes it this way:

This is a file scanner. There may be all sorts of marketing material or general beliefs that they provide security against all sorts of wide and varied threats (and they do), but in all reality the only threats they provide any help against are those that can be found by scanning files. Simple as that. Some may argue this isn't "good" security and I'm not going to make a strong argument to the contrary, I can stand here for days and show cases where this is highly useful but no one can provide a threat model more than to say, "we attempt to locate files which may be harmful somewhere in the digital ecosystem and try to deny access to that data."

All of the various scenarios where active processes can infect files with malware or actively try to avoid scanning can be ignored under this model. While this looks like "security theater" to some, it avoids the endless what-ifs that were bogging down earlier discussions. It may not be a threat model that appeals to many of the kernel hackers, but it is one that they can work with.

To many kernel developers—used to efficiency at nearly any cost—time consuming filesystem scans seem ludicrous, especially since they only "solve" a subset of the malware problem. But the fact remains that Linux users, particularly in "enterprise" environments, believe they need this kind of scanning and are willing to pay for products that provide it. The current methods used by anti-virus vendors to do the scanning are problematic at best, causing users to run kernels tainted with binary modules. With a threat model—however limited—in place, work can proceed to find the right way to add this functionality into the kernel.

Paris is zeroing in on a design that calls out to user space, both synchronously and asynchronously depending on the operation. File access might go something like this (a sketch of a possible scanner loop follows the list):

  • open() - causes interested user-space programs to be notified asynchronously; anti-virus scanners might kick off a scan if needed
  • read()/mmap() - causes a synchronous user-space notification, which allows anti-virus scanners to block access until scanning is complete; if malware is found, cause the read/mmap to return an error
  • write() - whenever the modification time (mtime) of a file is updated, asynchronously notify user space; this would allow anti-virus scanners to re-scan the data as desired
  • close() - asynchronous user-space notification; another place where anti-virus scanners could re-scan if the file has been dirtied
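
To make the synchronous/asynchronous split concrete, here is a minimal user-space sketch of such a scanner loop. Since no TALPA interface has actually been merged, everything below (the /dev/talpa device, the event layout, and the written-back verdict) is invented for illustration:

    /*
     * Hypothetical sketch only: TALPA's user-space interface is still
     * being designed, so the /dev/talpa device, the event structure,
     * and the verdict write-back are all invented for illustration.
     */
    #include <fcntl.h>
    #include <unistd.h>

    struct scan_event {
        int fd;        /* open descriptor on the file's contents */
        int blocking;  /* 1 = read()/mmap(): kernel waits for a verdict */
    };

    /* stand-in for a real check against a malware signature database */
    static int file_is_clean(int fd) { (void)fd; return 1; }

    int main(void)
    {
        int chan = open("/dev/talpa", O_RDWR);  /* hypothetical device */
        struct scan_event ev;

        if (chan < 0)
            return 1;
        while (read(chan, &ev, sizeof(ev)) == sizeof(ev)) {
            int verdict = file_is_clean(ev.fd);

            if (ev.blocking)  /* synchronous: unblock the reader */
                write(chan, &verdict, sizeof(verdict));
            /* open()/mtime/close() events are asynchronous: no reply */
            close(ev.fd);
        }
        return 0;
    }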

Where and how to store the current scanning status of a file is still an open question. Various proposals have been discussed, starting with a non-persistent flag in the in-memory inode of a file. While simple, that would cause a lot of unnecessary additional scanning as inodes drop out of the cache. Persistent storage of the scanned status of a file alleviates that problem, but runs into another: how do you handle multiple anti-virus products (or, more generally, scanners of various sorts); whose status gets stored with the inode?

For this reason, user-space scanners will need to keep their own database of information about which inodes have been scanned. Anti-virus scanners will also want to record which version of the virus database was used. Depending on the application, that could be stored in extended attributes (xattrs) of the file or in some other application-specific database. In any case, it is not a problem for the kernel, as Ted Ts'o points out:

I'm just arguing that there should be absolutely *no* support in the kernel for solving this particular problem, since the question of whether a file has been scanned with a particular version of the virus DB is purely a userspace problem.

It is important to keep the scanned status out of kernel purview in order to ensure that policy decisions are not handled by the kernel itself. This is in keeping with the longstanding kernel development principle that user space should make all policy decisions. This allows new applications to come along, ones that were perhaps never envisioned when the feature was being designed. For example, Alan Cox describes another reason that the state of the file with respect to scanning should be kept in user space:

This is another application layer matter. At the end of the day why does the kernel care where this data is kept. For all we know someone might want to centralise it or even distribute it between nodes on a clustered file system.

The latest TALPA design includes an in-memory clean/dirty flag that can short-circuit the blocking read notification (when clean). That flag gets set to dirty whenever there is an mtime modification. This optimizes the common case of reading a file that hasn't changed. Further optimizations are possible down the line, as Paris mentions:

If some general xattr namespace is agreed upon for such a thing someday a patch may be acceptable to clear that namespace on mtime update, but I don't plan to do that at this time since comparing the timestamp in the xattr vs mtime should be good enough.
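
A minimal sketch of that user-space bookkeeping, using the real getxattr()/setxattr() calls; the attribute name and record layout here are assumptions, since no namespace has been agreed upon:

    #include <sys/stat.h>
    #include <sys/xattr.h>

    /* "user.scanner.status" is a made-up attribute name */
    #define SCAN_XATTR "user.scanner.status"

    struct scan_record {
        time_t   scanned_mtime;  /* mtime seen when the file was scanned */
        unsigned db_version;     /* virus DB version used for that scan */
    };

    /* rescan if never scanned, if the file changed, or if the DB moved on */
    int needs_rescan(const char *path, unsigned current_db)
    {
        struct scan_record rec;
        struct stat st;

        if (stat(path, &st) < 0)
            return 1;
        if (getxattr(path, SCAN_XATTR, &rec, sizeof(rec)) != (ssize_t)sizeof(rec))
            return 1;
        return rec.scanned_mtime != st.st_mtime || rec.db_version != current_db;
    }

    void mark_scanned(const char *path, unsigned current_db)
    {
        struct scan_record rec;
        struct stat st;

        if (stat(path, &st) < 0)
            return;
        rec.scanned_mtime = st.st_mtime;
        rec.db_version    = current_db;
        setxattr(path, SCAN_XATTR, &rec, sizeof(rec), 0);
    }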

Various other uses for the kinds of hooks proposed for TALPA have also come up in the discussion. Hierarchical storage management, where data is transparently moved between different kinds of media, might be able to use the blocking read mechanism. File indexing applications and intrusion detection systems could use the mtime change notification as well. This is a perfect example of kernel development in action; after a rough start, the TALPA folks have done a much better job working with the community.

Some might argue that the kernel development process is somehow suboptimal, but it is the only way to get things into Linux. As the earlier adventures of TALPA would indicate, flouting kernel tradition is likely to go nowhere. While it is still a long way from being included—pesky things like working code are still needed—it is clearly on a path to get there some day, in one form or another.


TALPA strides forward

Posted Aug 28, 2008 1:48 UTC (Thu) by jwb (guest, #15467) [Link]

I certainly hope this never gets anywhere near to being included in the Debian kernel package. If this junk somehow, by some miracle of bad judgment, gets included in a release kernel, I hope the distributors have the good sense to rip it out.

The best solution is for distros which cater to the ignorant, like RHEL and SLES, to patch this crap into their private trees.

TALPA strides forward

Posted Aug 28, 2008 4:45 UTC (Thu) by jzbiciak (✭ supporter ✭, #5246) [Link]

If an actual virus scanner is included in the kernel, sure. But the hooks themselves sound reasonable for other purposes, such as hierarchical storage management. I wouldn't throw the baby out with the bathwater.

I do agree, though, that scanning Samba shares for Windows viruses sounds like a userspace problem.

TALPA strides forward

Posted Aug 28, 2008 6:42 UTC (Thu) by drag (subscriber, #31333) [Link]

Yes.

Look, ignore all the 'windoze' and 'ignorant users' for now.

What this allows you to do is active scanning of files. Not necessarily for viruses, but for any reason you can think of.

-------------
Now we have Clamav, right? Well, Clamav is a passive scanning program, meaning it can only scan files that we direct it to scan.

An active scanner, on the other hand, is designed to deal with files in an automated manner based on file system events.

This is all they want to do at this point: on a given file system event, pause the access, alert a third party to the event, redirect data as necessary, and then allow or deny access based on that third party's verdict.

Virus scanning, at this point, doesn't even enter into it. The criteria or data logging facilities can be used for anything.

It's a bit of a solution in search of a problem, but I wouldn't be surprised if somebody comes up with something clever to do with it. It's a bit like extended ACLs... sure, for most functions you can figure out how to do a decent job with rwx/ugo, but you'll run into situations where the older access controls won't work effectively without a lot of kludges.

At the very least, when setting up a Windows network file server/web server/email server/etc. you can vastly improve performance and security by only scanning files as they change, in real time, instead of periodically scanning the entire share. Instead of stumbling onto a virus an hour after it's been written to your server, you have an 'ok' chance of intercepting it and logging the machine.

TALPA strides forward

Posted Aug 28, 2008 6:54 UTC (Thu) by drag (subscriber, #31333) [Link]

Ok.. Here is one application.

How about tying a revision control system into it, something like 'git'? Instead of just having formal commits, you could have smaller revisions of secondary importance each and every time you write to a file.

Maybe make it so that you could collaborate with other people in real time... so that you could have your local copy, but have it alert you if a file you're writing to has already been changed by another person. That way you would not rely on a central server or system to keep a 'fence' or whatever... you just write a file and notifications are sent out quickly.

Something.

TALPA strides forward

Posted Aug 28, 2008 10:29 UTC (Thu) by NAR (subscriber, #1313) [Link]

I believe the filesystem used by ClearCase does something similar.

TALPA strides forward

Posted Aug 28, 2008 15:50 UTC (Thu) by pflugstad (subscriber, #224) [Link]

Clearcase requires explicit check-in/out else others won't see your changes. At least if you're in different views. And if multiple people are working in the same view, you should be shot.

TALPA strides forward

Posted Aug 28, 2008 16:01 UTC (Thu) by NAR (subscriber, #1313) [Link]

I meant the "you could have your local copy, but have it alert you if a file you're writing to has already been changed by another person" part - the dynamic view in ClearCase lets me have a local copy (as long as I've checked out the file) and will alert at checkin that the file was changed. The other (not checked out) files are automatically updated (i.e. "collaborate with other people in real time").

On the other hand, even vim alerts me if an opened file is changed on the disk...

TALPA strides forward

Posted Aug 28, 2008 10:55 UTC (Thu) by nix (subscriber, #2304) [Link]

That job is surely better done with FUSE.

Just about the only thing TALPA can do that FUSE can't is run over areas of the system, like /usr or /, where frequent transitions to a userspace filesystem would be damaging to performance.

TALPA strides forward

Posted Aug 28, 2008 23:51 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

the problem with fuse is the performance

but also, unless fuse implements similar hooks, you would need a fuse layer for each scanner tool, and that would make the performance problems even worse.

SCM in the filesystem?

Posted Aug 31, 2008 2:40 UTC (Sun) by vonbrand (subscriber, #4458) [Link]

Sure, at first sight the idea of "each change is a commit" sounds sensible, but if you have ever worked with some kind of fine-grained (local like RCS, or distributed like git) SCM, you soon discover that commits must record meaningful changes. Not every time I decide to save a file in the editor "just in case" (or, much worse, every time the editor decides that there have been enough changes to write out a snapshot) makes sense as a meaningful point in history. Most commits are coordinated changes to several files (something of which you are painfully aware when using RCS).

SCM in the filesystem?

Posted Mar 25, 2009 18:17 UTC (Wed) by mrshiny (subscriber, #4266) [Link]

It would be useful to automatically put /etc under source control using a tool like this. Every save in /etc IS a commit on my system.

TALPA strides forward

Posted Aug 28, 2008 6:49 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

another thing these hooks could be used for is backup programs.

today they have to check timestamps, inode numbers, and file checksums to try and figure out what files have changed since the last time they looked.

if the tags are stored persistently, these hooks would give another mechanism

however, alerting when mtime changes is not good enough; compilers are running into problems where they aren't recompiling everything they need to because systems are getting fast enough that the timestamp before and after a change may be the same.

alerting any time that the mtime would be changed, even if it changes to the same value it had before, can work, but you can't count on the mtime changing every time a file is modified.

TALPA strides forward

Posted Aug 28, 2008 9:00 UTC (Thu) by evgeny (guest, #774) [Link]

> compilers are running into problems

I believe it's `make' that runs into the problem.

TALPA strides forward

Posted Aug 28, 2008 9:16 UTC (Thu) by liljencrantz (subscriber, #28458) [Link]

Sure, many other build systems, like scons, allow you to use file checksums instead of mtime for determining whether a file has been modified. But once your project gets big enough, that is slow. The largest project I've used scons on contained a few megabytes of source code, and scons would take a noticeable amount of time checking the dependencies. So it's not only make.

TALPA strides forward

Posted Aug 28, 2008 10:30 UTC (Thu) by rvfh (subscriber, #31018) [Link]

Looking at mtime should be enough, provided that we look for any change in it and not just for it to be greater than that of another file.

Let me explain: the time may be wrong on one machine, causing the mtime to go backwards, like when editing a file that's on a build server share, but it is very unlikely that the mtime will be exactly the same as it was before editing.

It is quicker to check mtime for a change than to checksum the whole file.

TALPA strides forward

Posted Sep 4, 2008 20:27 UTC (Thu) by renox (subscriber, #23785) [Link]

Mmm, in this case a version number attribute associated with each file would be better (if only because software developers would be less likely to compare the mtime of different files), though it may be a bit costly to maintain, especially on CPUs which don't have an atomic increment instruction...

TALPA strides forward

Posted Aug 28, 2008 10:57 UTC (Thu) by nix (subscriber, #2304) [Link]

And the right solution to this is finer-grained timestamps.

TALPA strides forward

Posted Aug 28, 2008 11:05 UTC (Thu) by evgeny (guest, #774) [Link]

I'm not sure. Consider distributed compilation farms (here "distributed" may refer to the filesystem and/or the compiler, or just an NFS-mounted volume in the simplest case). Then maintaining nanosecond-accuracy time sync between several computers is needed, which is not trivial.

TALPA strides forward

Posted Aug 28, 2008 11:58 UTC (Thu) by nix (subscriber, #2304) [Link]

NTP can already report jitter and offset values. Maybe what we need is a way to have those values *reduce* the precision of the kernel-provided nsec timestamps, so that you get timestamps as accurate as possible for your timebase, but no more accurate? (Of course, if the jitter changes a lot, interesting things may happen, but that's quite rare.)

TALPA strides forward

Posted Aug 28, 2008 16:01 UTC (Thu) by bfields (subscriber, #19510) [Link]

Note that no linux filesystem has time resolution better than a jiffy. (The on-disc format may use nanoseconds, but the mtime/ctime/atime aren't updated using a nanosecond-precision time source.)

TALPA strides forward

Posted Aug 28, 2008 18:43 UTC (Thu) by SEJeff (guest, #51588) [Link]

"""Then maintaining nanosecond-accuracy time sync between several computers is needed, which is not trivial."""

It is actually really easy if you are not using Cisco switches. The latency of the switches makes a big difference.

Use ptpd from the linux hosts:
http://ptpd.sourceforge.net/

It will allow you to keep all machines on a LAN in nanosecond time sync using multicast.

TALPA strides forward

Posted Aug 28, 2008 23:41 UTC (Thu) by njs (guest, #40338) [Link]

No, you just need a cleverer algorithm -- like someone mentioned above, you should look for changed timestamps rather than simply "future" timestamps (because clocks get set back all the time, but it's extraordinarily unlikely that a second edit will come along at exactly the moment when the old timestamp is repeated). Then to fix the quickly-repeated-edits problem, if the timestamp is within 2*resolution of the current time (for some conservative definition of resolution), don't write that timestamp down in your cache. Easy and safe, and causes hardly any speed degradation.

(High-quality VCS's already do this; I first learned the trick from bzr, dunno if any other popular ones have picked it up.)
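
A sketch of that heuristic, with the two-times-resolution guard; the one-second worst case is an assumption for illustration:

    #include <sys/stat.h>
    #include <time.h>

    #define FS_RESOLUTION 1  /* assumed worst-case mtime granularity, seconds */

    /* any difference means "changed": clocks can be set backwards, so a
       "newer than" comparison would miss edits that move mtime back */
    int changed_since(const struct stat *st, time_t cached_mtime)
    {
        return st->st_mtime != cached_mtime;
    }

    /* only record a timestamp once it is old enough that a second edit
       could no longer land on the same mtime value */
    int safe_to_cache(const struct stat *st)
    {
        return time(NULL) - st->st_mtime >= 2 * FS_RESOLUTION;
    }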

TALPA strides forward

Posted Aug 29, 2008 0:08 UTC (Fri) by dlang (✭ supporter ✭, #313) [Link]

remember that the notification goes out while the file is still open.

so a program writes to a file, the scanner gets notified, scans the file, notes the mtime, the program writes to the file again.

on a fast machine it's very possible that this can all take place in a short enough time that the mtime does not change

TALPA strides forward

Posted Aug 29, 2008 0:35 UTC (Fri) by njs (guest, #40338) [Link]

>so a program writes to a file, the scanner gets notified, scans the file, notes the mtime, the program writes to the file again.

and the scanner gets notified again, and scans the file again, yes.

All the things you say are true, but I'm afraid I don't understand why you are saying them here (i.e., I'm missing your point somewhere)?

TALPA strides forward

Posted Aug 29, 2008 0:47 UTC (Fri) by dlang (✭ supporter ✭, #313) [Link]

if the scanner is only notified when mtime changes, then if the mtime doesn't change no notification will be sent out.

I posted a proposal for a slightly different approach: instead of using mtime and a single 'clean' bit, I suggested stealing a chunk of xattr namespace and having the kernel clear this namespace when the file was dirtied.

this would let a scanner set a placeholder in the namespace to indicate that it was looking at the file; when it was done, it could check to see if the placeholder was still there. If so, the file didn't change while it was being scanned and it's safe to mark it as scanned; if the placeholder is not there, then you know the file changed and the scan you just did is worthless.

by using a chunk of namespace you can also support multiple scanners (without them needing to know anything about each other)
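
A sketch of that placeholder protocol using the real xattr calls; the "security.scan." namespace, the attribute names, and the kernel clearing them on writes are the proposal's assumptions, not existing behavior:

    #include <sys/xattr.h>

    /* invented names inside the proposed kernel-cleared namespace */
    #define PLACEHOLDER "security.scan.example.inprogress"
    #define CLEAN_MARK  "security.scan.example.clean"

    /* stand-in for the actual content scan */
    static void run_scanner(const char *path) { (void)path; }

    /* returns 0 if the file was scanned and marked clean, -1 on a race */
    int scan_and_mark(const char *path)
    {
        char buf;

        /* 1. drop a placeholder before starting the scan */
        if (setxattr(path, PLACEHOLDER, "1", 1, 0) < 0)
            return -1;

        run_scanner(path);

        /* 2. if the kernel wiped the namespace meanwhile, the file was
           dirtied during the scan and the result is worthless */
        if (getxattr(path, PLACEHOLDER, &buf, 1) < 0)
            return -1;

        /* 3. otherwise the result is still valid: mark the file clean */
        return setxattr(path, CLEAN_MARK, "1", 1, 0) < 0 ? -1 : 0;
    }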

TALPA strides forward

Posted Aug 29, 2008 7:46 UTC (Fri) by njs (guest, #40338) [Link]

Oh, I see. Sure. I was reading quickly and just assumed that anyone talking about "notify when the mtime changes" actually meant, "hook into the kernel's poke-that-file's-mtime routine so it sends a notification", whether the resulting mtime was modified or not.

(In practice I'm pretty sure that the mtime *would* always be updated, though, because in linux, in-memory inodes always get nanosecond-accurate timestamps. The extra resolution gets stripped away by the filesystem driver when the metadata gets pushed out to disk, but the actual data structures used in the core kernel don't care about that.)

TALPA strides forward

Posted Aug 29, 2008 16:45 UTC (Fri) by bfields (subscriber, #19510) [Link]

In practice I'm pretty sure that the mtime *would* always be updated, though, because in linux, in-memory inodes always get nanosecond-accurate timestamps.

That's not true. On a recent kernel, try running a simple test program that does, e.g., write, stat, usleep(x), write, stat. You'll see that on ext2/ext3 "x" has to be at least a million (a second) before you see a difference in the two stats, and that on something like xfs it has to be at least a thousand to ten thousand (a few milliseconds--the time resolution used is actually jiffies).

(On older kernels I think the ext2/3 behavior might look like xfs's; that was fixed because of problems with unexpected changes in timestamps (due to lost nanoseconds field) when an inode got flushed out of cache and then read back.)
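
A version of the test program described above might look like this (the filename and default sleep are arbitrary; st_mtim is the glibc name for the nanosecond timestamp field):

    #define _GNU_SOURCE       /* for struct stat's st_mtim on glibc */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        useconds_t x = argc > 1 ? atoi(argv[1]) : 1000;  /* microseconds */
        int fd = open("mtime-test", O_CREAT | O_WRONLY, 0644);
        struct stat st1, st2;

        write(fd, "a", 1);
        fstat(fd, &st1);
        usleep(x);
        write(fd, "b", 1);
        fstat(fd, &st2);

        printf("mtime %s after %u us\n",
               st1.st_mtim.tv_sec  == st2.st_mtim.tv_sec &&
               st1.st_mtim.tv_nsec == st2.st_mtim.tv_nsec
                   ? "unchanged" : "changed",
               (unsigned)x);
        close(fd);
        return 0;
    }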

TALPA strides forward

Posted Aug 30, 2008 1:47 UTC (Sat) by njs (guest, #40338) [Link]

I was aware of the issues with confusing timestamp changes, but didn't realize it had been changed. Thanks.

TALPA strides forward

Posted Aug 28, 2008 10:27 UTC (Thu) by rvfh (subscriber, #31018) [Link]

It usually takes me more than a second to compile-modify-compile, and I don't believe a file should change twice during a compilation. I would think the latter a bug.

Wasn't LSM invented for this?

Posted Aug 28, 2008 10:34 UTC (Thu) by NAR (subscriber, #1313) [Link]

I don't know much about the kernel security, but doesn't the LSM provide these hooks already?

Wasn't LSM invented for this?

Posted Aug 29, 2008 0:22 UTC (Fri) by dlang (✭ supporter ✭, #313) [Link]

no, LSM can approve or deny access, but it doesn't have hooks to do notification on a status change from clean->dirty; it also doesn't have the ability to record the results of a scan so that the file doesn't need to be scanned on every access

Threat Model

Posted Aug 28, 2008 12:47 UTC (Thu) by skitching (subscriber, #36856) [Link]

People have been very scathing about the need for providing traditional windows-style virus-checking on Linux.

I would certainly agree that "privilege escalation" problems are less common on Linux, and that the correct way to deal with these is through architectural fixes rather than trying to block programs that exploit a flaw. However it seems (in my uninformed view) that there are a large number of security issues that do not rely on privilege escalation at all.

Case 1: A user visits an evil webpage. That webpage then exploits some browser flaw to drop a .so file on the local system and modify the user's .bashrc file to specify that file in LD_PRELOAD or similar.

Case 2: A user downloads and runs a trojaned "game" of some sort that has been emailed to them. Yes they shouldn't, but there are more and more "innocent" users of Linux these days.

Even without privilege escalation, an attack of this sort can do significant damage, including:
* sending spam (when that user is logged in)
* capturing user private data

Won't a "virus scanning" solution help here, where the traditional Linux security approach will not?

Threat Model

Posted Aug 28, 2008 16:47 UTC (Thu) by bfields (subscriber, #19510) [Link]

Case 1: A user visits an evil webpage. That webpage then exploits some browser flaw to drop a .so file on the local system and modify the user's .bashrc file to specify that file in LD_PRELOAD or similar.

Would you rather fix this with a browser patch, or with a scanner that, with great effort, tries to identify a few specific examples of such exploits?

Case 2: A user downloads and runs a trojaned "game" of some sort that has been emailed to them. Yes they shouldn't, but there are more and more "innocent" users of Linux these days.

Again, do you want to get in the business of cataloging every single trojaned game, or would you rather, say, give users trusted game sources, or better tools for sandboxing the games they run?

"Do both" is one possible answer, but I worry whether the obvious incentives for short-term bandaids may reduce the incentives for longer-term solutions.

Threat Model

Posted Aug 28, 2008 18:07 UTC (Thu) by bronson (subscriber, #4806) [Link]

Remember the Sony rootkit. Such a scanner would necessarily be large and very complex... and quite flawed. There's a very good chance that someone would arrange a successful attack against the scanner itself.

Adding more layers of software is unlikely to ever reduce your attack surface.

TALPA strides forward

Posted Aug 28, 2008 18:26 UTC (Thu) by iabervon (subscriber, #722) [Link]

The main problem I see with this is that it makes every untrusted file write a probable denial of service. If you can find a plain-text string that the scanner will reject, you can probably defeat fail2ban trivially by getting ssh to log that somebody tried to log in as {the reject string}, which means that the log file now contains a virus and can't be read by most programs. Or if you email a brand-new virus to root before the mailer is ready to reject it, and the system is using mail spools, all of root's mail is in a file containing a virus (once the description files get updated). If you get a virus into a backup, the backup may become impossible to restore. If you post a virus to a web form that puts it into a database, the database may stop working.

The assumption that, if a region of a file is unexpectedly blocked from being read, important system tools won't misbehave in exploitable ways is highly optimistic, considering that this currently only happens when the system has major hardware issues. I wouldn't be too surprised to hear about systems with scanning set up turning out to be vulnerable to a variety of attacks which cause the system to be unable to process security updates.

In the Windows world, there are relatively few important helper processes, because services tend to be monolithic, so there's a relatively clear distinction between what should be prevented from using virus-infected files and what should be able to help clean them up. The UNIX world just isn't like that, making it unlikely that people will be able to have non-trivial policies that don't create security issues themselves.

TALPA strides forward

Posted Aug 28, 2008 20:11 UTC (Thu) by oak (guest, #2786) [Link]

And if an attacker knows that a widely used scanner has a security/DOS issue, he only needs to get a suitable file to the target machine through any channel (mail, browser cache, cookie, etc).

TALPA strides forward

Posted Aug 28, 2008 21:56 UTC (Thu) by ballombe (subscriber, #9523) [Link]

I completely agree with you.

I will go even farther: suppose someone writes a malware that includes code from, e.g., glibc. The antivirus vendor dutifully adds that to the malware database, and all the Linux boxes get DoSed when they update their malware database.

TALPA is a poorly thought out threat model that creates more threats.

TALPA strides forward

Posted Sep 1, 2008 14:50 UTC (Mon) by kleptog (subscriber, #1183) [Link]

These aren't hypothetical problems, either. On the postgresql lists there are regularly reports of people complaining that tables spontaneously vanish or, worse, that the transaction logs suddenly can't be written out. The cause is invariably that some antivirus has blocked the writes, and uninstalling it fixes all the problems.

There are enough safeguards to prevent data loss in most cases, but once the scanner starts violating write-order guarantees, the shit will really hit the fan.

TALPA strides forward

Posted Sep 1, 2008 19:07 UTC (Mon) by nix (subscriber, #2304) [Link]

Wow. This highlights the need to be able to exclude stuff from antivirus scanning if anything does: what kind of idiot scans an RDBMS's data for viruses? This is as silly as searching a filesystem's *metadata* for viruses and banning only part of a metadata write if it thinks it finds one: instant disaster...

TALPA strides forward

Posted Sep 1, 2008 21:10 UTC (Mon) by kleptog (subscriber, #1183) [Link]

I suppose popular antivirus software comes with tables of stuff not to scan. Can you imagine the news if an antivirus product killed an Oracle installation by helpfully renaming a datafile that looked suspicious?

Generally you can configure the software to exclude certain directories from scanning, but the default is always scan everything unless told otherwise. On the whole violating FS semantics for some silly scanning software seems insane.

TALPA strides forward

Posted Sep 1, 2008 22:03 UTC (Mon) by nix (subscriber, #2304) [Link]

Oh, I agree, but the existence of horrible things like Oracle*Mail indicates that if you think you have to scan everything that might, say, contain email that might be read by people using vulnerable clients, you have to add a virus scanner *inside the database* as well, to scan everything going to and from tables.

Likewise you have to add a scanner inside everything else that maintains structured/transactioned data storage.

Even discounting the security-brokenness of 'excluding the bad software', this obviously will not scale.

TALPA strides forward

Posted Aug 29, 2008 1:21 UTC (Fri) by njs (guest, #40338) [Link]

> Eric Paris proposed a fairly straightforward—though still somewhat controversial—model for the threats that TALPA is supposed to handle.

It sounds like the threat model TALPA is designed for is actually a social engineering attack: AV vendors using "marketing" to convince companies to install poorly-engineered kernel-kluging software, with predictable results on reliability, support load, etc.

A cleaner approach would be to patch IT managers to be more resistant to this class of marketing attacks, but given the difficulty of field-upgrading such units and the poor success of previous attempts to fix this problem (non-executable gift policies, administrator phone number randomization, etc.), the threat mitigation provided by TALPA may represent a reasonable medium-term compromise.

TALPA strides forward

Posted Aug 30, 2008 5:02 UTC (Sat) by flewellyn (subscriber, #5047) [Link]

A cleaner approach would be to patch IT managers to be more resistant to this class of marketing attacks, but given the difficulty of field-upgrading such units and the poor success of previous attempts to fix this problem (non-executable gift policies, administrator phone number randomization, etc.),

Okay, I just snorted water from laughing so hard.

inotify?

Posted Aug 31, 2008 15:58 UTC (Sun) by liamh (subscriber, #4872) [Link]

A quick scan of the events monitored makes it seem like an overlap of functionality with inotify. As far as the hooks in the kernel are concerned, would it make more sense to use inotify, augmented appropriately if needed?

Liam

inotify?

Posted Sep 2, 2008 7:42 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

Short answer: inotify doesn't scale.

inotify works fairly well if you have a small set of files or directories that you want to look for changes in, but it doesn't work if you want to find out about changes throughout the filesystem.
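
For reference, here is where the scaling pain comes from with the existing API: a watch covers a single directory, not a subtree, so covering a whole filesystem means one watch per directory (the path and event mask below are arbitrary):

    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = inotify_init();
        char buf[4096];

        /* one watch covers one directory, not its subtree */
        inotify_add_watch(fd, "/etc", IN_MODIFY | IN_CREATE | IN_DELETE);

        /* watching all of "/" would mean adding a watch for every
           directory on the system, and racing against new mkdir()s */
        if (read(fd, buf, sizeof(buf)) > 0) {
            struct inotify_event *ev = (struct inotify_event *)buf;
            printf("event mask 0x%x in watched dir\n", (unsigned)ev->mask);
        }
        return 0;
    }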

inotify?

Posted Sep 3, 2008 2:40 UTC (Wed) by liamh (subscriber, #4872) [Link]

OK. Then wouldn't it make sense to turn this effort into a next-generation inotify (robust and scalable) project, thereby avoiding the issue of the eventual application(s)? I use inotify now for something which has nothing to do with virus scanning, and I can imagine other applications as well, so perhaps there is a need for this anyway, quite apart from the scanners. It seems like a bad idea to have two independent, partially overlapping subsystems in the kernel.

Liam

inotify?

Posted Sep 3, 2008 8:21 UTC (Wed) by njs (guest, #40338) [Link]

There's been talk of such things (OS X has a similar scalable mixed kernel/user-space notification service that it uses for things like indexing), and it absolutely should be created for Linux -- but TALPA has requirements well beyond that. Blocking read(2) until some other userspace process has woken up and okay'ed things is *way* outside the scope of an inotify-alike!

inotify?

Posted Sep 3, 2008 18:51 UTC (Wed) by jlokier (guest, #52227) [Link]

Actually it's not so far away from inotify.

We have leases in fcntl() F_SETLEASE; they go alongside dnotify in fcntl() F_NOTIFY.

inotify could similarly benefit from an ilease extension, as well as further scalability improvements. (inotify was designed to improve on dnotify in many ways, including scalability; it's a shame it stopped halfway.)

That's blocking readers right there.
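
A minimal sketch of the lease side of that, using the fcntl() calls that exist today (the filename is arbitrary and error handling is omitted):

    #define _GNU_SOURCE   /* F_SETLEASE and F_SETSIG are Linux-specific */
    #include <fcntl.h>
    #include <signal.h>
    #include <unistd.h>

    static void lease_broken(int sig) { (void)sig; }

    int main(void)
    {
        int fd = open("watched-file", O_RDONLY);

        signal(SIGRTMIN, lease_broken);
        fcntl(fd, F_SETSIG, SIGRTMIN);   /* deliver this signal on a break */
        fcntl(fd, F_SETLEASE, F_RDLCK);  /* read lease: writers will block */

        pause();  /* a writer's open() now wakes us up... */

        /* ...a scanner could examine the file here, then release the
           lease, letting the blocked writer proceed */
        fcntl(fd, F_SETLEASE, F_UNLCK);
        return 0;
    }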

Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds