Posted Aug 10, 2008 11:58 UTC (Sun) by mattmelton (subscriber, #34842)
Parent article: The TALPA molehill
This does seem to be a case of the old developers-don't-run-web-servers syndrome.
While I know many people do run various servers for various reasons, I don't believe the
people commenting negatively on the principle have clearly thought out the application of
Linux in the business environment.
Linux is massively popular with dedicated server providers and virtual server providers. These
providers load their hosts with a multitude of software - software that is often untested and
sometimes the very newest revision. Take the virtual server package Virtuozzo, which ships the
admin panel Plesk, which in turn provides an HTTPd, an FTPd, a mail server and so on.
If you look at ISPs like The Planet (aka EV1.net), they ship not just Plesk but Ensim and
cPanel too - you name it. Linux is used to host a wide spectrum of applications at different
levels - be it the ISP using it to host virtual servers, the reseller providing an API, or the
end user installing and using the provided applications.
A long time ago, a server I was using was unfortunately compromised. It transpired that the
host we used had unmanaged hubs and that one of the unpatched and adjacent Plesk boxes was
used to ARPjack our box. Ultimately our passwords were captured and we had malware installed.
This was not my fault, but it does highlight two problems where a kernel-based malware
scanner would have helped.
1) Ignoring the fact that the box was unpatched: if the adjacent Plesk machine had carried a
kernel-based malware scanner that prevented the hostile user from storing, opening or
downloading his ARPjacking toolset (a set of shell servers which merely intercept passwords),
we would not have been compromised in the first place.
2) If our box had been configured with some kind of malware protection, it might at the very
least have sent out a warning once the root user was found to be downloading malicious
software.
In my situation, I could not afford to harden my box to allow only a few services - and it
wouldn't have mattered, since my root password was compromised anyway. I was unable to prevent
my box from being hijacked, and I believe it is impossible to properly harden a general-purpose
hosting package against attack when it exports so many desired services.
I understand the floodgate theory everyone is scared of. People worry that if end users begin
relying on anti-malware products for a sense of security, they will neglect proper security
practice, leaving their systems with the pretence of security rather than actual hardening.
But I ask, which is the better option? A security professional unable to thwart the kind of
attack I suffered, or one who receives an email saying something was a little fishy about the
last thing the root user downloaded?
I would also point out how much more useful such a kernel mechanism is for admins. It is clear
to me that an unobtrusive mechanism that updates malware definitions is far more likely to be
allowed to run automatically and be turned on by default than a patching mechanism like up2date
or a cron "apt-get update". When ISPs have to scrutinise every single patch they apply, there
is a vulnerability void between the release of a patch and its eventual application.
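The contrast between the two update paths can be sketched as a pair of crontab entries. This is
a hypothetical config fragment - the schedules and paths are illustrative, and freshclam is
used here only as an example of a definition updater:

```shell
# Definition updates: low-risk, so easy to justify automating hourly.
0 * * * *   /usr/bin/freshclam --quiet

# Package metadata refresh: safe to automate, but actually *applying*
# upgrades still waits on a human review, which is where the void opens up.
0 4 * * *   /usr/bin/apt-get -qq update
```

Definitions change what the scanner recognises; patches change what the system runs - only the
first can plausibly be applied everywhere without sign-off.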
Not forgetting the end user: if we take a side step and look at the news recently, we have seen
that it is easy enough for any individual to set up a malicious third-party package repository.
As more people turn to Linux, unfamiliar with autoconf or simply unable to compile software
themselves, we see them using more and more third-party repositories. When it requires nothing
but a Google search to find a repo with a given package - or indeed a malicious package itself -
we should be more careful about what we are downloading, or we need something to be careful on
our behalf.
(Yes, there is an argument about hashing to be had - but that argument really does fall down
when you download customised builds, platform-specific builds or nightly builds, for which no
stable published hash exists.)
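For a standard release tarball, checksum verification is cheap, which is exactly why the
parenthetical matters: the step below only works when upstream publishes a hash for the exact
bytes you downloaded, which nightly and customised builds rarely do. A minimal sketch with
made-up file names:

```shell
#!/bin/sh
# Stand-in for a downloaded release tarball.
printf 'release contents\n' > pkg-1.0.tar.gz

# In reality upstream publishes this .sha256 file alongside the tarball;
# here we generate it ourselves just to make the sketch self-contained.
sha256sum pkg-1.0.tar.gz > pkg-1.0.tar.gz.sha256

# Verify the download against the published hash.
sha256sum -c pkg-1.0.tar.gz.sha256    # prints "pkg-1.0.tar.gz: OK" on a match
```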
In a day when we have clever attacks and automated updates, we should look to prevent such
attacks as best we can. Security vulnerabilities are not illusory just because you're running a
Linux kernel. They exist in the way we use every piece of software - and the Linux kernel is
the best place to implement a warden.