
Linux botnets

February 14, 2007

This article was contributed by Jake Edge.

Collections of subverted machines, called botnets, are typically associated with Windows: thousands of zombie desktops sending spam and causing other internet mayhem. Unfortunately, it is increasingly clear that Linux boxes (as well as MacOS X and other UNIX boxes) are participating in botnets, though in a bit of a twist, it is mostly servers that have been subverted. Botnets are an enormous problem that Vint Cerf recently estimated may involve up to one quarter of all internet-connected computers. This translates to a botnet controller's fondest wish: 150 million zombie machines to rent to the highest bidder.

Desktops are usually infected with a bot by an email-borne virus or a trojan attached to some application that the user installs, much like adware and spyware infect machines. The bot software then connects to a 'command and control' (C&C) infrastructure, which often uses Internet Relay Chat (IRC) servers, to get instructions on what it should do. The 'owner' of a botnet (known as a bot herder) can then instruct the bots to do whatever they, or more likely their clients, want. Because the traffic generated from a botnet comes from all over the Internet, it is difficult or impossible to recognize it for what it is. This allows botnets to be used for spamming, distributed denial of service (DDOS) attacks, click fraud and other malicious activities in a largely untraceable way.

The desktop infection methods are not typically as useful for Linux boxes and so bot herders have turned to web application exploits as a means for collecting subverted machines. Attacking servers has the additional advantage that they are usually machines with much greater resources: faster network connectivity, more storage, faster processors, etc. The attacks are largely targeted at everyone's favorite Internet security whipping boy, PHP applications. Open source PHP applications are the main target as they are ubiquitous and typically easy to exploit as some recent research indicates. An additional benefit of targeting a higher level application is that it is a cross-platform exploit; the operating system and web server software are immaterial if the target is a PHP application.

The easiest type of vulnerability to exploit is often Remote File Inclusion (RFI) which allows an attacker to run code on a vulnerable server with the permissions of the webserver. Generally, those permissions are sufficient to allow the bot to do anything the herder might wish it to; sending email and other network traffic is not normally a privileged activity. Even a cursory glance at the Bugtraq mailing list will reveal numerous RFI vulnerabilities; they are reported regularly and each can lead to bot exploitation if not patched.
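An RFI hole typically boils down to something like include($_GET['page']) in PHP, where the attacker supplies a full URL as the parameter value. As a hedged sketch (the log format and function names here are illustrative, not from the article), probes for such holes can be spotted in a web server's access log by looking for query-string parameters that point at an external URL:

```python
import re
from urllib.parse import urlparse, parse_qs

# Flag requests whose query-string parameters point at an external URL --
# the classic signature of an RFI probe like ?page=http://evil.example/shell.txt
def rfi_suspects(log_lines):
    suspects = []
    req_re = re.compile(r'"(?:GET|POST) (\S+)')
    for line in log_lines:
        m = req_re.search(line)
        if not m:
            continue
        query = urlparse(m.group(1)).query
        for values in parse_qs(query).values():
            if any(v.lower().startswith(("http://", "https://", "ftp://"))
                   for v in values):
                suspects.append(line)
                break
    return suspects
```

A real deployment would of course also want to whitelist parameters that legitimately carry URLs.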

Many different types of malware can be installed on a vulnerable machine, depending on the intent of the herder. As with the exploit itself, the installed code tends to be written in a scripting language so that it is cross-platform. The malware can range from simple test tools that indicate vulnerable servers to sophisticated shells that allow the attacker to effectively login to the server and perform any allowed operation.

The most serious damage that these botnets have caused is to our inbox; bots seem to be the preferred way to deliver spam these days. Diligent anti-spam efforts tend to get spamming accounts or systems shut down within hours but there is no easy way to shut down a spam-delivering botnet. A less visible, but potentially more damaging effect is DDOS attacks on internet sites. By attacking a site and working their way up the chain of DNS servers and registrars, a botnet can silence a site the herder does not like or hold sites hostage until they pay a ransom.

Past efforts to thwart botnets have often focused on destroying the C&C servers by shutting down the affected IRC sites, but botnets are moving toward using HTTP for C&C which allows that traffic to hide amongst the sea of similar traffic; it also has the advantage of getting through most firewalls. Botnets will be a serious problem going forward, and Linux systems are not immune to participation in them. The financial incentive is large and the means of prevention are weak, at least so far. As we have learned by trying to deal with spam, money makes our adversaries much more inventive which makes long-term solutions hard to come by.



Linux botnets

Posted Feb 15, 2007 2:42 UTC (Thu) by smoogen (subscriber, #97) [Link] (13 responses)

Having cleaned up a share of Linux systems.. the standard infection methods are:

1) ssh scanning. The botnet uses a dictionary attack against accounts to see what people have left open. In most cases, the crack-masters have gone through many broken systems and worked out what the most common account names are, and then used large numbers of broken-into systems as a very large john-the-ripper cluster to figure out what passwords they could get. They then try those passwords first, because people choose passwords similarly. They then use large herds of bots to scan every open port 22. Some botnets seem to also scan other 'common' ssh ports (2222 and 23). [I have seen bot 'clusters' scan a network and then start going after whatever port you had stuck SSH on, on other boxes.]

2) PHP scanning. Looking at my logs I get about 40 scans a week for every PHP application that has had a vulnerability since 2000.

3) Webmin scanning. This is where popular webmin ports are scanned for, and a similar set of tools as in the ssh scanning is used. A lot of application vendors like to use webmin to help troubleshoot their applications from afar.. many of these vendors don't update the software, or aren't aware that the webmin they installed was bad. They also like to choose passwords like the application vendor's name spelt backwards.

4) Xvnc scanning. Same thing as above... the Oracle application Xvnc is rather old.

The big thing that comes up with several of these is that most botnet people are quite happy if they don't get root access. The ability to create a '.<space>' directory in the person's home directory, /tmp or /var/tmp is fine with them. They can still execute their EnergyMech bot to get to some undernet IRC channel and get commands on what spam to send through the world. This doesn't mean that they won't try to get root access on the system.. but for 99% of what they want to do.. they do not need to be root on a system.. just a normal user.
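Those dots-and-whitespace directory names mentioned above are easy to miss in an ls listing. A minimal sketch of hunting for them (the function name and directory list are illustrative assumptions, not from the comment):

```python
import os

# Hidden directories whose names are nothing but dots and whitespace
# (". ", ".. ", "...") are a common place for bot droppings in /tmp,
# /var/tmp and home directories.
def suspicious_dirs(roots):
    hits = []
    for root in roots:
        try:
            entries = os.listdir(root)
        except OSError:
            continue  # skip unreadable or missing directories
        for name in entries:
            path = os.path.join(root, name)
            # strip dots, spaces and tabs; if nothing is left, the
            # name was built purely from "invisible" characters
            if os.path.isdir(path) and name.startswith(".") \
                    and name.strip(". \t") == "":
                hits.append(path)
    return hits
```

Running it over /tmp, /var/tmp and /home/* on a cron job is one cheap tripwire.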

SSH scanning

Posted Feb 15, 2007 9:11 UTC (Thu) by ldo (guest, #40946) [Link] (10 responses)

I wrote a script which continually scanned /var/log/messages for "invalid user" entries logged by sshd, and did a

iptables --append INPUT --source srcaddr -j DROP

which was removed after 10 minutes. Most of the scanners never came back after the 10 minutes.
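The approach described above can be sketched roughly as follows; this is a hedged reconstruction, not the poster's actual script. The iptables call assumes root, and a real version would also tail the log continuously and lift each rule after its 10 minutes expire:

```python
import re
import subprocess
import time

# sshd logs failed probes as e.g. "Invalid user admin from 10.0.0.5"
INVALID = re.compile(r"invalid user \S+ from (\d+\.\d+\.\d+\.\d+)",
                     re.IGNORECASE)

def offender_ip(log_line):
    """Return the source IP from an sshd 'Invalid user' line, or None."""
    m = INVALID.search(log_line)
    return m.group(1) if m else None

def block(ip, minutes=10):
    """Drop traffic from ip; return the time at which to remove the rule."""
    subprocess.run(["iptables", "--append", "INPUT",
                    "--source", ip, "-j", "DROP"], check=True)
    return time.time() + minutes * 60
```

The symmetric removal is an iptables --delete with the same rule specification once the deadline passes.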

SSH scanning

Posted Feb 15, 2007 9:44 UTC (Thu) by ahoogerhuis (guest, #4041) [Link] (2 responses)

# Accept trusted hosts
iptables -A INPUT -s 192.168.0.0/24 -p tcp -m tcp --dport ssh -j ACCEPT

# For outsiders, rate-limit and enjoy
iptables -A INPUT -m recent -m state -p tcp -m tcp --dport ssh --state NEW --hitcount 3 --seconds 180 --update -j DROP
iptables -A INPUT -m recent -m state -p tcp -m tcp --dport ssh --set --state NEW -j ACCEPT

I.e., don't meddle with SSH from places we trust; outsiders that DO need access get three attempts, otherwise it's the doghouse for a few minutes. Simple, very effective.

-A

SSH scanning

Posted Feb 15, 2007 10:51 UTC (Thu) by bkoz (guest, #4027) [Link]

Thanks for the iptables hackery. This is the #1 issue I see in my logs.

SSH scanning

Posted Feb 15, 2007 16:19 UTC (Thu) by nowster (subscriber, #67) [Link]

Order is important in these iptables commands. The commands in the parent appear to match on any traffic. Use instead:

# Accept trusted hosts
iptables -A INPUT -s 192.168.0.0/24 -p tcp -m tcp --dport ssh -j ACCEPT

# For outsiders, rate-limit and enjoy
iptables -A INPUT -p tcp -m tcp --dport ssh \
        -m state --state NEW \
        -m recent --hitcount 3 --seconds 180 --update -j DROP

iptables -A INPUT -p tcp -m tcp --dport ssh \
        -m state --state NEW \
        -m recent --set -j ACCEPT

SSH scanning - fail2ban

Posted Feb 15, 2007 12:10 UTC (Thu) by DG (subscriber, #16978) [Link]

alternatively try fail2ban (on ubuntu/debian)

SSH scanning

Posted Feb 15, 2007 15:02 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

Why not just turn off password authentication on your Internet-facing SSHen? Stick to challenge-response and you'll be safe from all these scanners (modulo major holes in sshd itself, which are rare.)

challenge-response on ssh

Posted Feb 15, 2007 23:52 UTC (Thu) by ccyoung (guest, #16340) [Link] (1 responses)

how? is there a package? or does it require real work?

challenge-response on ssh

Posted Feb 20, 2007 20:47 UTC (Tue) by nix (subscriber, #2304) [Link]

Well, ChallengeResponseAuthentication == public-key authentication and/or
use of OPIE, RSA SecurID, or some other one-time authentication system
(some of which OpenSSH has native support for).

SSH scanning

Posted Feb 15, 2007 16:29 UTC (Thu) by stevan (guest, #4342) [Link]

The blacklist.py python script (http://blinkeye.ch/mediawiki/index.php/SSH_Blocking) works extremely well for managing ssh scans, in our experience. The answer, though, is, of course, keyed-only ssh access.

S

SSH scanning

Posted Feb 15, 2007 16:30 UTC (Thu) by kh (guest, #19413) [Link]

I have been happy with denyhosts

SSH scanning -- solutions

Posted Feb 16, 2007 2:28 UTC (Fri) by smoogen (subscriber, #97) [Link]

Thanks to everyone for putting up various solutions.. they should make interesting grumpy old security admin articles some day.

They will also be handy for the admin who at 2am has to fix this problem and does a google search.

Linux botnets

Posted Feb 15, 2007 9:47 UTC (Thu) by wingo (guest, #26929) [Link]

Informative comment, thanks.

Single Packet Authentication is a far better solution.

Posted May 14, 2008 20:56 UTC (Wed) by shapr (subscriber, #9077) [Link]

I prefer Single Packet Authentication. The great advantage of SPA is that brute force scanners never know there's a service running.

The general case is, don't show headers when a user connects, just accept a connection when there's a correct login, and silently drop packets for illegal logins. That approach would dramatically reduce the attack surface for servers.

Linux botnets

Posted Feb 15, 2007 2:47 UTC (Thu) by zlynx (guest, #2285) [Link] (16 responses)

SELinux can help here. PHP applications should not be making outgoing network requests.

If SELinux is too difficult, iptables can filter away outgoing traffic as well. Not enough people put outgoing blocks on their firewalls.

A server farm / rack provider might also run IDS like Snort. See if you can get them to copy you on IDS alerts related to your IPs.

And for crying out loud, don't use your login password for your application's SQL account, helpfully listed in a plain text PHP include.

Linux botnets

Posted Feb 15, 2007 9:34 UTC (Thu) by dd9jn (✭ supporter ✭, #4459) [Link] (11 responses)

Don't even use a password at all to log in to a server. sshd_config should have the entry "PermitRootLogin without-password" and user accounts should all have a disabled password. Use ~/.ssh/authorized_keys. If you need to log in from more than one client machine, use a smart card to access the server. I know that this is a trivial suggestion, but when I occasionally see people log in to their servers, most are entering a password.

ssh public keys

Posted Feb 15, 2007 11:05 UTC (Thu) by dion (guest, #2764) [Link] (1 responses)

I think the reason that some (a lot of) people use passwords with ssh is that they see ssh as "telnet, only secure".

They haven't looked at authorized_keys or they think that distributing their keys is too much trouble.

I've recently moved from doing copy+paste of id_dsa.pub when needing access to using a handy little shell script (don't run it on your machine, unless you want me to access it): http://dion.swamp.dk/ssh.sh

Before doing that I sometimes added only one of my desktop machines, or maybe I didn't bother with the key setup at all and just used password login, because I rarely needed access to that particular system.

With this solution all I need to get access to a new box is to tell the administrator (well, myself most times) to run that script and I'll never need to log in with a password.

Putting your public key on your website really ought to be in every "root account for dummies"-type book.

ssh public keys

Posted Mar 1, 2007 22:14 UTC (Thu) by muwlgr (guest, #35359) [Link]

Not everyone knows about the ssh-copy-id utility.
Maybe that's what you need.
This operation is very popular, so the openssh developers have written the tool for us all...
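What ssh-copy-id does on the remote end is essentially an idempotent append to ~/.ssh/authorized_keys. A rough sketch of that step (function name and behavior details are illustrative; the real tool also creates ~/.ssh and handles permissions more carefully):

```python
import os

def install_key(pubkey_line, authorized_keys):
    """Append a public key to authorized_keys unless it is already there.
    Returns True if the key was added, False if it was already present."""
    pubkey_line = pubkey_line.strip()
    existing = []
    if os.path.exists(authorized_keys):
        with open(authorized_keys) as f:
            existing = [line.strip() for line in f]
    if pubkey_line in existing:
        return False
    parent = os.path.dirname(authorized_keys)
    if parent:
        os.makedirs(parent, exist_ok=True)
    with open(authorized_keys, "a") as f:
        f.write(pubkey_line + "\n")
    os.chmod(authorized_keys, 0o600)  # sshd ignores group/world-writable files
    return True
```

The de-duplication matters: repeatedly running a naive "cat >> authorized_keys" from a setup script leaves the file full of duplicate lines.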

Linux botnets

Posted Feb 15, 2007 12:43 UTC (Thu) by minichaz (guest, #630) [Link] (7 responses)

I agree with using keys (ideally with passphrases too) but there's no need to allow root logins through SSH, particularly on internet facing servers. Set "PermitRootLogin no" and use "AllowGroups" or "AllowUsers" to prevent attacks against other accounts which should never connect over SSH.

Charlie

Linux botnets

Posted Feb 15, 2007 13:03 UTC (Thu) by dd9jn (✭ supporter ✭, #4459) [Link] (6 responses)

So how do you get root access? Using su requires a password again, and sudo without a password will do nothing but alias that user account to root. There is an old crypto rule which states: put all your secrets into one basket and watch that basket very well.

Public key authentication is far better than any password scheme. If you worry about a private key compromise, use a smart card.

Linux botnets

Posted Feb 15, 2007 15:20 UTC (Thu) by rfunk (subscriber, #4054) [Link] (5 responses)

Use sudo, with the user's password. Make the basket of users who have access to sudo very small, and watch it closely.

Being able to get direct access to a root shell from the internet is just crazy.

Linux botnets

Posted Feb 15, 2007 20:22 UTC (Thu) by tetromino (guest, #33846) [Link] (4 responses)

How is sudo any more secure than root ssh logins with a password? In either case, if you can guess ONE password, you get remote root...

remote root

Posted Feb 15, 2007 21:21 UTC (Thu) by rfunk (subscriber, #4054) [Link] (2 responses)

This is an old debate. But you'll be hard-pressed to find an experienced professional
sysadmin who will allow remote root logins.

Allowing direct root access means that root access is not revokable per-admin; if the
password is somehow compromised (e.g. an admin is fired or is careless with the
password) you have to change the root password and communicate that to all admins
(with the associated insecurity of that communication). If admins are getting root from
their own accounts, then it's sufficient to disable or re-password a single admin's account
without affecting other admins.

So in the sudo case, if that one password is guessed, it's easier to recover than in the
single remote-root case.

Just running a root shell is dangerous. It's much better to be root only for what needs to
be done as root, to avoid accidents or possibly tripping over sabotage (e.g. someone
having gotten in and messing with your ls command).

This slashdot comment is one place that covers the issue well:
http://it.slashdot.org/comments.pl?sid=180864&cid=149...

remote root

Posted Feb 16, 2007 0:25 UTC (Fri) by dd9jn (✭ supporter ✭, #4459) [Link] (1 responses)

"Allowing direct root access means that root access is not revokable
per-admin; if the password is somehow compromised"

FWIW, I was talking about public key authentication for root access. This also means that revoking access is as simple as deleting one line from authorized_keys.

Where do you see the problem? I agree that logging of access is not what it should be, but it is still available, and come on: having root access on most systems means you have all the power to manipulate the logs anyway. So why care?

remote root

Posted Feb 19, 2007 15:54 UTC (Mon) by hein.zelle (guest, #33324) [Link]

> Where do you see the problem? I agree that logging of access is not what
> it should be, but it is still available, and come on: having root access
> on most systems means you have all the power to manipulate the logs
> anyway. So why care?

One reason I care is that it's easy to accidentally turn password authentication back on. On many debian systems I've seen, the option UsePAM (on by default) effectively allows password authentication, even when PasswordAuthentication is off. This is not the case on the latest ubuntu, but it is dangerous nevertheless. I'd rather have an ssh login as a regular user, and then become root using su.

What is the reasoning behind not using su to become root? I understand the password will go over the line, but it's encrypted. Is this advised against for fear of keyloggers or so?
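The UsePAM pitfall above is easy to audit for. A hedged sketch of a config check (keyword handling is simplified; sshd itself honors the first occurrence of each keyword, which the parser below mimics): password-style logins can still get through keyboard-interactive/PAM unless ChallengeResponseAuthentication is also off.

```python
def effective_settings(sshd_config_text):
    """Parse simple 'Keyword value' lines; first occurrence wins,
    as in sshd_config itself. Comments and blanks are skipped."""
    settings = {}
    for line in sshd_config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings.setdefault(parts[0].lower(), parts[1].lower())
    return settings

def password_login_possible(sshd_config_text):
    """True if some form of password prompt can still succeed."""
    s = effective_settings(sshd_config_text)
    # "PasswordAuthentication no" alone is not enough if PAM-backed
    # keyboard-interactive (challenge-response) auth is still enabled.
    return not (s.get("passwordauthentication", "yes") == "no"
                and s.get("challengeresponseauthentication", "yes") == "no")
```

Run against /etc/ssh/sshd_config after any upgrade, since package updates are one way these options silently revert.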

Linux botnets

Posted Feb 16, 2007 2:27 UTC (Fri) by smoogen (subscriber, #97) [Link]

OK, the security gained by using two layers is via tracking down who logged in, which becomes very important in large teams. If you are administering a couple hundred Linux servers, you may have a team of 5-12 people who need root access. Knowing who executed a root-level command, and when, is important. It is also more secure in that, if you lock down sudo, you can see what they ran, versus having a black hole where root logged in at 02:00 and logged out at 02:30 and you have no idea what they ran.

In the case of small teams.. you may not feel that you need this, but it comes in handy if the business grows... you find yourself with 12-20 people with the root password.

Linux botnets

Posted Feb 15, 2007 14:56 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

I don't see any reason to permit remote root login on Internet-facing servers (or desktop machines for that matter). A genuine administrator should have a user account, unique to them as a person, which can be audited.

The first attack scenario I'm considering is an outsider who is able to connect to the SSH server and perhaps has some limited (unprivileged) access to the target machine (e.g. forum user access on a web server), plus they can snoop some of your traffic. This scenario would be typical for a black hat setting up a zombie network, who has already subverted some nearby machines in the network but not yours.

With root SSH logins, they can target the root account (yes, it could be renamed in theory, but unlike the Windows "Administrator" account that's not a routine precaution) and the only thing keeping you safe is that SSH server's authentication. Any flaws in that single line of defense whether security mistakes (you left your laptop unattended and they stole the private key file?) or system problems (OpenSSH is revealed to allow in 1-in-4 billion connections without authenticating) are a total loss.

In my alternative, they must actively target administrative users, without knowing in advance what names are used. Even if they get SSH access as a user, they still need to escalate to root, which is another layer of security, albeit one that we know is much weaker. On the other hand if they obtain (for example by social engineering) a root password or sudo root password equivalent, they still need remote access to use it. So in either case you've got two layer security.

I use and recommend private user logins, via ssh public keys PLUS an audited authentication step for escalating to root privileges. Also people should pay attention to the security of their SSH private keys. On every machine where you keep such a key, consider how a black hat might get access to the key and what they can access with it once they have it. If you use a passphrase (which you should) how are you sure the entry method is itself secure (IIRC trojans asking for the SSH passphrase have been seen in the wild) ?

Linux botnets

Posted Feb 15, 2007 12:09 UTC (Thu) by NAR (subscriber, #1313) [Link] (2 responses)

PHP applications should not be making outgoing network requests.

Then how should a PHP-based forum software send a "registration succeeded" e-mail to the user? It could connect to the local SMTP server, but spam could be sent out that way too.

Bye, NAR

PHP sending mail

Posted Feb 15, 2007 15:17 UTC (Thu) by rfunk (subscriber, #4054) [Link]

The local SMTP server can do its own checking to detect and block outgoing spam.

It can also queue the real messages when the recipient's mail server is temporarily
unavailable; sending directly from PHP can't do that.

Linux botnets

Posted Feb 16, 2007 2:34 UTC (Fri) by smoogen (subscriber, #97) [Link]

The standard way I have seen it done is that the application uses 'local' delivery and that is checked for 'correctness' at the machine and at the border SMTP router. This stops the majority of spambots in a large environment.

Linux botnets

Posted Feb 16, 2007 2:32 UTC (Fri) by smoogen (subscriber, #97) [Link]

I will say that SELinux has saved a customer's bacon before.. the person had a bad PHP app installed and it got abused. The attacker was not able to execute anything, however, because the SELinux policy wouldn't let the attacker do anything, from network connections to data viewing.

[While I do not use it.. I am betting that grsecurity might be able to do the same thing.]


Copyright © 2007, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds