At SCALE 8x, Ronald Minnich gave a presentation about the
difficulties in trying to run millions of Linux kernels for simulating
botnets. The idea is to be able to run a botnet "at scale" to
try to determine how it behaves. But, even with all of the compute power
available to researchers at the US Department of Energy's Sandia National
Laboratories—where Minnich works—there are still various
stumbling blocks to be overcome.
While the number of systems participating in botnets is open to argument,
he said, current estimates are that there are ten million systems
compromised in the US alone. He listed the current sizes of various
botnets, based on a Network
World article, noting that "depending on who you talk to, these
are either low by an order of
magnitude or high by an order of magnitude". He also said that it
is no longer reported when thousands of systems are added to a botnet;
instead, the reports are of thousands of organizations whose systems have
been compromised.
Botnets are built on peer-to-peer (P2P) technology that largely came from
file-sharing applications—often for music and movies—which were
shut down by the RIAA. This made the Overnet, which was an ostensibly
legal P2P network, into an illegal network, but, as he pointed out, that
didn't make it disappear. In fact, those protocols and algorithms are
still being used: "being illegal didn't stop a damn thing".
For details, Minnich recommended the Wikipedia articles on
subjects like the Overnet, eDonkey2000, and Kademlia distributed hash
tables.
P2P applications implemented Kademlia to identify other nodes in a network
overlaid on the Internet, i.e. an overnet. Information could be stored and
retrieved from the nodes participating in the P2P network. That
information could be movies or songs, but it could also be executable
programs or scripts. It's a "resilient distributed store".
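The heart of a Kademlia lookup can be sketched in a few lines. This is a toy illustration of the XOR distance metric only, not the Overnet implementation; the helper names are made up:

```python
import hashlib

def node_id(name: str) -> int:
    # Derive a 160-bit ID, as Kademlia does with SHA-1.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia's distance metric: bitwise XOR of two IDs.
    return a ^ b

def closest_nodes(target: int, peers: list[int], k: int = 3) -> list[int]:
    # A lookup returns the k peers whose IDs are XOR-closest to the key,
    # whether the key names a node, a song, or a piece of malware.
    return sorted(peers, key=lambda p: xor_distance(p, target))[:k]

peers = [node_id(f"peer{i}") for i in range(100)]
key = node_id("some-song.mp3")
print(closest_nodes(key, peers))
```

Because every node can compute the same distances, any participant can find where a value should be stored without global knowledge, which is what makes the store resilient.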
He also pointed out that computer scientists have been trying to build
large, resilient distributed systems for decades, but have little to show
for it compared with this currently working example; in fact, it's apparently
being maintained with money from organized crime syndicates.
Because the RIAA has shut down any legal uses of these protocols, they are
difficult to study:
"The good guys can't use it, but it's all there for the bad
guys." And the bad guys are using it, though it is difficult to get
accurate numbers as he mentioned earlier. The software itself is written
to try to hide its presence, so that it only replies to some probes.
Studying botnets with supercomputers
In the summer of 2008, when Estonia "went down, more or less"
and had to shut down its Internet because of an attack, Minnich and his
colleagues started thinking about how to model these kinds of attacks. He
likened the view of an attack to the view a homeowner might get of a forest
fire: "my house is on fire, but what about the other side of
town?". Basically, there is always a limited view of what is being
affected by a botnet—you may be able to see local effects, but the
effects on other people or organizations aren't really known: "we
really can't get a picture of what's going on".
So, they started thinking about various supercomputer systems they
have access to: "Jaguar" at Oak Ridge which has 180,000 cores in 30,000
nodes, "Thunderbird" at Sandia with 20,000 cores and 5,000 nodes, and
"a lot of little 10,000 core systems out there". All of them
run Linux, so they started to think about running "the real
thing"—a botnet with ten million systems. By using these
supercomputers and virtualization, they believe they could actually run a
botnet of that size.
Minnich noted that there have been two main objections to this idea. The
first is that the original botnet authors didn't need a supercomputer, so
why should one be needed to study them? He said that much of the research
for the Storm botnet was done by academics (Kademlia) and by the companies
that built the Overnet. "When they went to scale up, they just went to the
Internet". Before the RIAA takedown, the network was run legally on
the Internet, and after that "it was done by deception".
The Internet is known to have "at least dozens of nodes",
really "dozens of millions of nodes", and the Internet was the
supercomputer that was used to develop these botnets, he said. Sandia
can't use the Internet that way for its research, so it will use its
in-house supercomputers instead.
The second objection is that "you just can't simulate it".
But Minnich pointed out that every system suffers from the same
problem—people don't believe it can be simulated—yet simulation
is used very successfully. They believe that they can simulate a botnet
this way, and "until we try, we really won't know". In
addition, researchers of the Storm botnet called virtualization the "holy
grail" that allowed them to learn a lot about the botnet.
Why ten million?
There are multiple attacks that we cannot visualize on a large scale,
including denial of service, exfiltration of data, botnets, and virus
transmission, because we are "looking at one tiny corner of the
elephant and trying to figure out what the elephant looks like", he
said. Predicting this kind of behavior can't be done by running 1000 or so
nodes, so a more detailed simulation is required. Botnets exhibit
"emergent behavior", and pulling them apart or running them at smaller
scales does not work.
For example, the topology of the Kademlia
distributed hash network falls apart if there aren't enough (roughly
50,000) nodes in the network. The botnet nodes are designed to stop
communicating if they are disconnected too long. One researcher would hook
up a PC at home to capture the Storm botnet client, then bring it into work
and hook it up to the research botnet immediately
because if it doesn't get connected to something quickly, it just dies.
And if you don't have enough connections, the botnet dies: "It's kind of
a living organism".
So, they want to run ten million nodes, including routers, in a
"nation-scale" network. Since they can't afford to buy that many machines,
they will use virtualization on the supercomputer nodes to scale up to that
size. They can "multiply the size of those machines by a
thousand" by running that many virtual machines on each node.
Using virtualization and clustering
Virtualization is a nearly 50-year-old technique to run multiple kernels in
virtual machines (VMs) on
a single machine. It was pioneered by IBM, but has come to Linux
in the last five years or so. Linux still doesn't have all of the
capabilities that IBM machines have, in particular, arbitrarily deep
nesting of VMs:
"IBM has forgotten more about VMs than we know". But, Linux
virtualization will allow them to run ten million nodes on a cluster of
several thousand nodes, he said.
The project is tentatively called "V-matic" and they hope to release the
code at the SC10 conference
in November. It consists of the OneSIS
cluster management software that has been extended based on what
Minnich learned from the Los Alamos Clustermatic system. OneSIS is based
on having NFS-mounted root filesystems, but V-matic instead uses lightweight
images pushed out to each node.
When you want to run programs on each node, you collect the binaries and
libraries and send them to each node. Instead of doing that iteratively,
something called "treespawn" was used, which would send the binary bundle
to 32 nodes at once, and each of those would send to 32 nodes. In that
way, they could bring up a 16M image on 1000 nodes in 3 seconds. The NFS
root "couldn't come close" to that performance.
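The fanout arithmetic explains the speed: with a 32-way fanout, 1057 nodes are reachable after two forwarding rounds and more than ten million after five. A sketch of that arithmetic (a hypothetical helper, not the actual treespawn code):

```python
def treespawn_rounds(nodes: int, fanout: int = 32) -> int:
    # Each node that received the image in the previous round
    # forwards it to `fanout` fresh nodes, so coverage after r rounds
    # is 1 + fanout + fanout**2 + ... + fanout**r.
    covered, frontier, rounds = 1, 1, 0
    while covered < nodes:
        frontier *= fanout
        covered += frontier
        rounds += 1
    return rounds

print(treespawn_rounds(1000))        # 2 rounds cover a 1000-node cluster
print(treespawn_rounds(10_000_000))  # 5 rounds cover ten million nodes
```

A serial push would take a number of transfers linear in the node count; the tree makes it logarithmic, which is why the NFS root couldn't compete.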
Each node requires a 20M footprint, which means "50 nodes per
gigabyte". So, a laptop is just fine for a 100-node cluster, which
is something that Minnich routinely runs for development. "This VM
stuff for Linux is just fantastic", he said. Other cluster
solutions just can't compete because of their size.
For running on the Thunderbird cluster, which consists of nodes that are
roughly five years old, they were easily able to get 250 VMs per node.
They used Lguest virtualization because the Thunderbird nodes were
"so old they didn't have hardware virtualization". For more
modern clusters, they can easily get 1000 VMs per node using KVM. Since they have
10,000 node Cray XT4 clusters at Sandia, they are confident they can get to
ten million nodes.
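That confidence is back-of-envelope arithmetic using the figures from the talk:

```python
# Capacity check using the numbers quoted in the talk, not new measurements.
vms_per_node_kvm = 1000       # VMs per node with KVM on modern hardware
cray_xt4_nodes = 10_000       # node count of the Cray XT4 clusters at Sandia

total_virtual_nodes = cray_xt4_nodes * vms_per_node_kvm
print(total_virtual_nodes)    # 10,000,000: the ten million node target
```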
Results so far
So far, they have gotten to 1 million node systems on Thunderbird. They
had one good success and some failures in those tests. The failures were
caused by two things: Infiniband not being very happy being rebooted all the
time, and the BIOS on the Dell boxes using Intelligent Platform Management
Interface (IPMI), which Minnich did not think very highly of. In fact,
Minnich has a joke about how to tell when a standard "sucks": if
it starts with an "I" (I2O), ends with an "I" (ACPI, EFI), or has the word "intelligent" in
it somewhere; IPMI goes three-for-three on that scale.
So "we know we can do it", but it's hard, and not for very
good reasons, but for "a lot of silly reasons".
Some of the big problems that you run into when trying to run a
nation-scale network are the scaling issues themselves. How do you
efficiently start programs on hundreds of thousands of nodes? How do you
monitor millions of VMs? There are tools to do all of that "but all
of the tools we have will break—actually we've already broken them
all". Even the monitoring rate needs to be adjusted for the size of
the network. Minnich is used to monitoring cluster nodes at 6Hz, but most
big cluster nodes are monitored every ten minutes or
1/600Hz—otherwise the amount of data is just too overwhelming.
Once the system is up, and is being monitored, then they want to attack
it. It's pretty easy to get malware, he said, as "you are probably
already running it". If not, it is almost certainly all over your
corporate network, so "just connect to the network and you've
probably got it".
Trying to monitor the network for "bad" behavior is also somewhat
difficult. Statistically separating bad behavior from normal behavior is a
non-trivial problem. Probing the networking stack may be required, but
must be done carefully to avoid "the firehose of data".
In a ten million node network, a DHCP configuration file is at least 350MB, even after you
get rid of the colons "because they take up space", and parsing the
/etc/hosts file can dominate startup time. If all the nodes can
talk to all other nodes, the kernel tables eat all of memory; "that's
bad". Unlike many of the other tools, DNS is designed for this
"large world", and they will need to set that up, along with the BGP
routing protocol so that the network will scale.
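A rough estimate shows why flat configuration files fall over at this scale. The bytes-per-line figure below is an assumption, but it lands in the neighborhood of the 350MB quoted for DHCP:

```python
def hosts_file_bytes(n_nodes: int, bytes_per_line: int = 35) -> int:
    # One "address  hostname" line per node; ~35 bytes per line is a
    # guess at a typical entry, not a measured figure.
    return n_nodes * bytes_per_line

size = hosts_file_bytes(10_000_000)
print(size / 1e6)  # 350.0 MB: far too big to parse at every startup
```

Any tool that linearly scans such a file at boot, as the resolver does with /etc/hosts, turns configuration into the dominant cost, which is why hierarchical systems like DNS and BGP are needed instead.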
In an earlier experiment, on a 50,000 node network, Minnich modeled the Morris worm and learned
some interesting things. Global knowledge doesn't really scale, so
thinking in terms of things like /etc/hosts and DHCP configuration
is not going to work; self-configuration is required. Unlike the supercomputer world, you can't expect all
of the nodes to always be up, nor can you really even know if they are.
Monitoring data can easily get too large. For example, 1Hz monitoring of 10
million nodes results in 1.2MB per second of data if each node only reports
a single bit—and more than one bit is usually desired.
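The arithmetic behind that figure is straightforward:

```python
def monitoring_rate_mb_per_sec(nodes: int, hz: float, bits_per_sample: int) -> float:
    # Aggregate telemetry: samples per second times bits per sample,
    # converted to megabytes per second.
    return nodes * hz * bits_per_sample / 8 / 1e6

print(monitoring_rate_mb_per_sec(10_000_000, 1, 1))  # 1.25 MB/s for one bit per node
```

One bit per node at 1Hz already produces 1.25MB every second, before any useful payload is attached; richer samples or higher rates multiply that directly.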
There is so much we don't know about a ten million node network, Minnich
said. He would like to try to do a TCP-based denial of service from 10,000
nodes against the other 9,990,000. He has no idea whether it would work, but it
is just the kind of experiment that this system will be able to run.
For a demonstration at SC09, they created a prototype botnet ("sandbot")
from simple nodes and some very simple rules, somewhat reminiscent of
Conway's Game of Life.
Based on the rules, the nodes would communicate with their neighbors under
certain circumstances and, once they had heard from their neighbors enough
times, would "tumble", resetting their state to zero.
The nodes were laid out on a grid and colored based on their state, so that
pictures and animations could be made. Each node that tumbled would be
colored red.
Once the size of the botnet got over a
threshold somewhere between 1,000 and 10,000 nodes, the behavior became
completely unpredictable. Cascades of tumbles, called "avalanches", would
occur with some frequency, and occasionally the entire grid turned red.
Looking at the statistical features of how these avalanches
occur may be useful in detecting malware in the wild.
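The description resembles the classic sandpile model of self-organized criticality. The sketch below is a guess at the flavor of the demo, not Sandia's actual rules: cells accumulate messages, and a cell that reaches a threshold "tumbles", shedding its load to its four neighbors, which can cascade into an avalanche:

```python
import random

def step(grid, threshold=4):
    """One update of a toy sandbot-like automaton (an assumption about
    the demo's flavor, close to the classic sandpile model): drop one
    message on a random cell; any cell at the threshold "tumbles",
    shedding the threshold's worth of messages to its four neighbors,
    which can cascade. Messages falling off the grid edge are lost,
    which keeps every avalanche finite."""
    n = len(grid)
    i, j = random.randrange(n), random.randrange(n)
    grid[i][j] += 1
    unstable = [(i, j)]
    tumbled = set()                  # cells to color red this step
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] >= threshold:
            grid[i][j] -= threshold
            tumbled.add((i, j))
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    grid[ni][nj] += 1
                    unstable.append((ni, nj))
    return tumbled

grid = [[0] * 20 for _ in range(20)]
avalanche_sizes = [len(step(grid)) for _ in range(5000)]
print(max(avalanche_sizes))  # occasional large cascades among mostly small ones
```

Even in this toy, avalanche sizes follow a heavy-tailed distribution once the grid is loaded, which is the kind of statistical signature the researchers hope to look for in real traffic.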
There is still lots of work to be done, he said, but they are making progress.
It will be interesting to see what kind of practical results come from this
research. Minnich and his colleagues have already learned a great deal
about trying to run a nation-scale network, but there are undoubtedly many
lessons on botnets and malware waiting to be found. We can look forward
to hearing about them over the next few years.