
Leading items

The Kernel Hacker's Bookshelf: Ultimate Physical Limits of Computation

June 18, 2008

This article was contributed by Valerie Henson

Moore's Law - we all know it (or at least think we do). To be annoyingly exact, Moore's Law is a prediction that the number of components per integrated circuit (for minimum cost per component) will double every 24 months (revised up from every 12 months in the original 1965 prediction). In slightly more useful form, Moore's Law is often used as a shorthand for the continuing exponential growth of computing technology in many areas - disk capacity, clock speed, random access memory. Every time we approach the limit of some key computer manufacturing technology, the same debate rages: Is this the end of Moore's Law? So far, the answer has always been no.

But Moore's Law is inherently a statement about human ingenuity, market forces, and physics. Whenever exponential growth falters in one area - clock speed, or a particular mask technique - engineers find some new area or new technique to improve at an exponential pace. No individual technique experiences exponential growth for long; instead, migration to new techniques occurs fast enough that the overall growth rate continues to be exponential.

The discovery and improvement of manufacturing techniques is driven on one end by demand for computation and limited on the other end by physics. In between is a morass of politics, science, and plain old engineering. It's hard to understand the myriad forces driving demand and the many factors affecting innovation, including economies of scale, cultural attitudes towards new ideas, vast marketing campaigns, and the strange events that occur during the death throes of megacorporations. By comparison, understanding the limits of computation is easy, as long as you have a working knowledge of quantum physics, information theory, and the properties of black holes.

The "Ultimate Laptop"

In a paper published in Nature in 2000, Ultimate Physical Limits of Computation (free arXiv preprint [PDF] here), Dr. Seth Lloyd calculates (and explains) the limits of computing given our current knowledge of physics. Of course, we don't know everything about physics yet - far from it - but just as in other areas of engineering, we know enough to make some extremely interesting predictions about the future of computation. This paper wraps up existing work on the physical limits of computing and introduces several novel results, most notably the ultimate speed limit to computation. Most interesting in my mind is the calculation of a surprisingly specific upper bound on how many years a generalized Moore's Law can remain in effect (keep reading to find out exactly how long!).

Dr. Lloyd begins by assuming that we have no idea what future computer manufacturing technology will look like. Many discussions of the future of Moore's Law center around physical limits on particular manufacturing techniques, such as the limit on feature size in optical masks imposed by the wavelength of light. Instead, he ignores manufacturing entirely and uses several key physical constants: the speed of light c, the reduced Planck constant ħ (h-bar), the gravitational constant G, and Boltzmann's constant k_B. These constants and our current, limited understanding of general relativity and quantum physics are enough to derive many important limits on computing. Thus, the results don't depend on particular manufacturing techniques.

The paper uses the device of the "Ultimate Laptop" to help make the calculations concrete. The ultimate laptop is one kilogram in mass and has a volume of one liter (coincidentally almost exactly the same specs as a 2008 Eee PC), and operates at the maximum physical limits of computing. Applying the limits to the ultimate laptop gives you a feel for the kind of computing power you can get in luggable format - disregarding battery life, of course.

Energy limits speed

So, what are the limits? The paper begins by deriving the ultimate limit on the number of computations per second. This depends on the total energy, E, of the system, which can be calculated using Einstein's famous equation relating mass and energy, E = mc^2. (Told you we'd need to know the speed of light.) Given the total energy of the system, we then need to know how quickly the system can change from one distinguishable state to another - i.e., flip bits. This turns out to be limited by the Heisenberg uncertainty principle. Lloyd has this to say about it:

In particular, the correct interpretation of the time-energy Heisenberg uncertainty principle ΔEΔt ≥ ħ is not that it takes time Δt to measure energy to an accuracy ΔE (a fallacy that was put to rest by Aharonov and Bohm) but rather that a quantum state with spread in energy ΔE takes time at least Δt = πħ/2ΔE to evolve to an orthogonal (and hence distinguishable) state. More recently, Margolus and Levitin extended this result to show that a quantum system with average energy E takes time at least Δt = πħ/2E to evolve to an orthogonal state.

In other words, the Heisenberg uncertainty principle implies that a system will take a minimum amount of time to change in some observable way, and that the time is related to the total energy of the system. The result is that a system of energy E can perform at most 2E/πħ logical operations per second (a logical operation is, for example, performing the AND operation on two bits of input - think of it as a single-bit operation, roughly). Since the ultimate laptop has a mass of one kilogram, it has energy E = mc^2 = 8.9874 x 10^16 joules, which means it can perform a maximum of 5.4258 x 10^50 operations per second.
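For the curious, the arithmetic is simple enough to check in a few lines of Python; this sketch just restates the numbers above, using standard values for the physical constants:

    import math

    c    = 2.998e8       # speed of light, meters/second
    hbar = 1.055e-34     # reduced Planck constant, joule-seconds

    mass = 1.0                            # the ultimate laptop: one kilogram
    energy = mass * c**2                  # E = mc^2, about 9 x 10^16 joules
    ops = 2 * energy / (math.pi * hbar)   # Margolus-Levitin bound: 2E/(pi*hbar)

    print("E = %.4e joules" % energy)
    print("maximum operations per second = %.4e" % ops)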

How close are we to 5 x 10^50 operations per second today? Each of these operations is basically a single-bit operation, so we have to convert current measurements of performance to their single-bit operations per second equivalents. The most commonly available measure of operations per second is FLOPS (floating point operations per second) as measured by LINPACK (see the Wikipedia page on FLOPS). Estimating the exact number of actual physical single-bit operations involved in a single 32-bit floating point operation would require proprietary knowledge of the FPU implementation, and the number of FLOPS reported by LINPACK varies wildly with compiler optimization level as well. For this article, we'll make a wild estimate of 1000 single-bit operations per floating point operation - that is, 1000 single-bit operations per second (SBOPS) per FLOPS - and ask anyone with a better estimate to please post it in a comment.

With our FLOPS to SBOPS conversion factor of 1000, the current LINPACK record holder, the Roadrunner supercomputer (near my home town, Albuquerque, New Mexico), reaches speeds of one petaflop, or 1000 x 10^15 = 1 x 10^18 SBOPS. But that's for an entire supercomputer - the ultimate laptop is only one kilo in mass and one liter in volume. Current laptop-friendly CPUs are around one gigaflop, or 10^12 SBOPS, leaving us about 39 orders of magnitude to go before hitting the theoretical physical limit of computational speed. Finally, existing quantum computers have already attained the ultimate limit on computational speed - on a very small number of bits and in a research setting, but attained it nonetheless.
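The orders-of-magnitude comparison is likewise easy to reproduce; the sketch below simply restates the arithmetic above, including the admittedly wild 1000-SBOPS-per-FLOPS guess:

    import math

    ultimate_ops   = 5.4e50    # physical limit for one kilogram, ops/second
    sbops_per_flop = 1000.0    # the wild conversion estimate from above

    roadrunner = 1.0e15 * sbops_per_flop   # one petaflop -> 10^18 SBOPS
    laptop_cpu = 1.0e9  * sbops_per_flop   # one gigaflop -> 10^12 SBOPS

    print("Roadrunner: %.0f orders of magnitude short" % math.log10(ultimate_ops / roadrunner))
    print("laptop CPU: %.0f orders of magnitude short" % math.log10(ultimate_ops / laptop_cpu))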

Entropy limits memory

What we really want to know about the ultimate laptop is how many legally purchased DVDs we can store on it. The amount of data a system can store is a function of the number of distinguishable physical states it can take on - each distinct configuration of memory requires a distinct physical state. According to Lloyd, we have "known for more than a century that the number of accessible states of a physical system, W, is related to its thermodynamic entropy by the formula: S = k_B ln W" (S is the thermodynamic entropy of the system). This means we can calculate the number of bits the ultimate laptop can store if we know what its total entropy is.
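Since a memory holding I bits has W = 2^I distinguishable states, the number of bits follows directly from the entropy: I = S / (k_B ln 2). A minimal sketch of that conversion (the entropy value plugged in at the end, about 2 x 10^8 joules per kelvin, is just the article's bit count run backwards through the same formula):

    import math

    k_B = 1.381e-23    # Boltzmann's constant, joules/kelvin

    def bits_from_entropy(S):
        # S = k_B * ln(W) and W = 2**I  =>  I = S / (k_B * ln 2)
        return S / (k_B * math.log(2))

    # An entropy of roughly 2 x 10^8 J/K corresponds to the ~2 x 10^31 bits
    # quoted below for the ultimate laptop.
    print("%.2e bits" % bits_from_entropy(2.04e8))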

Calculating the exact entropy of a system turns out to be hard. From the paper:

To calculate exactly the maximum entropy for a kilogram of matter in a liter volume would require complete knowledge of the dynamics of elementary particles, quantum gravity, etc. We do not possess such knowledge. However, the maximum entropy can readily be estimated by a method reminiscent of that used to calculate thermodynamic quantities in the early universe. The idea is simple: model the volume occupied by the computer as a collection of modes of elementary particles with total average energy E.

The following discussion is pretty heavy going; for example, it includes a note that baryon number may not be conserved in the case of black hole computing, something I'll have to take Lloyd's word on. But the end result is that the ultimate laptop, operating at maximum entropy, could store at least 2.13 x 10^31 bits. Of course, maximum entropy means that all of the laptop's matter is converted to energy - basically, the equivalent of a thermonuclear explosion. As Lloyd notes, "Clearly, packaging issues alone make it unlikely that this limit can be obtained." Perhaps a follow-on paper can discuss the Ultimate Laptop Bag...

How close are modern computers to this limit? A modern laptop in 2008 can store up to 250GB - about 2 x 10^12 bits. We're about 19 orders of magnitude away from maximum storage capacity, or about 64 more doublings in capacity. Disk capacity as measured in bits per square inch has doubled about 30 times between 1956 and 2005; at this historical rate, 64 more doublings will only take about 50 - 100 years. This isn't the overall limit on Moore's Law as applied to computing, but it suggests the possibility of an end to Moore's Law as applied to storage within some of our lifetimes. I guess we file system developers should think about second careers...
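The doubling arithmetic is straightforward to check; at the long-run average pace the answer comes out right around a century, with the faster doubling of recent years pulling the estimate toward the low end of the range above:

    import math

    current_bits  = 2e12      # a 250GB laptop disk
    ultimate_bits = 2.13e31   # Lloyd's limit for one kilogram in one liter

    doublings = math.log(ultimate_bits / current_bits, 2)
    years_per_doubling = (2005 - 1956) / 30.0   # ~30 doublings in ~50 years

    print("doublings remaining: %.0f" % doublings)
    print("years at the historical average rate: %.0f" % (doublings * years_per_doubling))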

Redundancy and error correction

Existing computers don't approach the physical limits of computing for many good reasons. As Lloyd wryly observes, "Most of the energy [of existing computers] is locked up in the mass of the particles of which the computer is constructed, leaving only an infinitesimal fraction for performing logic." Storage of a single bit in DRAM uses "billions and billions of degrees of freedom" - electrons, for example - instead of just one degree of freedom. Existing computers tend to conduct computation at temperatures at which matter remains in the form of atoms instead of plasma.

Another fascinating practical limit on computation is the error rate of operations, which is bounded by the rate at which the computer can shed heat to the environment. As it turns out, logical operations don't inherently require the dissipation of energy, contrary to what von Neumann originally theorized. Reversible operations (such as NOT), which do not destroy information, carry no inherent energy cost; only irreversible operations (such as AND) do. This makes some sense intuitively: the only way to destroy (erase) a bit is to turn that information into heat; otherwise the bit has just been moved somewhere else and the information it represents is still there. Reversible computation has been implemented and shown to have extremely low power dissipation.

Of course, some energy will always be dissipated, whether or not the computation is reversible. However, the erasure of bits - in particular, errors - requires a minimum expenditure of energy. The rate at which the system can "reject errors to the environment" in the form of heat limits the rate of bit errors in the system; or, conversely, the rate of bit errors combined with the rate of heat transfer out of the system limits the rate of bit operations. Lloyd estimates the rate at which the system can reject error bits to the environment, relative to the surface area and assuming black-body radiation, as 7.195 x 10^42 bits per square meter per second.
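The minimum cost of erasure referred to here is Landauer's bound: erasing one bit must dissipate at least k_B T ln 2 of heat, where T is the temperature of the environment. At room temperature that works out to a tiny amount of energy per bit:

    import math

    k_B = 1.381e-23   # Boltzmann's constant, joules/kelvin
    T   = 300.0       # room temperature, kelvin

    landauer = k_B * T * math.log(2)   # minimum heat dissipated per erased bit
    print("minimum energy per erased bit: %.2e joules" % landauer)   # ~2.9e-21 J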

Computational limits of "smart dust"

Right around the same time that I read the "Ultimate Limits" paper, I also read A Deepness in the Sky by Vernor Vinge, one of many science fiction books featuring some form of "smart dust." Smart dust is the concept of tiny computing elements scattered around the environment which operate as a sort of low-powered distributed computer. The smart dust in Vinge's book had enough storage for an entire systems manual, which initially struck me as a ludicrously large amount of storage for something the size of a grain of dust. So I sat down and calculated the limits of storage and computation for a computer one μm^3 (one cubic micrometer) in size, under the constraint that its matter remain in the form of atoms (rather than plasma).

Lloyd calculates that, under these conditions, the ultimate laptop (one kilogram in one liter) can store about 10^25 bits and conduct 10^40 single-bit operations per second. The ultimate laptop is one liter and there are 10^15 μm^3 in a liter. Dividing the total storage and operations per second by 10^15 gives us 10^10 bits and 10^25 operations per second - about 1 gigabyte in data storage and so many FLOPS that the prefixes are meaningless. Basically, the computing potential of a piece of dust far exceeds the biggest supercomputer on the planet - sci-fi authors, go wild! Of course, none of these calculations take into account power delivery or I/O bandwidth, which may well turn out to be far more important limits on computation.
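The scaling in that paragraph is just division by the volume ratio; a quick sketch, using Lloyd's atoms-not-plasma numbers:

    # Lloyd's estimates for a 1 kg / 1 liter computer whose matter stays in
    # the form of atoms rather than plasma.
    laptop_bits = 1e25        # storage, bits
    laptop_ops  = 1e40        # single-bit operations per second

    um3_per_liter = 1e15      # one liter = 10^15 cubic micrometers

    dust_bits = laptop_bits / um3_per_liter   # ~10^10 bits, roughly a gigabyte
    dust_ops  = laptop_ops  / um3_per_liter   # ~10^25 single-bit ops per second

    print("dust grain storage: %.0e bits (about %.1f gigabytes)" % (dust_bits, dust_bits / 8e9))
    print("dust grain speed:   %.0e single-bit operations per second" % dust_ops)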

Implications of the ultimate laptop

Calculating the limits of the ultimate laptop has been a lot of fun, but what does it mean for computer science today? We know enough now to derive a theoretical upper bound for how long a generalized Moore's Law can remain in effect. Current laptops store 10^12 bits and conduct 10^12 single-bit operations per second. The ultimate laptop can store 10^31 bits and conduct 10^51 single-bit operations per second - a gap of a factor of 10^19 and 10^39, respectively. Lloyd estimates the rate of Moore's Law as a factor of 10^8 improvement in areal bit density over the past 50 years. Assuming that both storage density and computational speed will improve by a factor of 10^8 per 50 years, the limit will be reached in about 125 years for storage and about 250 years for operations per second. One imagines the final 125 years being spent frantically developing better compression algorithms - or advanced theoretical physics research.
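Restated as arithmetic, with Lloyd's factor-of-10^8-per-50-years rate (the sketch below rounds a little less aggressively than the figures quoted above, but lands in the same place):

    import math

    storage_gap = 1e19   # bits: ultimate laptop versus a 2008 laptop
    speed_gap   = 1e39   # single-bit ops/second: ultimate versus 2008

    orders_per_50_years = 8.0   # a factor of 10^8 improvement every 50 years

    print("storage limit reached in about %.0f years" % (math.log10(storage_gap) / orders_per_50_years * 50))
    print("speed limit reached in about %.0f years" % (math.log10(speed_gap) / orders_per_50_years * 50))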

Once Moore's Law comes to a halt, the only way to increase computing power will be to increase the mass and volume of the computer, which will also encounter fundamental limits. An unpublished paper entitled Universal Limits on Computation estimates that the entire computing capacity of the universe would be exhausted after only 600 years under Moore's Law.

250 years is a fascinating in-between length of time. It's too far away to be relevant to anyone alive today, but it's close enough that we can't entirely ignore it. Typical planning horizons for long-term human endeavors (like managing ecosystems) tend to max out around 300 years, so perhaps it's not unthinkable to begin planning for the end of Moore's Law. Me, I'm going to start work on the LZVH compression algorithm, tomorrow.

One thing is clear: we live in the Golden Age of computing. Let's make the most of it.

Valerie Henson is a Linux consultant specializing in file systems and owns a one kilo, one liter laptop.


Multi-system administration with Func

By Jake Edge
June 18, 2008

Managing multiple computer systems can involve a lot of repetitive tasks: connecting to each, performing some update, status check, or configuration tweak, and then moving on to the next machine. These kinds of things can be scripted of course, but scripts of that nature typically need to be adjusted frequently as machines come and go or the tasks change. The Fedora Unified Network Controller (Func) is a tool that will help simplify system administration, but there is more to it than that—it is a framework for doing two-way secure communication, from the command line, scripts, or applications.

Func is written in Python, providing an API for scripts written in that language, but it can also be used from the command line. Each client machine—minion in Func-speak—runs the funcd daemon which contacts the master server or overlord. From the overlord machine, commands can then be issued to individual minions or to subsets of them. Some of the power of Func can be seen in simple commands like:

    func "*" call service restart httpd
which will restart the web server on all of the minions.

Similar kinds of tasks—but with more control—can be handled through the Python API. A somewhat contrived example from the Func website gives a sense of what can be done:

    import func.overlord.client as fc

    results = fc.Client("*").service.status("httpd")
    for (host, returns) in results.iteritems():
        if returns == 0:
            fc.Client(host).reboot.reboot()
This example looks for minions that are running a web server and reboots each that it finds.

Managing keys can be a hassle when using ssh as an administrative tool, so Func uses another tool, Certmaster, to assist with keys. Certmaster provides a set of utilities and a Python API for managing SSL certificates. Clients generate certificate signing requests (CSRs) containing their public keys, which are sent to the Certmaster on the overlord. Administrators can either sign them from the command line or enable auto-signing. The minion then retrieves the signed certificate, and from then on the overlord and minion communicate over an encrypted channel.

Func is not meant to replace ssh; instead, it is intended to provide multi-system and scripting capabilities, which are not the strengths of ssh. Like ssh, though, Func is meant to be easy to deploy (eventually ubiquitous, at least for Fedora), simple to use, and easy to extend. It also has a pluggable architecture that allows Python modules to be integrated easily into Func, expanding the abilities of the minions. The documentation shows how to use the func-create-module command to generate template code, which allows the administrator to ignore the Func requirements and concentrate on the task at hand.
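To give a sense of how small such a module can be, here is a rough sketch of the shape of a minion module; the base class, attribute names, and import below are recalled from the Func documentation of the time and should be treated as illustrative rather than authoritative:

    # An illustrative sketch of a Func minion module, roughly the shape that
    # func-create-module generates; treat the names and import as approximate.
    import func_module

    class HelloWorld(func_module.FuncModule):
        version     = "0.0.1"
        api_version = "0.0.1"
        description = "A trivial example module for minions."

        def add(self, a, b):
            # Public methods become callable from the overlord, along the
            # lines of: func "*" call hello_world add 1 2
            return int(a) + int(b)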

There is nothing particularly Fedora-specific about Func; that's just where it was born. Efforts are underway to package it for other distributions. Most of the work would be in creating distribution-specific analogs for things like restarting services and querying hardware configurations.

Red Hat has been releasing a steady stream of system administration tools over the last year or so. The Emerging Technology (ET) group has developed quite an ecosystem of tools to support installations with large numbers of servers that are frequently installed and upgraded. One might think they have a large infrastructure of such servers.

One of those tools that is frequently discussed in conjunction with Func is Cobbler. It is meant to simplify the configuration of a server to handle network installation and booting for a large server farm. From the web page:

In short, Cobbler helps build and maintain network installation infrastructure really easily. It's highly customizable to your particular methods of operation through a wide variety of options, a powerful command line, a Web interface, a pluggable extension mechanism, and (for developers) its own Python API. Cobbler lets administrators forget how software gets installed and delivered and lets them concentrate instead on what they want to install where.

Cobbler and the other tools coming out of the ET group are not just targeted at physical machines, but also at virtualized environments. By using Cobbler, the Puppet configuration manager, and the oVirt virtual machine manager, thousands of systems of various kinds can be managed in a centralized fashion. As would be expected, all of the code is available as free software.

These tools are quite interesting for system administrators, particularly those who use Fedora and have lots of systems to maintain. Even for small home networks, though, Func at least could come in handy. For overworked administrators—no matter the size of their domain—better tools are always welcome.


Deki helps Mozilla developers collaborate

June 18, 2008

This article was contributed by Lisa Hoover

There was undoubtedly plenty of activity this week at the Mozilla Developer Center ahead of the release of Firefox 3. Thanks to a special tool created by the team at MindTouch and included in its latest product offering, Deki, Mozilla developers across the globe were able to view the site in their native tongues.

The "polyglot" language feature is only one of several components that make up Deki, an open source collaboration tool for communities and the enterprise. The polyglot can distinguish between different languages across a single system so it's no longer necessary for IT professionals to allocate sections of a web site's infrastructure to overcome language barriers. Instead, multiple languages are consolidated into one system and a site's pages are then localized according to user settings.

Deki functions much like a traditional wiki, but with more features and practical applications. In fact, the company originally called the product "Deki Wiki" but realized the name was too limiting and recently dropped "Wiki" altogether. Developers can use Deki as a way to organize and aggregate project data, share documents and media, or even build collaborative applications from the ground up. Groups and organizations also use Deki as a platform for managing a large knowledge base or coordinating team-based projects, or as a file repository.

Deki is part application, part platform. It behaves much the same way as other content management frameworks like Drupal and Joomla!, but has the underpinnings of a wiki that give it collaborative features as well. Furthermore, everything under Deki's hood can be accessed via the API on which it was built, and can be extended in any programming language.

At the heart of the platform is MindTouch Dream, which provides the application's architecture, with Deki as its interface. Dream is a .NET representational state transfer (REST) framework that runs on .NET 2.0 and Mono 1.2: .NET runs on Microsoft Windows Server 2003 and 2008, while Mono runs on Debian, Fedora, Ubuntu, openSUSE, and Mac OS X (see the web site for complete details). Data manipulation is done in XML using standard HTTP verbs, and data conversions to PHP, JSONP, and other formats are done automatically behind the scenes. Licensed under the GNU GPL and LGPL, Deki and Dream together can be completely customized and scaled to the needs of any size of organization.
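As a concrete, if hypothetical, illustration of what "standard HTTP verbs" means in practice: reading a page's content is just an HTTP GET. The host and page title below are made up, and the endpoint layout is only an approximation of how the Deki API was commonly addressed, so check the MindTouch documentation rather than taking this sketch as authoritative:

    # Hypothetical example: read a page from a Deki install over its REST API.
    # The host, page title, and endpoint path here are illustrative only, and
    # authentication (needed for non-public pages) is omitted.
    import urllib2

    host = "wiki.example.com"
    url  = "http://%s/@api/deki/pages/=Sandbox/contents" % host

    response = urllib2.urlopen(url)   # a plain GET; the reply is XML
    print(response.read())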

Company co-founders Aaron Fulkerson and Steve Bjorg were approached last winter by Mozilla's Chief Evangelist Mike Shaver about implementing Deki in time for the upcoming re-launch of its Developer Center. "Mike had reviewed our API and architectural documentation and he was enthusiastic about MindTouch Deki," recalls Fulkerson. "Later on the phone, we discussed Mozilla's needs, pains, and how MindTouch Deki seemed to be the perfect solution. We also day-dreamed a little about what the Mozilla community might build on the MindTouch platform. By my recollection, we both were pretty excited about the opportunity."

Given the Developer Center's wide geographical reach, barriers were to be expected as it struggled to cater to a group that collectively spoke dozens of different languages. In response, Bjorg and Fulkerson put together a design that allows for a multi-lingual Web site that scales as needed. As Mozilla's needs grow, additional languages can easily be added by translating a single file and submitting it for inclusion in the official Deki build. In fact, all current translations have come from the community, and more are on the way.

Deki isn't just for large organizations. Development platform-as-a-service provider Bungee Connect uses it as a documentation repository at the moment, but according to the Director of Bungee Connect's Developer Community, Ted Haeger, the plan is to soon make it the community platform for its Developer Network. "Our developers are very interested in programmable Web technologies, and Deki will allow us to provide them the most feature-complete wiki API we have seen yet. We expect to see some interesting and exciting things built by combining Bungee Connect and MindTouch Deki," he says.

The decision to choose Deki over other similar options "was driven overwhelmingly by the architecture of the product. Because Deki provides a complete RESTful API, it makes it an extremely attractive offering for us," notes Haeger.

Indeed, he considers the API Deki's best feature. "MindTouch has done an outstanding job with it," Haeger says. "Additionally, they have written their PHP front-end to the Deki API, which means that the API is central to the product rather than an afterthought. However, we should note that Deki's default PHP user interface is extremely polished, too. That combined with other must-haves, such as a permissions system that is considerably more flexible than what other wikis provide, helped solidify our decision."

Though there are varying levels of support available, Haeger says Bungee Connect hasn't yet decided which to choose. They do plan, however, to lean on MindTouch for assistance as they migrate company documentation from MediaWiki to Deki. For organizations planning to take on the task themselves, Fulkerson points to the helpful guide on the MindTouch site and the MediaWiki-to-Deki converter the company has written: "As we always have done, we've released the source code to our public SVN repository. It's stable and has had generous test coverage, but this should be considered a beta release."

As Deki continues to gain traction in the enterprise as an agile content management system, Fulkerson and Bjorg say they knew they were on to something when they caught wind of the first user-organized conference held in Belgium last fall. Notes Fulkerson, "This was a pretty clear indication people liked what we're doing."


Page editor: Jonathan Corbet


Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds