From: Peter Braam <email@example.com>
To: firstname.lastname@example.org, email@example.com
Date: Wed, 12 Mar 2003 10:56:25 -0700
Subject: [ANNOUNCE] Lustre Lite 0.6 released (1.0 beta 1)
We're pleased to announce that the first Lustre Lite beta (0.6) has
been tagged and released. Seven months have passed since our last
major release, and Lustre Lite is quickly approaching the goal of
being stable, consistent, and fast on clusters up to 1,000 nodes.
Over the last few months we've spent thousands of hours improving and
testing the file system, and now it's ready for a wider audience of
early adopters: in particular, users on ia32 and ia64 Linux systems
running 2.4.19 and Red Hat 2.4.18-based kernels. Lustre may work on
other Linux platforms, but it has not been extensively tested there
and may require some additional porting effort.
We expect that you will find many bugs that we are unable to provoke
in our testing, and we hope that you will take the time to report them
to our bug system (see Reporting Bugs below).
Lustre Lite 0.6:
- has been tested extensively on ia32 and ia64 Linux platforms
- supports TCP/IP and Quadrics Elan3 interconnects
- supports multiple Object Storage Targets (OSTs) for file data storage
- supports multiple Metadata Servers (MDSs) in an active/passive
failover configuration (requires shared storage between MDS nodes).
Automatic failover requires an external failover package such as
Red Hat's clumanager.
- provides a nearly POSIX-compliant filesystem interface (some areas
remain non-compliant; for example, we do not synchronize atimes)
- aims to recover from any single failure without loss of data or
  availability
- scales well; we have tested with as many as 1,100 clients and 128 OSTs
- is Free Software, released under the terms and conditions of the GNU
General Public License
As with any beta software, but especially kernel modules, Lustre Lite
carries the real risk of data loss or system crashes. It is very
likely that users will test situations which we have not, and provoke
bugs which crash the system. We must insist that you
BACKUP YOUR DATA
prior to installing Lustre, and that you understand that
we make NO GUARANTEES about Lustre.
Please read the COPYING file included with the distribution for more
information about the licensing of Lustre.
Although Lustre is for the most part stable, there are some known bugs
with this current version that you should be particularly aware of:
- Some high-load situations involving multiple clients have been known
to provoke a client crash in the lock manager (bug 984)
- Failover support is incomplete; some access patterns will not
  recover
- Recovery does not gracefully handle multiple services present on the
  same node
- Failures can lead to unrecoverable states, which require the system
  to be unmounted and remounted (and, in some cases, nodes may require
  a reboot)
- Unmounting a client while an MDS is failed may hang the "umount"
command, which will need to be "kill"ed manually (bug 978)
- Metadata recovery will time out and abort if there are clients which
hold uncommitted requests, but which do not detect the death and
failover of the MDS. Running a metadata operation on quiescent
clients will cause them to join recovery. (bug 957)
The Lustre web site at <http://www.lustre.org/> contains
instructions for downloading, building, configuring, and running
Lustre. If you encounter problems, you can seek help from others in
the lustre-discuss mailing list (see below).
We are eager to hear about new bugs, especially if you can tell us how
to reproduce them. Please visit <http://bugzilla.lustre.org/> to
file a bug report. The closer you can come to the ideal described in
<https://projects.clusterfs.com/lustre/BugFiling>, the better.
See <http://www.lustre.org/lists.html> for links to the various Lustre
mailing lists.
The US government has funded much of the Lustre effort through the
National Laboratories. In addition to money they have provided
invaluable experience and fantastic help with testing both in terms of
equipment and people. We thank them all, but in particular Mark
Seager, Bill Boas, and Terry Heidelberg's team at LLNL, who went far
beyond the call of duty; Lee Ward (Sandia); Gary Grider (LANL); and
Scott Studham (PNNL). We have also had the benefit of partnerships
with UCSC, HP, Intel, BlueArc and DDN and we are grateful to them.
Thank you for your interest in and testing of Lustre. We appreciate
your effort, patience, and bug reports as we work towards Lustre Lite
1.0.
The Cluster File Systems team
Peter J. Braam <firstname.lastname@example.org>
Phil Schwan <email@example.com>
Andreas Dilger <firstname.lastname@example.org>
Robert Read <email@example.com>
Eric Barton <firstname.lastname@example.org>
Radhika Vullikanti <email@example.com>
Mike Shaver <firstname.lastname@example.org>
Eric Mei <email@example.com>
Zach Brown <firstname.lastname@example.org>
Chris Cooper <email@example.com>
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to firstname.lastname@example.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/