In 2003, Red Hat announced
that it was acquiring Sistina, and that it would work to release Sistina's
current technologies as open source in 2004. Red Hat made good on that
promise on June 24 by releasing the Global File
System under the GPL. The
Global File System (GFS) has a fairly long and interesting history. According
to the OpenGFS website,
the GFS project started at the University of Minnesota and was sponsored from
1995-2000 by the University. Then Matthew O'Keefe, a professor at the
university, founded Sistina around GFS.
Sistina stopped making new versions of GFS available under the GPL in
2001. It's important to note that it's inaccurate to say (as many have)
that GFS has been "re-released" under the GPL -- the original code that was
available under the GPL remained available under the GPL. Sistina simply
quit putting out new releases under the GPL, but users still had the option
of using and working with releases prior to Sistina's license change, as
did the OpenGFS project.
The release put out by Red Hat last week actually consists of more than
just GFS the file system; it totals nine components in all. In addition to
GFS itself, Red Hat has released the clustering extensions to the Logical
Volume Manager 2 (LVM2). Also, Red Hat has released clustering
infrastructure tools and cluster block devices that work with GFS: the
Cluster Configuration System (CCS), Cluster Manager (CMAN), Distributed
Lock Manager (DLM), GFS Unified Lock Manager (GULM), the Fence I/O fencing
system, the Global Network Block Device (GNBD) and the Cluster Snapshot
Block Device (CSBD).
Linux has no shortage of filesystems to choose from, but GFS is quite a bit
different from Ext3, ReiserFS and other popular file systems being used
with Linux today. The GFS release probably isn't that interesting for users
with a single Linux workstation or for small installations of Linux systems
that don't require a great deal of filesystem sharing or redundancy. For
Linux shops that have deployed or plan to deploy Linux in a clustering
capacity, or that use a Storage Area Network (SAN) to share filesystems among
servers, however, GFS is a very interesting technology.
GFS allows Linux servers to share a single file system on a block device
via Fibre Channel, iSCSI, NBD or other technologies; it allows those servers
to read from that file system simultaneously and coordinates their writes to
the filesystem so that data is not overwritten. Changes to the filesystem made
by one server are immediately available to other servers. GFS is different
from the Network File System (NFS) in that it removes the requirement for
clients to access storage devices through an NFS server. Cutting out that
intermediary removes some of the overhead of working with data and eliminates
the NFS server as a single point of failure, making GFS more robust. One can
use
the two technologies in conjunction with one another, using GFS to give a
set of servers access to a filesystem stored on a set of fiber channel
drives (for example) and then exporting the filesystem to clients via NFS.
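To make that more concrete, here is a minimal sketch of what setting up such a
configuration might look like, assuming DLM-based locking, a hypothetical
cluster named "alpha", a shared logical volume at /dev/vg_san/lv_shared and a
three-node cluster; all of those names are illustrative, and the cluster
infrastructure itself must already be configured and running:

    # On one node: create the GFS filesystem on the shared block device,
    # with one journal per node (three here) and DLM-based locking.
    gfs_mkfs -p lock_dlm -t alpha:shared_data -j 3 /dev/vg_san/lv_shared

    # On every node in the cluster: mount the shared filesystem.
    mount -t gfs /dev/vg_san/lv_shared /mnt/shared

    # Optionally, re-export the mounted filesystem to ordinary clients over NFS.
    echo '/mnt/shared 192.168.1.0/24(rw,sync)' >> /etc/exports
    exportfs -ra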
GFS is highly scalable, which means that hundreds of systems can share a
filesystem on a SAN. In addition, as one might expect, file system and
volume resizes can be performed while the system is running -- which means
that enterprise systems don't need to be brought down for filesystem
maintenance when a deployment starts to require more space. The file
servers themselves can be clustered to provide high availability,
redundancy and increased performance. Just what the doctor ordered for a
database cluster, enterprise file servers, large e-mail installations and
many other applications.
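As a rough illustration of that kind of online growth (using the same
hypothetical volume and mount point names as above), extending the underlying
clustered logical volume and then growing the mounted filesystem might look
like this:

    # Extend the shared logical volume by 100GB.
    lvextend -L +100G /dev/vg_san/lv_shared

    # Grow the GFS filesystem into the new space while it stays mounted.
    gfs_grow /mnt/shared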
For those interested in trying out GFS, source
RPMs are available for Red Hat Enterprise Linux 3, CVS
snapshots are available, and enterprising Fedora user Lennert Buytenhek
has already whipped
up FC2 RPMs of GFS and the necessary tools. Packages are no doubt being
prepared for other popular Linux distributions as well. Instructions on using GFS
can be found here.
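Rebuilding binary packages from those source RPMs should be a matter of the
usual rpmbuild dance, something along these lines (the exact package names,
output directory and matching kernel packages will vary):

    # Rebuild binary RPMs from the released source packages.
    rpmbuild --rebuild GFS-*.src.rpm

    # Install the resulting packages; on RHEL 3 they land under
    # /usr/src/redhat/RPMS/<arch>/ by default.
    rpm -ivh /usr/src/redhat/RPMS/i686/GFS-*.rpm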
Of course, RHEL users still have the option of buying GFS for a mere $2200.
The GFS team is now working to put GFS into the mainline Linux kernel. It
shouldn't be terribly difficult for a project this useful to find a healthy
community of users to apply whatever elbow grease is necessary to make that
happen.