Leading items
Adventures in Linux gaming
It has been an interesting week in the world of Linux games—really in the intersection of Linux and commercial games. First was the announcement of the release of the source code that underlies the Ryzom massively multiplayer online role-playing game (MMORPG). Second was word that the Humble Indie Bundle, a collection of cross-platform games being sold using a novel method, generated over $1 million in a week's time, with roughly a quarter of that coming from Linux users. It has long been said that there is no market for commercial Linux games, but these two events may shine a light on business models that just might be successful.
Humble or successful?
The basic idea behind the Humble Indie Bundle is to take five (eventually six) games developed outside of the major game studios ("indie"), package them together, and allow the customer to set the price. All of the games (World of Goo, Aquaria, Gish, Lugaru HD, Penumbra Overture, and Samorost 2—the last of which was donated to the bundle a few days later) are DRM-free: "Feel free to play them without an internet connection, back them up, and install them on all of your Macs and PCs freely." They are cross-platform for Linux, MacOS X, and Windows as well. But sponsor Wolfire Games and the other game creators took it a step further and split the proceeds with two charities.
By default, whatever price is chosen will be split seven ways (five games plus two charities), but the buyer can change the allocation any way they choose. The two charities are Child's Play, which provides toys, games, and books for children in hospitals, and the Electronic Frontier Foundation (EFF). Assuming an even split, each organization and game developer has brought in more than $150,000 since the promotion started on May 4.
Linux buyers account for around 14% of the purchases, but, interestingly, account for 23% of revenue as reported on May 7. Wolfire Games has been a strong advocate of cross-platform games, as it believes there is money to be made from Mac and Linux games. While the success of the bundle may not be repeatable exactly, it should give hope to game developers that there is money out there for cross-platform games, and to players on non-Windows platforms that there will be more games available.
A quick look at two of the games showed them to be fairly interesting, certainly worth looking into further when some grumpy guy isn't yammering on about some sort of deadline. One of the two, Lugaru, has been released as free software under the GPLv2. Anyone lacking an "anthropomorphic rebel bunny rabbit with impressive combat skills" in their life is encouraged to check out the source or the game itself.
Ryzom
The Ryzom MMORPG has a history of almost becoming open source, starting back in 2006, when the Free Ryzom Campaign tried to buy the assets of the original developer, Nevrax, which had fallen into bankruptcy. Then, in 2008, it looked like there might be another opportunity to acquire Ryzom via bankruptcy proceedings, but that didn't happen either. But on May 6, the current owner, Winch Gate Properties Ltd., announced that the server and client code, along with thousands of textures and 3D objects, were being released under the Affero GPLv3 (code) and Creative Commons Attribution-ShareAlike (artwork and objects) licenses.
According to Winch Gate CTO Vianney Lecroart, after acquiring Ryzom, the company first focused on getting it up and running: "We just had 30 hard drives and we had to scan all them, buy servers, configure [them], reconnect everything, it was very hard and long process." At first, Ryzom was free to play while Winch Gate got the billing system working; the game then switched back to a "pay to play" model. After that, the company spent some time making things more stable, reworking the "starting island to make it easier to understand" and adding the Kitin's Lair area for more experienced players, he said.
The reason it is being open sourced now, Lecroart said, is that "we wanted to focus first on players"; now that is done, so the company could turn to freeing the code. He continued:
In addition, in just the week since the release, patches have been submitted, which Winch Gate applied "as fast as we can". The roadmap on the development portal shows a release expected in July that will concentrate on build tools and packaging, and another in November that will focus on getting the current Windows-only client working on Linux and MacOS X. The current client will run under Wine, and the roadmap mentions a Linux native version that has been compiled and "works".
None of the Ryzom world data is part of the release, so those who want to run their own server—already available for Linux—will need to create their own world. Existing players could be harmed by the release of the world data as it would give others a potential leg up on the locations of interesting places or, more importantly, loot. There might also be a "spoiler" effect that could take away much of the fun of playing the game. But lack of world data does make it rather difficult to get started. Another problem is that the world building tools are all Windows-only and, because they use Windows-specific libraries and APIs, will be difficult to port. Currently the roadmap shows those being available as web-based tools in June 2011.
Winch Gate has put up a small instance of the Ryzom server, OpenShard, which is free to "connect, tweak, and hack [on]", Lecroart said. In addition, the current state page lists various community members who have the server up and running. "It's now up to them to add some content or do what they want on their server", he said.
The Free Software Foundation, which had pledged $60,000 to the original Free Ryzom effort, applauded the release and suggested ways that free software developers could get involved. The 13GB of textures and 3D objects was of particular interest because they "can be adapted and used in other games". In addition, the FSF suggests that making Blender and other free software 3D modeling tools work with the Ryzom engine would be a worthwhile effort.
The "Help Us" page does not mention any kind of copyright assignment being required, nor does the Developer FAQ. Given the history of Ryzom—bouncing around from company to company, typically via bankruptcy—it's good to see that there won't be any organization that can make a proprietary fork. The AGPL also ensures that anyone using the engine to provide a service—game world—is required to release their code changes back to the community.
Linux and games
It is clear that Winch Gate hopes to gain some publicity—and Ryzom players—by freeing its code. It also seems like it is genuinely interested in what the community will do with the code, artwork, and objects. One would have to guess that the Ryzom player community is fairly small, given the various upheavals along the way, so the risk to Winch Gate is quite low. In the meantime, the community gets a chance to play with a professional MMORPG engine; it's anyone's guess where that will lead. Perhaps Winch Gate is hoping someday to run contract servers for a game world created by the community.
The Humble Indie Bundle has certainly raised the profile of Wolfire and the games that were included. World of Goo has made something of a name for itself in the Linux world—perhaps partially because Ted Ts'o mentioned it during the ext4 delayed allocation mess—but the others were flying under the radar. No more. It will be interesting to see where that leads as well.
What's new in GCC 4.5?
Version 4.5 of the GNU Compiler Collection was released in mid-April with many changes under the hood, as well as a few important user-visible features. GCC 4.5 promises faster programs using the new link-time optimization (LTO) option, easier implementation of compiler extensions thanks to the controversial plugin infrastructure, stricter standards conformance for floating-point computations, and better debugging information when compiling with optimizations.
The GNU Compiler Collection is one of the oldest free software projects still around. Version 1.0 of GCC was released in 1987. More than twenty years later, GCC is still under active development, and each new version adds important features. Supporting these new features in such an old codebase often requires major rewriting of substantial parts of GCC. GCC 4.0 was an important milestone in this regard, and GCC internals are still evolving at a rapid pace. However, these core improvements are sometimes not clearly visible as improvements for users. That is not the case with GCC 4.5. This article describes four new features in GCC 4.5, and also looks at an internal change that may radically alter how GCC is developed in the future.
Link-Time Optimization
Perhaps the most visible of the new features in GCC 4.5 is the Link-Time Optimization option: -flto. When source files are compiled and linked using -flto, GCC applies optimizations as if all the source code were in a single file. This allows GCC to perform more aggressive optimizations across files, such as inlining the body of a function from one file that is called from a different file, and propagating constants across files. In general, the LTO framework enables all the usual optimizations that work at a higher level than a single function to also work across files that are independently compiled.
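As a rough sketch of what this enables (the file and function names here are invented for illustration), consider a trivial function defined in one file and called from another:

/* foo.c */
int answer(void)
{
    return 42;
}

/* bar.c */
int answer(void);

int main(void)
{
    /* With -flto, GCC can inline answer() here and fold the
       whole expression to a constant, even though the function
       body lives in a separately compiled file. */
    return answer() * 2;
}

Without LTO, the call to answer() cannot be inlined, because the compiler never sees both files at once.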
The LTO option works almost like any other optimization flag. First, one needs to enable optimization (using one of the -O{1,2,3,s} options). In cases where compilation and linking are done in a single step, adding the option -flto is sufficient:
gcc -o myprog -flto -O2 foo.c bar.c
This effectively deprecates the old -combine option, which was too slow in practice and supported only for C.
With independent compilation steps, the option -flto must be specified at all steps of the process:
gcc -c -O2 -flto foo.c
gcc -c -O2 -flto bar.c
gcc -o myprog -flto -O2 foo.o bar.o
An interesting possibility is to combine the options -flto and -fwhole-program. The latter assumes that the current compilation unit represents the whole program being compiled; this means that most functions and variables are optimized more aggressively. Adding -fwhole-program to the final link step in the example above makes LTO even more powerful.
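Using the same (hypothetical) files as before, that would look like:

gcc -c -O2 -flto foo.c
gcc -c -O2 -flto bar.c
gcc -o myprog -flto -fwhole-program -O2 foo.o bar.o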
When using multiple steps, it is strongly recommended to use exactly the same optimization and machine-dependent options in all commands, because conflicting options at compilation and link time may lead to strange errors. In the best case, the options used during compilation will be silently overridden by those used at link time. In the worst case, the different options may introduce subtle inconsistencies leading to unpredictable results at runtime. This, of course, is far from ideal; hence, in the next minor release, GCC will identify such conflicting options and provide appropriate diagnostics. Meanwhile, some extra care should be taken when using LTO.
The current implementation of LTO is only available for ELF targets; hence, LTO is not available on Windows or Darwin in GCC 4.5. However, the LTO framework is flexible enough to support those targets and, in fact, Dave Korn has recently proposed a patch that adds LTO support for Windows to GCC 4.5.1 and 4.6, and Steven Bosscher has done the same for Darwin.
Finally, another interesting ongoing project, called whole program optimization [PDF], aims to make LTO much more scalable for very large programs (on the order of millions of functions). Currently, when compiling and linking with LTO, the final step stores information from all files involved in the compilation in memory. This approach does not scale well if there are many large files. In practice, there may be little interaction between some files, so the required information could be partitioned and the pieces optimized independently, with little performance loss, or at least with the effectiveness of LTO degrading gracefully depending on the available resources. The experimental -fwhopr option is a first step in this direction, but this feature is still under development and even the name of the option is likely to change. GCC 4.6 will probably bring further improvements in this area.
Plugins
Another long-awaited feature is the ability to load user code as plugins that modify the behavior of GCC. A substantial amount of controversy surrounded the implementation of plugins; the possibility of proprietary plugins was probably the main factor stalling the development of this feature. However, the FSF recently reworked the Runtime Library Exception in order to prevent proprietary plugins. With the new Runtime Library Exception in place, the development of the plugin framework progressed rapidly. That did not completely end the controversy surrounding plugins, though: while some developers think that plugins are essential for the future of GCC and for attracting new users and contributors, others fear that plugins may divert effort from improving GCC itself.
The plugin framework of GCC can work, in principle, on any system that supports dynamic libraries. In GCC 4.5, however, plugins are only supported on ELF-based platforms, that is, most Unix-like systems, but not Windows or Darwin. A plugin is loaded with the new option -fplugin=/path/to/file.so. GCC makes available a series of events for which the plugin code can register its own callback functions. The events already implemented in GCC 4.5 allow plugins to interact with the pass manager to add, reorder, and remove optimization passes dynamically; modify the low-level representation used by the C and C++ front ends; add new custom attributes and compiler pragmas; and more, as described in the internal documentation.
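To give a feel for the interface, here is a minimal sketch of a plugin (the callback and file names are invented; the entry points are the ones described in the plugin documentation). Every plugin must export a GPL-compatibility marker and an initialization function that registers callbacks for the events it cares about:

/* hello-plugin.c: build as a shared object, then load it with
   -fplugin=./hello-plugin.so */
#include "gcc-plugin.h"

/* GCC refuses to load plugins that lack this symbol. */
int plugin_is_GPL_compatible;

/* Invented callback: runs when compilation is about to finish. */
static void on_finish(void *gcc_data, void *user_data)
{
    /* A real plugin would inspect or modify compiler state here. */
}

int plugin_init(struct plugin_name_args *plugin_info,
                struct plugin_gcc_version *version)
{
    register_callback(plugin_info->base_name, PLUGIN_FINISH,
                      on_finish, NULL /* user_data */);
    return 0;  /* non-zero would signal an initialization failure */
}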
Despite plugins being a new feature in GCC 4.5, several projects are already making use of the plugin support. Among them are Dehydra, a static analysis tool for C++ developed by Mozilla, and MELT, a framework for writing optimization passes in a dialect of Lisp. Also, the ICI/MILEPOST research project relies heavily on the new plugin framework in GCC 4.5.
Variable Tracking at Assignments
The Variable Tracking at Assignments (VTA) project aims to improve debug information when optimizations are enabled. When GCC compiles some code with optimizations enabled, variables are renamed, moved around, or even completely removed. When debugging such code and trying to inspect the value of some variable, the debugger would often report that the variable has been optimized out. With VTA enabled, the optimized code is internally annotated in such a way that optimization passes transparently keep track of the value of each variable, even if the variable is moved around or removed.
A small example of the differences between debug information in GCC 4.5 and previous releases is the following program:
typedef struct list {
    struct list *n;
    int v;
} *node;

node find_prev (node c, node w)
{
    while (c) {
        node opt = c;
        c = c->n;
        if (c == w)
            return opt;
    }
    return NULL;
}
Variable opt is removed when compiling with optimization. Hence, in previous GCC versions, or when compiling without VTA, one cannot inspect the value of opt even at the highest debugging level. In GCC 4.5, however, VTA enables inspection of the value of all variables at all points of the function.
The effect of VTA is even more noticeable for inlined functions. Before VTA, optimizations would often completely remove some arguments of an inlined function, making it impossible to inspect their values when debugging. With VTA, these optimizations still take place; however, appropriate debug information is generated for the missing arguments.
Finally, the VTA project has brought another feature, the new -fcompare-debug option, which tests that the code generated by GCC with and without debug information is identical. This option is mainly used by GCC developers to test the compiler, but it may be useful for users to check that their program is not affected by a bug in GCC, though at a significant cost in compilation time.
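As a usage sketch (foo.c stands in for any source file), the option is simply added to an ordinary optimized build; GCC then compiles the file a second time with different debug settings and reports an error if the two compilations do not produce identical code:

gcc -c -O2 -fcompare-debug foo.c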
Standards-conforming excess precision
Perhaps the most-reported bug in GCC is bug 323. The symptoms appear when different optimization levels produce different results in floating-point computations, or when two ways of performing the same calculation do not produce the same result. Although this is an inherent limitation of floating-point numbers, users are still surprised that different optimization levels lead to noticeably different results. One of the main culprits is the excess precision arising from the use of the x87 floating-point unit (FPU): operations performed in the FPU have more precision than the double-precision numbers stored in memory. Hence, the final result of a computation may depend significantly on whether intermediate operations are stored in the FPU or in memory.
This leads to some unexpected and counter-intuitive results. For example, the same piece of code may produce different results with the same compilation flags on the same machine, depending on changes to seemingly unrelated code, because the unrelated code forces the compiler to save some intermediate result in memory instead of keeping it in an FPU register. One workaround for this behavior is the option -ffloat-store, which stores every floating-point variable in memory. This has, however, a significant cost in computation time. A more fine-grained workaround is to use the volatile qualifier on the variables suffering from this problem.
While this problem will never be fully solved on computers with inexact representations of floating-point numbers, GCC 4.5 helps improve the situation by adding a new option, -fexcess-precision=standard, currently only available for C, that handles floating-point excess precision in a way that conforms to ISO C99. This option is also enabled by standards-conformance options such as -std=c99. However, standards-conforming precision incurs an extra cost in computation time, so users more interested in speed may wish to disable this behavior using the option -fexcess-precision=fast.
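A minimal sketch of the symptom (the values are chosen arbitrarily, and the exact behavior depends on the target and options): on x87, the comparison below can fail without -fexcess-precision=standard or -ffloat-store, because one operand has been rounded to a 64-bit double in memory while the other may still carry 80-bit excess precision.

#include <stdio.h>

int main(void)
{
    /* volatile keeps the divisions from being folded away
       at compile time */
    volatile double x = 1.0, y = 3.0;
    double a = x / y;   /* rounded to double when stored */

    /* The right-hand side may be computed in an 80-bit x87
       register, so the two "identical" expressions can differ. */
    if (a != x / y)
        printf("excess precision observed\n");
    else
        printf("results match\n");
    return 0;
}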
C++ compatible
GCC 4.5 is the first release of GCC that can be compiled with a C++ compiler. This may not seem very interesting or useful at the moment (but take a look at the much improved -Wc++-compat option). However, it is only the first step of an ongoing project to use C++ as the implementation language of GCC. Except for some front-end bits written in other languages, notably Ada, most of GCC is implemented in C. The internal structures of GCC are undergoing continuous improvement and modularization aimed at creating cleaner interfaces, and many GCC developers think that this work would be easier in C++ than in C. However, the proposal is not free of controversy, and it is not clear whether the switch will occur in GCC 4.6, later, or ever.
Other improvements
The above are only some examples of the many improvements and new features in GCC 4.5. A few other features are worth mentioning:
- GCC now makes better use of the information provided by the restrict keyword, which is also supported in C++ as an extension, to generate better-optimized code (a small example appears after this list).
- The libstdc++ profile mode tries to identify suboptimal uses of the standard C++ library, and suggests alternatives that improve performance.
- Previous versions of GCC incorporated the MPFR library in order to consistently evaluate math functions with constant arguments at compile time. GCC 4.5 extends this feature to complex math functions by incorporating the MPC library.
- Many improvements have been made in the specific language front ends, in particular from the very active Fortran front-end project. Also worth mentioning is the increasing support for the upcoming ISO C++ standard (C++0x).
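On the restrict front, a small hedged sketch (function and parameter names are invented): when both pointers are qualified with restrict, GCC may assume that they never alias, so the load of *src can be hoisted out of the loop.

/* Without restrict, dst[i] might alias *src, forcing a
   reload of *src on every iteration. */
void scale(float *restrict dst, const float *restrict src, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = *src * 2.0f;
}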
Conclusion
We are living in interesting times on the compiler front, and GCC 4.5 is an indication that we can still expect new developments in the future. The release of GCC 4.5 brings its users several important, and somewhat controversial, features. It also includes the typical long list of small fixes and improvements, in which most users will be able to find at least one thing to their liking. GCC 4.5 may well be a transition point, where the foundational work done during the 4.x release series is starting to show up in user-visible features that would have been impossible in the GCC 3.x release series. It is difficult to say at this moment what GCC 4.6 will bring us a year from now, as that will depend on what the contributors decide. Anyone can contribute to the future of GCC. This is free software, after all.
Acknowledgments
I would like to thank in general the community of GCC developers, and in particular, Ian Lance Taylor, Diego Novillo, and Alexandre Oliva, for their helpful comments and suggestions when writing this article.
Of hall monitors and slippery slopes
Since its inception in July of 2009, the Fedora Hall Monitor policy has had mixed reviews. The intent of the policy is to promote more civil discourse on various Fedora mailing lists—to embody the "be excellent to each other" motto that is supposed to govern project members' behavior. Questions were raised about the recent "hall monitoring" of a thread on fedora-devel, because, instead of the usual reasons for stopping a thread—personal attacks, profanity, threats of violence, and the like—it was stopped, essentially, for going on too long.
Kevin Kofler's open letter about why he was not going to run again for a seat on the Fedora Engineering Steering Committee (FESCo) was the starting point of the problem thread. But the focus of the discussion was mostly on the update process for Fedora, something which has been roiling the Fedora waters for several months now. Kofler strongly believes that the proposals requiring more "karma"—votes in favor, essentially—in the bodhi package management system before pushing out updates are simply bureaucratic in nature and won't prevent problems with updates. Other FESCo members, apparently the vast majority of them, disagree. As FESCo member Matthew Garrett put it:
But Kofler believes that package maintainers should be able to make these decisions, without hard and fast testing requirements imposed by FESCo or the Fedora Packaging Committee (FPC). Kofler and others are quite happy with the status quo, whereas other community members—both on FESCo and not—see that problems with updates are giving the project something of a black eye. Kofler is adamant in his response to Garrett:
Most of these arguments are familiar to those who follow fedora-devel. The participants in the discussion are often the same and the positions they take are fairly predictable. But the content was on-topic and the discourse wasn't descending into personal attacks or insults, so it was something of a surprise to many when hall monitor Seth Vidal stepped in and closed the thread:
No further posts to this thread will be allowed.
The last line turned out to be somewhat premature, as the thread continued, though it switched focus to the hall monitors' decision. Toshio Kuratomi asked how the Hall Monitor policy—which is undergoing some changes as a result of this issue—could be applied to redundant threads:
Vidal quoted a blanket provision in the policy that allows thread-closure posts for "aggressive or problematic mailing list threads" as the reason the action was taken. That didn't sit well with a number of folks. Kofler complained: "This vague paragraph can be abused to justify censoring pretty much everything." Adam Williamson had a more detailed analysis:
At least, that's how I always assumed it was intended when the policy came in, and I'm not at all sure I'm okay with a policy which says 'hall monitors can shut down any discussion they choose for any reason they like'.
Evidently, three users and two hall monitors had complained about the thread, which was enough to constitute "repeated complaints". But, because the topic had (mostly) shifted away from the update process and into things like hall monitoring and Fedora's "purpose" (or goal, i.e. "what is Fedora for?"), it was allowed to continue. In the end, the "thread closure" led to roughly doubling the size of a thread that may—or may not—have been winding down on its own.
In a post to fedora-advisory-board, Kuratomi requested that the board look into the issue with an eye toward clarifying the policy. He suggested three ways to resolve the issue: restricting the hall monitors' remit to just insults and personal attacks, specifically calling out redundant threads as an area for the hall monitors to police, or allowing thread closures based on the number of complaints received. Kuratomi is in favor of the first option, "as the others are taking us too far into the realm of giving a few people the power to decide what is and is not useful communication."
At its May 6 meeting, the board did discuss the issue. While it is clear that several board members are not in favor of having hall monitors, and were surprised when this particular thread was "clipped off", as Mike McGrath put it, there is more to the problem than just the policy. At its core, the problem is that Fedora is still struggling with its identity.
Some community members would like to see Fedora be a well-polished desktop distribution that gets released every six months and is relatively stable from there—a la Ubuntu. Others see Fedora as a refuge for those who don't like the Ubuntu approach, want to get frequent package updates, and live closer to the "bleeding edge". It is, at the very least, difficult for one distribution to support both of those models, but in some sense that is what Fedora is currently trying to do.
Because the project hasn't made a firm commitment to a particular direction, at least one to the exclusion of the other, there are advocates on both sides who are trying hard to pull the distribution in the direction they want. Kofler is loudly, and repetitively, making his case that Fedora will lose a sizable chunk of its users and contributors if it becomes more conservative about updates. Others argue that update woes are driving users and contributors away.
McGrath is firmly in the camp that believes Fedora should first decide what it is and what its goals are, and then ask those who are "chronically unhappy" with that direction to leave the project. That would lead to less contentious mailing list threads, among other things. It's a hard problem, he said, and "we don't want everyone who's unhappy with Fedora to leave".
In a discussion that lasted for more than an hour, the board looked at various facets of the problem, but hall monitor Josh Boyer brought it back to the particular thread in question. He asked if there was "anyone on the Board that thinks the recent hall monitor action was inappropriate". Matt Domsch and McGrath were both surprised at the action, while John Poelstra was not, and the rest of the board was non-committal. No one said that they found the action inappropriate, but Domsch suggested that the board recommend "that hall monitors provide additional latitude to long threads that may be redundant, but that aren't violent".
Poelstra wanted to see some "overall objectives for having this policy" added to the policy document as well. Both he and Domsch took action items to edit the policy for board approval at its next meeting on May 13. The changes that were made seem much in keeping with what the board members were saying, so it seems likely that the board will approve them.
Seemingly arbitrary thread closures are clearly a concern to some in the community. Trying to determine which threads are "making progress" versus those that are just repetitive is difficult—and extremely likely to be contentious. While the goals of the hall monitor policy are generally good, it isn't clear that making decisions on specific threads, to try to stop discussions from getting "out of hand", is a good way forward. It is something of a "slippery slope". There are so many fine lines that need to be drawn—and then challenged by dissenters—that it may just be an exercise in futility.
For the current problem thread, at least, the real underlying issues have yet to be completely addressed. As Fedora moves toward implementing the new packaging rules, which may slow down the usual Fedora update stream, the decline in users and contributors that Kofler envisions may occur. The opposite could happen as well. Only time will tell.