
LWN.net Weekly Edition for August 7, 2014

Open-source games and cloning

August 6, 2014

This article was contributed by Vladimir Perić.

Gaming on Linux is a popular topic lately; Steam and GOG.com, two very popular digital distribution platforms, are now available on Linux. In addition, several popular "AAA" games have received Linux ports—such as Civilization 5 and X-Com: Enemy Unknown—and more high-profile games have been announced (for example, "old-school" RPG games Wasteland 2 and Pillars of Eternity). And that is not to mention the multitude of indie games. With over 600 games available on Steam and more than 50 on GOG.com, gamers who run Linux have reasons to celebrate. In light of the ready availability of so many proprietary games, though, it can be easy to forget that open-source games have existed for a long time already.

Recreations and originals

Many open-source game projects can accurately be described as game engine recreations. Their goal is usually to bring the original game to Linux, while also providing support for more modern hardware (particularly enabling higher screen resolutions). Interface improvements are sometimes included, too, although some projects prefer to remain as close to the original as possible. There is a comprehensive list of open-source game clones hosted on GitHub. As the list shows, though, many of these projects are not in active development any more and some have not even produced a playable version.

Aside from these projects, which are directly inspired by existing proprietary games, there are few original open-source games. Battle for Wesnoth is a well-known turn-based tactical strategy game still in active development. Due to the releases of the Quake II and Quake III game engines under the GPL (in 2001 and 2005, respectively), several open-source first-person shooters have been developed, such as Warsow, Xonotic, and Tremulous. Of those, only Warsow has had a release this year.

For a serious free-software gamer on Linux, the choices seem quite slim: either play an unfinished clone of an older game, or play one of the few original games that are either still in development or can be considered finished. Perhaps this is why the coming of Steam was greeted as a sort of renaissance of Linux gaming, long after the first wave of Linux game ports by Loki Software in 1999-2001. It is, however, more interesting to consider why open-source games are not more successful.

In some ways, game development offers a fundamentally different challenge than the development of other open-source software. Application software can be released with missing features and still be useful to users; games need to be feature-complete to draw players. Applications do not require much in the way of graphic design; for games this is important (and, in addition, programmers tend not to be good artists). Applications can be used by one user or by a million; games need to reach a certain critical mass of players to become interesting.

Finally, in applications it is easy for everyone to "scratch their own itch" and implement features they need; in games, a clear game design and project direction are required in order to produce a meaningful result. This, no doubt, is why game clones are so popular: it is easy for everyone to see the goal. However, beyond these technical considerations, open-source games face other limitations as well.

While developing in the open has many advantages, it obviously makes it impossible to design a compelling single-player experience: everyone involved would know the story in advance and the continual testing during development would wear down most interested people. As such, open-source games must be either multiplayer games, or "sandbox" games with procedurally generated content (such as OpenTTD or Freeciv). These sandbox game clones have long been successful and have their niche.

Multiplayer games, though, suffer a lot from the above-mentioned problem of critical mass: releasing too soon means not enough players will be interested in playing, while waiting too long between releases means the current players will lose interest. Since the number of players constitutes a positive feedback loop, with more players improving the experience for everyone, once the game starts "dying" it is practically impossible to reverse the trend (consider this graph showing the number of unique players of Tremulous per day).

All of these challenges, combined with the natural drifting away of developers in any open-source project, make designing and programming an original open-source game exceedingly hard. However, I think that there is still room for open-source game engine recreations, whether of the sandbox or the multiplayer-game variety. I will look at two currently active projects and see how they deal with the issues listed above: OpenRA and OpenMW.

OpenRA

OpenRA is a recreation of classic Westwood real-time strategy games, currently supporting the original Command & Conquer (also known as Tiberian Dawn), Red Alert, and Dune 2000; work on Tiberian Sun is currently in progress. It is written in C# (running on Mono), with Lua used for scripting. The engine is released under the GPL. The original game assets (such as media content) are required for playing; however, the original games have been released for free (as in beer) download, and it is possible to extract the required assets directly from the games.

[OpenRA screenshot]

The stated goal of OpenRA is not to provide an exact recreation of the original games, but rather to improve on the gameplay by adding modern features, such as unit veterancy (that is, the units in the game gain improved performance characteristics as they accumulate experience). Those who wish to play the original game's campaigns will be disappointed at present—only a few missions are implemented. There are issues open on the project bug tracker to implement the campaigns for Tiberian Dawn and Red Alert, but progress is stalled awaiting improvements to the scripting engine. It is possible for supporters to offer bounties for fixing these issues, through the Bountysource site.

I mostly played the Red Alert mod, as it was my favorite game back in the day; skirmish mode and a few scripted missions are available. A few practice matches and a thorough trouncing in multiplayer mode later, I must say that the game definitely feels like it used to. The improvements that have been made are completely in the spirit of the game, and simply represent evolutions in the RTS genre since the original games were released (such as better pathfinding, interface hot keys, and unit veterancy). The multiplayer community is somewhat small, but it is active; a list of servers is available on the official website. I did not have problems finding a game, though it appears that Red Alert is the most popular of the available mods.

Most importantly, the game is being actively developed (as the project's GitHub page reveals) and the developers have moved to a "release often" paradigm, with releases on June 8 and July 22. While there are still rough edges, the game is perfectly playable in multiplayer mode. Even players completely new to the series will not have problems adapting, due to the various interface improvements and modern concepts present.

I consider OpenRA a successful open-source project. Development is proceeding at a rapid pace, and a major gameplay element, multiplayer, is fully functional. Since the original Westwood games are now freely available, using existing assets removes the need to create new game art. There is also a clear goal to work toward, and there is a steadily increasing number of active players. The OpenRA project, then, is currently dealing successfully with all the challenges that I introduced above. In my opinion, assuming the current pace of development continues, it is likely only to get more popular.

OpenMW

OpenMW is a recreation of the Morrowind game engine. Its goals are to support all existing content for the game (including official expansions and user-created mods), while supporting more platforms and, perhaps, utilizing modern hardware capabilities. A game editor, OpenCS, is also available, although development is currently focused on the main game engine. The game is written in C++ and uses OGRE, OpenAL, and other open-source libraries. Source code is available under GPLv3; however, the original game files are required to play. The process of obtaining those files under Linux is somewhat complicated, though files from a Wine or Windows install can easily be used. The legal implications of functioning only with a specific, proprietary data format, though, do not seem to be something that the project has addressed.

OpenMW has seen four releases this year; the latest version, 0.31.0, was released on July 17. The current main goal for version 1.0 is to be a complete replacement for the original Morrowind.exe and nothing more. This means that no behavior will be changed or improved (unless absolutely necessary), though that stipulation does not extend to copying non-functional or out-of-game material. Based on reports from the forums, the main quest is finishable in OpenMW. The general sentiment around the forums and the news posts is that a full 1.0 release is "close", possibly by the beginning of next year.

[OpenMW screenshot]

Jumping into OpenMW, the first thing I noticed was that the lips of the characters don't move. This is definitely unimportant, but it seems out of place. As soon as my character stepped outdoors, though, I was hit with horrible performance. And, while my machine isn't new, it runs the original Morrowind just fine, so there is obviously room for improvement. Nevertheless, a few tweaks later, I was ready to start my adventure. Barring a few truly minor issues, there is basically no difference between OpenMW and the original. Perhaps the game's starting areas are just better tested, but it's easy to see why there's an undercurrent of optimism on the forums.

OpenMW is a typical open-source game clone project. The development challenges discussed above mainly affect original open-source games; OpenMW obviously suffers from none of them. Although the game is neither a sandbox nor a multiplayer game, anyone who has played Morrowind knows that there is more than enough content available, even without the many, many mods available. Morrowind is obviously still popular, as there have been attempts to recreate it inside both of the subsequent releases from Bethesda (Morroblivion and Skywind). However, these efforts are limited to using the engines found in the later games—a limitation OpenMW does not suffer from. OpenMW is an ambitious project, and it is nearing a major milestone, so it is safe to assume that it will gather additional momentum in the future.

Conclusion

But as successful as OpenRA and OpenMW are and will be, both projects are still just tweaks and improvements of existing games. Perhaps the open-source development model isn't the right choice for developing completely original games? Is it simply too hard to maintain a singular vision when working exclusively with volunteers, to say nothing of creating enough content and art to fill the game? Game engine recreations are worthy goals, but most of them seem to fizzle out before they are complete. While there are certainly successful open-source games, and I would love it if there were more, I wonder if the proprietary development process, behind closed doors, is better suited to the specific needs of game development.

Addressing brokenness in OpenType

By Nathan Willis
August 6, 2014
TypeCon 2014

At TypeCon 2014 in Washington, D.C., John Hudson presented a talk detailing the serious problems encountered when rendering text for many non-Latin languages—problems, he said, that stem from assumptions made in the OpenType font format, and that are overdue to be corrected. Although any file format has its limitations, Hudson said, he focused on what he called "problems of adjacency" that confront Indic and Arabic writing, and that reveal a set of assumptions baked into the OpenType format because of its roots in European alphabets.

The OpenType specification is an open ISO standard for font files, first introduced in the mid-1990s and built out of contributions from Adobe and Microsoft. The format it specifies for the actual shapes of glyphs themselves is derived from Adobe's PostScript-based Compact Font Format (CFF). But the specification includes much more than the outlines of characters; among other things, it also provides ways for font files to include rules for making context-sensitive substitutions of one glyph for another and for repositioning certain glyphs based on their adjacency to others.

Hudson started his talk by showing a video of Iraqi calligrapher Hajj Wafaa doing Arabic lettering with a broad-nib ink pen. Hudson pointed out that the adjacency issues he was discussing are inherent to the writing system, not to any technical implementation of it. Wafaa has to adjust for positioning and substitution rules as he works—the difference is that he can do so on the fly, based on accumulated experience. Software has to attempt to regularize and encode the required strategies, which is the difficult step. OpenType provides two basic means for adjusting how a sequence of character codes gets converted into text on screen: positioning rules, found in a font's GPOS table structure, and substitution rules, in the GSUB table.

Positioning

The positioning feature is most familiar to readers of Latin-based languages with regard to kerning awkward letter pairs (such as "AV") and placing diacritics. In the underlying string, a letter may be followed by the Unicode character for a diacritical mark; for example, "a" followed by "`". The text rendering engine encountering this sequence looks for a rule in the GPOS table, which tells it where to place the diacritic with respect to the letter—in this case, "à". Many languages support stacking multiple diacritics, so OpenType rules support this as well.
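
In essence, a mark-positioning rule tells the renderer to align an anchor point defined on the mark with one defined on the base letter. Here is a minimal Python sketch of that idea; the glyph names, anchor coordinates, and table layout are invented for illustration, and real GPOS data is binary and considerably more involved:

    # A toy model of GPOS mark-to-base positioning: the font defines an
    # anchor point on each base glyph and on each mark glyph; the renderer
    # aligns the two anchors. All coordinates are hypothetical font units.
    BASE_ANCHORS = {"a": (250, 450)}        # where a top mark attaches on "a"
    MARK_ANCHORS = {"gravecomb": (100, 0)}  # attachment point of the mark

    def place_mark(base, mark, base_origin):
        """Return the pen position for `mark` so that its anchor lands
        on the base glyph's anchor; `base_origin` is where `base` was
        drawn."""
        bx, by = BASE_ANCHORS[base]
        mx, my = MARK_ANCHORS[mark]
        return (base_origin[0] + bx - mx, base_origin[1] + by - my)

    print(place_mark("a", "gravecomb", (0, 0)))  # -> (150, 450)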

The approach breaks down almost entirely for several south Asian languages, however. Indic scripts simply do not fit into the assumptions made by OpenType; the individual letters in a word or syllable are positioned in two dimensions, often with respect to several surrounding letters. OpenType positioning essentially considers only the relationship between pairs of sequential character codes, because it was designed with kerning in mind. And kerning, he said, inherits its basic model from Gutenberg's metal type, where only horizontal movement was possible. Hudson then presented a real-world example.

[Telugu orthographic cluster and corresponding Unicode string]

Despite its unfamiliarity in the West, the Telugu language is the fifteenth most-common language in the world, with over 75 million native speakers. In the Telugu script, some of the basic building blocks of words are clusters made up of four letters of the form C M N V: an initial consonant, followed by two intermediary consonants, and ending with a vowel. The proper shaping of the character cluster is for the vowel V to be placed over the initial consonant C, and the two intermediary characters M and N to be placed below and to the right of C, respectively. Making matters still more complicated, the intermediary consonants M and N are also accompanied by "vowel killer" codes that are not printed at all, but alter the default shape of the glyphs.

This relationship cannot be expressed in OpenType without an arduous series of workarounds, Hudson explained, because GPOS rules can only be applied to adjacent characters. A common workaround, he said, is to designate the intermediary characters M and N in the font file as diacritic marks, which are defined as having zero width and can thus be ignored by the GPOS rule. This, of course, is a hack—the characters are not really diacritics at all. Moreover, because there is more than one of them in the middle of the cluster, there needs to be a separate rule defined to indicate how each should be positioned in relation to C.

With a little contemplation, the practical impossibilities of this approach become clear. Separate rules would be required for every possible combination of M and N medial characters in conjunction with every possible C consonant. Not only might the set of positioning rules overrun the size of the GPOS table, but performing the lookups while drawing text to the screen would impose a serious performance cost on the rendering engine. Not every possible letter sequence is a valid word, Hudson said, which limits the scope somewhat, but the increasing number of foreign loanwords makes the problem still larger, because they bring in sequences of letters that are not used in the original language. The sample word in his slides, he revealed, was actually the Telugu loanword for the English term "software."
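
A back-of-the-envelope count makes the scale concrete. The consonant figure below is an approximation used purely for illustration:

    # Rough count of pairwise GPOS rules needed if every C + M + N
    # combination must be enumerated explicitly. Telugu has roughly
    # three dozen consonants (approximate figure, for illustration only).
    consonants = 36
    rules_per_cluster = 2           # one rule positioning M, one for N
    combinations = consonants ** 3  # every choice of C, M, and N
    print(combinations * rules_per_cluster)  # 93312 rules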

GPOS rules can instruct the renderer to ignore diacritics, Hudson said, and they can even be written to ignore classes of related characters (such as variant forms of the same letter). But to properly support Telugu and similar writing systems, rules should be able to ignore arbitrary sets of characters defined within the font file. There is no technical reason why this cannot be done, he said; OpenType just needs to be amended to cope with the idea.

Substitution

The most familiar examples of glyph substitution in English are typographic ligatures, in which the sequence of character codes "f" "i" in a string is rendered with a distinct "fi" glyph rather than the "f" and "i" glyphs individually. This substitution is a common one in the GSUB OpenType table.
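
At its simplest, a ligature lookup is a pattern replacement over the glyph stream, as in this Python sketch (the glyph names are illustrative; real GSUB lookups operate on numeric glyph IDs and support much richer contexts):

    # A toy GSUB-style ligature pass: scan the glyph stream and replace
    # the sequence ("f", "i") with the single "fi" ligature glyph.
    LIGATURES = {("f", "i"): "fi"}

    def substitute(glyphs):
        out, i = [], 0
        while i < len(glyphs):
            pair = tuple(glyphs[i:i + 2])
            if pair in LIGATURES:
                out.append(LIGATURES[pair])
                i += 2
            else:
                out.append(glyphs[i])
                i += 1
        return out

    print(substitute(list("finish")))  # ['fi', 'n', 'i', 's', 'h']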

Cursive scripts, naturally, involve writing far more of these substitution rules. The Arabic script, notably, has four forms for each letter: isolated, initial, medial, and final. And, Hudson said, comparatively straightforward GSUB rules describe which is rendered for a given character code in context. But while such substitution rules generally suffice for the Arabic language, GSUB fails for several other languages that use the Arabic alphabet but operate with different rules.
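
The logic behind those contextual rules can be sketched in a few lines, assuming for simplicity that every letter joins on both sides (real Arabic also has non-joining letters, which this toy ignores):

    # A toy contextual-forms pass: pick isolated/initial/medial/final
    # for each letter based on whether its neighbors join to it.
    def forms(word):
        shaped = []
        for i, ch in enumerate(word):
            joins_prev = i > 0
            joins_next = i < len(word) - 1
            if joins_prev and joins_next:
                form = "medial"
            elif joins_next:
                form = "initial"
            elif joins_prev:
                form = "final"
            else:
                form = "isolated"
            shaped.append((ch, form))
        return shaped

    print(forms("abc"))
    # [('a', 'initial'), ('b', 'medial'), ('c', 'final')]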

[Several common Urdu words containing baṛi ye]

Hudson showed two examples. First is the baṛi ye, which exists in the Arabic language as an alternate, stylistic variant of the letter ye, but is a separate letter in Urdu. It is much wider than the usual ye (baṛi means "big"), and Urdu text typically relocates vowel markers and other characters above and below it in order to be more compact. In addition, it sweeps backward in relation to the standard flow of the text, so it affects letters that precede it in many strings. GSUB rules cannot express the changes needed at all, Hudson said.

His second example is the Bengali script, which routinely requires placing multiple markers above the top line of the text. While Hudson and others have managed to create GSUB rules that more or less produce readable Bengali text in OpenType, he said, they are difficult hacks that require outside scripts to process the text in each application. In practice, most Bengali speakers simply put up with collisions and misplaced characters in their daily computer use and documents.

[A problematic Bengali string]

Fixing GSUB's shortcomings is not as simple as extending GPOS's functionality, he said. The problem is that, ultimately, OpenType rules assume that characters fit into fixed rectangular bounding boxes, and this assumption limits the operations that can move those boxes around. There are alternate software solutions for rendering Arabic scripts, he noted, such as DecoType's Arabic Composition Engine (ACE); they throw out the box model entirely. The typographic community has already dealt with similar issues when addressing how to render mathematical equations, Hudson said; surely it can come up with similar solutions for non-Latin scripts.

There are certainly other issues with OpenType, Hudson said. For example, Chinese and Japanese font developers do not run into the adjacency issues he described, but they do regularly hit the 65,536-glyph limit of OpenType files (glyph IDs are 16-bit values).

There is currently no active plan to develop a new revision to OpenType, but there is growing interest. Indeed, improving support for non-Latin writing systems was a major (if unofficial) theme at this year's TypeCon. Hudson noted that the typographic community has been using prefabricated rectangles as its base component for the centuries since Gutenberg. That attests to the benefits of the system, he admitted, but it increasingly means grafting solutions like kerning and ligatures from one script onto another, whether they are appropriate or not. 300 years from now, he asked, will we still be talking about these problems in the same terms?

Hopefully not, he concluded. Kerning and ligatures are specific technical solutions. When technologists ask "how do we implement kerning and ligatures for this script?" they are asking the wrong question, Hudson said. Instead, they need to ask how the writing system in question copes with its adjacency problems already, then work toward a solution—even if that means breaking backward compatibility with an established format like OpenType.

Knot DNS: A high-performance, authoritative DNS server

August 6, 2014

This article was contributed by Ondřej Surý

There was a time when BIND was the only open-source DNS server, but those times have changed and there are now several alternative DNS servers available. If you are looking for more speed, lower memory usage, better security, or simply some diversity in your DNS infrastructure, you might want to check out Knot DNS. It has just reached version 1.5.0, which brings memory and performance improvements, along with dynamic processing modules that can help with IPv6 network management. Knot DNS is now able to process more than half a million queries per second while keeping memory usage below that of BIND 9.

What is Knot DNS?

Knot DNS started out as an open-source project licensed under the GNU GPLv3 at CZ.NIC, the Czech national domain registry. When CZ.NIC started to run its own name servers for the .CZ top-level domain (TLD), there were only two usable open-source DNS servers with full standard coverage and the ability to run a TLD: BIND and NSD. In due course, CZ.NIC Labs, an R&D department, was formed and the decision was made to create a fast, modern, and open DNS server. The decision was based on the idea that the DNS protocol is one of the most important protocols on the Internet, and thus its stability, security, and reliability would benefit from another DNS server implementation written from scratch with full standards compliance in mind.

[Knot DNS Logo]

The first public release of Knot DNS (0.8) was published in 2011 and the project has come a long way since then. New features have been implemented, performance has been further improved, and the code has been refactored with an additional focus on memory requirements. Knot DNS is now able to cater to TLD and root zone operators' needs, but it has also been successfully deployed in DNS-hosting scenarios. For a full list of features and configuration options, see the documentation; here, let's focus on the most notable features and improvements.

Features

Knot DNS is written in pure C as a threaded daemon. As the zone file data is shared among the server threads, there was a need to handle updates to the zones that could come from various sources: manual updates, incoming transfers (AXFR and IXFR), dynamic DNS, or DNSSEC signing. The updates must not leave the zone in an inconsistent state, so you need to ensure that the whole update, such as incoming AXFR, is applied atomically. Knot DNS utilizes a technique you might know from the Linux kernel: Read-Copy-Update (RCU) via the userspace RCU library. This allows Knot DNS to maintain its response speed even when the zone contents are being updated. This can, of course, be rather expensive memory-wise. Even though Knot DNS tries to mitigate this by using shallow copies whenever possible, the incoming zone transfer can still consume double the amount of memory.
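
The essence of the RCU pattern is easy to sketch. The Python toy below shows only the copy-then-publish idea; the real implementation is in C, uses the userspace RCU library, and must also wait out grace periods before the old copy can be freed safely:

    import threading

    # Read-copy-update in miniature: readers follow one shared reference
    # and never block; a writer builds a modified copy off to the side
    # and then publishes it with a single atomic reference swap.
    # (Zone contents here are placeholders for illustration.)
    zone = {"example.com.": {"SOA": "ns1.example.com. hostmaster ..."}}
    write_lock = threading.Lock()   # serializes writers, not readers

    def lookup(name):
        z = zone                    # readers grab the current snapshot
        return z.get(name)

    def apply_update(name, rrset):
        global zone
        with write_lock:
            new = dict(zone)        # shallow copy of the old zone
            new[name] = rrset
            zone = new              # atomically publish the new version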

Knot DNS is fully standards-compliant and interoperable with other DNS servers. The server can receive and send data via both IPv4 and IPv6, using UDP or TCP. Zone contents can be updated by editing the zone files, incoming and outgoing full (AXFR) and incremental (IXFR) zone transfers, or by Dynamic DNS. The update policy is controlled by IP access lists or cryptographic (TSIG) signatures. Support for Name Server Identifier (NSID), which is important for people running DNS in anycast mode, is also included. While the new releases track and implement new DNS standards, the server also implements RFC 3597 and thus it can handle unknown (future) DNS Resource Records.

Dynamic processing modules are code hooks that can plug into the query-response processing chain and alter the incoming and outgoing DNS messages according to a configured rule. This feature, introduced in version 1.5.0, makes Knot DNS into more than just a simple DNS server. Right now, there are two modules, synth_record and dnstap, and the team plans to add more to support geolocation and high availability.

The dnstap module implements a flexible, structured binary log format for DNS software. It uses Protocol Buffers to encode events that occur inside DNS software in an implementation-neutral format.

Managing IPv6 reverse (PTR) and forward (AAAA) zones can be a troublesome task, especially for ISPs with lots of residential customers. The IPv6 address space is vast and it's simply not possible to keep all reverse records in memory. The synth_record module was developed as an answer to these troubles: it can generate missing reverse (PTR) and forward (A, AAAA) records on the fly, while maintaining the ability to serve real data if it is available.
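
Conceptually, reverse synthesis turns the nibble labels of an ip6.arpa query name back into an address and instantiates a hostname template. The Python sketch below only illustrates that idea; the "dyn-" prefix, template, and zone names are invented, and the real module is set up through Knot DNS's own configuration syntax (see the synth_record documentation):

    import ipaddress

    # Illustrative only: rebuild the IPv6 address from the reversed
    # nibbles of the query name, then fill in a hostname template.
    def synthesize_ptr(qname, template="dyn-{}.example.com."):
        nibbles = qname.replace(".ip6.arpa.", "").split(".")[::-1]
        addr = ipaddress.IPv6Address(int("".join(nibbles), 16))
        return template.format(str(addr).replace(":", "-"))

    q = ("1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0."
         "0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.")
    print(synthesize_ptr(q))  # dyn-2001-db8--1.example.com.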

The only drawback of the current implementation is that DNSSEC cannot be used to sign the generated resource records since the records need to be signed on the fly as they are generated. This will be addressed in the next release.

In Knot DNS 1.3.0, the zone file parser was migrated from a venerable Flex+Bison parser to one built with the more modern Ragel State Machine Compiler, which brought much-needed speed to zone parsing. For example, the new zone parser is able to process the .net zone, with 35 million records, in under ten seconds; the old parser would still be crunching the zone for another 1000 seconds. The zone file format is surprisingly permissive in terms of syntax, as you can see in the Ragel zone parser in the upstream Git repository.

[Response rate graph]

When talking about performance, we can look at response rates. Knot DNS outperforms any other open-source DNS server available, with peak numbers exceeding 500,000 responses per second over UDP on a 10GbE network connection. Now, the famous quote attributed to Winston Churchill may have come to mind: "I only believe in statistics that I doctored myself." The DNS benchmarking scripts used to produce these numbers are freely available, so everybody is able to reproduce the results. We discovered one important thing while benchmarking DNS: the network card chipset can make a huge difference. As a rule of thumb, the Intel server NIC chipsets are never a bad choice.

While the raw numbers are important if your DNS server is under attack (something that has become common in the last few years), it's also important to avoid becoming part of an attack in the first place. Paul Vixie and Vernon Schryver developed Response Rate Limiting (RRL) as an answer to recent Distributed Denial of Service (DDoS) attacks that use third-party DNS servers with spoofed source IP addresses to reflect traffic onto innocent victims. Knot DNS has implemented RRL since the 1.2.0 release, giving DNS administrators the ability to be good netizens by not participating in these attacks, even inadvertently. This is especially important for high-performance DNS servers with high-speed connectivity, such as TLD servers.
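
At its heart, RRL is per-bucket rate accounting on outgoing responses. Here is a simplified Python sketch of the idea; the bucket key, window handling, and limit are illustrative, and the real algorithm also "slips" occasional truncated responses so that legitimate clients can retry over TCP:

    import time
    from collections import defaultdict

    # Count responses per (client network, query name) bucket and stop
    # answering once a configured per-second rate is exceeded.
    RATE_LIMIT = 20                          # responses/second per bucket
    buckets = defaultdict(lambda: [0, 0.0])  # key -> [count, window_start]

    def allow_response(client_ip, qname):
        prefix = ".".join(client_ip.split(".")[:3])  # /24 for IPv4
        key = (prefix, qname)
        count, start = buckets[key]
        now = time.monotonic()
        if now - start >= 1.0:               # new one-second window
            buckets[key] = [1, now]
            return True
        if count < RATE_LIMIT:
            buckets[key][0] += 1
            return True
        return False                         # rate exceeded: drop (or slip)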

CZ.NIC started DNSSEC signing of the .CZ zone back in September 2008 and has reached 37% penetration, with 434,000 DNSSEC-signed domains. Thus it shouldn't really be surprising that DNSSEC was on the feature list for Knot DNS from the beginning. The server will make sure that DNSSEC signatures don't expire and will maintain the SOA serial number. Knot DNS has been able to serve DNSSEC-signed zones since its first public release, and since the 1.4.0 release it can also sign zones itself.

Domain signing is currently labelled as a technology preview, since the configuration, interface, and utilities might change in the future. However, the code is stable and if you want to just sign zones, you should give it a try.

Knot DNS also comes bundled with standard DNS utilities: kdig, khost, and knsupdate, which reimplement their BIND 9 counterparts (dig, host, and nsupdate).

Future Development

The upcoming Knot DNS 1.6.0 (aimed at the end of 2014) will bring reworked DNSSEC signing that will include Key and Signing Policy support, its own DNSSEC key management utilities, inline (on-the-fly) signing, and a migration from OpenSSL to GnuTLS. The latter switch was already planned due to GnuTLS's much better support for PKCS#11, which allows storing encryption keys in Hardware Security Modules (HSMs). The recently discovered OpenSSL vulnerabilities just emphasized the need for heterogeneity in the cryptographic libraries used by DNS servers.

A DNS server may be used for small personal zones, large TLD zones containing millions of records, as well as in deployments involving tens or hundreds of millions of small zones. While simple configuration files suffice for the first and second scenarios, it becomes cumbersome to read a huge configuration file with millions of configuration records when the DNS operator needs to add and remove zones from that file frequently (e.g., every second). Therefore, the need for a provisioning protocol has emerged and is on the roadmap for 2015.

Notable users

We obviously cannot list all Knot DNS users, but here are some noteworthy ones. As you might have guessed, we eat our own dog food, so Knot DNS powers a full one-third of the .CZ nameservers (the rest run BIND 9 and NSD). .CZ is not the only TLD to deploy Knot DNS; it has been handling .DK since 2012. RIPE NCC has deployed [PDF] Knot DNS in a slave nameserver cluster serving 77 ccTLDs and 4,200 reverse zones with a peak traffic rate of 120,000 queries per second. We got O2 Czech Republic on board with Knot DNS 1.5.0 for its reverse IPv6 zone delegations, and CESNET, the Czech national research and education network, has been running Knot DNS since late last year. The latest notable addition to the user base is Active24.cz, with more than 200,000 domains.

Get it

Developers can download and compile the Knot DNS sources (releases or Git), but Knot DNS also has packages for most Linux distributions: Debian, Ubuntu (in a PPA), Fedora and Fedora EPEL, openSUSE, Arch Linux, and Gentoo. There is also an OpenWrt metapackage and a Knot DNS formula for Homebrew.

I hope you will give Knot DNS a try. If you run into a problem, there's an issue tracker and the knot-dns-users mailing list for assistance; the project is also on Twitter and Google+. The Knot DNS team would be happy to hear from you, whether with problems or success stories.

[ The author is a Chief Scientist at CZ.NIC and is involved in Knot DNS development. He's also a Debian Developer and an open-source enthusiast. ]
