
Leading items

LSB 2.0 and C++

The Linux Standard Base is a standardization effort aimed at making Linux friendly for application vendors. By nailing down issues like which libraries should be available, how packages are to be managed, where files should reside, etc., the LSB seeks to create a standard environment which will be present on every compliant distribution. Application vendors can then build their offerings for that environment and, with luck, have them run everywhere.

A major release of the LSB (version 2.0) is in the final stages. The most recent plan had been to release this version at LinuxWorld, but, for reasons we are about to get into, that release may be delayed a bit. Version 2.0 adds a number of things; included therein is a description of the environment which should be available for C++ applications. A great many commercial programs are written in C++, so, for many vendors, the LSB is of little use until it covers that language. So the C++ description is a high-priority part of the LSB 2.0 release.

The standardization of the C++ environment has run into some opposition, however, as seen by this posting to the gcc list and the subsequent discussion. Many people, including a number of gcc developers, are unhappy with the choices that have been made for the LSB 2.0, and are pushing for changes.

The core of the problem is that the LSB specifies that compliant systems must offer a modified version of the "v5 ABI." This is the binary interface used by gcc 3.3; current versions of gcc 3.3, however, are not compliant with the specification. Patches exist for a future 3.3.5 release which would bring it into compliance; that release will probably happen, though no promises to that effect have been made.

The real problem, however, is that gcc 3.3 is already old technology, and is considered to be a dead end. Current development efforts are going into gcc 3.4 and even 3.5; gcc 3.4 can already be found on some systems (such as your editor's Fedora Rawhide box). gcc 3.4 is widely held to be a superior release; it has much improved performance, better interoperability with other C++ compilers, and better standards support. It also has a different and incompatible binary interface, of course. Since the C++ environment is only now being nailed down by the LSB, it is asked, why not go with the newer, v6 ABI, which will actually be relevant into the future?

The reasons appear to come down to the following:

  • The LSB is explicitly mandated to focus on existing, deployed technology. At this time, none of the mainstream distributions are shipping with the v6 ABI. Standardizing on that ABI would violate the LSB requirements, and so will not be done.

  • The 2.0 release has already been delayed; making a major change to the C++ ABI specification would add another, long delay.

  • The LSB 2.0 specification is planned to be submitted to the ISO/PAS process. ISO certification would help vendors trying to sell Linux solutions into a number of governmental and corporate environments. That submission must happen by October, however, or the application process must be restarted from the beginning.

  • The v5 ABI is what (most) distributors are shipping now; standardizing on that ABI will make it easier for existing distributions to be brought into compliance.

Opponents argue that the version of the v5 ABI documented in the LSB has never been distributed either - though, in all fairness, the required changes appear to be small. The stronger complaints seem to be that the LSB has made its choices based on the short-term needs of commercial Linux distributors, to the detriment of what the community wants. Of course, determining what the community wants can be problematic, especially since Richard Stallman has prohibited the gcc steering committee from cooperating with the LSB process.

The truth of the matter is that the Linux Standard Base is in a bit of a bind. There is pressure from vendors to create a C++ standard in the near future, the LSB 2.0 process has already taken longer than expected, and, from the LSB's point of view, the v6 ABI has not yet reached a level of deployment or stability that would allow it to be used as the basis for a specification. The gcc C++ ABI remains a moving target, so any attempt to write a standard based on it is bound to encounter difficulties. The only option available at this time, assuming that the C++ section is not to be dropped altogether, is to go with the v5 ABI.

We talked briefly with Stuart Anderson, the lead developer for the LSB written specification. His belief is that the LSB 2.0 release will go forward essentially unchanged, though perhaps with an added statement regarding the C++ ABI and the fact that it will change in the future. The v6 ABI will then be incorporated into the LSB 3.0 release, which is currently planned for about one year from now. It is possible, however, that the C++ section will be dropped from the version of the specification submitted to the ISO.

Standards are a tricky subject in the free software world; they promote interoperability, but also freeze development in a community that values its ability to make changes and move forward. Occasionally a standard catches a development project at the wrong time; that appears to have happened with the C++ specification. As a result, some people are upset now. In a year or two, however, when the gcc C++ ABI has settled down and found its way into future LSB releases, few people will remember this episode.


Bash 3.0 released

August 4, 2004

This article was contributed by Joe 'Zonker' Brockmeier.

GNU Bash has been in the low 2.0x series for some time, so the version jump to 3.0 last week was something of a surprise, at least to those who haven't been following Bash development closely. Since Bash is a core piece of infrastructure for most of the Linux community, we decided to take a look at the 3.0 release and find out what changed, and what users could expect from the new release.

To that end, we touched base with Bash maintainer Chet Ramey, who was kind enough to reply to our questions about the latest release. The first question on our mind, of course, was "why the version bump?"

You have to look at the changes from 2.05 to 3.0, not any of the intermediate releases. The idea was to introduce major changes in intermediate releases following bash-2.05, let them stabilize, and then increment the major version.

The changes in 3.0 include support for the bash debugger and internationalization support, as well as a number of smaller features that had been requested for some time (time-stamped history entries, better brace expansion) and better POSIX compliance. To that you add the multibyte character support introduced in bash-2.05b and the code cleanups and programming improvements in bash-2.05a.

The whole set of changes deserves a major version bump.

Indeed, there are quite a few changes in this release. A look at the CHANGES or NEWS files shipped with the release source shows a slew of bugfixes and changes to Bash and Readline.

One interesting new addition to Bash 3.0 is a new type of brace expansion. Its syntax is {x..y}, where x and y may be integers or single characters, in ascending or descending order. For example, {z..a} expands to all of the letters from z to a in descending order (z, y, x, etc.), while {1..1000} expands to each of the integers from 1 to 1000.
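A few lines at the prompt show how the new syntax behaves (these expansions require Bash 3.0 or later; older versions simply leave the braces untouched):

```shell
#!/bin/bash
# {x..y} expands to every value in the sequence, in either direction.
echo {1..5}     # 1 2 3 4 5
echo {e..a}     # e d c b a

# Sequences also make a convenient replacement for seq(1) in loops:
for i in {1..3}; do
    echo "pass $i"
done
```

Since the expansion happens in the shell itself, no external command is needed to generate the sequence.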

Another new feature of interest is the addition of history timestamps. This allows users to see when commands were run, which can provide some useful and interesting information.
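Turning timestamps on is a matter of setting the new HISTTIMEFORMAT variable, whose value is handed to strftime(3). A minimal fragment for ~/.bashrc might look like this (the format string here is just one reasonable choice):

```shell
# Record when each command was run; any strftime(3) conversions work.
export HISTTIMEFORMAT='%F %T  '

# With the variable set, "history" prints the saved timestamps, e.g.:
#   42  2004-08-04 10:15:23  make install
```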

There are several new options with Bash 3.0. The "failglob" option will probably be of interest to many users. When set, this option will cause an error when a glob expression fails to match any files -- as opposed to passing the unmatched pattern to the command as a literal string. The new "pipefail" option tells Bash to return a failure status if any command in a pipeline fails, as opposed to the default behavior of returning the status of the last command in the pipeline.
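Both options are easy to try: failglob is toggled with shopt, while pipefail, being a set-style option, goes through set -o. A short sketch:

```shell
#!/bin/bash
# failglob: an unmatched glob becomes an error instead of being
# handed to the command as a literal string. Interactively:
shopt -s failglob
#   $ ls *.no-such-suffix
#   bash: no match: *.no-such-suffix

# pipefail: the pipeline fails if any component fails, rather than
# reporting only the status of the last command.
set -o pipefail
false | true
echo "pipeline status: $?"    # prints 1; without pipefail it would be 0
```

The pipefail behavior is particularly handy in scripts that pipe a fallible producer (a download, say) into a consumer and need to know that the producer failed.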

Of course, one might wonder if all of the improvements and changes in the release will affect existing shell scripts. Many Bash users, this writer included, have a number of shell scripts that are vital to their day-to-day work, and aren't eager to see them break on a new version of Bash. According to the release notes, there are some incompatibilities between 3.0 and Bash version 1.14, but no mention of 2.0x versions. According to Ramey:

Any major incompatibilities are the result of changes for POSIX compliance. There have not been comparable major additions to the shell's syntax as there were between 1.14 and 2.0.

This writer has tried a number of shell scripts against the Bash 3.0 release, and didn't find any incompatibilities or issues. In fact, a (permanent) switch to a Bash 3.0 login shell may be in my very near future.

We also asked Ramey what, if any, features were planned for future versions of Bash. Ramey said that there were already plans for future releases:

Programming support: associative arrays, better integration with the bash debugger (a separate project), small improvements for programming convenience (e.g., a += operator to append to a variable value), and some object-oriented features like ksh's discipline functions for variables.

I'm intrigued by zsh's loadable module system as well.

As for interactive use, I think there's room for improvement in the programmable completion system.

Readline needs better support for threaded use (multiple threads in a single process all using separate instances of readline). This is very hard to do today.

Interested users who don't want to wait for Bash to ship with their favorite distribution can find the source on Ramey's Bash page, or the GNU mirrors.


OSRM's patent study

Open Source Risk Management has been in the limelight for a while as a result of its Linux insurance policies. This group has, just in time for LinuxWorld, issued a press release on software patents and the Linux kernel. The PR describes a survey performed by Dan Ravicher; it contains both good and bad news. The good news is that Mr. Ravicher performed a study of all U.S. software patents which had actually been litigated, and concluded that the Linux kernel infringes none of them. On the other hand, 283 patents were found which have not seen a day in court, but which could, perhaps, be used to make claims against Linux.

It will, doubtless, come as a great surprise that OSRM is now gearing up to sell insurance policies to Linux users who fear patent infringement suits. A mere $150,000 per year buys $5 million in coverage.

There are certainly good things to be said about what OSRM is doing. Insurance against patent suits may give some large users the confidence they need to go forward with Linux development and deployment. The insurance pool could be used to aggressively challenge the validity of patents which are brought to bear against Linux - if the insurers choose to take that approach. The invalidation of a couple of patents could be a powerful deterrent for any other litigious patent holder who has thoughts of going up against the Linux community.

A white paper (PDF format) published by OSRM suggests that invalidation of patents is not the only, or even the first, approach that OSRM will take. An alternative which is discussed there is obtaining a license for the patent which applies to GPL-licensed software. This license might even be purchased:

"First of all, the patent holder can always be compensated with lump-sum, annual, and/or milestone royalty payments," continued Ravicher. "And, remember, the patent holder that signs a GPL-compliant license for free and open source software can still enforce its patents and seek money or injunctive relief against proprietary software."

The interesting fact here, of course, is that the GPL would make it very hard for OSRM to solve a patent problem only for its policy holders. If patent holders decide to target those users who are insured by OSRM (because that's where the money is), the entire community could benefit from the settlements. But OSRM could find itself in a situation where everybody waits for somebody else to buy the insurance and be the target.

The OSRM white paper also talks about rewriting code to sidestep patent suits. But, says OSRM:

Re-engineering is a powerful weapon, but it must be used sparingly so that Linux developers can concentrate on technological advances, not alternative implementations of current function. OSRM will consult directly with leading kernel developers, and in particular with the Open Source Development Laboratory ("OSDL"), Linus Torvalds' employer and the "Center of Gravity" for ongoing Linux kernel development, to seek consensus prior to any future recommendation for re-engineering.

One can only hope that they think very carefully before going out and issuing "recommendations" to the kernel development community.

OSRM describes itself as "vendor-neutral" more than once in its PR. But that is not entirely true: OSRM is a vendor of insurance products that, by some strange coincidence, address just the threat that the PR describes. Just to be sure you don't miss the point, the PR also discusses the multi-million dollar cost of defending a patent suit in court. This work may not be FUD in the normal sense, but it cannot be denied that OSRM's press release does seek to inspire a certain amount of fear, uncertainty, and doubt in Linux users.

OSRM is not without a potential conflict of interest here. A long list of scary patents can only help to sell OSRM's products, so its researchers have every incentive to be as inclusive as possible. The list itself is not directly available to the public. Interested parties can apparently get it, but only after being warned about exposure for triple damages for "willful" infringement. That is a risk that many will choose to avoid, so most of us will have to trust Mr. Ravicher when he says 283 problematic patents exist. Then again, many people see that number as implausibly small, given the large number of bogus software patents in the U.S.

The PR claims that "OSRM is active in promoting systematic patent policy reforms to address the issue at its roots, patent policies themselves," but is not particularly forthcoming on what form that activity takes. So we asked:

This is something we address regularly as we talk with various influencer audiences, press, analysts and policy groups. Most recently, Bruce Perens (who is on OSRM's board of directors) went to D.C., where he held several meetings with various policy groups about the problems with the patent system, and the particular threat to open source. We'll continue working with those and other groups, including the Public Patent Foundation and Electronic Frontier Foundation, to push policy reform.

Here is another statement from the PR:

What it boils down to is that Linux has patent risks; but they can and will become conventional insured risks, just an everyday cost of doing business. OSRM's whole mission is to make the issue of Linux liability simple, routine, and manageable.

Who wouldn't like to become part of the "everyday cost of doing business" with Linux? OSRM only stands a chance of collecting its piece of that "everyday cost" as long as Linux users and developers see patent suits as a threat. That should be kept in mind when pondering the company's motivations and actions. The community is little served by headlines throughout the mainstream media that Linux violates almost 300 patents, but an insurance business may well benefit.

So is OSRM guilty of spreading FUD? They say not:

OSRM has helped the community by actually studying what that risk exactly is and concluding that it is not an unmanageable or doomsday amount of risk. Rather, the OSRM study showed that it's a normal amount of risk that would be associated with any software as successful as Linux. Those who see the message as sparking fear are not familiar enough with our messed up patent system, which is truly the entity to blame for the results of the analysis.

OSRM also pointed out to us that it can only be successful as long as free software is successful. Since fewer users means fewer customers for OSRM, the company has no interest in scaring people away. People in the free software community have been warning about patent threats for years; all OSRM has done is to try to quantify the risk.

It is worth noting that OSRM's patent insurance will be restricted to the kernel. The kernel, however, is a very small part of any deployed Linux system, and litigious software patent holders will certainly not restrict themselves to that one piece. Purchasers of OSRM's patent insurance will not have decreased their exposure by much.

And that exposure does exist. There is no doubt that Linux will be the target of a high-profile patent suit sooner or later. We (and many, many others) have been saying that for a very long time, to the point that many people may not believe it anymore. The SCO case has shown the world just how strongly the community will fight back when it is attacked, and how good the community is at digging up interesting history - such as prior art. The prospect of going up against the community may well deter a number of casual patent shakedowns. Even so, somebody will eventually give in to the perceived promise of easy money (or, perhaps, the salvation of a failing business) and go on the attack. It is just a matter of time.

Anything we can do to prepare ourselves for that day is good. Insurance policies are almost certainly a useful part of that preparation, and it is good that companies like OSRM are stepping up to provide those policies. But we should not forget that OSRM's interests are not precisely aligned with those of the community; if software patents went away, so would that part of OSRM's insurance business. A company like OSRM must walk a fine line; let us hope that they continue to stay on it.


A patent obstacle in Germany

Readers who made it all the way through the OSRM article may be wondering: what harm can a list of potential patent problems do, anyway? Consider this: in Munich, the Green Party, which is a steadfast opponent of software patents, compiled a list of patents which could be infringed by the city's future Linux-based IT system, should software patents be enacted in Europe. That list is available as a German-language PDF file. The intent was clearly to spread awareness of the potential consequences of software patents in Europe.

The tactic may have worked a little too well: the first request for bids in the Munich project has been put on hold while the city examines its legal risks. At this time, Munich apparently remains committed to the change, but the process will be slowed down while the lawyers do their thing. The European Union has not yet adopted software patents, but software patents are already complicating life anyway.

Given events in much of the rest of the world, Europe is about all that stands in the way of a worldwide software patent regime. If software patents can be stopped there, there may be a chance of, someday, reforming the system elsewhere. If Europe falls, the job gets harder for everybody. So the upcoming, presumably final battle over the EU patent directive is critically important. There are signs that European governments are beginning to understand the problem. If making the issue clearer requires a delay in a high-profile municipal Linux deployment, it may turn out to be a price well paid.


Page editor: Jonathan Corbet

Copyright © 2004, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds