
LWN.net Weekly Edition for May 15, 2014

Collaborative GPL enforcement

By Jake Edge
May 14, 2014

Embedded Linux Conference

GPL compliance is probably a more important topic for the embedded Linux community than it is for any other free-software community, Bradley M. Kuhn said to start his Embedded Linux Conference (ELC) talk. After three years of trying, he was glad to be able to give his presentation at this year's ELC. In it, he covered a wide range of information about the GPL itself, compliance with it, how the GPL has been enforced, and where enforcement is heading next.

It turns out that he has spent the bulk of his career enforcing the GPL, first at the Free Software Foundation (FSF) starting in 1999 and now at the Software Freedom Conservancy (SFC). Those are two organizations that are doing what he called "community enforcement". GPL enforcement is, he said, his only claim to fame—something he has embraced over the last few years.

GPL operation

There is a difference between how the GPL operates in theory and how it works in practice, Kuhn said, and the latter only becomes clear when you try to enforce the license. In theory, the GPL is a copyright license and copyright is "more or less standardized" throughout the world. Like all copyright licenses, the GPL grants permission to do things that would not otherwise be allowed with the copyrighted work. But the GPL "hacks" copyright into copyleft by making those permissions dependent on granting the four freedoms to any downstream recipient of the code. Copyleft is one of those things that is easy to understand once it has been explained, but hard to come up with when no one else has done so—it was "a stroke of genius".

[Bradley Kuhn]

But in the real world, there are those who violate the GPL. If everyone "played by the rules", Kuhn said, he would use and advocate the Apache License. When there are violations, he believes that "social pressure" is always the first step to take.

When social pressure doesn't work, the copyright holder needs to use copyright law if they want to enforce their copyright (or left). He is not a fan of copyright law, in general, given the way that the movie studios and others have abused it, but copyleft enforcement depends on copyright law. He sees it as a case of "using the tools we have for the cause of good".

It is important to recognize that failing to follow the rules of the license means that the violator loses the right to distribute the GPL-covered code. Further distribution is copyright infringement, he said, even if it is done in compliance with the license. The only way to get back the right to distribute is to "beg the copyright holder" for permission.

One of the complexities of modern GPL enforcement is that some current enforcement activities are not software-freedom motivated, Kuhn said. Oracle now holds the MySQL copyrights and enforces the GPL in a "corrupt use of copyleft". Oracle says that sending SQL statements to a MySQL server makes the client code a derivative work (thus subject to the GPL). In his mind, in any kind of enforcement, compliance must be the paramount goal. Oracle's goal is to convince people to buy licenses, not to get them to comply. That is not "community enforcement", which puts compliance above all other interests. Community enforcement is done for the public good, by or on behalf of the community.

It is the community (the users) that reports the violations he ends up enforcing. Those violations are typically in some embedded device like a TV or a router, Kuhn said. Either the manual has an offer to provide the source, but no source is provided when someone tries to get it (he calls that "offer fail"), or the device is clearly running Linux but there is no offer to provide the source at all. The SFC gets a report of that sort weekly or even more often.

Standard procedure

Once a violation report is received and enforcement is pursued, a standard procedure is followed. First, verify that there really is a violation, then send a "cease and desist" letter to the violator. "Cease and desist" is the proper technical term, but he doesn't like it, because he would really rather see the violator keep using the software and come into compliance.

At that point there is a loop. The violator is asked for the "complete, corresponding source" (CCS), as required by the GPL; the SFC then builds that code and tries to make it work, which it almost never does. So SFC sends a report to the violator explaining why the code it sent is not the CCS and asks for it again. Sometimes, he said, a patch that will produce (or help produce) the CCS is sent along with the report. That loop can happen many times: the record is 23 times through the loop, but the median is five to seven.

Once the CCS has been sorted out, the SFC asks the violator to inform the customers of that product that the CCS is available and to provide the CCS (as described by the GPL) going forward. It also asks the violator to pay a reasonable hourly rate for the work SFC has done. After that, SFC restores the copyright permissions so that the now ex-violator can legally distribute the software again.

The money is "controversial" in the community, Kuhn said, but "no community enforcer is getting rich"—"maybe Oracle is" with its form of enforcement. In fact, since SFC is a 501(c)(3) US non-profit, you can see its tax filings online. That means you can see how much it got and how much it spent on enforcement, as well as the salaries of Kuhn and other decision-makers at SFC.

When he was at FSF, Kuhn was "the holdout" on the money question, as he didn't think FSF should ask for money. But, at one point, Dan Ravicher asked him who should pay for enforcement. Should it be those companies that donate to the FSF, most of which are in compliance? Or should it be the individual donors to the organization? The money has to come from somewhere, Ravicher told him.

In addition, if there is no deterrent to violating the license, no violators will ever voluntarily comply. If they know they can just "wait until you come knocking", without any financial penalty, they will do just that.

The financial settlements are confidential, but that is at the request of the violators. That upsets him, as he would rather see that information be public.

There are a number of things that SFC does not ask for as part of compliance. It has jumped through "amazing hoops" to make sure that products don't get junked, because that is "bad for the environment". In one case, 80,000 units would have had to go to the landfill, but SFC found a way to avert that, Kuhn said. SFC also tries to avoid injunctions, though it has gotten them on occasion. When that happens, the violator has been a year or more out of compliance, has had many warnings, and knew that an injunction was being sought.

Another thing the SFC avoids is getting companies to switch away from using GPL-covered software. Instead, it tries to make it easy for those companies to continue using the software. Lastly, the organization tries to avoid lawsuits. Those are "always a last resort". By the time a lawsuit gets filed, it is only after "hundreds of hours" trying to get the violator to comply.

Building the code

The point of the GPL is not just to be able to examine the source code. The CCS includes "the scripts used to control compilation and installation", so users can actually build the code, not just look at it. That's part of the "freedom to modify", but it can be difficult to check that the scripts included in the CCS will actually build something that will work on an embedded device.

But ensuring that it will build something useful is important, as the WRT54G story shows. In 2003, there were "dozens" of reports about violations in the Linksys WRT54G wireless router. Discussions began between the FSF and Cisco (who had bought Linksys weeks before), but then someone posted the story to Slashdot. There is a mistaken belief that making a violation public will get it resolved more quickly, Kuhn said, but it actually makes it take much longer.

The FSF put together a group to enforce the GPL for that product, which included Erik Andersen of BusyBox and Harald Welte, who had copyrights in the Linux kernel. After many "rounds" of getting CCS candidates, the FSF eventually got everything working (except for two proprietary kernel modules). That CCS became the first check-in for the OpenWrt project, which is now a major replacement firmware option for wireless routers. OpenWrt credits the WRT54G enforcement action as the starting point for the project, Kuhn said.

The FSF was initially shy about lawsuits. Welte participated in the WRT54G enforcement and tried to get the FSF to file lawsuits, which it was loath to do. Kuhn said that there was a conference call every week for 30 or 40 weeks in which Welte asked "why haven't you sued them?". In retrospect, Welte was right, Kuhn said. When it became clear that the FSF was not going to sue, Welte filed multiple lawsuits in Germany and was "quite successful" in enforcing the GPL in those suits. These days, though, Welte is working on other projects, so his gpl-violations.org project is mostly defunct, except for hosting its mailing list.

By mid-2006, Andersen had become unhappy with the lack of GPL compliance for BusyBox, particularly in routers and network-attached storage (NAS) devices. He asked SFC to help with BusyBox license enforcement, so SFC became his agent for enforcement, while also receiving other BusyBox developers' copyright assignments for enforcement.

Since 2007, SFC has always had more than 100 violations queued up for enforcement. The list of violations currently stands at more than 300. The enforcement that it has done on both BusyBox and Linux has made a real difference, Kuhn said.

Samsung is an example of a compliance success story, he said. SFC sued Samsung at one point over code in one of its TVs. That suit was settled and the CCS that came out of it was the basis for the SamyGO project, which creates replacement firmware that enables features like video recording on certain Samsung TV models. More recently, SFC worked with Samsung to fix a GPL-compliance problem in the company's exFAT filesystem. Normally violations take quarters or years to fix, but that one was resolved in weeks. It shows that, as Samsung now knows, compliance is not actually all that difficult, Kuhn said.

But why are there so many violations? He said he doesn't think downstreams (like device makers) are the problem here; the problem comes from upstream. He has tried to get violators to go on the record blaming their upstream suppliers, so that he can go after the supplier instead, but no one seems willing to do that. All of the violators ask that he not talk to their upstream suppliers about compliance; they would rather handle that with the upstream themselves, which is a bit puzzling to him.

What developers can do

Developers, and embedded developers in particular, can help stop these violations. When you get code from a supplier, ensure that you can build it, he said, because someone will eventually ask. Consider using the Yocto project, as Beth Flanagan has been adding a number of features to Yocto to help with GPL compliance. Having reproducible builds is simply good engineering practice—if you can't reproduce your build, you have a problem.
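As a rough sketch of what that can look like in practice, a Yocto/OpenEmbedded build can be told to archive the corresponding sources and license information alongside the images it produces. The class and variable names below come from the OpenEmbedded archiver and license-handling classes; the details vary between Yocto releases, so treat this local.conf fragment as illustrative rather than definitive:

    # local.conf fragment: archive upstream sources, patches, and recipes
    # so that a source release can be assembled for each image.
    INHERIT += "archiver"
    ARCHIVER_MODE[src] = "original"   # ship pristine upstream tarballs
    ARCHIVER_MODE[diff] = "1"         # plus the patches applied to them
    ARCHIVER_MODE[recipe] = "1"       # plus the recipes that drive the build

    # Copy license texts into the image and build a license manifest.
    COPY_LIC_MANIFEST = "1"
    COPY_LIC_DIRS = "1"

Archiving sources is only the first step, of course; the archived output still needs to be rebuilt on a clean machine to prove that it really is complete and corresponding.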

He recommends putting the CCS online and noting the URL in the manual. Lots of people think that the GPL requires that, even though it doesn't. But doing so makes compliance much easier. He doesn't want to have to test offers from the manual to get the source code; "your fulfillment department will screw this up", he said. You can avoid all of that by having the source code online.

If possible, help select the suppliers and ask them about CCS before buying from them. Companies can also demand legal indemnity from their suppliers. Verizon got indemnity from a supplier who promised not to put any open-source software into a product, which saved Verizon from being the target of an enforcement action when the device was found to contain both Linux and BusyBox.

For many years, BusyBox enforcement was used to require device makers to comply with the licenses of other GPL code they were distributing. Typically that was the Linux kernel. But the community was split about using BusyBox as a lever to get kernel sources: some kernel developers were unhappy about that, while others were supportive. Current BusyBox maintainer Denys Vlasenko convinced Kuhn that BusyBox should not be the only project carrying the load for enforcement and that Linux kernel developers should get involved as well.

Matthew Garrett had been asking Kuhn to help him enforce GPL compliance on the kernel for some time. Garrett is a kernel copyright holder, but for years it was just easier to continue doing enforcement with BusyBox. Once Kuhn changed his mind on that, it turned out there were other kernel developers interested in enforcing the GPL for the kernel as well. That led to the GPL compliance project for Linux developers, which was created in May 2012.

That compliance project has now spread beyond just BusyBox and Linux, with Samba and Mercurial joining up. SFC is also doing "passive enforcement" for other projects that have no known violations, but for which SFC will enforce compliance if any are found. Garrett and David Woodhouse have publicly stated that they engaged SFC to enforce the GPL, but there are other kernel developers (roughly a dozen in all) who have joined the effort anonymously. SFC is also in discussions with two "major free-software projects" about doing enforcement for them.

Kernel modules

But there is an "elephant in the room" with respect to Linux and the GPL: kernel modules. Kuhn and the SFC lawyers believe that Linux kernel modules are almost always derivative works of the kernel (and thus subject to the GPL). Many corporate lawyers disagree. Since there is limited case law in this area, there is little guidance; there are no general rules, so it comes down to the specific facts of each case.

Since both sides believe they are right, this is the kind of dispute that turns into a big court battle. Kuhn's "political opponents" call that battle "the ground war of GPL", he said. He believes that it is time to have that ground war. He also believes that a GPL case will go before the US Supreme Court in the next 20 years or so, he said when answering a question from the audience.

Kuhn wrapped up his talk with an invitation to anyone who has code upstream in the kernel to "join the coalition". For some, their employer will hold the copyright. Others may not want to enforce the license, which is fine as it is not required, he said, and he doesn't blame those who don't want to do so. But for those who do, he asked that they see him after the talk as he had brought along forms for them to sign.


Mozilla proposes a middle way in the net neutrality debate

By Nathan Willis
May 14, 2014

The concept of net neutrality has been a hot topic in recent weeks, following on the heels of statements by US Federal Communications Commission (FCC) Chairman Tom Wheeler that have been widely interpreted as hostile to net neutrality. Many tech-industry players came out against Wheeler's statements and called instead for the FCC to reclassify Internet service as a "telecommunications service", which would protect it against discriminatory prioritization schemes and similar deals. But Mozilla has proposed its own solution—one that involves a re-examination of Internet service itself, but that might have advantages over the more broadly advocated reclassification proposal. The essence of Mozilla's proposal is the recognition that Internet service providers (ISPs) have a service relationship not just with their subscribers and their networking peers, but with third-party, Internet-delivered businesses as well.

At the heart of the current net-neutrality debate is the concern that "last mile" ISPs—that is, those who sell connectivity directly to residential subscribers—will be allowed to slow down traffic from some Internet companies (such as Netflix) unless those companies agree to pay the ISP a "fast lane" access fee. Noteworthy in this scenario is that Netflix (or another Internet company) does not operate the network connecting its facilities to its customers. The ISP is connected to other networking providers—with whom it does have business arrangements for backbone service or peering connectivity—but it routes the traffic originating from Netflix (and other sites or services) to ISP subscribers simply because that traffic comes over the wire from the neighboring network. Netflix pays for the connectivity and bandwidth its servers require on its side, which may be many network operators removed from the subscriber's ISP.

The issue, then, is that in this scenario the last-mile ISP is interfering with some traffic and demanding extra fees to deliver it to subscribers, even though those subscribers and Netflix are already paying for the bandwidth that is actually consumed. These additional fees could be used by the ISP to make Netflix a more expensive service than an alternative (for example, DVR-ready cable TV programming also offered by the ISP). And the fee structure could be used to make it more expensive for any new service to start up and compete with the services already getting fast-lane treatment. A new Netflix-like competitor, for example, would always deliver a lower total bit rate (and, thus, lower video quality) than Netflix, simply because the last-mile ISP throttles that last segment of the network connection for the newcomer.

The fact that such business arrangements are even possible stems from the way the FCC legally regulates different communications services according to different sets of rules. ISPs have historically been classified as "information services" under the Telecommunications Act of 1996; while there are comparatively few regulations that address what "information service" providers can and cannot do, there are regulations that restrict what "telecommunications services" (such as phone networks) can do, particularly when it comes to discriminating on the basis of content. In 2010, the FCC laid out a set of network-neutrality rules in its Open Internet Order, but in January 2014 the D.C. Circuit Court of Appeals ruled that the FCC did not have the authority to impose many of those rules on ISPs because ISPs are not currently classified as "telecommunications services."

Thus, when word got out in mid-April about Wheeler's proposed new rules, which would allow "commercially reasonable" fast-lane deals, most of the response from net-neutrality advocates focused on convincing the FCC to reclassify ISPs from "information services" to "telecommunications services." That change would essentially clear the way for the Open Internet Order of 2010 to come back in full force.

Of course, one would certainly expect the ISPs to fight against such a reclassification as fiercely as their budgets would allow, since they stand to miss out on lots of revenue they would like to collect. But what makes Mozilla's proposal to the FCC different is that it does not hinge on the information-to-telecommunications reclassification.

Instead, Mozilla's proposal (which is available in full as a PDF document) argues that the current definition of what an ISP does is insufficient. The historical viewpoint has been that an ISP has two classes of business relationships: one with its subscribers, and one with the other network providers it is connected to (peers or backbone networks). Mozilla argues that there is in reality a third business relationship: one between the ISP and remote content providers (i.e., Netflix, Google, and any other service that is offered to the public through the Internet). The ISP's relationship to the subscriber could still be classified as an "information service," the argument goes, but it is offering a "telecommunications service" to the remote businesses:

[R]emote delivery services provided by last-mile network operators to arms-length edge hosts, allowing them to communicate with that operator’s subscribers, represent a distinct legal category of services from user-facing Internet access services and from interconnection and peering.

Re-evaluating the nature of the ISP business is not a small step, of course. Mozilla argues that the prior viewpoint, where subscribers and peering networks are the only relationships involved, is a relic of the earliest days of the Internet, when it was composed of multiple standalone networks, and ISPs routed traffic from (say) the edge of MIT's network to the edge of Stanford's network. Commercial Internet services changed that equation fundamentally:

This history is past, and gone with it is the assumption that it is sufficient to view a last-mile network operator as having only two duties, to interconnection/peering partners and to end users. Now, technology enables fine-grained network management creating potential commercial relationships with remote, arms length endpoints. Therefore, a last-mile operator must be viewed as having a separate duty with respect to remote endpoints, in addition to its duties to end users and interconnection/peering partners. Privity in network traffic management has been changed, fundamentally, through deep-packet inspection and other advanced network management technologies. And it is that change that the Commission must address.

No doubt, the uses of the Internet have evolved considerably since the early 1990s, but that reasoning alone would not likely be sufficient to make the FCC adopt a substantially different policy. Consequently, much of the proposal is devoted to showing that providing access to "remote delivery services" (the term the proposal uses for Internet businesses in the abstract) meets the specific definition of "telecommunications service" already set out by FCC rules and the Telecommunications Act of 1996.

Specifically, the proposal says, telecommunications services "must include a 'transmission'; it must be offered 'directly to the public'; and it must not include, or must be separated from, any additional information services." Internet traffic is certainly transmitted, the proposal says, and Internet businesses certainly offer their services directly to the public (regardless of how the public accesses the Internet).

Whether or not Internet businesses "integrate" with other information services is not as straightforward, but the proposal argues that the FCC has stated in the past that the "information service" component of the ISP business referred to particular features, "specifically domain name resolution, email services, hosting services, and other featured services." The remote services under consideration (like Netflix) do not handle those duties.

The final aspect of Mozilla's proposed change is that, by not requiring the reclassification of ISPs as telecommunications services, its proposal would allow the FCC to address net neutrality without overturning the agency's regulatory precedents and thus triggering the re-examination of the existing regulations and policies that govern ISPs. That is a practical argument, but for a government agency, not throwing decades of existing regulation into limbo is probably an appealing idea.

On May 6, the Mozilla blog post was updated to clarify a few questions that had circulated in the wake of the proposal's publication. For example, the proposed change would not place ISPs' peering and interconnection business under the "telecommunications service" banner. ISPs would still be permitted to create "fast lane" arrangements not in the "last mile," so content-delivery networks and various quality-of-service deals between providers would still be permitted. The proposed change would also not introduce any FCC regulation over Internet businesses themselves; it would instead protect them from unfair actions by ISPs.

Mozilla has also set up a wiki page to track the progress of its proposal. It is hard to say what the future holds, though. Interesting though the proposal may be, there is scant evidence available suggesting that it has swayed the FCC. The FCC is slated to vote on its new rules on May 15. For his part, Wheeler has already responded to the public backlash, indicating that the wording of the rules put forward to the FCC will see some sort of revision from what was disclosed in April. In particular, reports are that the revised rules would impose an outright ban on "fast lane" deals that prioritize content produced by an ISP's subsidiary.

The specifics, however, have yet to be seen. Moreover, whatever rules the FCC votes on—and however that vote turns out—recent events have shown that opponents of net neutrality will take the FCC to court if they dislike the outcome. At that point, many rulings, appeals, and petitions will surely follow, so it could be years before anything concrete is decided, much less put into action.


A setback for Google against Oracle

May 14, 2014

This article was contributed by Adam Saunders

The US Court of Appeals for the Federal Circuit (CAFC) has overturned [PDF] a district court ruling on API copyrightability in Oracle v Google, which makes for a dramatic reversal in that litigation. With Oracle the victor (for now), the judgment looks, at first glance, to be troubling for the US software industry. The case is far from over, however, so there could be more sharp turns as it continues its way through the court system. For one thing, the CAFC raised the question of "fair use", which could upend things again depending on what decision is made.

Much has happened since LWN last covered the case, which all started a few months after Oracle acquired Sun (who developed Java) in 2010. That's when Oracle sued Google in the US District Court for the Northern District of California for alleged patent and copyright infringement in its implementation of Java as part of the Android operating system.

In his decision [PDF] for that court, Judge William Alsup ruled that APIs are not subject to copyright. The jury also ended up throwing out the claims of patent infringement. Oracle appealed the copyright ruling (but not the jury's decision on the patents) to the CAFC, and last week it got what it hoped for: a complete reversal of that ruling. In its 69-page ruling, the CAFC demonstrated an excellent grasp of the technicalities of computer software, the Java programming language, and the differences between Oracle's Java and Google's Java-like system that powers its Android operating system. It also showed a strong grasp of American copyright case law and legislation, tracing the history of various doctrines in copyright law and explaining many of the basics along the way. If I were a US law professor teaching copyright, I would definitely include this ruling in my course syllabus, if only for the sheer depth of the decision.

Most interesting (and potentially distressing) for software developers and businesses — indeed, the entire US software industry — are the actual reasons why the CAFC overturned Judge Alsup's decision. In this case, after talks for a license to Java with Sun broke down, "Google copied the declaring source code from the 37 Java API packages verbatim [...] [and] wrote its own implementing code, except with respect to: (1) the rangeCheck function, which consisted of nine lines of code; and (2) eight decompiled security files" (pages 10-11). It is important to note, for the purpose of this ruling, that Google conceded this verbatim copying in court. Having reviewed the facts, the CAFC goes through argument after argument explaining why it reversed the lower court's ruling.
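The distinction between declaring and implementing code is easier to see with a small example. The following Java fragment is purely hypothetical (it is not taken from the Java class libraries or from the case record): the declaration is the part a caller compiles against, while the body is the part a clean-room reimplementer writes independently.

    package com.example.util;   // hypothetical package, for illustration only

    public final class Ranges {
        // "Declaring code": the class name, method name, parameter types, and
        // thrown exceptions that callers program against.
        public static void check(int length, int from, int to) {
            // "Implementing code": the body behind the declaration, which a
            // reimplementer can write from scratch while keeping the API above.
            if (from > to)
                throw new IllegalArgumentException("from(" + from + ") > to(" + to + ")");
            if (from < 0 || to > length)
                throw new ArrayIndexOutOfBoundsException();
        }
    }

The CAFC's position is that the declarations themselves, and the way they are named and organized across packages, can be protectable expression; rewriting only the bodies does not make the copyright question go away.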

As the CAFC was handling a copyright appeal from a court within the Ninth Circuit territory, it applied Ninth Circuit copyright doctrine. That includes an "abstraction-filtration-comparison" test to determine whether one program is "substantially similar" to another (the bar to meet for copyright infringement via copying): first, you abstract away the expression of the software idea from the idea itself; then, you filter out the parts of the software that aren't subject to copyright; and lastly, you compare the alleged infringing work with the plaintiff's work. The CAFC brings this test up to bat away the argument that APIs aren't copyrightable because they are functional works, as opposed to expressive forms of creativity like music: "This test rejects the notion that anything that performs a function is necessarily uncopyrightable" (page 23).

The CAFC threw out Google's "merger doctrine" defense (i.e. when an idea has only one viable form of expression, you can't distinguish between the two and therefore can't copyright either) on the grounds that Google could have rewritten the few thousand lines of Java source code and achieved the same effect: "nothing prevented Google from writing its own declaring code, along with its own implementing code, to achieve the same result" (page 32).

The CAFC also tossed out Google's "short phrases" defense regarding method declarations (basically, Google argued that you can't get a copyright on a short phrase). The court asserted that there was sufficient creative expression for them to meet the threshold of copyrightability: "Because Oracle 'exercised creativity in the selection and arrangement' of the method declarations when it created the API packages and wrote the relevant declaring code, they contain protectable expression that is entitled to copyright protection" (page 34).

The "scenes a faire" doctrine — meaning you can't get a copyright on certain expressions if they are "commonplace[,] [...] indispensable [...] and naturally associated with the treatment of a given idea," (page 35) — also doesn't help Google. That's because Google didn't demonstrate that the Java code it used was commonplace and indispensable in that fashion.

Rounding out the issue of copyrightability, the CAFC found the structure, sequence, and organization of Oracle's Java APIs to be "creative and original", as well as one of many possible ways to implement the Java programming language itself. The CAFC notes that, as this finding is consistent with the abstraction-filtration-comparison test it referred to earlier, it demonstrates the copyrightability of Oracle's Java APIs.

Although this is indeed a dramatic reversal of a ruling which seemed to adhere to the expectations of many in the software world — that APIs aren't copyrightable because of the need for interoperability — it is important to keep the ruling in perspective. First, this ruling only applies to courts within the Ninth Circuit, which encompasses most of the western US. Second, the court has not stated that interoperability and reimplementation of APIs without authorization are banned in that region. Indeed, Oracle itself makes this clear (as cited in the ruling): "Oracle claims copyright protection only in its particular way of naming and organizing each of the 37 Java API packages" (page 43). Thus a reimplementation with new source code, names, and structures would avoid the CAFC's ire. Unfortunately, such dramatic changes would leave you with a different API, so that leeway seems somewhat useless to software developers.

Third, the CAFC didn't rule on "fair use". The fair use doctrine gives courts some discretion to find that certain unauthorized uses of copyrighted works are not infringing. The test includes the following (which are not exclusive):

(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work.

The jury at the district court couldn't decide whether or not Google's use of Java was fair, and Judge Alsup didn't address the issue because he didn't think that the APIs were copyrightable at all. Since the CAFC (rightly) feels that it's more appropriate to let the lower court handle this issue, as it is much more familiar with the case as a whole, it has remanded the case to the district court to make a fair-use determination.

The CAFC did specifically note that Google may be able to persuade a jury that its use of Java was fair. Unfortunately, fair use as a defense in these situations won't prevent a significant chilling effect on independent software developers and smaller companies who may have been interested in reimplementing APIs as part of a commercial project. Indeed, this ruling may have opened the floodgates for "API trolling" within the Ninth Circuit.

It's worthwhile to note, as the FSF has, that Google could have avoided this legal mess by using a GPL-licensed version of Java, such as IcedTea. But Google chose not to. Given last week's ruling, it seems possible that there are people on Google's Android team feeling some regrets about that decision.

It's also important to realize that this case is not closed. With the fair-use issue sent back to the district court for a ruling, and with an appeal of the CAFC's copyrightability decision to the Supreme Court available to Google, this litigation isn't going to stop anytime soon. More twists and turns are possible in the coming months and years.


Page editor: Jonathan Corbet

Inside this week's LWN.net Weekly Edition

  • Security: ClamAV 0.98.3; New vulnerabilities in kernel, libxfont, owncloud, xen, ...
  • Kernel: Braking CPU hotplug; Supporting Allwinner SoCs.
  • Distributions: A Cyanogenmod 11.0 M6 test drive; GoboLinux 015; ...
  • Development: Race detection with ThreadSanitizer 2; PyPy 2.3; Emacspeak 40.0; Card-sorting KDE's system settings; ...
  • Announcements: Oracle’s Java API code protected by copyright; Oracle circumvents EXPORT_SYMBOL_GPL(); Firefox gets closed-source DRM; ...
