Leading items

The state of software patents after the Alice decision

By Nathan Willis
July 13, 2016

TXLF

In 2014, the Alice Corp. v CLS Bank International decision by the US Supreme Court struck a significant blow against software patents. But it was not the end of the fight for advocates of reform, as Deb Nicholson from the Open Invention Network (OIN) explained in her talk at Texas Linux Fest 2016 in Austin. The Alice decision resulted in numerous patent re-examinations and in changed expectations, but it did not eliminate the threat of patent-infringement lawsuits—nor are software-patent proponents finished fighting back.

Narrowing patentability

The key facet of the ruling, Nicholson said, was that the court set out a new, two-part test for determining whether or not an invention is patentable. The first part of the test is whether the invention is primarily an abstract concept—a mathematical formula, a mental process, an economic practice, a law of nature, and so on. The second part of the test is whether or not the non-abstract part of the claim is merely "foo on a computer" or "foo over the network."

The test, she explained in response to an audience question, is in addition to the existing tests that a claimed invention must pass to be considered patentable. The invention must also be non-obvious, for example, and it must be new, be possible in the real world, and not break the law. "You can't get a patent for how you bury bodies that you've killed or how you cheat on your taxes," she said.

Together, the two halves of the Alice test mean that the court no longer considers broad or simplistic ideas implemented in software to be a patentable invention, which radically reduces the possible scope of software patents. But, Nicholson said, that doesn't mean the scope of patentability is now "small"; it's just that two years ago it was much worse.

In the wake of the ruling, though, many software patents and applications have been affected. Between July 1 and August 15, 2014, according to one study, 830 patent applications were withdrawn. New patent-infringement lawsuits are down by 40%, and in the first year, courts invalidated 286 patents or patent applications out of the 345 that were reviewed. That works out to an invalidation rate of 82.9%, a figure which, she noted, has been widely repeated—often in less precise terms. Opponents of patent reform, it seems, get a lot of leverage out of warning businesses that "80% of patents may be invalid" under the new rules.

She then cited a few examples of patents thrown out under the Alice test. Digitech Image Technologies, for instance, had a patent that amounted to "combining two data sets in software"; it was invalidated by a circuit court in July 2014. Planet Bingo had a "bingo over the Internet" patent that was invalidated in August 2014. buySAFE held a patent on, essentially, "creating a contractual relationship online" that was invalidated in September 2014.

Other factors

Although the Alice decision was a big step forward for reform, Nicholson pointed out that it did not take place in a vacuum. Other changes in recent history have also improved the patent landscape.

First, other court cases have revised the patent-eligibility process, too. The Mayo v Prometheus ruling said that making observations and "using a little intelligence" did not amount to a patentable invention. That case revolved around a big pharmaceutical ("Big Pharma") patent on monitoring a patient's vital signs and adjusting dosage size in response. In Octane Fitness v Icon Health, the court ruled that predatory plaintiffs who lose must pay the defendant's legal fees, which changes the profitability equation for infringement suits. The Akamai v Limelight decision struck down the notion that plaintiffs could sue the end users of a web service in addition to suing the site's owner. Subsequent cases have upheld these rulings.

Second, the U.S. Patent and Trademark Office (USPTO) has made changes of its own. It is placing a renewed focus on assessing the quality of patents, so it is instructing examiners on what to look for. The USPTO has been hosting free webinars educating participants on patent quality, and has even issued memos clarifying the two-part Alice test.

Nicholson said there has been pushback from software-patent proponents as well, however, which indicates the impact Alice has had. "I don't know if you read patent lawyers' blogs," she said, "but they're pretty hilarious." In addition to the "80% of patents are invalid" claim mentioned earlier, she cited several other reactions from patent lawyers, including one who had posted a list of "words to avoid" in a patent application (such as "computational" and "business process") and another complaining that the Alice test was "intentionally biased" against software patents.

The unchanged

Nevertheless, she said, the Alice ruling does not retroactively erase 20 years of bad software-patent history. The old, terrible patents already granted remain in force until they are invalidated in a court ruling or through a USPTO re-examination, "so they can still be used as a stick to beat people with."

Patent-infringement lawsuits are still profitable for plaintiffs and still expensive for defendants. As a result, the settlements offered by patent trolls are (intentionally) still cheaper than fighting back. When defendants do fight back, plaintiffs have an array of tactics available to make the suit more expensive. For instance, there is "discovery abuse," where the plaintiff requests heaps of essentially meaningless documents simply to make the defendant incur more costs.

The Alice case also did not make "good" software patents go away. The USPTO can still find a claim to be patentable under the new test and, in fact, with the harder test now in place, a "good" patent may be more difficult to fight. On that note, the ruling did not do anything to curb "jurisdiction shopping," either. The Eastern District of Texas is still where half of infringement cases go to trial, and it is still the friendliest venue for plaintiffs.

There are new challenges to the Alice test making their way through the court system that could roll back some of the progress of the past two years. One is the McRo v Namco case, which centers around a software patent for streamlining lip-syncing in computer animation. Many people seem to regard the patent as a good one, and a ruling might be made as early as the Federal Circuit court's fall session, with possible ramifications for other software patents.

Finally, a lot of patent-reform legislation has been proposed, but none of it has been passed. Bills such as the Innovation Act and the PATENT Act could curtail practices like discovery abuse if they are enacted.

Broader challenges

There are yet other fronts on which software-patent opponents need to be vigilant, Nicholson said. The USPTO's inter partes review (IPR) process allows anyone to challenge the validity of a patent. Now, patent-reform opponents are challenging that procedure. Big Pharma is lobbying to make IPR only accessible to "experts." Other court cases from the technology sector have challenged the constitutionality of IPRs (though IPRs were deemed constitutional) and have argued to have different standards applied in the review process than in the patent application process.

Worse yet, she said, Big Pharma has finally figured out that it has something to gain by teaming up with Big Software—or, rather, with Big Software-Patent companies. The two industries are now running joint lobbying operations.

In addition, the global landscape is still difficult for advocates of software-patent reform. The patent systems in many other countries have not caught up to the Alice decision and may not for quite some time. Japan and China are catching up to the U.S. in the number of patents granted annually. And although "Europeans love to say they don't have software patents when I go over there," she added, "they actually do." There are different levels of patentability in each country, she said. Germany recently granted a software patent to Image Stream, for example, and there are many software patents held by Nokia.

Further complicating the international challenge is the effect of treaties and trade agreements. Recently, Eli Lilly (a U.S. pharmaceutical company) sued the government of Canada for violating the North American Free-Trade Agreement (NAFTA), claiming that Canada was denying it free trade by refusing one of its patents. That suit will be heard in an international court that does not even publish its case schedule, so there is little information available. The Trans-Pacific Partnership (TPP) could have an even greater impact on patents if it is enacted.

In conclusion

Combating software patents—and other abuses of the patent system, like design patents—is a long-term process, Nicholson reminded the audience. OIN runs several programs it hopes will protect free-software developers from the ills of bad patents, such as its Linux patent pool, the License On Transfer Network, and Defensive Publications.

But Nicholson told the crowd there are other ways they can help improve the patent landscape in the long term, too. They can contribute to the campaigns run by non-profit organizations like the Electronic Frontier Foundation and the Free Software Foundation, she said. Both are working to oppose the software-oriented provisions in the TPP, for example, among their other activities.

Individuals can also be powerful advocates for change within their own companies, pushing them to embrace a defensive, rather than offensive, approach to patents. They can voice their support for the pending patent-reform legislation to lawmakers. Finally, they can continue to advocate for free and open-source software. The more we collaborate, Nicholson said, the less we'll want to sue each other.


Docker adds orchestration and more at DockerCon 2016

July 13, 2016

This article was contributed by Josh Berkus

DockerCon 2016, held in Seattle in June, included many new feature and product announcements from Docker Inc. and the Docker project. The main keynote of DockerCon [YouTube] featured Docker Inc. staff announcing and demonstrating the features of Docker 1.12, currently in its release-candidate phase. As with the prior 1.11 release, the new version includes major changes in the Docker architecture and tooling. Among the new features are an integrated orchestration stack, new encryption support, integrated cluster networking, and better Mac support.

The conference hosted 4000 attendees, including vendors like Microsoft, CoreOS, HashiCorp, and Red Hat, as well as staff from Docker-using companies like Capital One, ADP, and Cisco. While there were many technical and marketing sessions at DockerCon, the main feature announcements were given in the keynotes.

As with other articles on Docker, the project and product are referred to as "Docker," while the company is "Docker Inc."

Catching up: Docker 1.11

In version 1.11, the project almost entirely restructured how Docker works in order to pave the way for later features. Prior to that release, the Docker daemon, container manager, and container runtime were a unified program with a single API.

Docker 1.11 separated these functions into three pieces: the Docker Engine takes commands from the UI and passes the appropriate commands to the containerd daemon, which starts each container using the runC binary. Notably, runC is the first container runtime built according to the specification from the Open Container Initiative. This restructuring caused some problems, especially with external software integration, and meant that few new features were added in 1.11.
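
For readers who want to see the split on a running host, a rough check is to look at the process list; the process names mentioned below are the ones shipped in the 1.11 and 1.12 Linux packages, so other packagings may differ:

    # list the daemon, containerd, and per-container shim processes
    $ ps axf | grep -E 'docker|containerd|runc'

One would typically see the Docker daemon with a docker-containerd child plus one docker-containerd-shim per running container; runC itself exits once it has set each container up, so it usually does not appear in the listing at all.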

The architecture changes also delivered some strong benefits, not the least of which was an alpha release of "native" versions for Mac and Windows platforms in March. These versions use the built-in hypervisor support included in those operating systems to run Docker under a Linux kernel, instead of using VirtualBox as the prior Docker Toolbox and other solutions did.

Docker 1.12 and built-in orchestration

In contrast to the "big break-up" in the prior version, 1.12 will involve integrating what had been separate tools into the Docker Engine. Docker founder Solomon Hykes explained how and why Docker is integrating container-orchestration features that had previously been included only as external tools. According to him, the developers felt that existing orchestration tools had "solved the problem," but were "usable only by experts." Orchestration consists of scheduling and managing deployment of containerized microservices across a group of servers.

[Solomon Hykes]

The goal in integrating more things into Docker was to make orchestration usable by non-experts. As such, in Docker 1.12, a full suite of orchestration features based on Docker's previous generation of tools, primarily Swarm and Compose, will be integrated into the Docker Engine. These orchestration changes consist of four major features:

  • Swarm mode
  • Cryptographic node identity
  • A new service API
  • A built-in network routing mesh

Users can enable Swarm mode in Docker 1.12 to have each node join a named cluster of nodes. This causes the Docker Engine to start up a built-in distributed configuration store (DCS), which shares information among the nodes in the cluster using the Raft consensus algorithm. Other orchestration tools use external DCSes such as etcd or Consul to store cluster metadata. Hykes said that setting up a separate DCS was a significant barrier to deployment for many users.

The second feature, cryptographic node identity, actually encompasses a bunch of encryption features added to Swarm mode. This includes cryptographic keys identifying each node, built-in TLS-encrypted communication, and fully automated key rotation. All of that depends on an integrated public key infrastructure (PKI) feature that is now also part of Docker Engine. Hykes said that this creates a completely secure system by default.

Docker 1.12 also includes a new service API that allows developers and administrators to define applications as services, so that they can be deployed to a Swarm cluster. The services facility includes support for application health checks and auto-restart of failing containers. This seems to work very similarly to Deployments in Kubernetes.
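
As a minimal sketch of what defining a service looks like (the flags are from the 1.12 command line; the "web" service name and nginx image are just placeholders), restart behavior is requested when the service is created:

    # Keep three replicas running and restart any container that fails:
    docker service create --name web --replicas 3 \
        --restart-condition on-failure nginx

Health checks, by contrast, are declared in the image itself using the HEALTHCHECK Dockerfile instruction that was added in 1.12.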

[Andrea Luzzardi & Mike Goelzer]

The last piece of the new orchestration stack is what Hykes called a "routing mesh." The project has added a built-in network overlay and DNS-based service discovery for containerized services, similar to CoreOS's Flannel. This new feature supports built-in load balancing and works with external load balancers. According to Hykes, this is implemented using Linux IP Virtual Server (IPVS) for performance and stability.

Simple orchestration demo

Andrea Luzzardi and Mike Goelzer of Docker Inc. demonstrated the new orchestration features by setting up a three-node Swarm and deploying services to it. Luzzardi started from a new machine running Docker 1.12, and initialized the first node:

    # ssh node-1
    node-1# docker swarm init
    Swarm initialized: current node is now a manager

This creates a one-node "cluster." Adding more nodes requires telling each of them to join by connecting to that first node, by DNS name, on port 2377:

    # ssh node-2
    node-2# docker swarm join node-1:2377
    This node joined a Swarm as a worker.

Deploying a containerized microservice to this cluster uses the new service command. Luzzardi showed deploying the Instavote Python container from Docker Hub, and had it listen on port 8080 in the cluster:

    node-1# docker service create --name vote -p 8080:80 instavote/vote

He then showed that you could connect to the web service on any node on port 8080. The service can be "scaled" using the same service command. For example, the command below scales up to six containers by adding five more:

    node-1# docker service scale vote=6
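
As a hedged illustration of what the audience saw (the node names simply follow the demo's three-node cluster), the result can be checked from any member of the Swarm, since the routing mesh publishes the port on every node:

    node-1# docker service ls          # the "vote" service should now show 6/6 replicas
    node-3# curl http://localhost:8080 # answered even on a node running none of the containers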

Luzzardi and Goelzer finished by showing automated redeployment of containers on node failure. They also demonstrated rolling updates of container versions.

Docker for Mac and Windows

"Native" Docker for Mac and Windows has been available since March in an invitation-only beta. Hykes introduced a new release of Docker for Mac that came about from the feedback, bugs, and test cases submitted by the beta testers. Tester reports were invaluable, especially for troubleshooting hardware compatibility.

According to Hykes, creating Docker for Mac and Windows required hiring new engineers with deep systems knowledge, which is why Docker Inc. acquired Unikernel Systems in January. The company also made use of hires out of the gaming industry for user-experience improvements. He promised a "seamless" developer experience.

Aanand Prasad, an engineer at Docker Inc., demonstrated the new Mac integration. He live-debugged the Instavote demo application, showing how the running application could be reloaded after its code was edited in a desktop editor on the Mac. This gives Mac users an experience similar to that of programmers on Linux desktops.

As of DockerCon, Docker for Mac and Windows are now public betas.

Comparisons with other tools

The orchestration features in Docker 1.12 are quite similar to orchestration features offered by existing tools, such as Kubernetes and Mesos Marathon. For example, Kubernetes offers service deployment and auto-failover, encryption support, rolling updates, pluggable network overlays, and service discovery. The older version of Docker Swarm also has some of those.

This is in line with Hykes's keynote. He emphasized that Docker engineers haven't invented anything new; instead, they've made complex infrastructure that was already available easy to use. "We're making powerful things simple," he said.

Further, version 1.12 will enable Docker Inc.'s own tools to reach near-parity on orchestration with tools offered by other companies or externally governed open-source projects. Since Docker Swarm and Compose had previously lagged considerably behind competing solutions in features, this puts a lot of pressure on projects like Mesos and Kubernetes to add features and address ease-of-use issues. Kubernetes seems to be focusing on adding features; version 1.3 was released in early July and includes many new configuration options for microservices as well as enhancements to scalability.

Hykes also assured attendees that the older Swarm and Docker Compose APIs would continue to work and be supported.

Docker 1.12 is currently in its third release candidate. The Docker for Mac and Windows betas include version 1.12. Linux users will need to get the 1.12 RC by downloading the "experimental" Docker packages.

Public clouds and the future of Docker

Hykes finished up by announcing integrated public cloud tools: "Docker for AWS" and "Docker for Azure." These two offerings automate deployment of the new Docker Swarm on Amazon Web Services or Microsoft Azure, respectively, including integration with accounts, permissions, and network security. People can apply to test these by requesting an invitation on the Docker web site.

The tools and features announced at DockerCon 2016 once again change the landscape of container tools. The near-native Mac and Windows versions remove what was perhaps the largest barrier to wider adoption of Docker by developers as their main deployment technology. It's possible that they also remove a strong reason for developers to move to Linux on the desktop.

The container ecosystem is still fast-moving and changing substantially every few months. While it's hard to know what to expect in the next three or four months, we know that we can expect it to be different.

[ Josh Berkus works on container technology for Red Hat. ]


Core improvements in digiKam 5.0

By Nathan Willis
July 13, 2016

Version 5.0.0 of the digiKam image-management application was released on July 5. In many respects, the road from the 4.x series to the new 5.0 release consisted of patches and rewrites to internal components that users are not likely to notice at first glance. But the effort places digiKam in a better position for future development, and despite the lack of glamorous new features, some of the changes will make users' lives easier as well.

For context, digiKam 4.0 was released in May of 2014, meaning it has been over two full years since the last major version-number bump. While every free-software project is different, it was a long development cycle for digiKam, which (for example) had released 4.0 just one year after 3.0.

The big hurdle for the 5.0 development cycle was porting the code to Qt5. While migrating to a new release of a toolkit always poses challenges, the digiKam team decided to take the opportunity to move away from dependencies on KDE libraries. In many cases, that effort meant refactoring the code or changing internal APIs to directly use Qt interfaces rather than their KDE equivalents. But, in a few instances, it meant reimplementing functionality directly in digiKam.

A relatively simple example is found in what happens when the user deletes an image. In digiKam 4 and earlier, the deleted file would be moved to the KDE trash directory, removing it entirely from digiKam's internal library. In digiKam 5.0, the program maintains its own internal "trash" folder; deletions are simply staged there until the user empties the trash. For many users, this means it is now easy to undelete an image for the first time.

A bigger change was required for the database interface. The old digiKam releases used KDE's KIO library, primarily because early versions of SQLite (which was digiKam's database storage backend) were single-threaded and would slow digiKam to a crawl. Subsequently, however, SQLite has gained robust multi-threading support. digiKam 5.0 now talks to the database layer directly, removing another dependency. Quite a few of digiKam's image-manipulation and export plugins also used KIO; they, too, were ported away from that library, although they still follow the KDE Image Plugin Interface (KIPI) API.

Whether or not Linux users will notice any performance or resource-usage improvements as a result of the migration away from KDE libraries remains to be seen, but one major benefit of the migration work is that digiKam now runs on Mac and Windows systems with essentially full feature parity with the Linux builds—and significantly better stability. The release announcement points out that the OS X builds still rely on some external packages provided by MacPorts, but the Windows builds can be compiled and run standalone.

In the long term, the plan is to make digiKam a pure-Qt application. That work is not yet complete, but the release notes estimate it as "at least 80%" finished.

Features

Porting aside, there are several other interesting new features in the 5.0 release. One is a tool to tweak image colors using 3D look-up tables (LUTs). This method of altering colors is an extension of how files are normally converted from one color space to another; it is most familiar to many users from the filter effects found in Instagram and other mobile-phone camera apps.

Another change is that digiKam's DNG (Digital Negative) conversion tool has been migrated into the batch-job manager. DNG was designed to be a catch-all superset of camera raw file formats, so it is usually used as a conversion target. Consequently, users are most likely to need it when importing sets of images, so not having it available for batch-processing jobs created a stumbling block.

Several of the other image-management tools, such as the metadata editor and the geolocation editor, have now been made available from every mode of the digiKam interface (that is, in the single-image-editor mode, the gallery mode, and in the lightweight "ShowFoto" editor mode). This, too, fixes an inconvenience rather than adding new functionality, but it will likely be appreciated by many.

A more substantial addition is the return of support for storing digiKam's databases in MySQL or MariaDB. The program uses several databases by default (storing preview data, for instance, separate from image metadata), but users can also configure multiple databases for different collections if they desire. More than five years ago, there was an initial implementation of MySQL support, but the developer maintaining that code departed and it began to bit-rot. That left digiKam with SQLite as its only supported database backend, which was not suitable for collections exceeding 100,000 items.

There is now a new database maintainer, who has cleaned up and modernized the code. Users can select MySQL or MariaDB as a database option from the very beginning (previously, one had to create a SQLite database first, then convert it). Remote database servers are supported in addition to local connections. Along the way, the digiKam database schemas were altered, although migration from the old schema to the new should happen automatically when upgrading.

Fundamentals

The updated database functionality may sound like a minor detail—after all, 100,000 images sounds like a lot. But that database limit actually applies to everything in digiKam's database, including tags, metadata, and even face-recognition information, so far fewer photos were needed to bump into the practical limits of SQLite. As a result, the change helps bridge the gap for high-end users—who are ostensibly digiKam's target audience. There are several other workflow improvements in the new release, such as a "lazy" resynchronization tool that will push metadata updates out to the database opportunistically rather than interrupting the user to wait for a resync, and revisions to the metadata settings panel.

The awkward truth of image management in free-software circles is that so few people actually use a dedicated image-management application. Ask any circle of active photographers in a conference or hackathon setting, and you are likely to hear that the majority simply keep track of their images in a directory hierarchy organized by date, perhaps with "star" ratings keeping track of the best pictures from within their image editor of choice.

DigiKam is, in theory, one of the best tools available for imposing more order on image collections than a filesystem alone can. But, to all appearances, factors like the limitations of SQLite and the slowness of KIO have, for several years, hindered its adoption by the users who take the most photographs. As a result, the changes in digiKam 5.0 are some of the most important that the project has implemented in some time—even if they appear low-key from the outside.


Page editor: Nathan Willis


Copyright © 2016, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds