
LWN.net Weekly Edition for May 30, 2014

Should the IETF ship or skip HTTP 2.0?

By Nathan Willis
May 29, 2014

As the Internet Engineering Task Force (IETF) moves closer to finalizing the HTTP 2.0 standard (a.k.a. HTTP/2), there is a counter-call for the as-yet-unreleased standard to be dropped. Proponents of that move contend that effort should be put into a follow-up that fixes several problems that (it is argued) are intrinsic to HTTP 2.0. An Internet standard never seeing the light of day is nothing new, of course, but it is still an understandably difficult decision to make for those people who have put in considerable time and effort.

HTTP/2 has been in development since 2007. As we have noted before, the revised protocol adds techniques like request multiplexing and header compression to how HTTP is sent over the wire, but it intentionally does not change the semantics for requests, response codes, and so on. The goal is to better optimize HTTP traffic flow, such as reducing latency. The initial draft of the new specification was derived from Google's SPDY, which began as an in-house experiment.
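
To make the framing change concrete, here is a minimal Python sketch of a length-prefixed, stream-tagged frame of the sort HTTP/2 uses to multiplex several requests over one connection. The field names and widths below are purely illustrative and are not taken from any draft of the specification; the point is only that tagging each frame with a stream identifier lets independent requests and responses share a single TCP connection.

    import struct

    # Toy frame layout (illustrative only, not the real HTTP/2 wire format):
    #   2 bytes  payload length
    #   1 byte   frame type (e.g. 0 = DATA, 1 = HEADERS)
    #   1 byte   flags
    #   4 bytes  stream identifier
    HEADER = struct.Struct("!HBBI")

    def pack_frame(frame_type, flags, stream_id, payload):
        """Serialize one frame; frames from different streams can be interleaved."""
        return HEADER.pack(len(payload), frame_type, flags, stream_id) + payload

    def unpack_frames(data):
        """Yield (type, flags, stream_id, payload) tuples from a byte buffer."""
        offset = 0
        while offset + HEADER.size <= len(data):
            length, ftype, flags, stream_id = HEADER.unpack_from(data, offset)
            offset += HEADER.size
            payload = data[offset:offset + length]
            offset += length
            yield ftype, flags, stream_id, payload

    # Two requests interleaved on one connection; the receiver sorts them out
    # by stream identifier, so one slow response need not block the others.
    wire = pack_frame(1, 0, 1, b"GET /index.html") + pack_frame(1, 0, 3, b"GET /style.css")
    for ftype, flags, stream_id, payload in unpack_frames(wire):
        print(stream_id, payload)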

Today, most browser vendors and quite a few high-traffic web services (including Google) support the yet-to-be-finalized HTTP/2. That, plus perhaps general fatigue with the length of the standardization process, has led some people to ask that the new revision be declared done and given the official IETF seal of approval. On May 24, HTTP Working Group Chair Mark Nottingham proposed marking the latest draft as an Implementation Draft at the HTTP Working Group's upcoming June meeting, then issuing a Last Call (LC). If no major objection is raised after the LC, HTTP/2 would make its way out of the working group and toward standardization by the Internet Engineering Steering Group (IESG), which makes the final call.

But not everyone in the HTTP Working Group is satisfied with the state of the HTTP/2 draft, and some of the criticisms run deep. In reply to Nottingham, Greg Wilkins said that "I do not see a draft that is anywhere near to being ready for LC [Last Call]." He enumerated what he sees as four major problems:

  • The state machine described for processing multiplexed HTTP streams does not match the states that the rest of the specification describes for HTTP/2 streams.
  • There are unsolved problems with the HPACK header compression algorithm, including inefficiencies and risks that incorrect implementations will leak information (a toy sketch of the indexing idea behind HPACK appears after this list).
  • The protocol allows data to be included in HTTP headers; since headers are not subject to flow control, segmentation, or size limits, malicious parties could exploit this to unfairly monopolize a connection.
  • There is no clear layering between HTTP/2 frames, requests, and streams.
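
To give a rough sense of what HPACK-style compression does (and of the shared state that makes it tricky), here is a toy Python illustration of the indexing idea only: a header seen for the first time is sent literally and remembered in a table on both ends, while later occurrences are sent as a short table index. This is a conceptual sketch, not the actual HPACK algorithm, which adds a static table, size limits, eviction, and Huffman coding; keeping that connection-wide shared state consistent is part of what implementations must get exactly right.

    class ToyHeaderCompressor:
        """Toy dynamic-table compressor: not HPACK, just the indexing idea."""

        def __init__(self):
            self.table = []          # table of (name, value) pairs seen so far

        def encode(self, headers):
            out = []
            for pair in headers:
                if pair in self.table:
                    # Already in the table: send a small index instead of the text.
                    out.append(("index", self.table.index(pair)))
                else:
                    # First occurrence: send it literally and remember it.
                    out.append(("literal", pair))
                    self.table.append(pair)
            return out

        def decode(self, encoded):
            headers = []
            for kind, item in encoded:
                if kind == "index":
                    headers.append(self.table[item])
                else:
                    headers.append(item)
                    self.table.append(item)
            return headers

    # The second request reuses table entries from the first, so repeated
    # headers (cookies, user-agent, ...) shrink to a few bytes per request.
    enc, dec = ToyHeaderCompressor(), ToyHeaderCompressor()
    first = enc.encode([(":path", "/"), ("user-agent", "demo")])
    second = enc.encode([(":path", "/"), ("user-agent", "demo")])
    print(dec.decode(first))
    print(dec.decode(second))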

Several other members of the group concurred with Wilkins's concerns. Nottingham replied that there was still time to fix problems in the specification, but said that the pressure from implementers to establish and adhere to a schedule was important to consider, too:

We’re clearly not going to make everyone satisfied with this specification; the best we can do is make everyone more-or-less equally dissatisfied. Right now, I’m hearing dissatisfaction from you and others about spec complexity at the same time I’m hearing dissatisfaction from others about schedule slips...

As to the specific complaints, Nottingham acknowledged that some of the pieces may not be ideal, but remain the best that the participants have been able to create. HPACK, for instance, "is more complex than we’d like, in that there isn’t an off-the-shelf algorithm that we can use (as was the case with gzip)." Yet, after repeated discussions, the group has always decided to stick with it. Moreover, there have already been discussions about HTTP/3, and "while there was a ~15 year gap between HTTP/1.1 and HTTP/2, it’s very likely that the next revision will come sooner."

On May 26, Poul-Henning Kamp posted a rather pointed response (titled "Please admit defeat") to Nottingham's email, specifically the prospects for a sequel to HTTP/2. If the working group already knows that HTTP/2 will require a follow-up in HTTP/3 to fix important problems, he said, then the group should simply drop HTTP/2 and develop its successor.

The WG took the prototype SPDY was, before even completing its previous assignment, and wasted a lot of time and effort trying to goldplate over the warts and mistakes in it.

And rather than "ohh, we get HTTP/2.0 almost for free", we found out that there are numerous hard problems that SPDY doesn't even get close to solving, and that we will need to make some simplifications in the evolved HTTP concept if we ever want to solve them.

Now even the WG chair [publicly] admits that the result is a qualified fiasco and that we will have to replace it with something better "sooner".

Kamp argued that pushing out HTTP/2 would waste the time of numerous implementers, as well as introduce code churn that may carry unforeseen security risks. Unsurprisingly, Nottingham did not concur with that assessment. In addition to suggesting that Kamp's wording overstated matters (taking issue, for instance, with the "fiasco" sentence quoted above), Nottingham replied that HTTP implementers feel that the protocol draft is close to being ready to ship, despite any shortcomings. At this stage, he said, technical proposals are what are required.

Nottingham also pointed out that one of Kamp's objections was that HTTP/2 leaves unfixed some bad semantics that have been around since HTTP/1.1. Many people might agree, Nottingham said, but changing the semantics of HTTP is specifically out of scope for the HTTP/2 effort, since it would break compatibility with existing browsers and web servers.

There may be quite a few things about HTTP that still need fixing after HTTP/2, but that is one of the reasons Nottingham cited for wrapping up the HTTP/2 standardization process: once it is completed, the community can move on. There has been a fifteen-year (and counting) gap between HTTP/1.1 and HTTP/2; the longer the gap, the harder it is to avoid breaking compatibility with existing implementations, if for no other reason than that there are simply more browsers and sites.

At this point, it is still possible that HTTP/2 will undergo more revision before it makes it to the final stage of standardization. Several other working group members had concerns about HPACK, and it has been proposed that the compression algorithm be made a negotiable parameter, so that future revisions could drop in an improvement. What does seem clear, however, is that HTTP/2 is moving forward even if not everyone is satisfied with it.

Lack of universal agreement, of course, is not an uncommon problem with standards. As Nottingham noted, the browser and web-server vendors are more-or-less ready to see HTTP/2 reach official approval—which would seem to place HTTP/2 well in line with the IETF's longstanding mantra of "rough consensus and running code." There may indeed be problems that are not discovered until implementation is widespread; perhaps the best option for dealing with them will be to start work on solutions well before another fifteen years have elapsed.

[Thanks to Paul Wise and James Andrewartha for bringing this story to our attention.]


Virtual data center management with oVirt 3.4

May 29, 2014

This article was contributed by Brian Proffitt

In the world of virtualization, there's little doubt that cloud computing gets the lion's share of attention. Made popular by public services offered by Amazon and the OpenStack cloud platform, cloud computing remains an attractive option for many IT managers looking to consolidate their physical hardware and get their applications deployed as efficiently as possible.

But cloud may not always be the best answer. IT departments can tap into the benefits of virtualization without undergoing the sometimes-painful migration to cloud computing. Another answer lies in the realm of technology known as virtual data center management.

The short way of describing virtual data center management is "cloud computing without the elasticity." This may seem overly simplified, until you realize that all cloud computing is, and always has been, a tool set to manage the deployment and maintenance of virtual machines. What makes cloud "cloud" are the additional features that enable developers and system administrators to spin up additional virtual machines in an automated way, usually based on demands from the applications themselves. This is the baseline description of elasticity, and elasticity is the secret sauce of cloud.
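
As a toy illustration of the distinction (and nothing more), an elastic platform runs a control loop along these lines on its own, while a virtual data center leaves the same grow-or-shrink decision to an administrator. The load numbers and VM names here are made up for the example.

    def measure_load(vms):
        """Placeholder for polling real application metrics."""
        return 0.9 if len(vms) < 3 else 0.5

    def autoscale_step(vms, high=0.8, low=0.2):
        """One pass of a toy elasticity loop: grow or shrink the pool on demand."""
        load = measure_load(vms)
        if load > high:
            vms.append("vm%d" % (len(vms) + 1))   # demand is up: add capacity
        elif load < low and len(vms) > 1:
            vms.pop()                             # demand is down: give capacity back
        return vms

    pool = ["vm1"]
    for _ in range(4):
        pool = autoscale_step(pool)
    print(pool)   # the pool grew to three VMs with no administrator involved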

There are lots of instances of IT shops migrating their systems to public or private cloud platforms and proudly reporting that they are "in the cloud." If the applications and services that are running on these cloud systems are indeed taking full advantage of the automation and elasticity cloud affords, then they are accurate in their self-descriptions. But if they are not using elasticity, then essentially what they have created is a virtual data center.

This leads to the big questions that should be answered: why go to the bother of setting up and deploying a cloud platform when you aren't going to use it properly? Why not use a virtual data center platform that has everything you need and is much easier to install and configure?

The virtual data center landscape

In the open-source arena, there are several tools that fit this description: XenServer, a CentOS-based distribution that uses XAPI as the core technology to manage virtual machines; Ganeti, Google's cluster-management platform that's designed to handle both KVM and Xen virtual machines; and Proxmox Virtual Environment, a Debian-based distro that's dedicated to KVM virtual machine and container management.

There are also proprietary systems out there: the ones that come up the most are VMware vSphere and virtual machine management with Microsoft's Hyper-V. They are decent tools, but you will be paying quite a bit to keep your licenses for those platforms going.

Then there is oVirt, the virtual data center management platform that is the upstream for the Red Hat Enterprise Virtualization (RHEV) product and Wind River Open Virtualization.

oVirt is the very definition of a virtual data center management platform, enabling users to manage their virtual machines by coordinating those machines at whatever level they choose. Whole data centers of virtual machines can be managed, as well as clusters, virtual networks, physical hosts, and virtual machines ... right down to the individual disks being used. Management can be done either through the administrator web portal, or via the command-line interface, REST API, or the oVirt software development kit.
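
As a rough sketch of the REST API route, the snippet below asks an engine for its list of virtual machines using Python's requests library. The host name and credentials are placeholders, and the /api path, authentication scheme, and certificate handling all depend on the oVirt version and installation, so treat this as an illustration rather than a recipe.

    import requests

    # Placeholder values: substitute the real engine address and credentials.
    ENGINE = "https://engine.example.com/api"
    AUTH = ("admin@internal", "password")

    # The 3.x-era REST API returns XML; verify=False skips certificate checking
    # and is only acceptable for a quick experiment against a test setup.
    response = requests.get(ENGINE + "/vms", auth=AUTH, verify=False,
                            headers={"Accept": "application/xml"})
    response.raise_for_status()
    print(response.text)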

With all of these features, it is surprising to many in the oVirt community that it isn't more widely discussed. The user base is strong (those who use oVirt tend to be very loyal), but not as broad as one would expect. oVirt has started to gain much more attention in recent months, though, which is a nice change from its closed-source origins, when it was a virtual unknown in the data center ecosystem.

The secret origin of oVirt

Seven years ago, oVirt started as a humble little virtual machine manager written in Python by developers at the Israeli firm Qumranet. This was the component that would eventually become oVirt Node, a minimal hypervisor host that can still be run independently from the rest of the oVirt platform today.

The rest of the oVirt components, however, were created with the purpose of developing a virtual data center platform that would run on Windows—not Linux. The oVirt Engine, which acts as the control center for the oVirt environment, was originally written in C# as a .NET application. When Red Hat acquired Qumranet in late 2008, this was exactly the technology it gained: a fairly mature virtual data center management platform that happened to be proprietary and would initially only run on Windows.

This was not exactly an intuitive move by the world's biggest Linux company, but Red Hat did not let that situation linger indefinitely. Even when it released RHEV 2.2 in 2010, the RHEV-Manager (the commercial counterpart to oVirt Engine) was still a C# application that only worked on Windows, and the console was an ActiveX browser extension. From the end of 2008 to the end of 2011, oVirt saw little traction from either developers or users.

That's because, even though there was an RHEV 2.x product on the market, there never was an "oVirt 2"; there was only the RHEV 2.x series, which remained proprietary for a time. Until 2011, oVirt essentially disappeared, entering a chrysalis in which it would ultimately metamorphose from a closed-source, .NET-based application into one that was open source and written in Java.

And while those two years would prove to be a good thing for oVirt, the transformation was not without cost: two years off the radar of the open-source community, which turned to other tools to get things done with its virtual machines.

The release of oVirt 3.0 in 2011, and the subsequent downstream release of RHEV 3.0 in early 2012, marked a huge transition from closed to open source, and re-introduced a very capable virtual data center manager to the open-source ecosystem. Since that time, oVirt has done a lot of catching up in the virtual data center arena and its latest releases are doing a lot to bring people on board.

Welcome to oVirt 3.4

Earlier this year, the latest major release of oVirt, 3.4, debuted with a slew of new features that are all designed to make it easier for users to configure oVirt and lower the barrier to entry for getting started with it.

One of the cooler features is the self-hosted engine, which enables the oVirt engine to be run as a virtual machine on the host it manages. The hosted engine solves the basic chicken-and-egg problem of having the oVirt engine tied to a single machine that can never be migrated. Now, if needed, administrators can migrate the VM on which the engine is installed to another host, which greatly helps with fault tolerance.

Thanks to efforts from IBM and its El Dorado Research Center in Brazil, oVirt 3.4 has PowerPC-64 support, which gives oVirt a cross-architecture capability not found in competing virtualization platforms.

In older versions, you could use lots of different kinds of storage in oVirt, but a single oVirt data center was always locked into a specific sort of shared storage. In 3.4, data center storage is either local or shared across a network, and shared-storage data centers can include multiple types of storage domains. This enables VMs that need to access several virtual disks to allocate these disks on different storage domains, such as NFS, iSCSI, or Fibre Channel.

Network configuration was also simplified in oVirt 3.4, particularly with multi-host network configuration. That enables the administrator to modify a network already provisioned on the hosts and to apply those changes to all of the hosts within the data center to which the network is assigned. This not only reduces the number of steps required to reflect a change to a logical network definition, it also reduces the risk of a host's network configuration falling out of sync with that definition.

Building on the scheduler features that were introduced in oVirt 3.3, the 3.4 release uses the scheduler to apply affinity (and anti-affinity) rules to VMs, dictating whether those VMs should run together on the same hypervisor host or be kept apart on separate hosts. Power-saving rules can also be added to the scheduler.
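
The idea behind affinity rules can be shown with a toy host filter of the kind a scheduler applies when picking a placement for a VM. This is a conceptual sketch only, with made-up names, and is not oVirt's actual scheduling API.

    def filter_hosts(vm, hosts, affinity_groups, placement):
        """Toy affinity filter: not oVirt's API, just the concept.

        affinity_groups: list of (set_of_vm_names, 'together' | 'apart')
        placement:       dict mapping running VM name -> host name
        """
        candidates = set(hosts)
        for members, rule in affinity_groups:
            if vm not in members:
                continue
            # Hosts already running other members of this group.
            used = {placement[m] for m in members if m != vm and m in placement}
            if rule == "together" and used:
                candidates &= used          # must land next to its peers
            elif rule == "apart":
                candidates -= used          # must avoid its peers
        return candidates

    hosts = ["hypervisor1", "hypervisor2", "hypervisor3"]
    groups = [({"web1", "web2"}, "apart"), ({"app1", "db1"}, "together")]
    running = {"web1": "hypervisor1", "db1": "hypervisor2"}
    print(filter_hosts("web2", hosts, groups, running))   # web2 avoids hypervisor1
    print(filter_hosts("app1", hosts, groups, running))   # app1 lands on hypervisor2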

Scheduling improvements have also been added to oVirt Manager, which can flag individual VMs for high availability (HA); in the event of a host failure, these VMs are rebooted on an alternate hypervisor host. In earlier versions, rebooting HA VMs after a host failure could leave a cluster at a utilization level that was either not allowed or that caused a notable performance degradation. The new HA VM Reservations feature in oVirt 3.4 serves as a mechanism to ensure that appropriate capacity exists within a cluster for HA VMs in the event that the host on which they currently reside unexpectedly fails.
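
Conceptually, the reservation check answers a simple question for each host: if this host failed right now, would enough spare capacity exist elsewhere in the cluster to restart its highly available VMs? The toy sketch below captures that idea in terms of memory alone; it ignores per-host packing, CPU, and policy details, and is not oVirt's actual algorithm.

    def ha_capacity_ok(hosts, placement, ha_vms, vm_memory):
        """Toy check: could every host's HA VMs be restarted elsewhere if it died?"""
        for failed in hosts:
            # Memory that would need a new home if this host went down.
            needed = sum(vm_memory[vm] for vm in placement.get(failed, ())
                         if vm in ha_vms)
            # Free memory left on the surviving hosts (ignores fragmentation).
            free = sum(info["free"] for name, info in hosts.items() if name != failed)
            if needed > free:
                return False
        return True

    hosts = {"hypervisor1": {"free": 4096}, "hypervisor2": {"free": 2048}}
    placement = {"hypervisor1": ["db1", "web1"], "hypervisor2": ["web2"]}
    ha_vms = {"db1", "web2"}
    vm_memory = {"db1": 8192, "web1": 1024, "web2": 1024}
    print(ha_capacity_ok(hosts, placement, ha_vms, vm_memory))   # False: db1 would not fit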

Wrapping up

In the last year, the oVirt community has seen a steady growth in vitality. Its mailing-list and IRC traffic is growing, and download numbers are trending upward as well. The community is also seeing more variety in the use cases deployed in production environments. The wallflower days of oVirt are definitely coming to an end, as people discover there's a free and open platform available to manage virtual machines at any scale.

[ Brian Proffitt works for Red Hat as the Community Manager for oVirt and co-Community Lead for Project Atomic. ]


Page editor: Jonathan Corbet

