Posted Dec 14, 2007 20:04 UTC (Fri) by jd
Parent article: Specifying codecs for the web
...the question is one of whether the supplied codec is intended as:
- The protocol that will be usable and used in typical scenarios for typical users
- The protocol that will be usable and used in lowest common denominator systems that support HTML5
- The fallback protocol to be used by HTML5 when the server wants to deliver video content in a format the client can't process, i.e. something you can guarantee being able to convert to and play in all cases
- A default format negotiated, much like language, as a browser and server preference, with multiviews handling the differences
There is a world of difference between these scenarios. If it's the first case, then you need a format comparable with good-quality video formats even if encumbered, because that is what typical users will want. Then, and only then, do you need to worry about producing a very high-end codec.
If it's the second case, quality is much less important. You need something that can be universally implemented and produce comparable results. It shouldn't matter whether you're using an HDR display, a standard CRT and basic graphics card, or Lynx and aalib. The quality of output may vary, but they should all produce valid results for the same input.
The third case - which honestly I would argue is much more important for a client/server technology - would be to have a meta-codec. It should be possible (with minimal loss and effort) to convert any usable codec into the meta-codec and the meta-codec into any other usable codec. Then it makes no difference who actually implements what. The browser remains neutral on the specifics; it only requires that it CAN be done, not how.
A meta-codec might be easier to sell to all vendors, as it doesn't harm their sales by encouraging people to use something else. However, it would require all browsers to have an on-the-fly translator to convert the meta-codec into whatever codec the user has specified as a preference.
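The economics of such a hub are easy to sketch. In this illustrative Python (the codec names and stand-in converter functions are hypothetical, not real bitstream transcoders), each format registers just two converters - to and from the meta-codec - so any-to-any conversion falls out for free, and adding a new codec costs two converters instead of one per existing codec:

```python
# Hypothetical sketch of the meta-codec idea. Each codec supplies only
# two converters; real ones would operate on actual video bitstreams.
to_meta = {
    "theora": lambda data: ("meta", data),
    "h264":   lambda data: ("meta", data),
}
from_meta = {
    "theora": lambda meta: meta[1],
    "h264":   lambda meta: meta[1],
}

def transcode(data, src, dst):
    # Any-to-any conversion, always routed through the meta-codec hub.
    return from_meta[dst](to_meta[src](data))

print(transcode("frame-bytes", "theora", "h264"))  # → frame-bytes
```

With N codecs this needs 2N converters where pairwise translation would need N*(N-1); the browser only has to trust that the round trip exists.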
Using multiviews to negotiate how to handle differences in capability would put a lot more onus on web developers. However, this technology already exists, is already in use and moves the necessary changes from HTML5 into the HTTP protocol.
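The negotiation mechanism itself is nothing exotic - it is the same Accept-header matching that Apache's MultiViews already performs for language variants. A minimal sketch (the media types and q-value handling here are simplified; a real implementation follows the full HTTP rules):

```python
# Minimal sketch of HTTP-style content negotiation, the mechanism
# behind multiviews. The server holds several variants of a resource
# and picks the one the client's Accept header prefers.

def parse_accept(header):
    """Parse an Accept header into a {media_type: q_value} map."""
    prefs = {}
    for part in header.split(","):
        fields = part.strip().split(";")
        mtype, q = fields[0].strip(), 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs[mtype] = q
    return prefs

def negotiate(header, variants):
    """Return the available variant with the highest client preference."""
    prefs = parse_accept(header)
    score = lambda v: prefs.get(v, prefs.get("*/*", 0.0))
    best = max(variants, key=score)
    return best if score(best) > 0 else None

variants = ["video/ogg", "video/mp4"]
print(negotiate("video/mp4;q=0.8, video/ogg", variants))  # → video/ogg
```

The page then references one URL for the video, and the server hands each client whichever encoding it can actually play - no codec decision baked into the HTML at all.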
The advantage of this is that HTTP and HTML currently have a really bad mix of content and capability. Really, capability should be negotiated at a lower level and content kept at a high level. If absolutely necessary, add a middle layer for metadata and environmental data. But keep the specifics of how to do things out of the component intended to contain only the specifics of what to do.
The disadvantages of such an approach are that this would require a whole new standards committee to be involved, it would incite gigantic flame-wars over who had responsibility for what, and it would potentially lead to the breaking of a lot of pre-existing software that assumes that the current mishmash is the way to do things.
Personally, I'm at the point of saying that no browser actually adheres to the standards, quality control is an exercise in futility, and servers are turning into miniature OSes just to support every practical method of doing things. The weight of all this code, plus the tools needed to keep the software and content maintained, is beginning to exceed the usefulness of the web as-is.
In the same way HTML replaced Archie, Gopher and WAIS, standards only last until it becomes easier to replace them than to maintain them. If my view that HTML is reaching end-of-life is shared by enough others, then whether HTML5 supports Theora is of no interest or importance. It becomes easier to provide the alternative than to fight for something that might end up no better implemented than any other HTML standard's preferences or requirements have been.