It isn't just the GPL that is being updated. Creative Commons is working on
changes to its licenses as well, and for some of the same reasons. It was
announced early in August that changes were in the works; the proposed draft
language can be read on that page. While it was hoped that the license would
be finished by the beginning of September, the discussions continue on the CC
public discussion board. A major sticking point? What to do about DRM.
There is already an anti-DRM clause in the Creative Commons licenses which
reads like this:
You may not distribute,
publicly display, publicly perform, or publicly digitally perform the
Work with any technological measures that control access or use of
the Work in a manner inconsistent with the terms of this License
What is proposed are some amendments to clarify the language, but some,
particularly in the Debian camp, worried that the language in the draft was
inconsistent with the Debian Free Software Guidelines, and instead proposed a
kind of parallel distribution clause, in order to give programmers the freedom
to code for both open and DRM-encumbered platforms. Creative Commons project
lead Mia Garlick opened the topic up for discussion.
Some find it ridiculous to argue that the way to promote freedom is by
allowing DRM, with its potential to take CC works and close them off. They
see DRM as the fast track to destroying the share-alike community that
Creative Commons authors are choosing to be a part of. The whole point of
having such a license, after all, is precisely to avoid the sort of total
freedom to do whatever
you wish with the work, as would be possible by the author choosing to
release into the public domain.
As one comment put it, allowing DRM on CC'd works in the name of
freedom is like saying the way to promote democracy is to vote in a
dictator. And so the upgrade to CC version 3.0 is going through a
discussion much like the one surrounding GPLv3. Because of the opposition,
the dual license idea isn't currently in the draft, as Garlick explained:
Consequently, CC is currently not proposing to include this new
parallel distribution language as part of version 3.0; however,
because it is not clear whether the Debian community will declare the
CC licenses DFSG-free without it and because it represents an
interesting proposal, we felt that it was appropriate to circulate
the proposal as part of the public discussions of version 3.0.
It's a fascinating discussion, and a polite one. If you
wish to join in, here's where
you go. You must subscribe
to post a comment.
To get up to speed on what has already been said, there is
a PDF that summarizes the discussion so far, along with Creative
Commons' reactions to various suggestions, available here.
The Debian point of view, as far as I can see, is being expressed by
Evan Prodromou, and the contrary view by many, but outstandingly by Rob
Meyers and Greg London. You can find the archives by author here. My
best suggestion would be to start
here, and just click on "next message" for a while to follow the
discussion in a straight line. At that starting link, London suggests
making sure "DRM can't be used to take a work private
or set someone up as sole source for DRM-versions
of works," and Meyers answers
Prodromou's expressed concerns about "licensees being free to distribute
works in their format of choice." Prodromou expresses his concerns this way:
Sony's not going to change their platform for
us. They're just not.
Millions of users aren't going to throw out their PS2's because they
can't play Free Content games on them. It's not going to happen. So the
question becomes whether we're going to hamstring Free Software
developers who want to port to this kind of platform. What purpose does
it serve, besides restricting the freedom of those developers?
Again, I'll contrast to Free Software applications running on
proprietary operating systems. If the GPL had forbidden running or
developing a Free app on a proprietary OS, there would be no Free Software.
Letting people make their own accommodations with the increasingly DRM'd
world means we will see Free Content on more platforms, not less.
Turning up our nose and saying that our content is too good for DRM'd
platforms won't stop DRM; it'll just impede the distribution of Free Content.
I don't like DRM. I think it sucks. But license provisions are the wrong
place to fight it.
He adds, in this comment:
There are millions of people who have
game consoles, text readers, and music players that require some sort of
DRM. And even if it's just one person who can't use a work on one piece
of hardware, it's still wrong.
Of course, that's when the
discussion gets really interesting. Meyers points out:
Embracing DRM will not move the movement forward, unless you spin it.
My son tells me that Sony are now allowing people to play vanilla
MPEGs on PSPs. So problem solved. We don't need a blanket DRM
permission to use free culture on PSPs.
When one comment states,
"That's why pleas for DRM are *not* pleas for user freedom," Prodromou responds:
Parallel distribution doesn't restrict freedom. It gives *at least* the
same freedoms as distributing in an unencumbered format, *plus* the
freedom to run on a DRM-only platform. That's more freedom, not less.
To which London responds:
If it means you can put FLOSS work on a DRM-only
player, and you can't play non-DRM versions on the player,
and you can't even legally convert your works to a
DRM-compatible format without paying iSuck Corp a lot
of money, then the barn door is open and it's only
a question of when the wolves are coming in.
Another issue, and again this is identical to efforts in GPLv3, is to
internationalize the license. The CC proposed solution is this, according
to the August announcement:
Another big feature of version 3.0
is that we will be spinning off
what has been called the "generic" license to now be the US license
and have crafted a new "generic" license that is based on the
language of international IP treaties and takes effect according to
the national implementation of those treaties. This may only be
something that gets IP lawyers excited but I thought it might be good
to share this draft with the community as well in order to ensure
full transparency and in case people were interested and/or had any comments.
And finally, there is discussion
on just what the definition of "noncommercial" is.
I would suggest, though, that you take the time to read all the comments
from August and September themselves, and not just rely on the PDF
summary, as there is already a comment
indicating the summary didn't get every point precisely as the commenter
intended. Besides, figuring out the appropriate response to DRM is a very
important task, one the community needs to get right.
Comments (6 posted)
Lawrence Lessig appeared at the third edition of the Wizards of OS to
launch Creative Commons Germany. He returned at WOS4
to talk about free culture. As it turns out, Mr. Lessig has
recently moved to Berlin to spend the next year working on his next book,
so there may well be other opportunities for the locals to hear him speak.
For the rest of us, though, it was a rare treat.
He started by talking about the composer John Philip Sousa, who had
expressed frustration (to a Congressional committee) with the "talking
machines" which were just becoming
common in his time. These machines, he feared, would turn the public into
mere listeners, rather than people who participated in the creation of
music. Many years later, Mr. Lessig notes, this "read-only" approach to
culture has indeed taken over, especially in the U.S.
The talk then shifted to the founding of the U.S. Republican party, which
was based, at that time, on the idea of "free labor." Working for others
was seen as a form of indentured slavery - especially given the kind of
labor contracts which were in use at that time. The idea motivating the
Republicans was a vision of a country where people owned their own means of
production and worked for themselves. Needless to say, things did not work
out that way. Industrialization pushed the economy in a different
direction, and, by the 1870s, 70% of the workers in the U.S. were
employees. Free labor, he says, is a "fantasy" now.
The idea is beginning to come back, however, as the net is enabling more
people to own their own production equipment. We are also seeing similar
trends in politics - the 20th century mode of being told what to think by
politicians on the television is giving way to a blog-driven participatory
democracy. It's becoming a read-write system. And that, Mr. Lessig says, is
how things have been for most of our history; the 20th century was an
aberration in this regard.
Moving back to culture, Lessig noted that the Internet can enable both
read-only and read-write culture. In the read-only mode, the net is a
channel by which we can consume culture created elsewhere. The classic
example here would be iTunes, which allows the purchase of music for
specific devices, to be used in specific ways. The Internet can be a way of
perfecting the control held by content owners.
But it need not be that way.
To demonstrate the read-write alternative, he showed a few videos taken
from the net. These varied from silly works involving reworked anime clips
set to music rather different from that used by the original creators
through to highly political pieces. Something to offend everybody - but
highly amusing. Text, says Lessig, is "the Latin of our time"; video is
the way to communicate in this era. Unfortunately, many of the videos he
showed have been subjected to takedown notices and other attacks from
copyright holders. Lessig also mentioned a film which won a prize at
Cannes; it was made for all of $218, but then the creator was faced with a
$400,000 bill to clear the rights for the background music used.
There are many differences between the read-only and read-write views of
culture, starting with the way that the read-write view departs from the
"couch potato" mode. Read-write culture is a participatory medium. The
read-write culture is also far larger, by almost any measure. It certainly
involves more people, but it can also be economically larger.
Unfortunately, current copyright law heavily favors the read-only mode. It
controls the right to make copies, but, in the digital world, any use of a
work involves copying it. So every use requires permission. Content
holders are making full use of this legal view, which, in the end, means
they have control over how people use culture.
Copyright law, in other words, conflicts with the read-write net.
Jack Valenti described "piracy" as his own terrorist war. We are, it
seems, fighting a war where the terrorists are our own children. And the
tools which are being deployed in this war, in the name of stopping piracy,
are also killing read-write culture.
So what do we do about all this? The first step, says Lessig, is to enable
free culture in any way we can. And that requires building free tools.
The free software community, for all of its successes, has not yet
succeeded in building a comprehensive set of friendly tools which can be
used by artists. We need to fight DRM in any way we can, support free
codecs and protocols to the greatest extent possible, and support free platforms.
We must also build a legal platform for free culture. The Creative Commons
license is aimed at that goal. It seems to be having some success; by one
measure, there are now as many as 140 million CC-licensed works
available on the net.
Finally, Lessig says, we must reach out and support the creation of free
culture on proprietary platforms. In particular, the estimated one million
Flash developers should be brought into the read-write world. That
involves encouraging them to share their code, putting "view source"
buttons on Flash products, etc. By reaching out to these people, we'll
grow the support for free culture, and, ultimately, free platforms. Free
software, he says, was not initially built on free platforms; free culture
will need to take a similar path.
In summary, says Lessig, the 20th century is best described as the
"weirdest century." But it's over. If we can grow the free culture
movement, we will truly enter the read-write world, and we'll all be
richer for it.
During the question period, Mr. Lessig was asked what he thought of Richard
Stallman's refusal to support the Creative Commons licenses. The day of
that announcement, he responded, was one of the most depressing of his
life. He stands by the Creative Commons licenses, however. The artistic
community still has not really had the discussion of what rights it needs
to be truly free. There is no artistic equivalent to the "four freedoms"
for software. Until that discussion has happened, the Creative Commons can
only defer to the free-culture friendly musicians it is working with
(Gilberto Gil was mentioned) and go with what they suggest. Mr. Lessig
does not feel that he knows better, and will not try to force a particular
vision of freedom on them - even if it means losing Richard Stallman's support.
The question was asked: don't the Creative Commons licenses constitute an
admission that many of the rights often claimed under fair use do not
actually exist, since those rights must be codified separately in a
license? That can be a problem, he responded, which is why these licenses
have always been written as a grant of additional rights beyond all of
those already permitted by law. In the end, it comes down to a choice of
trying to build this legal platform, or doing nothing at all; they chose to build.
Comments (16 posted)
One problem which must be faced by any cooperative project is that of
quality management. If anybody can contribute to a work, how can a project
ensure that its output is up to the standards it has set for itself? A
panel session on this topic highlighted three very different
approaches to this issue.
Ullrich Pöschl, a researcher at the Max Planck Institute for Chemistry, is
trying to address a number of problems with the scientific publishing
world. Publication is crucial to scientists - it is, in the end, the one
concrete result from their work which matters. But the process to
publication is long and frustrating, and can often be hampered by personal
agendas and scientific conservatism. Your editor, who, in a previous life,
actually published a paper in a refereed journal, can attest to what a
painful process it can be. There are also problems with scientific fraud
and (much more often) plain old carelessness. Scientists, in their rush to
get their work out, will often not take the time to produce work of the
needed quality. Quite a few papers are published which contribute little
and actually dilute the pool of scientific knowledge.
On the other side, scientific journals are tremendously expensive, and they
publish last year's work. There are a lot of pressures for faster - and
more open - access to scientific results. It seems that a more open
approach would benefit everybody, but only if the quality level can be maintained.
Ullrich is a founder of a relatively new journal (Atmospheric Chemistry
and Physics) which has set out to demonstrate a
new approach to scientific publication. This journal has retained much of
the classic scientific publication process - every paper is still reviewed by
anonymous referees whose questions must be answered to the editor's
satisfaction. Where things differ is in the openness of the process.
When a paper is submitted, as long as it's not complete junk, it will be
immediately published as a "discussion paper" on the journal's web site. It is
clearly marked as an unreviewed paper, not to be taken as definitive
results at that time. While the referees are reviewing the paper, others
can post comments and questions as well. These others are limited to "registered
scientists," since the desire is to keep the conversation at a high level.
The comments become part of the permanent record stored with the paper, and
they can, at times, be cited by others in their own right. The editor will
consider outside comments when deciding whether the paper is to be accepted
and what revisions are to be required.
After using this process for five years, Atmospheric Chemistry and Physics
has the highest level of citations in the field. Citations are important
in the scientific world: they are an indication that a given set of
research results has helped and inspired discoveries elsewhere. The high
level of citations here indicates that this publication process is
succeeding in attracting high-level papers and filtering out the less valuable ones.
Things are at an early stage - out of approximately 7,000 scientific
journals, about five are currently publishing with this sort of technique.
Others are interested, however, and that number can be expected to grow in the coming years.
Martin Haase then took the podium to talk about quality management in
Wikipedia. While Wikipedia is a useful resource, there have been a number
of well-reported problems. Some articles can be flat-out wrong, or,
sometimes, distorted to meet somebody's political goals. Maintaining and
improving Wikipedia's reputation will require getting a handle on these problems.
Some measures being taken by Wikipedia are:
- Putting restrictions on anonymous access. In particular, anonymous
editors cannot create new articles.
- Getting a better handle on attribution of work. Wikipedia maintains
an article editing history now, and has lists of contributors. Some
people, it seems, have been surprised to learn this, and have
changed the style of their contributions afterward.
- A two-level reviewing process. Articles which have been heavily
reviewed and deemed to be correct can be designated as "featured"
articles. This process, however, turns out to be slow, so a new, less
rigorous "good article" designation has been created as well.
- Specific metadata about validation is being added to articles.
- There is a mechanism for creating permanent links to specific
versions of articles. These links can be used by outside sites to
link to a "known good" version of an article with no need to worry
about what subsequent changes could bring.
While agreeing that improving the quality of Wikipedia articles will be a
never-ending process, Martin seems to think that the measures being taken
will move things in the right direction. He warned explicitly about
"expertism" - requiring that articles be written by experts in the field.
It can be hard for experts to write articles for people who are unfamiliar
with the field - their work tends to be jargon-heavy and written at the
wrong level. They also tend to run in schools of thought, and expert-written
articles tend to reflect the views of one school only. Limiting
contributions to experts would, in Mr. Haase's view, rob Wikipedia of much
of its usefulness.
The third panelist, Larry Sanger, disagrees. Larry was a part of the
creation of Wikipedia, but has since fallen out with that project. So,
while claiming to be a "big fan of Wikipedia," he spent much time
criticizing it. Wikipedia, he says, was meant to be the wild side
of Nupedia; it was never supposed to be the whole thing. With only
half of the original design, he says, it is not surprising that things have gone wrong.
So what has gone wrong? According to Larry, the Wikipedia rules are not
enforced uniformly, leading to lots of abuses. Anonymous editing attracts
trolls and other people whose main purpose is not the creation of a
top-quality encyclopedia. The Wikipedia community is insular and hard to
join. And there is no place for academics, people who are experts in
their field. Wikipedia people may fear expertism, but Larry, instead, is
on a campaign against amateurism. This amateurism, he says, is behind many of the
problems with Wikipedia, but the community will not recognize these
problems, and, thus, he says, will never fix them.
So Larry is going to fork Wikipedia. His project, called The Citizendium, will, he says, be very
different. It will start out very much the same, however: the same
software, and copies of all the Wikipedia articles. Those articles will
track changes to their Wikipedia equivalents until they are changed
locally, at which point they will become a hard fork. There are no plans
to fork the software. In essence, the Citizendium intends to make full use
of Wikipedia's free licensing (as is its right) to bootstrap the new site,
and only move away from Wikipedia content when and where it feels it has
something better to offer.
There will be some distinct roles for members of the Citizendium project. People who
are deemed to be sufficiently expert in a given field will be called
"editors"; regular contributors will be expected to defer to the editors in
their field of expertise. These editors will be self-selecting, but they
must publicly state their credentials. Editors can mark an article as
being "approved," indicating that, in their opinion, it has reached a
certain level of quality.
There will be no anonymous editing allowed in the Citizendium, and no
pseudonyms either. All contributors must work under their own names.
There will be a number of rules on how contributors and editors are
supposed to work, with quick expulsion from the project for those who do
not follow them. To that end, there will also be "constables," whose job
is to enforce these rules.
There are vague plans for a meeting to draft and approve the charter under which
the project operates. For now, however, the Citizendium is very much Larry
Sanger's project, with goals and processes set by him. Whether it will be
able to build a community and maintain it while keeping quality high
remains to be seen.
Comments (15 posted)
Page editor: Jonathan Corbet