
Interview with Second Life's Cory Ondrejka

January 17, 2007

This article was contributed by Glyn Moody

Cory Ondrejka, CTO of Linden Lab, has some serious programming credentials. Before joining Linden Lab in late 2000, he worked on US government projects and Nintendo games. As well as writing much of the original core code for Second Life, he designed the Linden Scripting Language (LSL) and wrote its execution engine. He talks to Glyn Moody about the background to Linden Lab's decision to take the Second Life client open source, how things will work in practice, and what's going to happen server-side.

When did Linden Lab start to think about the possibility of opening up Second Life's source code?

We've been thinking about it fairly seriously for, gosh, nearly three years now. The effort to really get there is something that got kicked off pretty early in 2006.

Was there any particular stimulus at that time?

We started looking at what our residents were doing in preparation for some speaking we did at [O'Reilly's] ETech in March. One of the things that we discovered was that a very large percentage of our residents – something on the order of 15% of people who logged in – were using the scripting language. So you start realising that there are tens of thousands of people at least, probably more like hundreds of thousands at this point, who have written code related to Second Life. And so it seems a little bit silly to not enable that creative horsepower to be applied to our code as well.

Was the decision to open the viewer's code a difficult one?

I think internally, as an organisation, buying into the idea is something that we were able to get to relatively quickly. People sometimes don't realise that the kind of work you have to do to be able to open source is exactly the same work that you're doing to close exploits and fix bugs. It's actually not a separate set of tasks in many ways.

Over 2006 there was also a very active reverse-engineering effort called libsecondlife that has something like 50 or 60 developers on their mailing list. They've been doing a very impressive job of reverse engineering the protocols and figuring out what's going on. They were finding exploits quite regularly and doing a good job of sending them to us, and saying: Hey, we found this, you guys might want to fix it.

What we found, of course, is that it doesn't really matter whether we open source or not, the exploits are going to get found - that's what has happened in all software. And so why not make it easier for folks like libsecondlife, if they're going to be poking around anyway? Let them have the code so that they're more likely to be able to fix the things that they find, and broaden it to a larger community of developers than just those who wanted to get involved in a reverse-engineering effort.

Why did you choose the GNU GPLv2 license for the code?

We ended up talking about that a lot. We were basically surveying what license is still the dominant license in the open source community: it's GPLv2, and so in our minds it has a lot of legitimacy. It's also the one that gives us the most flexibility down the road, where if we want to do a dual-licensing scheme, or a more-than-dual licensing scheme, it's a lot easier to come from GPL than sort of back into it.

In fact, you already offer a commercial license, I believe?

We do. I think that for now we would be sort of surprised if a lot of people jumped on the commercial license today, but we have a lot to learn. This is a very big step: there's never been a product that was in the dominant position that then open sourced. Open source is usually used by folk who are either trying to gain market share, or projects that are very early stage. So in that sense, we're trying to be pretty careful and conservative in our decision-making process, because this is in some ways new ground. Much like three years ago, when we gave intellectual property rights back to our residents, and allowed them to own what they made, that was a very new step in this space, and so I think we're continuing the tradition of bleeding edge in our decision making.

When did you start the detailed preparatory work, and what did that entail in terms of preparing the viewer code for release?

It really got started in May, and that process continued until the release. It was everything from doing external security audits and hiring additional staff to making sure that you could build it on all the platforms and building the manifests for all the zip files and tarballs we were going to distribute.

Did you have to do much in terms of making the code more legible or more modular?

I think we haven't done as much of that as we would like. Now, of course, nobody who has actually written code and then released it ever thinks the code is clean or modular enough; in fact there are pretty big changes coming down the pike to make the code better.

And that was a pretty active topic of debate: do we wait until after those changes to release the code? We decided that it made more sense to get the code out there. You can always find reasons not to open source, and ultimately it's better to let people begin getting expertise in the code even if we warn them: Hey, this part of the code is going to be changing. And what's neat is that less than 24 hours after we put the code out we've already accepted a user patch.

Could you say a little about these big changes that are coming through?

What we need is to not have to update monolithically. Right now, we take down the grid, we update everybody's viewer, and everything comes back up. And obviously that's neither scalable nor testable. And so there's this long series of changes to be made to let us upgrade in a more heterogeneous way. And we are beginning to publish what those changes are going to be, so that people know that they're coming and what to expect.
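
One common way to make that kind of heterogeneous upgrade possible is to negotiate a protocol version at connect time instead of assuming viewer and simulator were built from the same release. The C++ sketch below is purely illustrative - the message structures and names are invented, not part of the Second Life protocol:

    #include <algorithm>
    #include <cstdint>
    #include <optional>

    // Hypothetical handshake: the viewer advertises the range of protocol
    // versions it understands and the simulator picks the newest version
    // both sides support.  Neither struct reflects the real message schema.
    struct ViewerHello {
        uint32_t min_version;
        uint32_t max_version;
    };

    struct SimulatorCapabilities {
        uint32_t min_version;
        uint32_t max_version;
    };

    // Returns the version to speak, or nothing if the ranges do not overlap
    // (in which case the viewer would be told it needs an upgrade).
    std::optional<uint32_t> negotiate(const ViewerHello& v,
                                      const SimulatorCapabilities& s) {
        uint32_t low  = std::max(v.min_version, s.min_version);
        uint32_t high = std::min(v.max_version, s.max_version);
        if (low > high)
            return std::nullopt;   // no common version
        return high;               // speak the newest shared version
    }

With something along these lines, simulators can be upgraded region by region while older viewers keep working, instead of the take-down-the-grid model described above.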

What are the things that you haven't been able to open source?

Well, for example, streaming textures in Second Life use the JPEG2000 compression standard, j2c, and we use a proprietary bit of code to do the decompression. Now libjpeg, which is the open source version of this, does j2c, but it's way too slow. So one of our first challenges to our user base is: Hey, go smack libjpeg around a bit, and optimise it and then we will happily swap it in.
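
Swapping a proprietary decoder for an open-source one is much easier if the decompression sits behind a small interface. The following is only a sketch of that idea, built around a hypothetical J2CDecoder abstraction - the names do not correspond to the actual viewer code:

    #include <cstdint>
    #include <memory>
    #include <vector>

    // Hypothetical abstraction over a JPEG2000 (j2c) codestream decoder.
    // One implementation could wrap the proprietary binary the viewer ships
    // with today; another could wrap an open-source decoder once it has
    // been optimised enough to keep up with streaming textures.
    class J2CDecoder {
    public:
        virtual ~J2CDecoder() = default;
        // Decode a j2c codestream into raw RGBA pixels; false on failure.
        virtual bool decode(const std::vector<uint8_t>& codestream,
                            std::vector<uint8_t>& rgba_out,
                            int& width, int& height) = 0;
    };

    // Factory chosen at build or run time; definition omitted in this sketch.
    std::unique_ptr<J2CDecoder> make_decoder(bool use_open_source);

    void load_texture(J2CDecoder& decoder, const std::vector<uint8_t>& data) {
        std::vector<uint8_t> pixels;
        int w = 0, h = 0;
        if (decoder.decode(data, pixels, w, h)) {
            // hand the pixels to the renderer or texture cache here
        }
    }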

Why do you distribute binary copies of libraries that are almost certain to be found on any GNU/Linux system -- zlib and ogg/vorbis, for example?

It just seems simpler to give people really complete sets and say: If you go through these steps you will build successfully. There are few things more frustrating than getting all excited about getting some code and you go to build it and it barfs. So we've really been trying to take steps to make sure that doesn't happen. Within about an hour and a half of us putting the code up, there was a picture up on Flickr of somebody who had compiled and made a change already.

In terms of the timing, Linden Lab has been very circumspect in talking about this move: the signals pointed to later this year rather than the beginning. Why is it happening now, much earlier than you originally indicated?

Linden Lab has always been probably more open than is good for us about what we're trying to do when. We have always talked about features that we're working on, and given estimates of when we were trying to release them. Like most software, we usually end up being a little bit later on those than we'd like to be. And so going forward, we're trying to do a better job of underpromising and overdelivering rather than the opposite. So if people get mad at me because I deliver stuff faster than I said I would, I think I can live with that. I'd like to beat expectations from here on out.

What do you hope to gain from open sourcing the viewer?

First of all, we expect to get a better viewer. We think we will do a better job of finding bugs and exploits with the Second Life community looking at the code. If you go out medium to longer term, I think we will see active feature development as the community gains expertise with the code and we continue to make protocol changes that make it easier to implement those features. More importantly, I think we're going to be building expertise in running an open source project, because this is just step one for us in terms of where we think Second Life needs to go.

Second Life is growing very rapidly at this point. We think that it is a Web-scale project, not a game-scale project. We will not be happy if at the end of the day we only have ten million users; I think we would all see that as a tremendous failure. So, if we're going to scale to Web levels, obviously we need to keep open-sourcing the pieces that make sense to open source. In order to do that, we need to build expertise at running open source projects, and being part of open source projects, and engaging the open source community. So we've taken the piece that we were first able to do that with, and we're going to learn a lot over the next couple of quarters.

Were you surprised by the large number of positive comments on the blog posting that announced the move?

There's no question that the Second Life community is the most creative, capable, intelligent community ever targeted on one project in history. To give them the ability to make the project even more their own - it does not surprise me that they're pretty psyched about that.

What are the resources that you've put in place to work with the community that you hope to build around the code?

Right now, we basically have an army of one, Rob Lanphier, who has done this before. He was at RealNetworks, where he spearheaded Helix, the open source project for the Real server.

What's he going to be doing, and how will the code submissions be processed?

He is going to be helping to hire a team, because we're eventually going to need a whole team just to manage the ingress of code. Right now, he has helped set up JIRA, the project management software, where users can register to submit bugs and patches. There is also a wiki for the open source project, and he has been pretty much managing that.

The QA team is also directly plugged in to the patch submission process so that they can pull patches in, test them on private set-ups, see what's going on. The developers will be keeping an eye on things as well. Like a lot of what Linden Lab does, it's going to be a relatively diffuse project.

You mentioned JIRA for issue tracking, what about the actual code management?

We use Subversion. There isn't yet a public Subversion repository, but we're getting there.

Will you be giving accounts on that to outside contributors?

I don't know exactly what Rob's plan is for that, but I would assume that there's going to be something like that. I expect the libsecondlife people will have a Subversion repository up before we do anything, anyway. They may host the code also -- they're pretty aggressive about doing that.

To foster external contributions, how about moving to a plug-in architecture?

I think that all of us agree that a plug-in structure on the client makes sense. It's just a matter of figuring out whether we want to leverage an existing one or re-invent the wheel.
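
For what it's worth, a client plug-in system usually comes down to a single C-compatible entry point that the viewer looks up in a shared library at run time. Here is a rough sketch of that pattern on a POSIX system using dlopen(); the interface and symbol names are invented for illustration and are not an existing Second Life API:

    #include <dlfcn.h>
    #include <cstdio>

    // Hypothetical interface a viewer plug-in would implement.
    struct ViewerPlugin {
        const char* name;
        bool (*init)();        // called once after loading
        void (*shutdown)();    // called before unloading
    };

    // Each plug-in shared object exports this one factory symbol.
    using PluginEntry = ViewerPlugin* (*)();

    ViewerPlugin* load_plugin(const char* path) {
        void* handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return nullptr;
        }
        auto entry = reinterpret_cast<PluginEntry>(
            dlsym(handle, "viewer_plugin_entry"));
        if (!entry) {
            std::fprintf(stderr, "missing entry point: %s\n", dlerror());
            dlclose(handle);
            return nullptr;
        }
        ViewerPlugin* plugin = entry();
        if (plugin && plugin->init())
            return plugin;   // handle is deliberately left open while the plug-in is in use
        dlclose(handle);
        return nullptr;
    }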

You've indicated that you view opening up the client as a learning experience for open-sourcing the server in the right way: what additional issues will you need to address here - presumably the proprietary Havok physics engine is going to be a problem?

Certainly, there is the question of proprietary code. We may be able to do exactly what we did on the client side, where we are distributing binaries. In six months, when this [move to open up the client] is successful, it may make for very interesting conversations with folks. We can say: Hey, look, you are the leader in this sector, you should open source, here's why we did it and it worked. And I think the fact that there aren't any proof-points of that is maybe part of what scares companies from doing that. I think we're going to be a very interesting test case.

Obviously the server raises a host of security issues. We have a roadmap that we think solves those, and we're going to be sharing that roadmap sometime this quarter with the community, once we get it sufficiently refined that we're happy with it. We see a host of use-cases for servers where we need to make some pretty profound architectural changes in terms of how trust is established between user and server, between servers and each other, and servers and backend systems. But we see a path, and so it's just a matter of applying development resources to that path and moving along it.

What kind of things are you having to deal with?

In broad security terms, [it's] about code running on hostile machines. Right now, all of the simulator machines are machines that we own in our co-los. It's very different to have that code running on a machine in your garage, even though you're probably a trustworthy guy. That raises issues of trust. Once you have code running on hostile machines, it doesn't really matter whether you have the source or not: you can start doing things. And so we need to trust the simulators less, which means moving some of the things that the simulators currently do in a trusted fashion, out of them.
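
The pattern being described is to stop treating the simulator's word as authoritative: anything with real consequences gets checked by a central service against its own records rather than taken on the simulator's say-so. A minimal sketch of that idea, with entirely hypothetical names:

    #include <string>
    #include <unordered_map>

    // Hypothetical central asset service.  The simulator merely relays a
    // transfer request; ownership is checked here, against the service's
    // own records, rather than trusted from the simulator.
    class AssetService {
    public:
        void register_asset(const std::string& asset_id, const std::string& owner) {
            owners_[asset_id] = owner;
        }

        bool transfer(const std::string& asset_id,
                      const std::string& claimed_owner,
                      const std::string& recipient) {
            auto it = owners_.find(asset_id);
            // Refuse unless our own records agree with the claimed owner;
            // a compromised simulator cannot invent ownership this way.
            if (it == owners_.end() || it->second != claimed_owner)
                return false;
            it->second = recipient;
            return true;
        }

    private:
        std::unordered_map<std::string, std::string> owners_;
    };

In practice the request would also carry something the simulator cannot forge, such as a token or signature tied to the user's session, so a hostile host can neither invent ownership nor act on a user's behalf.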

Does that mean centralizing certain Second Life services?

That depends. Let's say you were a large research organization and you wanted to be able at times to use Second Life in a more private way. You might want to control even some of the centralized services. But what you don't want is just a fragmented set of parallel universes that can't talk to each other, because you then lose the benefit that makes Second Life so strong, which is the fact that all these communities can connect across traditional geographic and community boundaries. And so the secret sauce becomes how you architect it in a way that allows both Internet and intranet use.

Do you think that these future worlds will be part of the main Second Life geography or will there be portals from them through to your world?

Well, I think the answer is "yes", because there are some use-cases where it makes sense to be part of the big world, and other cases it makes sense to be a portal away.

Presumably you've also got to deal with issues like identity as avatars move between different worlds, and the tricky one of money?

It's almost like you've read my list: you're dead-on. What's good is that, unlike six and a half years ago, when we got rolling on this stuff, some of these have been partially solved by the Web. There are much better exemplars today than there were six and a half years ago. And so for a lot of what we're going to be doing we can use existing technologies.

What does that imply about the convergence of 3D virtual worlds with the Web?

I think that when you look at the problem space, there are some things that the Web does very well. Text it does really well; one-to-many it does very well; sequential solo consumption of content it does really well. But there are some things that shared collaborative space and virtual worlds and 3D do really well: if you need place, or you need people to be consuming the content together, where the audience matters, or knowing that we're consuming at the same time matters, or when you need simultaneous interaction.

So I think it's a little odd to imagine that either of those hammers will solve all problems. Instead, what you want is to be able to take problems and move them into the correct space. If you're doing text entry, doing it in 3D is just a big pain in the butt. So there are places for the Web, and there are places for virtual worlds, and I think what you want is as much data as possible flowing between the two, as smoothly as it can.

Finally, once you've opened up the code to the client and server, what will be left for Linden Lab to make some money from?

I think that would be a little bit like implying there's no business to be had on the Web if you give away Apache. The Web has shown us where a lot of the value is: identity, transactions, search, communities. And so nothing that we've talked about requires that Linden Lab give up any of those pieces. I think the key is for us to enable growth, building a much, much bigger market, and attempt to make money where it makes sense.

Glyn Moody writes about open source and virtual worlds at opendotdotdot.



Interview with Second Life's Cory Ondrejka

Posted Jan 18, 2007 10:22 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

I agree with most of what's being said here, but ye gods there's some hyperbole.

"there's never been a product that was in the dominant position that then open sourced. Open source is usually used by folk who are either trying to gain market share, or projects that are very early stage"

is rubbish unless interpreted very narrowly, and is decidedly questionable even then.

"There's no question that the Second Life community is the most creative, capable, intelligent community ever targeted on one project in history."

is frankly laughable. People spent centuries on e.g. European cathedrals, and you certainly can't describe them as not creative works, or their builders as stupid. It's got a big userbase for a software product, sure, but "in history" is larger than that.

And as for Subversion: isn't something that will end up with this many developers a perfect match for a wide-scale distributed version control system, more gittish than subversiony?

Interview with Second Life's Cory Ondrejka

Posted Jan 25, 2007 9:17 UTC (Thu) by renox (guest, #23785) [Link] (1 responses)

>>there's never been a product that was in the dominant position that then
>>open sourced. Open source is usually used by folk who are either trying to
>>gain market share, or projects that are very early stage
>is rubbish unless interpreted very narrowly, and is decidedly questionable even then.

Uh? Could you give examples?
I certainly don't remember any other product which was open sourced when it was in a dominant position.

Interview with Second Life's Cory Ondrejka

Posted Jan 25, 2007 21:19 UTC (Thu) by bronson (subscriber, #4806) [Link]

It's a meaningless question. Most popular open source projects have been open source right from the start, immediately disqualifying them. That leaves VERY few contenders to choose from, probably numbered somewhere in the tens. I think that's what nix meant when he said "razor thin".

It's like saying, "how many all-star quarterbacks went to Notre Dame and drove a LeSabre while there?" The question eradicates most of its candidates so it's futile to try to draw a meaningful general conclusion from its results. (my apologies to Joe Montana if he drove a crappy Buick...)

That said, there's one very obvious example that's been all over the news lately. It's easily the most popular language for business and it has a small triangle named Duke as its mascot. I have a few others in mind but it would take some work for me to make sure. And, since the question is meaningless, I shan't take the time.

Does that make sense?

Is a GPL client relevant

Posted Jan 25, 2007 20:28 UTC (Thu) by alext (guest, #7589) [Link]

I haven't looked into it, but the question in my mind is: do you have to sign up to particular terms of use when you go on to use Second Life? If so, do those t&cs mean they can change things later to lock out any client software that they don't agree with or want? In that case total and ultimate control still resides with Linden Lab, and having a GPL'ed tool to use the service with could become meaningless once they have had enough input from that group of volunteers to keep them ahead of any emerging opposition.

The perfect option for the competition comes, then, from paying for volunteer work with a credit in the company, so that if it ever amounts to anything of value you are obliged to pay them back. I've seen too many past vague offers quietly forgotten when the owner gets married and a quiet voice in their ear tells them to keep it all for themselves (that's just an example of one such risk).

