
Calling for a new openSUSE development model

From:  Stephan Kulow <coolo-AT-suse.de>
To:  openSUSE Developers <opensuse-factory-AT-opensuse.org>
Subject:  Calling for a new openSUSE development model
Date:  Thu, 14 Jun 2012 10:46:32 +0200
Message-ID:  <4FD9A4E8.5090004@suse.de>

Hi,

It's time we realized that delaying milestones is not a solution.
Instead, let's use the delay of 12.2 as a reason to challenge our
current development model: rather than continuing to delay milestones,
let's rethink how we work.

openSUSE has grown. We have many interdependent packages in Factory.
The problems are usually not in the packages being touched, so the
package updates themselves work; what's often missing is the work to
fix the other packages that rely on the updated package. We need to do
a better job of making sure that updates of "random packages" still
produce a working system. Very fortunately, we have an increasing
number of contributors who update versions or fix bugs in packages,
but lately the end result has been getting worse, not better. And IMO
it's because we can't keep up in the current model.

I don't remember a time during 12.2 development when we had fewer than
100 "red" packages in Factory. And we have packages that have failed
for almost five months without anyone picking up a fix, or packages
that have had unsubmitted changes in their devel project for six months
without anyone caring to submit them (even ignoring the newly
introduced reminder mails).

So I would like to throw in some ideas to discuss (and you are welcome
to throw in yours as well - but please try to limit yourself to things
you have knowledge about - pretty please):

1. We need more people doing the integration work - this
  partly means fixing build failures and partly debugging and fixing
  bugs of unknown origin.
  They will get maintainer power over all Factory devel projects, so
  they can actually work on packages that the current maintainers are
  unable to.
2. We should work much more in pure staging projects and less in devel
  projects. Having apache in Apache and apache modules in
  Apache:Modules, and ruby and rubygems in different projects, may
  have appeared like a clever plan when set up, but it's a nightmare
  when it comes to Factory development - an apache or ruby update is a
  pure game of luck. The same of course applies to all libraries -
  they can never have all their dependencies in one project.
  This needs some kind of tooling support, but I'm willing to invest
  there, so we can more easily pick "green stacks".
  My goal (a pretty radical change from now) is a zero-tolerance
  policy on packages breaking other packages.
3. As working more strictly will require more time, I would like to
  either ditch release schedules altogether or release only once a
  year and then rebase Tumbleweed - as already discussed in the RC1
  thread.

Let's discuss things very openly - I think we learned enough about where
the current model works and where it doesn't so we can develop a new one
together.

Greetings, Stephan

-- 
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org






Calling for a new openSUSE development model

Posted Jun 14, 2012 17:58 UTC (Thu) by hadrons123 (guest, #72126) [Link] (25 responses)

Stop having deadlines. It makes us nervous, and failing to keep up with the deadlines brings down the spirit.

Follow a rolling release model like Arch Linux. Arch Linux is a team of fewer than 100 individuals, and still they are awesome at bringing new packages on time.

I'm sure openSUSE can do better than that if it switches to a complete rolling release model.

Calling for a new openSUSE development model

Posted Jun 14, 2012 20:57 UTC (Thu) by drag (guest, #31333) [Link] (20 responses)

Having deadlines is important if you are planning on doing actual releases because it reduces the number of shenanigans that developers can get away with. If they see the release slipping because of some other developer then they will think 'hey, I can just take the time to pile on some new features or bump up the revision to the latest release from upstream'. This inevitably leads to a sort of mudslide of changes that pile up and you end up with nothing but continuous delays.

Deadlines give other people justification to reject changes and roll back features that won't make it in time. This has worked out very well for distributions like Fedora, which see relatively few delays each time they do a release.

Now, deadlines don't make sense if you are doing a rolling release. OpenSuse could do that, but it's going to cause headaches for users unless they stick to strict policies regarding binary compatibility and such things.

A rolling release allows you to add features continuously, but if you want to make significant changes (like moving from the old init to systemd) it becomes extremely difficult, as the regularly scheduled updates used by regular users WILL break systems.

To me the solution isn't messing with deadlines or release styles... it's organizational; it's the management style that needs to change. It points to a systemic organizational issue caused by management, and thus any fix has to be addressed there.

They have a limited pool of talent and resources. They need to figure out how to either have their organization do less work so they can concentrate on quality control more or figure out how to get people to 'work smarter' and have quality control integrated naturally into the build process.

My personal choice would be to simply latch on to another distribution and 'steal' its work (probably Fedora's), so that OpenSuse can concentrate on the differentiating features that matter to users and spend more time on quality control.

That way they can improve the status quo without having to spend additional resources.

That is 'work smarter'.

Calling for a new openSUSE development model

Posted Jun 14, 2012 22:53 UTC (Thu) by Wol (subscriber, #4433) [Link] (6 responses)

Unless, couldn't you sort-of copy Debian?

Let's say you do six-month releases: you roll for five months like Sid or Cooker, then you do a feature freeze and a mad one-month bug-fest. Then you could "name and shame" :-) if people aren't fixing blocking bugs, and anything that's not a bug fix is simply "come back for the next release".

I know it's not necessarily the best solution, but it's a bit of an obvious one. Says me who runs gentoo :-) but SuSE is my distro of choice when I'm supporting other people.

Cheers,
Wol

Calling for a new openSUSE development model

Posted Jun 15, 2012 0:20 UTC (Fri) by drag (guest, #31333) [Link] (5 responses)

> Unless, couldn't you sort-of copy Debian?

Debian is infamous for poor release management. This is one of the major reasons for the popularity of Ubuntu. They do good quality when it finally releases, but I don't think it's a good model to follow.

The best idea I can think up is to just piggy back on Fedora and concentrate on developing what makes OpenSuse special.

Calling for a new openSUSE development model

Posted Jun 15, 2012 1:46 UTC (Fri) by miguelzinho (guest, #40535) [Link] (4 responses)

Ubuntu does good quality releases and Debian has poor release management?

I'm sorry, but my experience is completely the opposite. I have been screwed many times by buggy Ubuntu packages when using a new release early, so much so that I've adopted a Windows-user mindset of waiting a few weeks after a release for updated packages, because those will fix a lot of untested and postponed bugs left over thanks to the release deadline.

OTOH with Debian I do not recall having any serious problems when using an early release at all.

Calling for a new openSUSE development model

Posted Jun 15, 2012 7:26 UTC (Fri) by seyman (subscriber, #1172) [Link] (3 responses)

> Ubuntu does good quality releases and Debian has poor release management?

I believe drag's "They do good quality when it finally releases" comment is aimed at Debian, not Ubuntu.

> OTOH with Debian I do not recall having any serious problems when using an early release at all.

One Debian release (sarge? etch?) was respinned within 24 hours of its release. Sarge was delayed time and time again, leaving people who needed stability stuck on etch for years.

Calling for a new openSUSE development model

Posted Jun 15, 2012 13:12 UTC (Fri) by drag (guest, #31333) [Link] (1 responses)

> I believe drag's "They do good quality when it finally releases" comment is aimed at Debian, not Ubuntu.

Yes. That is correct.

> One Debian release (sarge? etch?) was respinned within 24hours of its release. Sarge was delayed time and time again leaving people who needed stability stuck on etch for years.

I've used Debian for many years and enjoyed taking advantage of the package management system to do some very crazy stuff. It's nice and flexible. Very useful.

But the problem in terms of release management is that the timing is undependable. It's very difficult to use Debian if you must coordinate with other people in a large infrastructure when the best estimate you can come up with for a release is 'well, maybe next year'.

Calling for a new openSUSE development model

Posted Jun 15, 2012 13:18 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

The last several releases were fairly predictable (about 1 release every 2 years). It's the same with RHEL, btw.

Calling for a new openSUSE development model

Posted Jul 4, 2012 21:48 UTC (Wed) by zack (subscriber, #7062) [Link]

> Sarge was delayed time and time again leaving people who needed stability stuck on etch for years.

That was 2005, 7 years ago. I wonder how long a Free Software distro should be held accountable for something like that.

Now look at http://en.wikipedia.org/wiki/Debian#Release_history and take the average of release cycle duration for releases since then. It's 22.6 months, with a very low variance, and it's been like that for 7 years.

That might not match the definition of "time-based release", but it's pretty reliable if you ask me.

Calling for a new openSUSE development model

Posted Jun 14, 2012 23:23 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (12 responses)

Do people actually use OpenSuSE like Debian (upgrading only when new release comes)? I've yet to see this.

(Of course, I'd prefer if ALL but one or two desktop distros die out)

Calling for a new openSUSE development model

Posted Jun 15, 2012 10:18 UTC (Fri) by jengelh (subscriber, #33263) [Link] (11 responses)

In the enterprise realm, such compaction has already occurred. It's RH and SUSE that fill in the two spots.

Calling for a new openSUSE development model

Posted Jun 15, 2012 11:52 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

Not really. CentOS (well, it's just another name for RHEL, but still) and Debian (also known as Ubuntu) are very much alive and well.

And on Desktop we have Ubuntu, Fedora, SuSE and sometimes RHEL.

Calling for a new openSUSE development model

Posted Jun 17, 2012 0:11 UTC (Sun) by cmccabe (guest, #60281) [Link] (9 responses)

It depends on what you mean by "enterprise."

If you mean web companies and software companies, then yes, there is quite a bit of Ubuntu. Very occasionally you'll see Debian or even something like Gentoo. Non-technology companies tend to stick to Red Hat 5 or 6, at least here in the U.S.

It's easy to forget when you work in software, but software is not a very large part of the overall economy. Sectors like oil, bulk chemicals, healthcare, and so on completely dwarf software and those guys have computers too.

So if you love monocultures, come over and work on enterprise software! You'll also get the fun of dealing with 5-year old system software and its bugs. RHEL 5 is very much alive in the enterprise space.

Calling for a new openSUSE development model

Posted Jun 17, 2012 8:46 UTC (Sun) by k8to (guest, #15413) [Link] (8 responses)

Can confirm.
Over the last 3 years I've seen an increasing enterprise consolidation around Red Hat Enterprise Linux 5. RHEL 6 is now gaining but still a minority. Of course the real majority is CentOS but I consider it a variant of same.

Ubuntu, SuSE, Debian are all far and away second fiddle. SuSE has lost ground, I'd say, while Ubuntu has held steadish, so that they're both now at a very rare-to-hear-about-them point.

Personally I'm happy SuSE is waning because of the crazy shenanigans they've pulled several times now in libc symbol binding games. SuSE is starting to feel like a place where you should only run software compiled by SuSE or yourself. Third party binaries are a minefield.

Calling for a new openSUSE development model

Posted Jun 17, 2012 9:53 UTC (Sun) by jengelh (subscriber, #33263) [Link] (7 responses)

>the crazy shenanigans they've pulled several times now in libc symbol binding games.

Can you actually substantiate that claim on a /technical/ basis, or is this just the typical "I hate X, gonna move to Y"-type rant from a spoiled user?

Calling for a new openSUSE development model

Posted Jun 18, 2012 3:51 UTC (Mon) by k8to (guest, #15413) [Link] (6 responses)

Your level of hostility is off the charts.

Suse 10.1 through 10.4 specifically bound the resolver library tightly to the system allocator by modifying the symbol table, to work around Firefox bugs. This meant that any executable using an alternate allocator on these versions would end up trying to free, with the custom allocator, memory which wasn't allocated with it. A crash at this point was about the best outcome you could hope for.

The bug is documented and filed.

There are others I remember less well.

Take your baseless presumptions and go home.

Calling for a new openSUSE development model

Posted Jun 18, 2012 3:53 UTC (Mon) by k8to (guest, #15413) [Link]

Basically this is "you're a liar unless you do all my homework for me." And I consider it trolling.

Calling for a new openSUSE development model

Posted Jun 18, 2012 9:42 UTC (Mon) by jengelh (subscriber, #33263) [Link] (3 responses)

>Suse 10.1 through 10.4

My point is that blaming something today for issues it has had in the past is unjustified. (Just like certain political developments in the real world.)

Calling for a new openSUSE development model

Posted Jun 18, 2012 9:52 UTC (Mon) by k8to (guest, #15413) [Link] (1 responses)

I'm still debugging a current problem, one that shows up only on SLES and on no other Linux distribution, which results in completely random memory corruption.

Calling for a new openSUSE development model

Posted Jun 18, 2012 9:53 UTC (Mon) by k8to (guest, #15413) [Link]

Meanwhile, we do have customers who run those versions, so it's not exactly the past.

Calling for a new openSUSE development model

Posted Jun 18, 2012 15:55 UTC (Mon) by nevyn (guest, #33129) [Link]

> My point is that blaming something today for issues it has had in
> the past is unjustified.

So much sarcasm ... so little time. I shall save my energy and just highlight your words.

Calling for a new openSUSE development model

Posted Jul 8, 2012 1:21 UTC (Sun) by cmccabe (guest, #60281) [Link]

I actually like using SuSE on my personal desktop, so I'm sad to hear about these 10.x problems. Do you have a bug number? My Google skills are weak today, apparently. The number of people using SuSE may be small, but it's not zero (for us), and I hope I don't end up having to debug something like this in the future.

Calling for a new openSUSE development model

Posted Jun 15, 2012 9:41 UTC (Fri) by jezuch (subscriber, #52988) [Link] (2 responses)

> Stop having deadlines. It makes us nervous, and failing to keep up with the deadlines brings down the spirit.

If I didn't have deadlines I'd never get anything done. I'd just procrastinate forever.

Calling for a new openSUSE development model

Posted Jun 15, 2012 11:46 UTC (Fri) by xxiao (guest, #9631) [Link] (1 responses)

I don't think this is fair to Debian. There must be some good reasons for Debian to be No. 1 in usage (considering how many distros are based on Debian, and considering both the server and embedded worlds). On the release side, you can always use testing; plus, the releases are getting better each cycle.

I have never used SUSE; I somehow feel it's on the same path as Mandriva now. Might be time to just give up and move on.

Calling for a new openSUSE development model

Posted Jun 17, 2012 16:11 UTC (Sun) by jospoortvliet (guest, #33164) [Link]

Might be a US thing then, or the market you're in... SUSE has ~30% market share in big enterprises and significantly more in some niches, like 80%+ in SAP-on-Linux or Linux on System z, and plenty of presence on supercomputers, stock exchanges, etc.

Calling for a new openSUSE development model

Posted Jun 15, 2012 9:42 UTC (Fri) by Felix (subscriber, #36445) [Link]

> Stop having deadlines.It makes us nervous and failing to keep up the deadlines, brings down the spirit.

IMHO deadlines are fine as long as the scope is not fixed as well. When the deadline arrives, get the release out. Whatever is not ready by then needs to be backed out (or, even better, should never have been added to that release branch) and gets done for the next release. The kernel is one of the prime examples of how well this can work.

automated testing

Posted Jun 15, 2012 9:24 UTC (Fri) by rwst (guest, #84121) [Link] (23 responses)

If testing took O(1) time, you could do frequent time-based releases of just those package trees that pass testing, like Gentoo does. So why is it so hard to include in a distribution what should be standard for any project (and probably is standard in most Java projects)?

automated testing

Posted Jun 15, 2012 10:56 UTC (Fri) by Wol (subscriber, #4433) [Link] (22 responses)

You miss something that's bitten me on several occasions ...

Running the latest version of SuSE at the time (and this is why I switched to gentoo ...) I needed the latest version of lilypond. Wouldn't build.

They'd upgraded their dependencies - lily does tend to be a bit bleeding edge on occasion. So you get nasty circular loops where updating a package means you need to upgrade the libraries - which breaks other packages. So you upgrade them, rinse and repeat.

Individuals and rolling distros can handle this (fairly) easily. For enterprise, hardened distros it's a lot more difficult.

Cheers,
Wol

automated testing

Posted Jun 15, 2012 16:21 UTC (Fri) by k3ninho (subscriber, #50375) [Link] (21 responses)

Surely this one's bitten enough people that versioned libraries are available somewhere in the Linux or Posix specs? No? Oh well.

K3n.

automated testing

Posted Jun 15, 2012 16:29 UTC (Fri) by cmccabe (guest, #60281) [Link] (20 responses)

Yes, Linux has shared library versioning. That doesn't mean the distro is going to actually use it.

automated testing

Posted Jun 15, 2012 17:08 UTC (Fri) by nix (subscriber, #2304) [Link] (18 responses)

More to the point, you need not just shared library versioning but pervasive use of versioned symbols and no ABI breaks for this. This is so far from being true that it is almost laughable: hardly any upstreams bother with versioned symbols in any real sense (using them to avoid ABI breaks), and when distros try to version their own libraries' symbols despite upstream not doing so, it causes compatibility problems with upstream and half the time they have to remove the versioning again.
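For readers unfamiliar with the mechanism being discussed: on GNU systems, symbol versions are typically attached at link time with a GNU ld version script. A minimal, purely illustrative fragment (the file, symbol, and version names here are invented):

```
/* libfoo.map -- a GNU ld version script; all names are invented */
VERS_1.0 {
  global:
    foo;    /* exported, tagged with version VERS_1.0 */
  local:
    *;      /* everything else is hidden from the dynamic symbol table */
};
```

Linking with something like `gcc -shared -fPIC -Wl,--version-script=libfoo.map foo.c -o libfoo.so` records the export as `foo@@VERS_1.0`. Keeping an *old* implementation alive under an earlier version additionally requires GCC's `.symver` assembler directive in the library's sources, which is where the real maintenance burden begins.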

automated testing

Posted Jun 15, 2012 17:30 UTC (Fri) by drag (guest, #31333) [Link]

This is why it's useful for application developers and users to ignore the whole mess completely and just make sure to bundle everything they need that has a significant chance of breaking with the application.

automated testing

Posted Jun 15, 2012 21:06 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (14 responses)

> hardly any upstreams bother with versioned symbols in any real sense (using them to avoid ABI breaks)

I'd like to use it, but AFAICT there's darn near zero support for C++, and it makes the headers not work with other compilers in the same way. For C++, sure, I can type out the mangled name, but that's a pain and should be unnecessary. I'd really just like to decorate functions and methods with a VERSION_SYMBOL("real_symbol", VERSION_STRING) right on the function. I was also unable to find out how method support works with it. Not to mention that I also need to support MSVC for this project, which isn't going to handle headers with versioned symbols in them.

Of course, my search-fu may have just come up with nothing for these problems.

automated testing

Posted Jun 16, 2012 13:40 UTC (Sat) by nix (subscriber, #2304) [Link] (13 responses)

Yeah, you're mightily screwed with C++ as well. I forgot that case.

There should indeed be an __attribute__((__symbol_version__("..."))), though you'd need to consider the case of symbols having more than one version too (e.g. default versions, though possibly you could have a __default_symbol_version__ attribute for that). But there isn't.

You don't need anything like this for C++

Posted Jun 19, 2012 20:12 UTC (Tue) by khim (subscriber, #9252) [Link] (12 responses)

C++ has its own portable way of doing this:

namespace my_super_duper_library {
  inline namespace version_0 {
    void foo(int);
  }
}

and in the next version:

namespace my_super_duper_library {
  inline namespace version_1 {
    void foo(long);
  }
}

Users always use my_super_duper_library::foo and everyone is happy.

You don't need anything like this for C++

Posted Jun 19, 2012 20:28 UTC (Tue) by nix (subscriber, #2304) [Link] (11 responses)

And use 'using' instead of a default symbol version? I suppose that would work. If you put a using inside each versioned namespace, you can get the effect of symbol versions depending on other symbol versions as well.

(In any case, mathstuf was wrong and so was I: GNU ld supports extern "C++" in version scripts to force appropriate C++ mangling of names, using the name mangler in libbfd.)

You don't need anything like this for C++

Posted Jun 19, 2012 21:08 UTC (Tue) by joib (subscriber, #8541) [Link] (9 responses)

No need for 'using' tricks. I guess you're missing the point, which is that in C++11 there is a new feature called "inline namespaces" which allows one to designate a default namespace. So in the provided example, my_super_duper_library::foo generates a symbol reference to my_super_duper_library::version_1::foo (mangled, obviously). Code which was compiled against the old version of the library will keep on using my_super_duper_library::version_0::foo. Thus providing a way for the library developer to enhance the API while keeping ABI compatibility.

You don't need anything like this for C++

Posted Jun 19, 2012 21:41 UTC (Tue) by nix (subscriber, #2304) [Link]

Aaah! Nifty. Yes, I did miss the 'inline'.

You don't need anything like this for C++

Posted Jun 20, 2012 0:58 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (2 responses)

For another project, I had come up with a namespacing strategy (it was API-compatible via a "namespace protocol = protocol_v1" alias, or something similar, for whenever things bumped), but I had thought for some reason that it wouldn't work similarly to symbol versioning. Since the compiler would, even in that case, expand the namespaces out so that things are always the same, it would work. It'd be nice if the symbol versioning documentation mentioned this as a solution for C++ (the inline namespace is nice, but the stuff I've been writing still needs to work on RHEL5 and with MSVC).

You don't need anything like this for C++

Posted Jun 20, 2012 1:08 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

You can sort of emulate inline namespaces by adding lots of "using" statements to outer namespace, but it's not pretty.

You don't need anything like this for C++

Posted Jun 20, 2012 14:54 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Well, the idea is that protocol_v2 would do the using stuff from protocol_v1 for the compatible parts, then replace the functions that need replacing. The default protocol would be updated with the namespace protocol = ; statement.

You don't need anything like this for C++

Posted Jul 3, 2012 19:49 UTC (Tue) by cmccabe (guest, #60281) [Link] (4 responses)

One thing I'm curious about: are you meant to duplicate all of your declarations in both namespaces, or are you meant to do something like this:

namespace version1 {
  void foo(int64_t a);
  void bar(int);
}

namespace version2 {
  void foo(float b);
  using namespace version1;
}

If you are meant to do something like what I've just described, there are obvious problems with implicit type conversions. For example, foo(0x123456789LL) will give you the version1 function, whereas foo(0x1234) will give you the version2 function, which is probably not what you want.

Ultimately, this seems like just another variant of "new version, new datatypes / functions," which you can easily do in C or older versions of C++. You can even hide the old version declarations in the header file by default using macros. I think FUSE does this-- if you don't set FUSE_VERSION to an "old" number, you can't see the old deprecated APIs in the header file.

You don't need anything like this for C++

Posted Jul 4, 2012 18:16 UTC (Wed) by jwakely (guest, #60262) [Link] (3 responses)

> foo(0x123456789LL) will give you the version1 function

Are you sure? I don't understand your example, as there isn't actually an inline namespace there, but if version2 is meant to be inline, then both calls to foo would get the float version, i.e. adding your version2 would have changed the API in incompatible ways, as the original foo would no longer be used by anyone. You could do that, but it's probably not what you want.

If you use a using declaration instead of a using directive then both overloads are found and overload resolution picks the best one.

Using declarations work better with inline namespaces. Consider version 1 of the API:

namespace api {
  inline namespace v1 {
    void foo(int) {
      // implementation
    }
    void bar() {
      // implementation
    }
  }
}
Code that calls api::foo(1) links to api::v1::foo(int).

Later, version 2 is released:

namespace api {
  namespace v1 {  // not inline now
    // as before
  }
  inline namespace v2 {
    void foo(int i) {
      // improved v2 implementation
    }
    void foo(float) {
      // new impl for float
    }
    using v1::bar;
  }
}
Code that was linked to api::v1::foo still finds that symbol, with the same implementation. Code that is compiled against the new version and calls api::foo(1) links to api::v2::foo(int). Both symbols can coexist in the library (just as with ELF symbol versioning). Code that calls api::bar() gets api::v1::bar() in both cases, so there's no duplication (except for adding a using declaration to the v2 namespace).

You don't need anything like this for C++

Posted Jul 5, 2012 22:20 UTC (Thu) by cmccabe (guest, #60281) [Link] (2 responses)

> Are you sure?

Well, let's try it.

#include <stdint.h>
#include <stdio.h>
namespace version1 {
    void foo(int64_t a) {
        printf("version1 foo\n");
    }
}
inline namespace version2 {
    void foo(float b) {
        printf("version2 foo\n");
    }
    using namespace version1;
}
int main(void) {
    foo(0x123);
    return 0;
}
gcc 4.6.2 gives me "foo.cpp:22:14: error: call of overloaded ‘foo(int)’ is ambiguous". So it's not as bad as I thought-- just a compile-time failure, not silently doing the wrong thing.

[snip example]
Yes, you could add an explicit "using" line for each version1 symbol, to avoid this problem.

This all seems very similar to just adding a "using namespace version2" to your header file. I guess some people are worried about the potential for conflicts with whatever is already in your namespace, though.

Overall, I give it a "meh." It's not that useful, but it's not seriously harmful either, which means it's doing better than a lot of language extensions. On the other hand, I do think the copy constructor stuff will be useful (so you see, I'm not just a complete curmudgeon) :)

You don't need anything like this for C++

Posted Jul 9, 2012 10:46 UTC (Mon) by jwakely (guest, #60262) [Link] (1 responses)

It's not really "very similar" to using namespace version2, an inline namespace is a lot more transparent so users never need know it exists (unless they look at the mangled symbols in their objects.)

A using-directive just makes names visible, but they are not treated as first-class members of the namespace containing the using-directive and will not be found by qualified name lookup if there are declarations of the same name in the namespace containing the using-directive. That's not the case for inline namespaces.

Templates defined in an inline namespace can be instantiated and specialized as though they are members of the enclosing namespace, not possible via a using-directive.

An inline namespace is an associated namespace of types in its enclosing namespace and vice versa, so it plays nicely with ADL.

Don't dismiss the feature because you don't understand it yet.

You don't need anything like this for C++

Posted Jul 10, 2012 21:31 UTC (Tue) by cmccabe (guest, #60281) [Link]

It seems like the best practice is to put all of your C++ library declarations into one namespace-- for the sake of argument, call it foolib.

It seems like what you're recommending then is that you have something like this:

namespace foolib {
   namespace version1 {
       ...
   }
   inline namespace version2 {
       ...
   }
}

Given that use-case, I don't think we need to care about the scenario where there are declarations of the same name in the namespace containing the using-directive-- neither the library user nor the library implementor should be doing that.

> Templates defined in an inline namespace can be instantiated and specialized as though they are members of the enclosing namespace, not possible via a using-directive.

That's fair. I suppose template specialization is where the "using directive" approach really breaks down.

The cost of this feature is that debugging becomes (even more) difficult, since you search the symbol table of your application for foolib::doit and you don't find it. Instead you find N versions and have to decide which one you're really using. But of course, none of us here writes bugs, so that shouldn't be a problem :)

You don't need anything like this for C++

Posted Jun 20, 2012 0:54 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

I saw that the version scripts do support C++, but I'd like the versions to live next to the code (just as visibility annotations do). External scripts will just drift relative to the code.

automated testing

Posted Jun 17, 2012 0:22 UTC (Sun) by cmccabe (guest, #60281) [Link] (1 responses)

> More to the point, you need not just shared library versioning but
> pervasive use of versioned symbols and no ABI breaks for this. This is so
> far from being true that it is almost laughable: hardly any upstreams
> bother with versioned symbols in any real sense (using them to avoid ABI
> breaks), and when distros try to version their own libraries' symbols
> despite upstream not doing so, it causes compatibility problems with
> upstream and half the time they have to remove the versioning again.

You don't need versioned symbols. You just need to be sane about not breaking backward compatibility. So if you have a struct foo that you exposed to the world in version 1.0, in the new version you create a struct foo2 that has your new stuff in it. Or better yet, don't expose foo to the world-- expose an opaque pointer and accessor functions instead. If worst comes to worst, you can bump up the major API number and allow the old and the new version to be installed simultaneously. A lot of libraries have gone down this path, and it's not that bad.

Of course, versioned symbols are available for those who really, really want them. But each platform has its own subtly different implementation, and it's really difficult to actually get it right. As usual, simple is better.

Binary compatibility is do-able in C++ too. See http://techbase.kde.org/Policies/Binary_Compatibility_Iss...

automated testing

Posted Jun 17, 2012 13:22 UTC (Sun) by nix (subscriber, #2304) [Link]

> You don't need versioned symbols. You just need to be sane about not breaking backward compatibility.

Yeah, but if you want real backward compatibility, which includes not breaking people merely because you did need to change an API, then you need versioned symbols.
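For reference, GNU symbol versioning looks roughly like this; a build-time sketch with hypothetical names (`libfoo`, `FOO_1.0`/`FOO_2.0`), not runnable on its own since it needs the version script at link time:

```cpp
// foo.cpp -- two implementations of foo() coexist in one shared library.
// Binaries linked against the old release keep resolving foo@FOO_1.0;
// newly linked programs get the FOO_2.0 default.
extern "C" int foo_v1(void) { return 1; }
extern "C" int foo_v2(void) { return 2; }

__asm__(".symver foo_v1, foo@FOO_1.0");   // old, compat-only version
__asm__(".symver foo_v2, foo@@FOO_2.0");  // new default version

// foo.map (linker version script):
//   FOO_1.0 { global: foo; local: *; };
//   FOO_2.0 { global: foo; } FOO_1.0;
//
// Build: g++ -shared -fPIC foo.cpp -Wl,--version-script=foo.map -o libfoo.so
```

This is the mechanism glibc uses to change an API while keeping old binaries working, which is the "real backward compatibility" being argued for here.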

But most people these days don't seem to give a damn about ABI or API compatibility. Wild change and close coupling is the order of the day. Hallelujah :((

(Hm, when did I become a grumpy old man?)

(As for the C++ binary compatibility constraints: they're horrendous, and I remain amazed that KDE can stick to them.)

automated testing

Posted Jun 21, 2012 16:28 UTC (Thu) by dgm (subscriber, #49227) [Link]

> Yes, Linux has shared library versioning. That doesn't mean the distro is going to actually use it.

I can't believe it. Is the bar really _that_ low? We should expect MUCH more from distros. And developers.

Calling for a new openSUSE development model

Posted Jun 17, 2012 14:47 UTC (Sun) by joib (subscriber, #8541) [Link] (12 responses)

Some thoughts about the Linux distro model:

http://liw.fi/rethinking-distro-dev/

https://plus.google.com/u/0/109922199462633401279/posts/H...

https://plus.google.com/u/0/109922199462633401279/posts/V...

The model of "freeze the world, release at once" just doesn't scale. I think Ingo Molnar is right on the mark - there needs to be a minimal "core OS" distro, and the release cycles of the rest should not be tied to the release cycle of the "core OS". (The "core OS" release cycle should probably follow the Linux kernel, to bring new drivers to users ASAP.)

In order to solve the "foo depends on libbar" issue, where libbar is not part of the core OS, packages could include whatever libraries they need (beyond those provided by the core OS), with e.g. some fs-level deduplication mechanism, as suggested by Ingo M. Alternatively, one could provide separate libbar packages, provided that the system were designed to support parallel installation of multiple package versions - for shared libraries, maybe some RPATH-type mechanism similar to OS X frameworks could work.

Calling for a new openSUSE development model

Posted Jun 17, 2012 20:11 UTC (Sun) by drag (guest, #31333) [Link] (11 responses)

It's just a blatantly wrong approach to try to package all the software in existence for a general-purpose operating system with each operating system release.

This is where it is fundamentally broken, and it is a terrible idea.

An operating system exists for the sole purpose of making applications easier to write and easier to use. It is quite possible to eliminate the OS completely and just run each application you want on bare hardware, but nobody wants to do that because it would require a stupidly large amount of work on behalf of the application designer and is not user friendly.

> The model of "freeze the world, release at once" just doesn't scale. I think Ingo Molnar is right on the mark - there needs to be a minimal "core OS" distro, and the release cycles of the rest should not be tied to the release cycle of the "core OS". (The "core OS" release cycle should probably follow the Linux kernel, to bring new drivers to users ASAP.)

This is what is required.

You have a _minimal_ number of layers. Each layer does its job, and changes within a layer have to minimally impact the layers above and below it.

Something like this, from low to high:

* Hardware

* Kernel layer

* Minimal OS layer. The 'Unix layer': low-level system utilities, daemons, and libraries specifically used to manage hardware, start up the system, notify user space about changes to the environment, and provide system-specific administrative interfaces. Probably low-level graphical stuff too.

* Application environment layer. Or the application stack layer. For desktops this would be the 'Desktop Environment' of choice. For servers it could be 'LAMP', 'Java Tomcat', 'Oracle Database', 'C development', 'Android' or whatever.

* Applications installed by users/administrators or written by users for specific things they want to accomplish. Games, applications, server-side scripting for web apps, etc.

The kernel and minimal-OS layers form the 'Core OS'. Above that you start to branch out into multiple choices.

So regardless of what the 'core OS' is, users can go out and simply install the 'KDE desktop environment' from KDE, or get GNOME from the GNOME project. If distributions want to maintain repositories of software, they don't build the packages themselves where possible, and instead fetch them from the upstream projects.

Then you let each desktop environment layer producer fend for itself. Let them figure out how they are going to bundle libraries and handle versioning and all that. Application developers and users will then be free to choose whatever works best for them.

Calling for a new openSUSE development model

Posted Jun 18, 2012 14:24 UTC (Mon) by nix (subscriber, #2304) [Link] (9 responses)

> Then you let each desktop environment layer producer fend for itself.

That works great until you realise that a good few things the GNOME guys regard as quintessentially GNOME (e.g. glib) are used not only by other desktop environments (both KDE and Xfce use glib) but even by what could be considered core system components, if they can be considered anything (e.g. both syslog-ng and systemd use glib). Indeed syslog-ng migrated *from* its own object model *to* glib -- asking it to migrate back so that a core system component does not depend on a piece of a desktop environment is unlikely to be received well.

Calling for a new openSUSE development model

Posted Jun 18, 2012 18:09 UTC (Mon) by joib (subscriber, #8541) [Link]

Well, then the "core OS" can include glib; I don't think it's worth getting too worked up about the idea that glib belongs "higher up" in the stack. If it's needed by some "core" component, then so be it. If GNOME, KDE, or some other (set of) packages requires a different version of glib, they can use a separate version (say, with a different soname version, or by using RPATH).

I suppose, in general, one could (and should?) argue about which packages/libraries should or should not belong to the core. But I don't think arriving at some consensus on this matter is the biggest hurdle for this approach; rather, it is getting buy-in for the general idea.

Calling for a new openSUSE development model

Posted Jun 18, 2012 19:03 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (7 responses)

glib by now should be considered a 'core' package. Besides, glib developers are very careful with backwards compatibility. So there's no reason to leave it out of the "core" profile.

IMO, all core libraries should be designed to be backwards compatible _forever_, just like libc or kernel ABI. That should be the main criterion for their inclusion.

Calling for a new openSUSE development model

Posted Jun 18, 2012 23:01 UTC (Mon) by nix (subscriber, #2304) [Link] (6 responses)

> IMO, all core libraries should be designed to be backwards compatible _forever_, just like libc or kernel ABI. That should be the main criterion for their inclusion.

Agreed... but that rules out glib: glib 3 is neither ABI- nor API-compatible with glib 2, and that break was quite recent.

Calling for a new openSUSE development model

Posted Jun 18, 2012 23:05 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Well, it's OK in itself if glib-2 is going to be supported for the foreseeable future. Besides, right now is a good time to make one clean break, design something that can endure, and THEN standardize on it.

Calling for a new openSUSE development model

Posted Jun 19, 2012 1:49 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (4 responses)

I don't think it rules out glib. Glib, along with a significant amount of the GNOME platform (not things like GNOME Shell), including but not limited to GTK and GStreamer, maintains ABI compatibility for a long time. So does Qt et al.

When they do break it, they announce it widely and make both versions parallel-installable, which is really the best way to handle it, unless you want to never, ever break ABI compatibility, which I think is an unreasonable expectation for anything higher up the stack than, say, glibc.

Calling for a new openSUSE development model

Posted Jun 19, 2012 14:22 UTC (Tue) by nix (subscriber, #2304) [Link] (3 responses)

> unless you want to never ever break ABI compatibility which I think is a unreasonable expectation for anything higher up than say glibc.

Why not? X11 did it. Furthermore, with symbol versions it's not even hard to do.

Calling for a new openSUSE development model

Posted Jun 19, 2012 21:33 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (2 responses)

Because higher-up layers make mistakes which they don't want to be stuck with forever. If it were trivial, everyone would be doing it already.

Calling for a new openSUSE development model

Posted Jun 20, 2012 16:12 UTC (Wed) by mpr22 (subscriber, #60784) [Link] (1 responses)

I wouldn't ask them to be stuck with their mistakes forever, as long as they honour the proper sequencing of "create new; deprecate old; discard old". Some projects tend towards conflating "create new" and "discard old" into a single step. This is an understandable temptation, but one that should, as a general rule, be resisted by anyone working on a project that has (and wants to keep) actual users outside the project.

Calling for a new openSUSE development model

Posted Jun 20, 2012 17:44 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link]

This is how popular libraries like glib do development. I was just noting that the "discard old" step is an ABI break.

Calling for a new openSUSE development model

Posted Jul 5, 2012 10:13 UTC (Thu) by philomath (guest, #84172) [Link]

You are (more or less) describing Arch Linux.
And that's (part of) why I use it.


Copyright © 2012, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds