Carrez: The real problem with Java in Linux distros
The problem is that Java open source upstream projects do not really release code. Their main artifact is a complete binary distribution, a bundle including their compiled code and a set of third-party libraries they rely on. If you take the Java project point of view, it makes sense: you pick versions of libraries that work for you, test that precise combination, and release the same bundle for all platforms. It makes it easy to use everywhere, especially on operating systems that don't enjoy the greatness of a unified package management system. (Thanks to Torsten Werner.)
Posted Sep 24, 2010 15:03 UTC (Fri)
by jackb (guest, #41909)
[Link] (2 responses)
Posted Sep 30, 2010 15:15 UTC (Thu)
by rwmj (subscriber, #5474)
[Link] (1 responses)
Posted Sep 30, 2010 15:25 UTC (Thu)
by jackb (guest, #41909)
[Link]
Posted Sep 24, 2010 15:31 UTC (Fri)
by salimma (subscriber, #34460)
[Link] (16 responses)
Posted Sep 24, 2010 15:48 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link] (1 responses)
In rpm/deb packages, specifying an exact dependency version is very unusual and frowned upon. Dependencies are assumed to keep compatibility on updates; packagers are provided with overrides for when that's not the case, but they are not used most of the time. In the Java/maven world, versions are frozen and any change is assumed dangerous unless proven otherwise. Proving otherwise is work, and since everyone assumes ABI breakage on updates, upstreams feel free to break the ABI all the time, so everything fossilizes at high speed.
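The contrast shows up directly in the dependency declarations themselves; two hypothetical fragments for an invented foolib:

```xml
<!-- Maven pom: exactly this version, frozen at build time -->
<dependency>
  <groupId>org.example</groupId>
  <artifactId>foolib</artifactId>
  <version>1.2.3</version>
</dependency>
```

```
# rpm spec: anything ABI-compatible from 1.2 on is acceptable
Requires: foolib >= 1.2
```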
Posted Sep 24, 2010 22:58 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
2) Maven is NOT about automatically pulling in the newest versions, but about _repeatable_ builds. I can pull any version of our software from our repository and build it without problems.
Also, Maven right now is practically universal in the Java ecosystem. In my fairly large Java project we only have a couple of artifacts that we had to deploy manually.
Posted Sep 25, 2010 7:25 UTC (Sat)
by lkundrak (subscriber, #43452)
[Link] (13 responses)
When you pick a Maven-managed project nowadays, the pom usually has dependencies on random maven plugins, depended on in build scope, often not packaged in a distro and not really needed; it's just that the developer once thought they were nice.
You comment them out and hit build. Assume you're lucky enough to have the right maven version for which a right combination of plugin versions is available. You'll probably almost always notice that a huge pile of dependencies is being grabbed. From various sources, usually. Again, the repositories are hardcoded in the pom file. Often, the repositories are not mirrored and sometimes just go away. So you google (bing?) for the artifacts and drop them into your ~/.m2 manually. (Oh YUM, did you know how much I love you?)
The story continues up until you run the thing; it crashes and you need to debug it, where you need to find the right source code that was used to build the binaries you somehow got from the internet some time ago. Given how much developers love to depend on snapshots, this is often impossible. Apart from the fact that maven does not make it any easier to fetch source for artifacts that are still there, many available maven repositories don't even contain the source code, so you just need to google around for it. If it does not match, you won't immediately find out.
I've probably never encountered a situation where maven made things easier rather than more complicated. First of all -- I don't understand why a build system is even needed for Java. There are no preprocessor defines which would affect the compilation, nor anything similar which gives rise to stuff like autoconf and make for C projects. You can always reproducibly build your code with find -name '*.java' |xargs javac -classpath deps; find -name '*.class' |xargs jar cf lalala.jar. Everything else maven does is dealt with better by existing package and repository management tools. (Heard of rpm? yum?)
Posted Sep 25, 2010 14:03 UTC (Sat)
by salimma (subscriber, #34460)
[Link] (11 responses)
No matter what either of us would like personally, distro/platform-specific and language-specific packaging and dependency tools are here to stay. The only hope is that they learn to play better with each other. As I mentioned at the Wired Dream party, in this respect Python and Haskell do admirably well. Lua, Perl and (*eek*) Java, not so much.
Posted Sep 25, 2010 20:21 UTC (Sat)
by nim-nim (subscriber, #34454)
[Link]
All the distribution packaging tools have pretty much the same requirements. This is so true that the commented article was written by an Ubuntu person (i.e. .deb), and the people commenting there had no problem saying the same in rpm terms. You'll see the same comments and complaints in all the Linux Java forums, whether on the Fedora, Debian, or Gentoo side.
Python, Perl, etc. have no problems being packaged because they have actually tried to build a functional module system, and there are not so many ways to make one that works, so the result maps directly to rpm/deb/etc.
Those packaging requirements are so generic that Sun/Oracle are pretty much taking them as-is to implement the Java module system planned for Java 8.
The problem with Java and Linux is not that Linux packaging is too foreign, it's that Java people never tried to build a working module system before, and instead of listening to people who did, have been busy ignoring them and re-making the mistakes that rpm/deb solved years ago.
You know what they say about ignoring Unix only to reinvent it? Linux/BSD packaging is the next stage of Unix. It happened a decade ago.
Posted Sep 26, 2010 2:09 UTC (Sun)
by Wol (subscriber, #4433)
[Link] (8 responses)
rpm is a pig because the second major adopter of it (SuSE) is actually *older* than Red Hat, and has (had?) a very different naming convention (SuSE is, iirc, a Slackware derivative originally). So the naming convention for rpm was broken very early on.
That's why LSB is so important, and why I'm a bit disappointed in LSB, because it hasn't really addressed that problem as seriously as I think it should (although I'd be one of the first to admit it's a very hard problem, both practically and politically).
Cheers,
Posted Sep 26, 2010 12:06 UTC (Sun)
by dag- (guest, #30207)
[Link] (3 responses)
While all distributions that use DEB are based on Debian, RPM predates DEB by a few years. If anything you have to blame RPM's early popularity in the distribution world, not the lack of naming conventions or technical inferiority.
RPM was a revolution in the world of packaging when it was created (compared to Unix legacy packaging standards) and Debian obviously learned a great deal from RPM when later they designed their own system (and made a few mistakes in the process as well).
Posted Sep 26, 2010 20:51 UTC (Sun)
by Wol (subscriber, #4433)
[Link]
Thing is, Red Hat adopted and popularised rpm, and they had their naming convention. Then SuSE adopted rpm, but kept their own distinct naming convention, and there's our problem. Why should Red Hat adopt SuSE's conventions and break their historic compatibility? That's daft! But why should SuSE throw away *their* naming conventions, just because they start using the same package tool as Red Hat? That's daft too!
There's your problem - Hobson's choice - whatever you do someone gets hosed :-(
Cheers,
Posted Sep 27, 2010 16:24 UTC (Mon)
by tnijkes (guest, #40042)
[Link]
Where did you get that info?
Posted Sep 30, 2010 9:41 UTC (Thu)
by jschrod (subscriber, #1646)
[Link]
Posted Sep 27, 2010 11:46 UTC (Mon)
by buchanmilne (guest, #42315)
[Link] (3 responses)
This is not such a large problem. Most platforms (binary, perl, php etc.) have some kind of automatic dependency extraction, that is done by rpm to generate correct dependencies.
The biggest real issue is naming of development libraries, but that is also not so severe, as long as the distros use common provides (which should usually be 'upstream-name-devel = %version-%release' at minimum).
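A hypothetical spec fragment illustrating that convention (the package and library names are invented):

```
# distro-specific binary package name, but distro-neutral virtual provides
%package -n lib64foo19-devel
Summary: Development files for libfoo
Provides: libfoo-devel = %{version}-%{release}
Provides: foo-devel = %{version}-%{release}
```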
But SuSE did use to do weird stuff: not having comprehensive buildrequires, but a separate system to manage them. With OBS, I think this is better.
Regardless, it is quite easy to make packages that build across all rpm-based distros without any modification ... as long as you don't use too many distro-specific macros, or conditionally define them.
Now, why is this still on-topic? AFAIK, java doesn't support any sane dependency extraction, without which it is more difficult to provide good runtime dependencies.
Posted Sep 29, 2010 21:33 UTC (Wed)
by marcH (subscriber, #57642)
[Link] (2 responses)
http://depfind.sourceforge.net/
Posted Sep 30, 2010 9:04 UTC (Thu)
by nim-nim (subscriber, #34454)
[Link] (1 responses)
You could also argue any random network protocol is documented because one can use network analysis tools on it to find its innards.
Posted Oct 6, 2010 9:33 UTC (Wed)
by marcH (subscriber, #57642)
[Link]
Please elaborate?
> You could also argue any random network protocol is documented because one can use network analysis tools on it to find its innards.
No, this is not the same :-)
You are not doing any kind of justice to DependencyFinder with this poor analogy. Unlike DependencyFinder, a network analysis tool:
I think I might guess what your point is but please elaborate better.
Posted Sep 27, 2010 12:05 UTC (Mon)
by buchanmilne (guest, #42315)
[Link]
Except that CPAN and distutils and various other tools can quite easily be made to cooperate with the distribution's native package management tools. E.g., if I install the distribution-provided DBIx::Class package, and it is new enough for Catalyst, CPAN won't force me to upgrade (unnecessarily) or downgrade (which might break other apps) my distro-provided DBIx::Class (or itself) if I use CPAN to install Catalyst.
Same thing with compiling/installing modules with ExtUtils (packaging perl modules is relatively trivial, especially with CPANPLUS).
Unfortunately, maven seems to think it always knows best. I once tried to package some Java software that uses maven for build, and while I had all the required dependencies installed (including all the required maven plugins), after over two hours of trying to find out how to prevent maven from always downloading any slightly newer maven plugin, I gave up.
For distributions, the distributor wants to be able to build the distribution without relying on the entire internet. Maven seems to depend on the entire internet to be able to be "reproducible".
Feel free to provide links indicating how to prevent maven from trying to download anything from the internet .. and maybe I will try again.
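For what it's worth, Maven does ship an offline switch; it only helps once the local repository is already populated, so a sketch of the workflow (the flags are real; whether they suffice for a distro build is another matter):

```
# while online, pull everything the build will need into ~/.m2
mvn dependency:go-offline
# afterwards, forbid all network access during the build
mvn --offline package
```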
Posted Sep 26, 2010 5:05 UTC (Sun)
by lacostej (guest, #2760)
[Link]
mvn idea:idea -DdownloadSources=true -DdownloadJavadocs=true
So maven does make some things easier.
I agree that the POM concept is flawed in some respects. Some of them you solve by using an artifact repository and a proper settings.xml. For others you have to trust developers to make the right choices (version ranges).
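A proper settings.xml typically routes every lookup through a single controlled repository manager; a hypothetical fragment (repo.example.com is invented):

```xml
<settings>
  <mirrors>
    <mirror>
      <id>internal</id>
      <!-- "*" routes every repository, including the ones hardcoded
           in poms, through one controlled repository manager -->
      <mirrorOf>*</mirrorOf>
      <url>https://repo.example.com/maven2</url>
    </mirror>
  </mirrors>
</settings>
```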
The thing is that most projects are packaged by developers, not by packagers, who might think more of the ecosystem.
Most java projects are delivered with source code. I went randomly on a public repository: e.g you will find -sources artifacts here http://repository.jboss.org/maven2/org/jboss/jbossas/jbos...
It was probably not a best practice in maven's early days, so old packages are probably lacking it. But for OSS projects there's always the source tag information in the POM. So don't say the source for Java projects isn't there.
Now if someone wants to repackage a project and its dependencies, it wouldn't be hard for a project like Debian to convert a POM into some sort of debian package. It might create multiple versions on the user's system, but it should be feasible.
As for trying to maintain a distribution with a reduced number of package versions, that would require more work, in particular testing and validating that ranges work, but then someone has to teach the Java developer community about ABI and package management. And tell them to take the time to do that.
It's usually not their problem. Try convincing them otherwise.
Posted Sep 24, 2010 15:40 UTC (Fri)
by NAR (subscriber, #1313)
[Link] (20 responses)
But this integrated system (somewhat surprisingly) works. It actually works well enough that the vendor guarantees that it works ("or some money back"). One couldn't have this without lots of testing, but if the underlying libraries are changed every six months (as on a typical Linux desktop), there's no way this testing could be done. This is not the greatness of the unified package management system, but the weakness of it.
Posted Sep 24, 2010 16:38 UTC (Fri)
by georgm (subscriber, #19574)
[Link] (4 responses)
Posted Sep 24, 2010 16:50 UTC (Fri)
by clump (subscriber, #27801)
[Link] (3 responses)
Posted Sep 24, 2010 19:00 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link] (2 responses)
Posted Sep 24, 2010 19:05 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link]
(if only Java use was the only problem there)
Posted Sep 24, 2010 22:41 UTC (Fri)
by Lennie (subscriber, #49641)
[Link]
And they look out of place no matter the OS used. ;-)
Posted Sep 24, 2010 18:49 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link] (10 responses)
Posted Sep 24, 2010 20:11 UTC (Fri)
by NAR (subscriber, #1313)
[Link] (7 responses)
Famous last words... I have experience with testers who haven't got the faintest idea what the software does or about the operating system where the application runs. They can only tell if the icon is green or white, but that's all. Totally useless people. Testing does need qualified people. Maybe they don't need software developer skills, but they have to have good domain knowledge, have to have good knowledge about the environment, have to be persistent, have to think how to break the software, etc. For automated tests they even need some software developer skills.
Posted Sep 24, 2010 23:24 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link] (6 responses)
Knowing the environment is a complete joke when you just pile up *different* versions of the same jvm/java libs that all have different bugs and you only worry about not hitting them.
Posted Sep 25, 2010 20:41 UTC (Sat)
by NAR (subscriber, #1313)
[Link] (5 responses)
Posted Sep 25, 2010 22:29 UTC (Sat)
by nim-nim (subscriber, #34454)
[Link]
Posted Sep 27, 2010 12:09 UTC (Mon)
by buchanmilne (guest, #42315)
[Link] (3 responses)
Posted Sep 27, 2010 12:19 UTC (Mon)
by NAR (subscriber, #1313)
[Link] (2 responses)
Posted Sep 27, 2010 15:28 UTC (Mon)
by buchanmilne (guest, #42315)
[Link] (1 responses)
Posted Sep 27, 2010 15:45 UTC (Mon)
by NAR (subscriber, #1313)
[Link]
Posted Sep 30, 2010 9:49 UTC (Thu)
by jschrod (subscriber, #1646)
[Link] (1 responses)
Don't take that personally, but I hope I won't have to use a product where you were responsible for test resources' allocation.
With a perhaps better understood analogy, your statement reads like the same mindset as `UI design changes don't require very qualified people, everybody can change some dialogs.'. *shudder*...
Posted Sep 30, 2010 13:53 UTC (Thu)
by nim-nim (subscriber, #34454)
[Link]
And if you don't think that's how most big Java projects are "qualified", you've never looked below the shiny presentation veneer. They rely heavily on brute-force testing (either cheap testers or the same simulated via loadrunner and friends). That works to limit the number of obvious problems (and customer complaints) on the exact configuration tested. But that's never been a good way to validate that a design is solid enough to withstand the test of time, and can evolve later when requirements change. As soon as the environment changes, even minimally, everything needs to be redone (write once, run once, not everywhere).
Posted Sep 24, 2010 19:01 UTC (Fri)
by cesarb (subscriber, #6266)
[Link] (3 responses)
We learned from zlib that this kind of situation is a nightmare waiting to happen.
Posted Sep 24, 2010 20:19 UTC (Fri)
by NAR (subscriber, #1313)
[Link] (1 responses)
Should this be an issue? The application is (theoretically) in a controlled environment, not publicly accessible, etc. If an attacker gets to the application, there's already a much bigger problem. Anyway, bribing the user of the application is probably simpler and cheaper.
Posted Sep 24, 2010 22:43 UTC (Fri)
by Lennie (subscriber, #49641)
[Link]
Posted Sep 24, 2010 23:18 UTC (Fri)
by sjlyall (guest, #4151)
[Link]
Considering that a new version of the timezone database comes out every few weeks I hope this isn't as bad any more.
Posted Sep 24, 2010 16:46 UTC (Fri)
by tialaramex (subscriber, #21167)
[Link] (20 responses)
But then it's not alone, I recently found what I think is an Evolution bug. I tried to confirm that it was new after the minor version upgrade, by unwinding that upgrade with 'yum downgrade'.
But I gave up, because a minor version bump of Evolution replaces the data server, which replaces a core library used by everything from VOIP software to the GNOME panel - with a new soname. That's more babies thrown out with the bathwater than after anything short of a new libc and for what? What could possibly have changed in my mail client that meant it was necessary to become incompatible with all previous software? Judging from the soname this isn't the first time it happened, it's just the first time I was forced to confront it.
Posted Sep 24, 2010 19:29 UTC (Fri)
by rgmoore (✭ supporter ✭, #75)
[Link] (16 responses)
What may be getting you is sloppy packaging. The packagers may be specifying that Evolution foo.bar requires library version foo.bar even though it should be able to run on library foo.(bar or higher).
Posted Sep 24, 2010 20:24 UTC (Fri)
by NAR (subscriber, #1313)
[Link] (3 responses)
Posted Sep 24, 2010 21:07 UTC (Fri)
by dlang (guest, #313)
[Link]
Unfortunately, the desktop application space is the worst area, and the "Desktop Environment" projects (as opposed to just applications that run on the desktop and will work on just about any desktop) are especially bad. They seem to think that they dictate the entire system and that nobody ever has any reason not to do things the way (with the specific software and version) that they prefer.
Posted Sep 25, 2010 8:22 UTC (Sat)
by nicooo (guest, #69134)
[Link] (1 responses)
Usually they depend on the same libraries, not each other. I know KDE has a stable ABI; why they choose to update everything at once probably has an explanation... I can't think of any right now.
Posted Oct 4, 2010 11:30 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Sep 24, 2010 21:04 UTC (Fri)
by dlang (guest, #313)
[Link]
Posted Sep 25, 2010 12:52 UTC (Sat)
by tialaramex (subscriber, #21167)
[Link] (10 responses)
Someone in the GNOME project (and most likely, specifically Evolution) had something so vitally important they wanted to change, that it was worth throwing away compatibility.
There's a good chance it didn't really touch compatibility, in my experience the majority of small library owners work on a mixture of superstition, urban legend and outright guesswork to manage their ABI. Some may believe re-ordering public structures to be prettier is harmless (yes, that's why some libpng versions were incompatible despite claiming the same soname...) while others imagine that renaming a structure member needs an ABI bump. Nobody like that _should_ be managing code in your out-of-box GNOME install, but we rely on volunteers, and since I'm not volunteering to go fix this I can't complain (well, clearly I do, but arguably it's not fair to)
Posted Sep 27, 2010 3:15 UTC (Mon)
by cmccabe (guest, #60281)
[Link] (9 responses)
Copied from stackoverflow.com (I couldn't find a HOWTO for some reason):
> The way you're supposed to form the x.y.z version is like this:
So if the developer bumped the major version number from libfoo.18.0.0 to libfoo.19.0.0, he basically waved a big red flag saying "ABI change!" In theory, at least.
Posted Sep 27, 2010 10:35 UTC (Mon)
by tialaramex (subscriber, #21167)
[Link]
But riddle me this: if the change was so major as to need this ABI change, why doesn't it deliver even a single new feature worth telling the world about in the release notes?
Imagine if libc took this approach "download all new apps, we changed the order of the structure members in struct sigaction because we think the mask should be first" and then next week "sorry, download fresh again, this time we decided stat should put the inode number before the device ID..." and the week after "the arguments to recvmsg are re-ordered, and it was renamed recvmessage, we supply a macro so that your code will still build, but existing binaries no longer work".
Posted Sep 27, 2010 13:25 UTC (Mon)
by paulj (subscriber, #341)
[Link] (7 responses)
ELF formats now have more fine-grained versioning systems that can be granular at the symbol level, and even allow one library to support multiple *incompatible* versions of a symbol at the same time. This is used, e.g., by glibc. The symbol versions are specified in a linker map. There are very few reasons to break compatibility with a previous, stable interface once you use symbol versioning.
It's "just" a question of spending a little extra time on paying attention to the compatibility issues.
Posted Sep 29, 2010 6:22 UTC (Wed)
by jamesh (guest, #1159)
[Link] (6 responses)
As the types get more complex, it becomes harder to support multiple versions of those data structures within the same library. And object oriented designs that make use of inheritance are probably the most difficult (as found on most C++ projects and glib GObject based projects).
It is possible to design the data structures so they can be extended without breaking compatibility (e.g. GTK has maintained ABI compatibility for quite a long time, despite extensive changes to some widgets), but people don't always follow those guidelines, or get things wrong the first time. If the library is high enough in the stack with few users, the developers might not even feel it worth while to plan for future changes and just bump the soname when needed.
Posted Sep 29, 2010 11:01 UTC (Wed)
by paulj (subscriber, #341)
[Link] (5 responses)
Still though, even if you must introduce a new, incompatible data type, there's still no reason why your library can not support the same (runtime) call using both old and new data types. The old symbol, to which old binaries bind, simply expects the old data type - and the new symbol the new data type.
Compile-time backward compatibility may require a little extra work again, of course, but it's not rocket science.
Have a read of the Solaris and GNU linker documentation on symbol version scripts/maps. It's a pretty powerful mechanism. Solaris makes heavy use of them, given Sun's strong desire to have binary compatibility as a feature (which also requires carefully documenting what guarantees you make for the stability of interfaces, and testing). It's a pretty old feature too...
The trouble is, this is effort and work that benefits unknown users - it doesn't immediately benefit the developer much and it's not much fun. So it usually simply doesn't get done in the free software world, other than in exceptions like, e.g., projects where there's a corporate sponsor to provide a focus on customer experience.
From a quick look with readelf at my local GTK+ library, it doesn't look like GTK+ uses symbol versioning.
Posted Oct 5, 2010 22:00 UTC (Tue)
by nix (subscriber, #2304)
[Link]
I suspect the X libraries don't use it because *introducing* symbol versioning would itself break the ABI, and the current ABI of libX11 et al predates symbol versioning by years.
Posted Oct 6, 2010 9:31 UTC (Wed)
by jamesh (guest, #1159)
[Link] (3 responses)
Using GTK as an example, if we made an incompatible change to the GtkWidget structure, there are 179 gtk_widget_* symbols that we'd need two versions for.
Now every widget in the library (and every library built on top of GTK) embeds the GtkWidget structure, so we would need two versions in order to support both the old and new API. There are more than 3800 symbols in GTK alone, so this is not a small job. If my application uses any libraries built on top of GTK, they will need to be updated in a similar way to support the new GtkWidget data type if I am to use the new version of the API.
Granted the problems are smaller if the incompatible change is made further down the class hierarchy, but I hope this explains why symbol versioning isn't the first tool developers reach for in these cases.
Posted Oct 6, 2010 19:32 UTC (Wed)
by paulj (subscriber, #341)
[Link] (2 responses)
However, you're mistaken that the applications must be updated. You can retain *source* compatibility even if binary compatibility is broken in some way. I.e. you're assuming the old GtkWidget definition retains that name and the new one gets a new name. However, you can also rename the _old_ definition (GtkWidgetOld or GtkWidget2_2) and have the new definition use the well-known source-level name, presuming it is still source compatible. With linker maps you can direct old apps (compiled with the old GtkWidget definition, i.e. GtkWidgetOld when it was still called GtkWidget) to functions that expect GtkWidgetOld. There is no requirement at all that the name of the structure be the same in the caller and the function, it's not part of the ABI.
Solaris made heavy use of this kind of stuff to preserve runtime compatibility even as data types could be changed incompatibly without changing source-level name (be it changed by default, or changed in the presence of whatever feature selection defines). Glibc probably does too.
Posted Oct 7, 2010 9:58 UTC (Thu)
by jamesh (guest, #1159)
[Link]
Since this thread started on evolution-data-server, consider an application using one of the widgets from the libedataserverui library. If GTK broke the ABI of GtkWidget, you would need a new version of the libedataserverui widgets to use with the new GtkWidget ABI. If that was not available, then your app would need to use the old GTK ABI.
As I said previously, these sorts of ABI breakages are quite painful, so effort is made to avoid them. For GTK itself we've maintained compatibility for 8 years, so it certainly is possible (although it is a bit painful at times).
Would it be nice if evolution-data-server went through fewer ABI breakages? Sure, but I don't think symbol versioning would solve the problem.
Posted Oct 9, 2010 22:14 UTC (Sat)
by nix (subscriber, #2304)
[Link]
Posted Sep 27, 2010 12:35 UTC (Mon)
by buchanmilne (guest, #42315)
[Link] (2 responses)
This isn't a generic problem with Linux or even rpm, this is because fedora/RH doesn't have a sane library policy (and they think that just making compat-* packages on an ad-hoc basis is sufficient).
I switched to Mandriva (well, it was Mandrake at the time) after RH 6.1 for this reason. Mandriva doesn't have this problem, as library packages are versioned (yes, it is a bit of a hack, but in practice it works just fine, except that you may end up with some orphan library packages from time to time, but there is a tool to automatically remove them):
The most obvious symptom of this is the fact that very few people actually run Rawhide, whereas a much larger proportion of Mandriva users run 'cooker'.
Of course, it would also help if upstream projects put a bit more effort into ABI stability.
Posted Sep 27, 2010 13:13 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (1 responses)
This isn't the case. compat-* is rarely used and mostly for legacy compatibility packages. Parallel installable library versioning is more popular c.f. gtk2 and gtk3 for example and yes, sane upstream versioning is important instead of hacking it at the packaging level.
Posted Sep 27, 2010 13:38 UTC (Mon)
by buchanmilne (guest, #42315)
[Link]
You are emphasizing my point.
But again, this is on an ad-hoc basis, not by policy. So, if one user has a problem, they need to motivate for a specific package, which is unlikely to be successful for just their one problem. Tell me when rawhide users can actually survive (say) a dbus API/ABI upgrade without losing any packages or having to temporarily uninstall the majority of their distro to transition.
On the other hand, in Mandriva, *every* library is supposed to be parallel installable (if you find one that isn't, file a bug), so upgrades are smooth, and you can run cooker and expect minimal breakage (and still be able to read mail in evolution between upgrading gnome-desktop to its 18th ABI revision and the new build of evolution becoming available a day or two later).
Posted Sep 24, 2010 17:14 UTC (Fri)
by vmpn (subscriber, #55435)
[Link]
Posted Sep 24, 2010 18:08 UTC (Fri)
by elanthis (guest, #6227)
[Link] (1 responses)
In Linux, it's called a feature of Open Source ecosystem allowing rapid "progress" by changing driver interfaces every other day requiring drivers to be bundled with the kernel to work at all. In Java, apparently it's a bug of the ecosystem by breaking dependencies every other day and requiring libraries to be bundled with apps to work at all.
Posted Sep 25, 2010 2:12 UTC (Sat)
by njs (subscriber, #40338)
[Link]
Posted Sep 24, 2010 18:29 UTC (Fri)
by pranith (subscriber, #53092)
[Link] (36 responses)
Posted Sep 24, 2010 19:03 UTC (Fri)
by jonabbey (guest, #2736)
[Link] (8 responses)
Despite the sneers about 'Write Once, Run Anywhere', Java is still the most full-featured environment for writing fast code that can run on Unix, Mac, and Windows. It has a huge body of portable class libraries to handle almost any task, and there are hundreds of thousands of programmers educated in using it.
Java the language is nothing to write home about any more, but then you have Jython, JRuby, Clojure, Scala, Groovy, abcl, etc., etc., etc., all of which can target all of those Java run-times across all of those platforms.
Then you have Dalvik on Android, J2ME in every Blu-Ray player sold, and on and on.
It is true that Java has failed to be the One Ring to Rule Them All, with platform-independent technologies like REST, XML, SOAP and JSON displacing things that Sun might have wished to be done with RMI, but Java has become one of the leading environments for REST, XML, SOAP and JSON, so it all works out nicely for Java programmers still.
Posted Sep 24, 2010 22:49 UTC (Fri)
by Lennie (subscriber, #49641)
[Link] (7 responses)
Posted Sep 24, 2010 23:33 UTC (Fri)
by cesarb (subscriber, #6266)
[Link] (5 responses)
If I recall correctly, I read a long time ago a thread on doom9.org where people were reverse-engineering the VM used by BD+ (one of the several parts of the Blu-ray specification written to make copying harder). They found out it was a heavily-obfuscated variant of the DLX architecture, and reverse-engineered most of the system calls programs written on it could use. An open-source emulator was written which could emulate the programs from the discs they had.
However, they had a problem with newer discs, because one of the system calls on the code found in these newer discs called into the Java code, which they did not emulate. I do not know how or even if they worked around it; I did not follow the developments or even the thread (I just read it once).
You can probably read more about it on some thread linked to by http://en.wikipedia.org/wiki/BD%2B .
Posted Sep 26, 2010 18:01 UTC (Sun)
by elanthis (guest, #6227)
[Link] (4 responses)
Posted Sep 27, 2010 13:15 UTC (Mon)
by cesarb (subscriber, #6266)
[Link]
> BD-J does not only provide the menu/gui for convenient movie playback. The BD-J applications (Xlets) also interact with the content code (BD+) via TRAP_ApplicationLayer. A basic BD-J platform implementation is therefor required to properly repair BD+ corrupted movies.
Posted Sep 27, 2010 13:56 UTC (Mon)
by sorpigal (guest, #36106)
[Link]
Just imagine! Why get a computer and a web browser and all that complication if you can just insert your Info-Acces Blueray and get a sanitized subset of available information!
Posted Sep 30, 2010 15:09 UTC (Thu)
by BenHutchings (subscriber, #37955)
[Link] (1 responses)
Posted Oct 2, 2010 22:35 UTC (Sat)
by khim (subscriber, #9252)
[Link]
For Turing-completeness you need some kind of ENDLESS memory. It may be tape (as in Turing or Post machines), memory (well, most CPUs have finite memory so they are not fully Turing-complete - yet 2^64 is usually "good enough"), or just a few counters (three are enough - but they must be unbounded!). The DVD VM has very limited memory, so it's not Turing-complete...
Posted Sep 24, 2010 23:38 UTC (Fri)
by Kamilion (subscriber, #42576)
[Link]
Posted Sep 28, 2010 22:08 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (26 responses)
You must live in a cave...
> why do people still choose Java?
While Java has numerous shortcomings it has one rather unique quality: it gets you decent performance even from poor software developers. This is useful in the real world.
Posted Sep 29, 2010 15:06 UTC (Wed)
by HelloWorld (guest, #56129)
[Link] (25 responses)
Posted Sep 29, 2010 19:46 UTC (Wed)
by marcH (subscriber, #57642)
[Link] (24 responses)
Java also enables incredibly powerful IDEs, which not only let poor developers make a living but also let senior developers quickly pinpoint and fix or refactor stupid things.
Java's limitations are frustrating for good developers. But it rocks in the real world outside LWN.
Posted Sep 29, 2010 20:00 UTC (Wed)
by HelloWorld (guest, #56129)
[Link] (23 responses)
Posted Sep 29, 2010 21:20 UTC (Wed)
by marcH (subscriber, #57642)
[Link] (22 responses)
It is easy but only after years of experience.
> And given the complete lack of rational arguments, it's just as easy not to believe them.
Granted.
Note: for an equally bold, but much more simplistic statement see the grand-grand-parent post.
Posted Sep 29, 2010 22:07 UTC (Wed)
by HelloWorld (guest, #56129)
[Link] (21 responses)
(Why do I even have to explain this?)
Posted Sep 30, 2010 6:24 UTC (Thu)
by marcH (subscriber, #57642)
[Link] (13 responses)
Because writing "enterprise" software, of which you apparently know little (lucky you), is hardly about algorithms at all; it is mainly about large amounts of boring boilerplate code for persistence, serialization and user interfaces. Have a look at what J2EE (now JEE) is.
In the real world, grunts do not get to spend time reading Knuth and carefully select algorithms.
Posted Sep 30, 2010 18:51 UTC (Thu)
by HelloWorld (guest, #56129)
[Link] (8 responses)
A dumb programmer would do this:
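The code example that followed this line appears to have been lost in archiving. A hypothetical reconstruction of the kind of naive snippet being discussed — consistent with the consecutive-evens bug and the O(N^2) cost mentioned in the replies — might look like:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveEvens {
    // Naive in-place removal: each remove() shifts the tail (O(n^2)
    // overall), and worse, the element that slides into the removed
    // slot is skipped by the i++ -- so of two even numbers in a row,
    // the second one survives.
    static void removeEvenNumbers(List<Integer> list) {
        for (int i = 0; i < list.size(); i++) {
            if (list.get(i) % 2 == 0) {
                list.remove(i); // next element shifts into index i, then i++ skips it
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> l = new ArrayList<>(Arrays.asList(1, 2, 4, 5));
        removeEvenNumbers(l);
        System.out.println(l); // prints [1, 4, 5] -- the 4 survives
    }
}
```

The class and method names here are invented for illustration; the original posting may have differed in detail.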
Posted Oct 1, 2010 7:00 UTC (Fri)
by jschrod (subscriber, #1646)
[Link] (4 responses)
Posted Oct 1, 2010 14:11 UTC (Fri)
by HelloWorld (guest, #56129)
[Link] (3 responses)
Posted Oct 3, 2010 10:20 UTC (Sun)
by marcH (subscriber, #57642)
[Link]
You really have no idea how much money boilerplate code can make.
Posted Oct 4, 2010 0:18 UTC (Mon)
by sbishop (guest, #33061)
[Link] (1 responses)
You asked for arguments, so here's one. Your "stupid" example is inefficient, yes. But it's also buggy. Consider what would happen if there were two even numbers in a row. And that's a good reason to use commonly used libraries for these kinds of things, of course. I'm sure you know that, but you don't appear to be a Java developer, so I wouldn't expect you to know what those libraries are. The parent post mentioned Apache Commons. That would work, but its Collections framework doesn't do generics. These days Guava would be used.
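As an aside (not from the original thread): no third-party library is actually needed to avoid the consecutive-evens bug — the standard-library idiom is Iterator.remove(), which keeps the cursor consistent across removals. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RemoveEvensFixed {
    // Iterator.remove() handles consecutive matches correctly: the
    // cursor does not skip the element that shifts into the gap.
    static void removeEvenNumbers(List<Integer> list) {
        Iterator<Integer> it = list.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 == 0) {
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> l = new ArrayList<>(Arrays.asList(1, 2, 4, 5));
        removeEvenNumbers(l);
        System.out.println(l); // prints [1, 5]
    }
}
```

Note that on an ArrayList each removal still shifts the tail, so this remains quadratic in the worst case; a LinkedList iterator removes in O(1), and building a fresh list of the kept elements is O(n) either way.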
Posted Oct 4, 2010 1:13 UTC (Mon)
by HelloWorld (guest, #56129)
[Link]
Posted Oct 1, 2010 10:29 UTC (Fri)
by marcH (subscriber, #57642)
[Link] (2 responses)
Posted Oct 1, 2010 11:19 UTC (Fri)
by HelloWorld (guest, #56129)
[Link]
Anyway, if you want an example in a functional language, here is how you'd do it in Haskell:
removeEvenNumbers = filter odd
Posted Oct 5, 2010 22:18 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Posted Sep 30, 2010 19:15 UTC (Thu)
by HelloWorld (guest, #56129)
[Link] (3 responses)
Posted Oct 1, 2010 10:22 UTC (Fri)
by marcH (subscriber, #57642)
[Link] (2 responses)
Yes I left this as an exercise (hint: cheat and use the Internet).
Posted Oct 1, 2010 11:38 UTC (Fri)
by HelloWorld (guest, #56129)
[Link] (1 responses)
Posted Oct 2, 2010 15:12 UTC (Sat)
by marcH (subscriber, #57642)
[Link]
Posted Sep 30, 2010 10:38 UTC (Thu)
by jschrod (subscriber, #1646)
[Link] (6 responses)
Obviously you don't develop in a business context. There, the interesting parts of software development are in the requirement, architectural, and testing parts of a project, not in any algorithm designs which are 99.999% of the time *VERY* boring, being `get data, change some attribute, move it somewhere else, finished'.
[*] Being pedantic, there are few algorithms anyhow where one *could* make such a choice -- most of the time, it's O(n^2) vs. O(n \log n). Equally well, it often doesn't matter at all, because the factors that O-notation abstracts away, and also the $n$ that needs to be handled, are too small anyhow. There's a reason why DEK does full analysis in his books, and not just O notation.
Posted Sep 30, 2010 18:54 UTC (Thu)
by HelloWorld (guest, #56129)
[Link] (5 responses)
Also, if the problem size is small anyway, why worry about performance at all and not just write the stuff in some other, more productive language?
Posted Oct 1, 2010 7:14 UTC (Fri)
by jschrod (subscriber, #1646)
[Link] (4 responses)
See my answer there.
> Also, if the problem size is small anyway, why worry about performance at
First of all, a large problem size by no means equates with complex algorithms. E.g., in a current project of ours, phone events of 40,000,000 customers must be rated and billed. Processing each event is simple, no algorithmic complexity at all. The challenge is an architectural design that supports or even furthers horizontal scalability, i.e., the ability to distribute the processing over many servers.
Second, in business applications performance issues are more often caused by bad interface decisions, I/O problems, and bad database designs than by algorithms. Business applications are not about computing, they are about moving data from A to B.
As I wrote, the complexity and challenges of business applications are found in the requirements and design phases, not in the programming phases. Actually, also in the testing phase, if one goes beyond module unit tests and tests whether a system really delivers the functionality and serviceability the customer *needs* (which is most of the time not the same as what he ordered).
Third, I have yet to meet a non-toy project -- let's say 50+ people working on it -- where one is allowed to select its implementation language on purely technical grounds. I hope I don't have to spell out all the reasons for that. In the real world, implementation language(s) selection is as much a management and political decision as it is a technical decision, for better and worse.
Posted Oct 1, 2010 14:55 UTC (Fri)
by HelloWorld (guest, #56129)
[Link] (3 responses)
You could say that smart people could figure out the design and let grunt programmers implement it. But I don't think that works well in practice. The devil is in the details, and for a bad programmer, it's easy to write a horrible program from a design he didn't really understand.
Posted Oct 2, 2010 22:50 UTC (Sat)
by khim (subscriber, #9252)
[Link] (2 responses)
Yes - and it means most "business programs" are horrible. But with Java they work. With other, less restricting languages, they don't. The problem here is that it's usually very hard to fire a bad developer from "big business", so they must be used somehow - and Java is the best language for that. C# is not, because it's a nicer language and actually includes many higher-level features!
Posted Oct 3, 2010 1:23 UTC (Sun)
by HelloWorld (guest, #56129)
[Link] (1 responses)
Posted Oct 3, 2010 9:07 UTC (Sun)
by khim (subscriber, #9252)
[Link]
Sorry, but no. You cannot disprove an honest belief so easily: that's why we have Judaism, Christianity, Islam, Buddhism, etc. I've already said why Java is "better": instead of a small number of constructs which can be combined in many ways, you have lots and lots of stuff which can be combined in limited ways. This is easier to use if you try to randomly connect things and see what finally works. Welcome to the real world.

Yes, probation time helps, but not absolutely: if you reject candidates one after another, the HR department will eventually decide that it's better to fire you (citing you as unaccommodating) rather than tolerate the problems you cause (they lose bonuses if you reject lunkheads, you know). So in the end only absolutely hopeless imbeciles are fired. Everyone else stays and you must accommodate them somehow. Especially if they have good recommendations and finished prestigious universities.

Yes and no. It's more like giving him a power saw but only asking him to produce identical straight planks - and this is exactly what happens in real life. If you take a look at what is in stores, you'll see that the work of genuine carpenter-designers is rare and expensive. Most furniture is mass-produced using the same design for thousands of items. Why? It's easy: it's cheaper this way. People who can design exist, and often they can even use a power saw, but they are rare and expensive. So furniture production is optimized for lunkheads: they know how to operate a power saw or even more complex equipment, but they don't know how to design things or how to cope with timber blemishes (unless they've gotten explicit instructions). Other people (designers) do all the planning work, and the low-level people just follow the rules. The Java world is designed for just such a use. And the main ingredient is not even language capabilities but IDE capabilities - though those are enabled by language capabilities.
I've conducted hundreds of interviews, and "Java developers" are the worst bunch: often they cannot even write simple programs like the one you've shown above without the help of an IDE! And they will happily write the O(N^2) solution you've presented - if it works, then OK; if it doesn't, some higher-level person will speed the thing up by a factor of ten and get a separate bonus for that work. The business world is not about good code. It's about strict adherence to specifications - and Java helps immensely here.
Posted Sep 24, 2010 22:36 UTC (Fri)
by alankila (guest, #47141)
[Link] (26 responses)
I don't think either looks very likely. I suggest Linux distributors just accept that Java software is packaged with dependencies included. It's really convenient for the end user: because nothing can be assumed of the platform, applications come as single .jar files that you double-click to run (and they work).
That simplicity is a tough target to beat by any other scheme.
Posted Sep 25, 2010 1:55 UTC (Sat)
by nicooo (guest, #69134)
[Link] (2 responses)
The package manager.
> That simplicity is a tough target to beat by any other scheme.
Also the package manager. =)
Posted Sep 25, 2010 2:30 UTC (Sat)
by rsidd (subscriber, #2582)
[Link] (1 responses)
>> If someone wants to come up with a way to obtain a set of shared libraries everyone can use
>The package manager.
To quote the GP: "on all three platforms"? Realistically, even if Windows or Mac OS X were ever open in the past to Linux-style package management, they wouldn't do it today, with hard disk space being so cheap.
Posted Sep 25, 2010 3:25 UTC (Sat)
by nicooo (guest, #69134)
[Link]
Posted Sep 25, 2010 2:39 UTC (Sat)
by steffen780 (guest, #68142)
[Link] (22 responses)
Independent of that, there's a point in updating libraries: it fixes things. Using up-to-date libs means there are hundreds of bugs you'll never run into. And just because a developer's use of a program works just fine with ancient library version X doesn't mean that users don't benefit from, or even require, e.g. unicode-related fixes in the newer library version X+10. The Java model merely delays the hassle of ensuring your code works with newer dependencies; it doesn't avoid it. And nobody serious is demanding that every app developer always follow the latest commits of all his dependencies.
There's also no real reason not to release a source tarball alongside your binary release. It almost seems spiteful "well, on windows a source tarball is useless, so if you use Linux, BSD or OSX with MacPorts then just go away". Seriously, how hard is it to make a source tarball of a project? 2 seconds typing plus a few seconds for the compression to take place? How much effort is that compared to having to contend with random library versions that you have to manually update, and to create a giant mess called "a simple .jar"? Why would any upstream be _wanting_ to have to deal with libraries in this fashion, it's just baffling.
Java projects could release a .jar (which, I want to note, is less handy than a self-extracting exe, because running a jar requires the user to install Java herself) instead of the exe. What's the big deal?
So whilst I admit that the Java way has advantages in some situations, in others the Linux way has many advantages. For a start, being available by the distribution mechanism the user expects is important for a consistent user experience - Windows users expect to download some executable from a random website and run it - not my thing, but hey, to each his own. But Linux users expect to simply type the name of the program into synaptic, or apt-get, or yum, or emerge, and then have the system take care of everything. Why not let them?
And from a project's point of view:
Posted Sep 25, 2010 8:35 UTC (Sat)
by nim-nim (subscriber, #34454)
[Link] (1 responses)
Linux tools have grown in the wild and focus on making good practices (regular updates, security, adapting to a changing environment, portability across compilers...) easier. Because if you don't play well with others, the FLOSS community marginalizes you.
Java tools (Apache included) have grown in the enterprise and focus on hiding the problems generated by the bad practices PHBs mandate for various reasons (Usually, short-term gain for someone else's long term pain. Except everyone is someone else's someone else, so everyone loses).
And it didn't help that this kind of practice drove Sun into the wall, and that Sun spent its last years playing licensing games, not annoying its partners with correct but unpopular changes, re-implementing others' stuff that already worked instead of fixing its own problems, and lastly trying to pimp itself up by promising more than it could deliver (witness the crazy JVM release tree, with countless branches that all promised something different, none of which was ever finished; the first thing Oracle announced for Java was stopping this madness and merging all the branches, plus the bits it inherited from BEA, into a single JVM release). So the bad example came from the very top of the Java ecosystem.
Posted Sep 25, 2010 20:41 UTC (Sat)
by khc (guest, #45209)
[Link]
Posted Sep 25, 2010 20:50 UTC (Sat)
by NAR (subscriber, #1313)
[Link] (10 responses)
Unfortunately that's not true. Updating libraries (or any software) usually means replacing a set of bugs with a new set of bugs. Even old bugs get reintroduced from time to time; see the Linux kernel for a current example.
Linux users expect to simply type the name of the program into synaptic, or apt-get, or yum, or emerge, and then have the system take care of everything. Why not let them?
The problem is that the Linux users can only get the version that is in that version of the distribution, they can't have an older or newer one. See example about Evolution in this page. I also presume there's no Linux distribution which doesn't have pulseaudio but has Firefox 3.5+ - you can't have this in Linux easily, but I can install Firefox 3.5 on the 9 years old Windows XP.
Posted Sep 25, 2010 21:40 UTC (Sat)
by sfeam (subscriber, #2841)
[Link]
That is simply not true. I can't speak to all the various distros, but certainly in mandriva there are back-compatibility packages for previous versions of common libraries going back quite a way. And you can often pull a newer version from Cooker.
Posted Sep 25, 2010 21:48 UTC (Sat)
by nicooo (guest, #69134)
[Link] (1 responses)
Gentoo, Arch, CrunchBang, Slackware, ...
Posted Sep 25, 2010 22:01 UTC (Sat)
by foom (subscriber, #14868)
[Link]
Posted Sep 26, 2010 6:18 UTC (Sun)
by drag (guest, #31333)
[Link] (2 responses)
Try installing Win98 audio subsystem to replace Windows XP's and see how far you get.
I know that in Linux-think it's hard to separate applications from operating systems. We all have a tendency to think of them as one unified item... which is good in one way, but bad in another.
-------------------------------
The problem is that in Linux there is no real separation between what is 'applications' and what is the 'OS'.
I am sure that we all agree to the concept of 'do one thing right and do it well' and the concept of 'layers'.
That is, a complex system should have layers where you can work on improvements at one layer without disturbing the upper layers very much.
Like: Oh, I can upgrade my wireless drivers without breaking the browser.
Right? That makes sense.
That is why with the Linux kernel they make a clear distinction between 'No Internal API or ABI!!!!' and 'We support external API/ABI compatibility as a high priority!!!'.
The idea is that:
* 'Well we want the freedom to hack the kernel anyway we want. If we depend on internal ABI/API compatibility then that will hold us ransom to popular proprietary-only companies that will use their popularity to control us while contributing nothing!'
* 'Well, but we still want people to use our software. So we need to make sure that we can upgrade and change the kernel but not piss all the users off and break everything.'
So, right: LAYERS.
So the API and ABI for the Linux kernel takes the form of various IOCTLs, POSIX file systems, POSIX IPC, sockets, semaphores, the /dev directory, /proc, sysfs, etc etc etc. That is the Linux kernel's API, and they generally do a good job of preserving compatibility between different versions. Not 100%, but pretty decent all things considered.
But here is how things look from a distro perspective right now:
{Linux kernel}(Linux ext. API) --- > {OMG USERSPACE!!!!!1111oneoneoneo}
Everything above the kernel is a mush. A mixture of a hundred different things that people simply slap together and get working in whatever fashion pleases them.
For the most part I can build my own kernel and slap a Linux distro on top of that and it'll probably function decently. Sometimes I can break stuff, but it's going to be relatively minor. I can even quite happily use a Redhat kernel with Debian and probably things will work fine.
But compared to that, install a Debian-packaged piece of software on Red Hat and all hell breaks loose; it's going to be just about as shitastic as it can get.
I can compile my own software and it'll work with pretty much complete disregard to the package system. I can also do binary downloads of many pieces of software... like Firefox or Chrome or Blender3D and as long as I do not try to work with the package management software then it'll usually work out very well regardless of the distro.
Even with proprietary software like Opera that uses C++ and QT, which I am told is going to be shit because of ABI issues and whatnot.... the binaries they provide are compiled the same completely regardless of what distro they are installed on.
Go and look at the different packages that Opera offers... They have a dozen different packages to work with a dozen different distros, but only a couple of actually different binaries. Last time I looked they would provide the same exact binaries for Fedora, Debian, Ubuntu, etc etc... with the only significant difference being that they supported an old version of Ubuntu that was built using GCC 3.xx or something like that, where (I am guessing) they did actually run into C++ ABI issues.
All the incompatibility exists in the package management systems. And it's just stupid stuff like 'where to stuff the documentation' or crap like that.
So Linux and distros are already doing something right. They just are not aware of it on a higher level or something like that.
Posted Sep 26, 2010 19:28 UTC (Sun)
by skybrian (guest, #365)
[Link] (1 responses)
I think the main problem is that some people want Linux distros to do too much. They don't have the resources and need to step back a bit. There's no reason to provide an all-encompassing solution like in C.
Linux distributions should provide a solid JDK and fix security bugs in it, and provide a standard way to install applications written in Java (which can bundle their own libraries). Then security holes in applications are officially Not The Distribution's Problem. Let application-level projects do that.
Posted Sep 27, 2010 3:38 UTC (Mon)
by cmccabe (guest, #60281)
[Link]
That's basically the situation as it exists today. You can easily use yum and apt to get a JDK, and then install your own jars with wget.
I think Thierry Carrez feels that this situation makes Java on the Linux desktop a "second-class citizen." I can easily install C, C++, python, or perl programs with apt-get. I'll get security updates and bugfixes each time I do apt-get upgrade. It's not just "someone else's problem." I can't do that with Java apps today.
Posted Sep 26, 2010 19:26 UTC (Sun)
by mfedyk (guest, #55303)
[Link]
Some distributions like Fedora/RHEL have policies that specify only one version of a lib, but there is nothing in the package manager that keeps you from having multiple separate packages with different versions of a lib. I know that Debian does this a lot with libfoo and libfoo12, etc. It just has more maintenance overhead to have another package to keep updated.
Posted Sep 27, 2010 13:07 UTC (Mon)
by buchanmilne (guest, #42315)
[Link] (1 responses)
So, let's not presume any further.
Posted Sep 29, 2010 18:29 UTC (Wed)
by nevyn (guest, #33129)
[Link]
yum -C list firefox pulseaudio
Posted Sep 29, 2010 18:22 UTC (Wed)
by nevyn (guest, #33129)
[Link]
I guess you've never paid for Linux then? RHEL (and I'd assume all distros. people actually pay for) has every version ever released available, just a simple "yum downgrade" (on RHEL-5) away. There are also the minor releases too, like 5.3.z, where you'll just never even see some newer versions of packages.
Posted Sep 26, 2010 2:47 UTC (Sun)
by mikov (guest, #33179)
[Link] (1 responses)
No, Linux users don't expect it - many of them simply have no other choice as installing software outside of the distro repository is simply very difficult.
I have said this before but it bears repeating - the notion that all software that users could possibly need can sit in a distribution repository is laughable.
Most of the software I use on day to day basis on my Linux desktop didn't come from the distribution repository. It couldn't possibly and even if the distro did package some of it, I still wouldn't use the packaged version because I can't rely on the distribution to keep up with the upstream changes.
Posted Sep 27, 2010 14:37 UTC (Mon)
by buchanmilne (guest, #42315)
[Link]
That doesn't mean that there is no value in fixing the broken tools that are the only stumbling block for software that could otherwise be supplied in the distribution repos.
Posted Sep 26, 2010 19:21 UTC (Sun)
by alankila (guest, #47141)
[Link] (5 responses)
It also breaks things. Frequent breakage of everything is commonplace for me between each update of Ubuntu, as I tend to follow the development branches out of interest. So no. Updating stuff breaks stuff if done willy-nilly; you have to admit that much.
> With the java way a user of 100 java apps will have 100 versions
But every one of those 100 apps probably works with respect to the features that the app developer tested. Nobody else has gone in and changed something and screwed it up.
See, this is what I had to do at my old workplace. At first we tried to package our dependencies into nice system-wide installs, but before long we ran into needing to make changes, break APIs, etc., and realized that we pretty much had to either update all the apps or install new libraries with different names. (Perl did not support any versioning internally.) So we went for application-specific installs: apps come with wrappers that push a bunch of app-local package paths into the library search path, and then we went our merry way with never a problem. Seems fairly similar to the ".jar contains dependencies" approach to me, at least on a conceptual level. The reason it is done is always the same: to have the application still work even if the surrounding system changes.
I will mostly skip the discussion about source tarballs, it doesn't seem to really apply to me. In case of a .jar, no compiling is necessary, so Linux or OS X look just the same as Windows with respect to source tarball being equally useless. Right?
> Java projects could release .jar (which, I want to note, is less handy then a self-extracting exe
To run a Linux application, you need to download Linux first, too. My point being, we all want to make some assumptions about the platform we are running on. It would be nice to have JDK on the OS out of the box, or have it contained in the application binary itself, as some people would almost certainly benefit from this design. However, such a design is somewhat uncomfortably too close to crazy, and the only real reason you might really want to is because you have an application that in fact requires a very specific JDK to work. (Boo to such, etc.)
> For a start, being available by the distribution mechanism the user expects is important for a consistent user experience
Now we get to an interesting point. See, if distributors could give up on this mode of thought where every application must be chewed into tiny bits and spit out as part of the OS, we could already get all the java apps without anybody complaining. The complaints are not solely leveled at java, though, I saw firefox and chrome get their share of ire for forking libraries into their source tree not long ago.
To me, an OS's primary function should be just to contain and manage applications as blobs. When distros try to do more, they easily cause just the problems that application writers wanted to avoid in the first place!
For instance, Debian very nearly had to give up on packaging Eclipse at all, because they simply could not successfully package it since 3.2.3 or so. This apparently activated a number of people, and the Eclipse package was updated to version 3.5.2 at some point. Thinking to myself "Great! I can now stop downloading the tarball myself!", I tried it out but quickly found that code completion didn't work. Back to .tar.gz world. Working software is just so much more important than the other arguments I have heard.
Posted Sep 27, 2010 3:24 UTC (Mon)
by ringerc (subscriber, #3071)
[Link] (2 responses)
The problem is that when a security problem arises, you have to find every copy of every library in every app, all different versions, and make sure they're properly patched.
Maybe if there were a reasonable, nearly universal way to declare that your app bundles these versions of these libraries, with tools that automatically manage the bundled sources, it'd be OK. But what happens instead is that each bundling app integrates the bundled libraries into its own funky build system and often starts patching them in odd ways; you can almost never just drop in fresh upstream sources.
At least Maven helps with this - you may declare dependencies on specific versions, but you don't pull them into your source tree. They're only bundled (optionally) at build-time.
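As a concrete illustration of declaring a versioned dependency without pulling it into the source tree, a Maven pom entry looks roughly like this (the artifact and version are chosen purely for illustration):

```xml
<dependencies>
  <!-- pinned version; the jar is fetched from a repository at
       build time, not committed into the project's source tree -->
  <dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>1.4</version>
  </dependency>
</dependencies>
```

The trade-off discussed in this thread is that such pins make builds repeatable but also freeze the dependency graph unless someone does the work of bumping and retesting versions.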
Posted Sep 28, 2010 11:39 UTC (Tue)
by alankila (guest, #47141)
[Link] (1 responses)
Posted Sep 29, 2010 8:56 UTC (Wed)
by ringerc (subscriber, #3071)
[Link]
You can't have a buffer overflow that writes to arbitrary bits of memory when working with a String object - but you can still suffer from string length/truncation bugs. For example, if the program does a validity check on input, then truncates it for storage/use (perhaps in a database with a length-limited field and a JDBC driver that silently truncates long input) you can possibly trick the app into accepting invalid input. That could be a security issue.
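The validate-then-truncate pattern can be sketched in a few lines; everything here (the names, the length limit, the reserved-name check) is hypothetical, invented to illustrate the bug described above:

```java
public class TruncationDemo {
    static final int MAX_LEN = 5; // hypothetical column width

    // Validate first, truncate afterwards: the stored value is no
    // longer the value that was validated.
    static String store(String username) {
        if (username.equals("admin")) {
            throw new IllegalArgumentException("reserved name");
        }
        // silent truncation, as a length-limited DB column or a
        // silently-truncating JDBC driver might do
        return username.length() > MAX_LEN
                ? username.substring(0, MAX_LEN)
                : username;
    }

    public static void main(String[] args) {
        // "adminX" passes the check, but truncation stores "admin"
        System.out.println(store("adminX")); // prints admin
    }
}
```

The fix, of course, is to validate the value that will actually be stored — truncate (or reject over-length input) before the validity check, not after.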
You might find a code path that permits the bypassing of certain authorization checks in a webapp. Or an SQL injection attack caused by improper string interpolation. Or a way to trick the app into doing improper file system access for you. A way to modify parts of a data structure that you're not meant to be able to, because a JavaScript<->Java binding in a webapp exposes whole objects to JS, not just authorization-checked individual members.
Java tries to avoid making you walk through a security minefield as part of using the core language, but you can still shoot your own foot off in any language.
The SecurityManager can help protect apps from bugs in user/app code, but it has to be enabled and in use. It's used by default in applets and Java Web Start apps, but not for locally-executed apps. Glassfish v3 enables it too. Many other things don't, so there's no sandbox acting as a second layer if the app's built-in checks fail to prevent something. OTOH, if the security issue is with data/actions *within* the app, the SecurityManager won't help you anyway.
Java can help with security issues, but it's no magic bullet.
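For reference, enabling the SecurityManager for a locally-executed app is a matter of JVM flags plus a policy file; a minimal sketch, with file names purely illustrative:

```
# run with the SecurityManager enabled and a custom policy
java -Djava.security.manager -Djava.security.policy=app.policy -jar app.jar
```

where app.policy grants only what the app actually needs, e.g.:

```
grant {
    // allow read-only access under /data, nothing else
    permission java.io.FilePermission "/data/-", "read";
};
```

As the comment above notes, this only sandboxes interactions with the outside world; it cannot protect data and actions *within* the app itself.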
Posted Sep 27, 2010 4:38 UTC (Mon)
by nicooo (guest, #69134)
[Link]
After the libpng bug in June, I don't blame the distros for being unhappy.
Posted Sep 30, 2010 9:15 UTC (Thu)
by nim-nim (subscriber, #34454)
[Link]
> But every one of those 100 apps probably works with respect to the features that the app developer tested.
And what happens when the app needs to be updated to add new customer-requested features? It can't because the local fork of Java deps is missing the features that were added in the real upstream version. So everything needs to be re-done.
The Java way does not solve anything; it just delays the payback. (That's one reason big Java apps cost astronomical prices and have slow release cycles. The language is nice and should enable cheaper/faster development, but the abysmal release practices add up in multiple ways over the long term.)
Posted Sep 27, 2010 3:18 UTC (Mon)
by ringerc (subscriber, #3071)
[Link]
For anyone else frustrated by this, there's a non-password protected mirror at http://svn-mirror.glassfish.org/ .
That said, most other projects I've dealt with are pretty good about releasing sources. They're usually not very good at pushing them to maven along with their binary artifacts, but they tend to publish them on their website.
Personally I think a big part of the trouble is release management. Ant is bloody awful; everyone has their own weird and wonderful recipe for disaster. Maven *should* help, but the bad documentation and lack of guidance on how to use it tend to lead to pom files full of fixed versions, with hard-coded repositories in the pom, dodgy settings for package deployment, etc. People write Maven poms by waving chicken feet around and muttering their preferred arcane incantations, because few people actually understand how the darn thing works or how it's *meant* to work.
Unlike rpm/deb, there's very little good guidance on writing Maven poms, or ant builds, that're friendly to users of your code, not just to the developers of it.
Posted Sep 25, 2010 13:08 UTC (Sat)
by mcq (guest, #66223)
[Link]
Carrez: The real problem with Java in Linux distros
Wol
Wol
RPM was introduced in Red Hat Linux 2.0, in the early fall of 1995. DEB was introduced a few months earlier, in Debian 0.93r5, in March of 1995.
rpm is a pig because the second major adopter of it (SuSE) is actually *older* than Red Hat, and has (had?) a very different naming convention
Not good enough?
- requires you to run the software
- will provide very incomplete "documentation"
Maven should be seen as a cross-platform dependency tool, just like CPAN, CRAN and Python distutils.
http://www.brocade.com/forms/getFile?p=documents/support_...
Testing is prohibitively expensive but it also does not require very qualified people.
> qualified people.
I described a mode of development I don't agree with (piling up components with random versions and minimal code tracking, and then using many testers to try to identify the exposed bugs and fix them before the customer notices).
How many of them had known unpatched security issues?
What could possibly have changed in my mail client that meant it was necessary to become incompatible with all previous software?
I think you're running into problems with the way Gnome does things. When they update their minor version number, they update all their official Gnome applications and libraries in sync. This technically shouldn't be necessary, since GNOME is strict about maintaining API backward compatibility within a major version. You should be able to run old applications on new libraries, though updating an application may require updating the libraries it depends on if they have updated APIs.
OT: dependency and ABI mismanagement
> otherwise. The runtime linker assumes that libfoo.so.18 and libfoo.so.19
> have different ABIs since the _whole point_ of the soname versioning is to
> manage this ABI compatibility.
>
> 1. The first number (x) is the interface version of the library.
> Whenever you change the public interface, this number goes up.
> 2. The second number (y) is the revision number of the current
> interface. Whenever you make an internal change without changing the
> public interface, this number goes up.
> 3. The third number (z) is not a build number, it is the
> backwards-compatability count. This tells you how many previous interfaces
> are supported. So for example if interface version 4 is strictly a
> superset of interfaces 3 and 2, but totally incompatible with 1, then z=2
> (4-2 = 2, the lowest interface number supported)
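Taken literally, the quoted scheme makes compatibility a small arithmetic check: interface version x with backwards-compatibility count z supports every interface from x - z up to x. A minimal sketch of that rule (class and method names are mine, not any real linker API):

```java
// Sketch of the quoted x.y.z rule: a library exporting interface
// version x with backwards-compatibility count z supports every
// interface from (x - z) up to x inclusive.
public class SonameCompat {
    // Returns true if a library with interface version x and compat
    // count z satisfies a binary that needs interface 'needed'.
    static boolean satisfies(int x, int z, int needed) {
        return needed >= x - z && needed <= x;
    }

    public static void main(String[] args) {
        // The example from the comment: interface 4 is a strict
        // superset of 3 and 2, but incompatible with 1, so z = 2.
        System.out.println(satisfies(4, 2, 3)); // true
        System.out.println(satisfies(4, 2, 1)); // false
    }
}
```

The revision number y never enters the check; only the interface version and the compat count matter for linking.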
But then it's not alone; I recently found what I think is an Evolution bug. I tried to confirm that it was new after the minor version upgrade by unwinding that upgrade with 'yum downgrade'.
[bgmilne@tiger ~]$ rpm -qf /usr/lib64/libedataserver-1.2.so.13.0.1
lib64edataserver13-2.30.2.1-1mdv2010.1
[bgmilne@tiger ~]$ rpm -qf /usr/lib64/libedataserver-1.2.so.11.0.1
lib64edataserver11-2.28.2-1.1mdv2010.0
[bgmilne@tiger ~]$ urpmq --auto-orphans 2>/dev/null |grep edataserver
lib64edataserver11
This isn't the case. compat-* is rarely used and mostly for legacy compatibility packages.
Parallel-installable library versioning is more popular; gtk2 and gtk3, for example.
Summary
Sorry, but this is NOT a Turing-complete VM
It is obviously always possible to do stupid things in any language. But Java makes that harder. Just like a good road system saves lives.
It's easy to make such bold statements. And given the complete lack of rational arguments, it's just as easy not to believe them.
Algorithms are being written all the time. It's about trivial stuff like removing even numbers from a list. Don't tell me this doesn't happen in the real world, _especially_ when dumb programmers (who don't know about higher-order functions like filter) and dumb languages (like Java, which also doesn't know about higher-order functions) are involved.
void removeEvenNumbers(ArrayList<Integer> l) {
    for (int i = 0; i < l.size(); ++i)
        if (l.get(i) % 2 == 0)
            l.remove(i);
}
A smart one would do this:
void removeEvenNumbers(ArrayList<Integer> l) {
    int j = 0;
    for (int i = 0; i < l.size(); ++i)
        if (l.get(i) % 2 != 0)
            l.set(j++, l.get(i));
    // removeRange is protected in ArrayList; from outside the class,
    // l.subList(j, l.size()).clear() does the same thing.
    l.removeRange(j, l.size());
}
Now please don't tell me that this kind of stuff isn't done in the real world.
Or, if there is such a utility function in Apache commons, it should be used.
IMO, both are not good. Iterator.remove() should be used.
Please, if you make such statements, provide arguments for them. Otherwise, it's just a waste of time for both of us. Also, as far as I can see, there is no analogue to the removeRange method that takes iterators instead of indices.
Or, if there is such a utility function in Apache commons, it should be used.
I don't think there is. It's basically impossible to abstract this kind of things in Java enough to make it useful. A method that removes even numbers from a list isn't really that useful, since one day you may want to remove odd numbers or primes or negatives or whatever. So what you need to do is pass a small chunk of code that determines whether an element will be removed. The best Java will give you is an anonymous inner class, which makes it so verbose that it's pointless.
filter(new Predicate<Integer>() { public boolean doIt(Integer i) { return i % 2 == 0; } }, someArrayList);
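For what it's worth, here is roughly what that would look like spelled out in full in pre-Java-8 code. Predicate, doIt and filter are the hypothetical names from the comment above, not an existing library API; the call site really is dominated by the anonymous inner class.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical pre-Java-8 filter helper: removes elements matching
// the predicate, using Iterator.remove() so removal is safe mid-loop.
public class Filtering {
    interface Predicate<T> { boolean doIt(T t); }

    static <T> void filter(Predicate<T> p, List<T> list) {
        for (Iterator<T> it = list.iterator(); it.hasNext(); )
            if (p.doIt(it.next()))
                it.remove();
    }

    public static void main(String[] args) {
        List<Integer> l = new ArrayList<Integer>(Arrays.asList(1, 2, 2, 3, 4));
        // The anonymous inner class is most of the call site:
        filter(new Predicate<Integer>() {
            public boolean doIt(Integer i) { return i % 2 == 0; }
        }, l);
        System.out.println(l); // [1, 3]
    }
}
```

Note that the iterator version also handles consecutive matches correctly, unlike an index-based loop that removes while incrementing.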
You asked for arguments, so here's one. Your "stupid" example is inefficient, yes. But it's also buggy. Consider what would happen if there were two even numbers in a row.
That's actually a good point, that wouldn't have happened with an iterator.
And that's a good reason to use commonly used libraries for these kinds of things, of course. I'm sure you know that, but you don't appear to be a Java developer, so I wouldn't expect you to know what those libraries are. The parent post mentioned Apache Commons. That would work, but their Collections framework doesn't do generics. These days Guava would be used.
This kind of stuff makes sense in theory, but as I pointed out elsewhere in this thread, the Java language makes this kind of stuff way too hard and verbose. Programmers will just continue writing loops as long as it stays that way (It works perfectly well in more expressive languages though). Perhaps Java 8 will remedy this situation.
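The consecutive-evens bug mentioned above is easy to reproduce: ArrayList.remove(int) shifts later elements left, and the loop's ++i then skips the element that slid into the vacated slot. A quick sketch (the method name is mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SkipBug {
    // The index-based loop from the earlier comment: remove(i)
    // shifts the rest of the list left, and ++i then skips the
    // element that just slid into slot i.
    static List<Integer> removeEvensBuggy(List<Integer> l) {
        for (int i = 0; i < l.size(); ++i)
            if (l.get(i) % 2 == 0)
                l.remove(i);
        return l;
    }

    public static void main(String[] args) {
        List<Integer> l = new ArrayList<Integer>(Arrays.asList(1, 2, 4, 5));
        System.out.println(removeEvensBuggy(l)); // [1, 4, 5]: the 4 survived
    }
}
```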
This discussion is going nowhere. Making claims and not providing any arguments has a name. It's called trolling.
> all and not just write the stuff in some other, more productive language?
That's true, but you don't have a choice.
You could say that smart people could figure out the design and let grunt programmers implement it. But I don't think that works well in practice. The devil is in the details, and for a bad programmer, it's easy to write a horrible program from a design he didn't really understand.
That's true, but you don't have a choice.
Yes - and it means most "business programs" are horrible. But with Java they work. With other, less restricting languages, they don't.
You'll have to give reasons for this if you want anybody to believe this who doesn't already.
The problem here is the fact that it's usually very hard to fire a bad developer from "big business", so they must be used somehow, and Java is the best language for that.
Those developers shouldn't have been hired in the first place. After all, this is what the probation time is good for: keeping lunkheads out of your team. Instead, you're even trying to accommodate those lunkheads. This is like letting a would-be carpenter use a hand saw instead of a power saw since he might cut his fingers off instead of telling him that he's simply not the right person for the job.
If someone doesn't want to believe something, then he won't believe it
You'll have to give reasons for this if you want anybody to believe this who doesn't already.
Those developers shouldn't have been hired in the first place. After all, this is what the probation time is good for: keeping lunkheads out of your team.
This is like letting a would-be carpenter use a hand saw instead of a power saw since he might cut his fingers off instead of telling him that he's simply not the right person for the job.
My own (python-)project releases:
- a source .tbz
- packages for Gentoo and Debian/Ubuntu that do not include dependencies, but merely specify them (e.g. >=python-2.6)
- a self-extracting .exe archive for Windows that does include all the dependencies, from an appropriate python version to GTK and PyGTK.
And I don't care how easy anyone says it is to bundle your project's dependencies into a jar - it simply cannot be easier than simply making a list like this:
dev-python/matplotlib
>dev-python/pygtk-2.16
(...)
Best case: You get free publicity (being in the repos and their search engines), free testing of various combinations of libraries, pre-filtered bug reports, and users of the distros you're in won't ask you an install question ever again.
Worst case: You lose 5 seconds on each release from creating and uploading a tarball.
Seems crazy not to take this deal.
Independent of that, there's a point in updating libraries - that fixes things. Using up to date libs means there's hundreds of bugs you'll never run into.
The problem is that the Linux users can only get the version that is in that version of the distribution, they can't have an older or newer one.
> it, and provide a standard way to install applications written in Java
> (which can bundle their own libraries). Then security holes in
> applications are officially Not The Distribution's Problem. Let
> application-level projects do that.
Unfortunately that's not true. Updating libraries (or any software) usually means replacing a set of bugs with a new set of bugs. Even old bugs get reintroduced from time to time, see the Linux kernel for current example.
By your logic, when re-introduced bugs are re-fixed, users shouldn't update, in case other old bugs are re-introduced ...
The problem is that the Linux users can only get the version that is in that version of the distribution, they can't have an older or newer one.
s/Linux/Fedora/
I also presume there's no Linux distribution which doesn't have pulseaudio but has Firefox 3.5+
Pulseaudio has been around for some time (many distros started shipping it in 2007), so you have to choose distros which have > 2 year lifetimes, IOW, not fedora, but not counting EPEL, CentOS5/RHEL5 counts:
# yum -C list 2>/dev/null|grep -E "(^pulseaudio\.|^firefox)"
firefox.i386 3.6.7-3.el5.centos updates
pulseaudio.i386 0.9.10-1.el5.3 epel
> that version of the distribution, they can't have an older or newer one.
But Linux users expect to simply type the name of the program into synaptic, or apt-get, or yum, or emerge, and then have the system take care of everything. Why not let them?
the notion that all software that users could possibly need can sit in a distribution repository is laughable.
Bundling libraries is a security issue
> features that the app developer tested. Nobody else has gone in and
> changed something and screwed it up.
Source releases
There is Project Jigsaw, which should address this issue. This presentation talks about it. Unfortunately it won't make it into Java 7, but it's targeted for Java 8 (see this and this). Meanwhile there is JPackage.