
Providing services is a profession

Posted Sep 10, 2009 20:18 UTC (Thu) by bfields (subscriber, #19510)
In reply to: Providing services is a profession by NAR
Parent article: Attacks against WordPress installations

Similar problems happen on the client side--web clients have vulnerabilities, they're exploited, people don't always upgrade when they should, plugins are a problem. I don't think it helps to focus on services in particular.

Also, setting up something like WordPress and keeping it upgraded shouldn't be rocket science. Maybe distributors could help. (I'm not sure why this sort of thing seems so frequently to be managed by hand rather than installed from distro packages -- fixing that might help.)



Providing services is a profession

Posted Sep 10, 2009 21:03 UTC (Thu) by elanthis (guest, #6227) [Link]

Because packages put stuff in specific directories. If you had a WordPress package, it would install to something like /var/www/wordpress. But lo and behold, you host two sites, so you put them in /var/www/site1.com and /var/www/site2.com. Or maybe in home directories. Or wherever. So now the packages are useless.

The Linux software packaging model is a joke for anything other than one-off appliances and hardcore nerds who have no life and want to babysit their computers. For all the headaches Windows software installation CAN cause, out in the real world with real users it causes very few. It just works. And since the installers there (usually) let you pick any installation directory, it's easier to have multiple copies of a piece of software and still have it tracked by the Windows "packaging" service.

The only thing the Windows model lacks that the Linux model has is a unified software update facility, and that is an easy problem to solve if anybody actually bothered to try. Instead, the Linux people stick with their pre-packaged, pre-configured, pre-mandated, per-distro, per-version, fixed-everything packaging model. Real users suffer when they use Linux, while the nerds go on and on about how easy it is to install and manage software -- so long as you only need software that was packaged for your distro, don't need newer versions that are only packaged for the latest release of your distro, don't mind updating every last component of your OS every 6 months (and getting a ton of all-new bugs and UI changes just to get the one or two bugs you needed fixed), and don't mind insanely difficult installation procedures for inherently unpackageable software like commercial games with gigabytes of data to install.

But hey, this is the 10th year that it's the Year of the Linux Desktop, so clearly users are all going to convert over finally, despite Linux developers still refusing to solve one of the fundamental problems real users have with Linux.

(And yes, this is a real user problem. I used to be one of those "get everyone I know to use Linux" people. Then I found out that real people want to do things like install games without having to open up a fucking terminal and figure out arcane magic commands, and then do it all over again with every game patch, when they could have spent all that time just playing the damn game, playing a sport outside, getting laid, etc.)

Providing services is a profession

Posted Sep 10, 2009 22:41 UTC (Thu) by sergey (guest, #31763) [Link]

"Classic" Windows software relies on the registry for configuration, in a way that makes keeping code in multiple locations on disk rather useless. What you are referring to -- a way to drop an application in a directory and make it work -- is specific to Web apps and is perfectly acceptable on Windows and Linux alike. A well-written Web application can rely on the Linux packaging model to keep a single code base that serves multiple sites, so that one can update the code (the application) without touching the data (all those sites). In the Windows world, an application has nothing to rely on: the developer either gives you a way to centralize updates, or doesn't (making it your problem). With .NET, Microsoft reversed the "use the registry for configuration" policy and now recommends keeping configuration in files, making Windows a little more *NIX-like in that respect. But .NET still doesn't have any packaging functionality whatsoever, and for security updates to non-Microsoft applications you're still on your own.

What happens more often is that people drop their applications into their home folder on a provider's shared hosting box and then assume they are somehow not responsible for maintaining them. Some developers are really good at facilitating this maintenance (for example, Gallery 2); others... not so much. This is actually a good argument for the Linux packaging approach, not against it.
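The shared-code-base idea above can be sketched with symlinks: the package owns one copy of the code, each site keeps only its own data, and an upgrade to the packaged copy reaches every site at once. All paths below are hypothetical, chosen for illustration -- they are not actual distro package locations.

```shell
# One packaged copy of the code, many sites: a sketch with hypothetical paths.
set -e
pkg=/tmp/demo/usr/share/wordpress      # code installed once, owned by the package
mkdir -p "$pkg" /tmp/demo/var/www

for site in site1.com site2.com; do
    # Per-site data (uploads, themes, config) lives with the site...
    mkdir -p "/tmp/demo/var/www/$site/wp-content"
    # ...while the code is just a symlink to the shared, packaged copy.
    ln -sfn "$pkg" "/tmp/demo/var/www/$site/code"
done

# Upgrading the package rewrites $pkg once; both sites pick it up,
# and neither site's wp-content is ever touched.
readlink /tmp/demo/var/www/site1.com/code   # -> /tmp/demo/usr/share/wordpress
```

The point of the layout is the separation sergey describes: updating the application means replacing one directory, while the data directories are never in the upgrade's path.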

Installing packages or updating is not a profession

Posted Sep 13, 2009 20:57 UTC (Sun) by man_ls (guest, #15091) [Link]

Ubuntu uses the Synaptic package manager, and it works pretty well. Updating your machine is not hard, and nobody forces you to upgrade the whole distro every 6 months -- you can pick an LTS release and stay with it for two years. Even better, it's free. You can do much the same with Debian, and I'm sure other distros have their own graphical tools.

On Windows you have a plethora of software packages, each of which wants to update at random moments -- each running its own updater all the time, or just when it feels like it. Guess what: most of that software simply goes without updates indefinitely. That includes Windows itself, whose auto-updating is so obnoxious that people just try to disable or ignore it. The result is that the typical Windows installation has a plethora of worms, Trojans, and viruses fighting each other for supremacy.

One model is better for software integrated into the distro. (Big surprise: distributions are better at distributing software.) The other model is better for installing random garbage from the web -- including all kinds of malware. (If people want to play games, my advice is to get a console.) This is not a justification; we all know the problem of installing external software on GNU/Linux is not solved. But Windows is most definitely not an example to follow. Now, if we could learn a thing or two from Mac OS X...

Installing packages or updating is not a profession

Posted Sep 14, 2009 17:33 UTC (Mon) by NAR (subscriber, #1313) [Link]

I think you misunderstood the problem. Updating your machine is not the problem -- the problem is that new versions of applications tend to introduce new bugs (or trigger old ones). Just think of the headaches pulseaudio caused. The trouble with Linux software management is that if I want a new version of, e.g., pidgin because it supports a new protocol, I need to upgrade the whole distribution, which installs pulseaudio (among other things), so I won't have sound. This happens even though I had absolutely no intention of going anywhere near pulseaudio.

The hardcore Linux advocate's answer would be to grab the code, compile, and install it yourself -- but that's definitely not as easy as clicking "Next -> Next -> Finish", and the advantages of package management are lost (no automatic security fixes, no warning if a library the program uses gets updated with incompatible code, etc.). The Windows solution might be uglier on the inside and might carry lots of duplicated libraries -- but it works, and that's what the user cares about. Of course, as long as FOSS developers treat their users as beta-testers, nobody should expect them to care about things like this -- but that road doesn't lead to world domination.

Installing packages or updating is not a profession

Posted Sep 14, 2009 21:08 UTC (Mon) by man_ls (guest, #15091) [Link]

But that's not a problem -- it's a known trade-off, and GNU/Linux distributors have chosen one path. Nobody forces you to use a distributor -- in fact you might just compile everything statically and upgrade each bit independently. But nobody has chosen that path, because of the enormous waste and bloat. And also because, as the number of copies of a library grows, the probability that all of them are upgraded when a security hole is found approaches zero. Especially given that most of those programs cannot be upgraded automatically, and if users had to pay attention to all those upgrades they would do little else in their lives. The result? Tons of malware.

The Windows solution does not work IMHO. World domination yes, but at what price?

Providing services is a profession

Posted Sep 14, 2009 19:26 UTC (Mon) by bfields (subscriber, #19510) [Link]

"If you had a WordPress package, it would install to something like /var/www/wordpress. But lo and behold, you host two sites, so you put them in /var/www/site1.com and /var/www/site2.com."

Requiring multiple physical copies of the same code in order to run multiple instances is rather silly.

If you insist on doing it that way, then yes, distro packages will be hard to use -- but then, *any* sort of upgrading will be hard in an environment where you're scattering random PHP code all over the filesystem.
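One way to run multiple instances without multiple copies is at the web-server layer: several virtual hosts sharing one package-managed code directory, so upgrading the package upgrades every site at once. The host names and the DocumentRoot path in this Apache-style fragment are illustrative assumptions, not taken from any actual distro package.

```
<VirtualHost *:80>
    ServerName site1.example
    # Both vhosts point at the single packaged code base.
    DocumentRoot /usr/share/wordpress
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example
    DocumentRoot /usr/share/wordpress
</VirtualHost>
```

Per-site state (uploads, configuration, database credentials) would still need to live outside the shared directory for this to work, which is exactly the code/data separation the packaging model encourages.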


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds