Shuttleworth: Unity on Wayland
Posted Nov 5, 2010 0:48 UTC (Fri) by thoffman (guest, #3063)
Parent article: Shuttleworth: Unity on Wayland
So, although I'm hopeful and supportive of most anything which would make my Linux desktop more responsive, I hope this Unity on Wayland effort will not prevent new Gnome apps from being usable through a networked desktop.
Posted Nov 5, 2010 1:21 UTC (Fri)
by JohnLenz (guest, #42089)
[Link] (84 responses)
The next generation of GNOME (and KDE) apps seems to be moving towards much heavier use of OpenGL and the graphics card. So even if the desktop is based on X, I don't see what good network transparency at the X level will do. Sure, there is indirect GLX rendering over the network, but that is too slow, so you won't be able to run the program over the network anyway. Instead, the rendering will have to happen on the client, using the client's graphics card. I currently use the network transparency of X too, but it looks like X's split of rendering on the server will need to be abandoned in favour of something like VNC, where the rendering is allowed to happen on the client. Perhaps Wayland plus SPICE could be used to replace the network transparency.
Posted Nov 5, 2010 1:43 UTC (Fri)
by gmaxwell (guest, #30048)
[Link] (78 responses)
I'm sure someone loves all this bouncing fading crud that halts your machine for seconds at a time just to play a pretty animation, but for me the computer is an important _tool_, and things like speed and network transparency are important for me.
Fortunately there is a whole suite of actively developed toolsets focused on users like me: things like xmonad as well as all the "classic" Unix tools. The biggest downside to using them now is that it means breaking from the norm of your distro's default configuration and thus losing part of the outsourced systems administration value distros provide. Perhaps a distro fork will arise targeting people who are technically competent and more interested in productivity.
Posted Nov 5, 2010 2:36 UTC (Fri)
by bjacob (guest, #58566)
[Link] (13 responses)
1. web apps (more generally, in-browser apps --- the CUPS config tool over port 631 now appears like a visionary precursor!)
2. the _clients_ now have insanely powerful graphics hardware, so in any case there is no longer any reason to do graphics on the server side (where I mean "server" in its proper network sense, not in the GPU sense).
And for non-GUI apps, SSH is all you need...
Posted Nov 5, 2010 3:30 UTC (Fri)
by davide.del.vento (guest, #59196)
[Link]
But there will still be the need for "normal" X-forwarding via ssh, so a distro that completely kills it will be a deal breaker for me.
Posted Nov 5, 2010 7:09 UTC (Fri)
by rqosa (subscriber, #24136)
[Link] (10 responses)
The way that Web apps are now (usually), they have a big deficiency in comparison to console or X apps running over SSH: when you use the console/X app, it runs with your UID, but when you use the Web app, it runs with the same UID as it does for everyone else. This is bad for security; it's as though every app were a setuid app. Apache's "mod_suexec" is one solution to this problem, but its limitations (it can only run CGI apps, and it chooses which UID to run as according to the directory where the program resides) make it rather impractical.
Posted Nov 5, 2010 12:51 UTC (Fri)
by alankila (guest, #47141)
[Link] (9 responses)
Thus, all security features must be implemented through other kinds of checking, often manually, by comparing the user ID on the database row being requested with the user ID currently logged in. Not everyone remembers to do this all the time, though: generally a missing check for things like "is this user authorized to view that page" results in information-disclosure bugs.
I think a lot of real-world systems don't run a ton of webapps within one server and one uid. I personally tend to isolate running webapps inside virtual machines and use reverse proxying techniques to expose them where I want them. Virtual machines can be backed up and recreated wherever I want, so they are actually quite convenient to move as black boxes from a system administrator's perspective...
Posted Nov 5, 2010 14:03 UTC (Fri)
by rqosa (subscriber, #24136)
[Link] (7 responses)
> Webapps actually have far more interesting issue than the one you talk about. The most important problem is that all end-user data generally is accessible from the same server-side UID, as the guy who logs in a web application isn't using a distinct UID on the server side.

Uh, that is the issue I'm talking about. Or at least it's a consequence of it (because the app always runs as the same UID, all of its stored data is accessible to that UID).

> The data is in fact usually written in SQL storage

But the grandparent post was talking about, essentially, using a Web browser and Web apps to do what we formerly did (or still do) with an X server and remote X clients. That is to say, taking the kinds of apps which are now X clients (e.g. image editor, email user agent, text editor, office suite, RSS feed reader, terminal emulator, XMPP client, etc.) and making them into Web apps. These don't usually use SQL; or if they do, they use a per-user database instance, e.g. an SQLite file owned by the user, or a per-user instance of mysqld (IIRC, KDE's "Akonadi" does this). There's no good reason for these apps to suddenly become dependent on a central RDBMS server just because they have migrated from one remote-user-interface protocol (the X Window System) to another (HTTP + HTML5 + JavaScript + whatever)!
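As a rough illustration of that per-user-database pattern (the application name and schema below are made up):

```python
# Hypothetical example of a "per-user database instance": each Unix account gets
# its own SQLite file under its home directory, so ordinary file permissions
# provide the isolation that a shared central RDBMS would have to re-implement.
import os
import sqlite3

db_path = os.path.expanduser("~/.local/share/example-feed-reader/feeds.db")  # made-up app
os.makedirs(os.path.dirname(db_path), exist_ok=True)

conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE IF NOT EXISTS feeds (url TEXT PRIMARY KEY, title TEXT)")
conn.execute("INSERT OR IGNORE INTO feeds VALUES ('https://lwn.net/headlines/rss', 'LWN')")
conn.commit()
conn.close()
```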
Posted Nov 5, 2010 14:51 UTC (Fri)
by alankila (guest, #47141)
[Link] (6 responses)
Otherwise I am in agreement.
Posted Nov 5, 2010 17:07 UTC (Fri)
by rqosa (subscriber, #24136)
[Link] (5 responses)
> suexec does nothing to solve that problem as far as I know

Well, with suexec, the UID a CGI program runs as is determined by which directory it's in. A typical usage scenario is that each user has a "home" (or "public_html") directory (that is, a directory found at a path like "~user/public_html" or something similar on the machine where Apache runs, which Apache then exposes to HTTP clients as the URL "http://hostname/~user/") which may contain CGI programs; when one of those programs is executed, suexec will set the UID of its process to the UID of the user who owns the "home" directory it's in. (Or maybe it just picks the UID that owns the program file; I don't remember which way it is, but it doesn't make much difference.) So, basically, suexec separates webapps that "belong" to one user from webapps that "belong" to other users.

Now, if you take one CGI program and make multiple copies of it, each belonging to a different user (that is, each in a different user's Apache home dir), then the different users of that app are separated from each other. But that is an ugly kludge, necessary only because of the limitations of suexec. So suexec isn't a good solution to this problem.

(Also, suexec is only compatible with CGI programs. CGI has its own problems, the biggest of which is that it requires every webapp process to exit as soon as it finishes generating a response message; that is really bad for performance. There are much better IPC protocols for webapps, such as SCGI, AJP, and FastCGI.)

Here's a suggestion: for "single-user" webapps, the UID to run the app as should be determined by the user specified in the HTTP request, with HTTP authentication (basic or digest).
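A minimal sketch of that last suggestion, assuming a hypothetical stand-alone gateway in front of the per-user app instances (the port is arbitrary, and real credential verification against PAM or a password file is omitted):

```python
# Sketch only: a stand-alone gateway that maps HTTP Basic credentials to a local
# UID before dispatching to a per-user app instance. Credential verification is
# deliberately left out; a real gateway would check against PAM or a password file.
import base64
import pwd
from http.server import BaseHTTPRequestHandler, HTTPServer

def uid_from_basic_auth(header_value):
    """Return (username, uid) for an 'Authorization: Basic ...' header, or None."""
    if not header_value or not header_value.startswith("Basic "):
        return None
    try:
        decoded = base64.b64decode(header_value[len("Basic "):]).decode("utf-8")
        username, _, _password = decoded.partition(":")
        return username, pwd.getpwnam(username).pw_uid   # KeyError if no such user
    except (ValueError, KeyError):
        return None

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        result = uid_from_basic_auth(self.headers.get("Authorization"))
        if result is None:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="apps"')
            self.end_headers()
            return
        username, uid = result
        # A real front end would now forward the request (via SCGI, AJP or FastCGI)
        # to a backend process running as `uid`; here we only report the mapping.
        body = f"this request would go to an instance of the app running as {username} (uid {uid})\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Gateway).serve_forever()
```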
Posted Nov 5, 2010 21:13 UTC (Fri)
by Pc5Y9sbv (guest, #41328)
[Link]
I've frequently considered that it would be nice to have a generalized mod_wsgi like this, and a user-mapping variant that could manage a daemon process pool with each authenticated web user getting his own app instance, which can be reused for many requests and shut down automatically when idle for too long. There is already some basic pool management in mod_wsgi, but it needs more features.
However, other aspects of the security model need to mature, as web frameworks have such an in-built idea of one app for all users. You'd really need all of your server-side resources to be owned by the individual web users, e.g. good user/group models for files and good multi-role policies for your RDBMS.
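Something like the following toy pool, which is not mod_wsgi code but only a sketch of the idea (the start_worker_as() helper and the use of su are placeholders):

```python
# Toy sketch of a per-user daemon pool: one worker process per authenticated user,
# reused across requests and reaped after an idle timeout. Nothing here is a
# mod_wsgi API; start_worker_as() and the su invocation are placeholders.
import subprocess
import threading
import time

IDLE_TIMEOUT = 300  # seconds a per-user worker may sit unused before shutdown

class WorkerPool:
    def __init__(self, start_worker_as):
        self._start = start_worker_as      # callable(username) -> subprocess.Popen
        self._workers = {}                 # username -> (process, last_used)
        self._lock = threading.Lock()

    def get(self, username):
        """Return a running worker for this user, starting one if needed."""
        with self._lock:
            proc, _ = self._workers.get(username, (None, None))
            if proc is None or proc.poll() is not None:
                proc = self._start(username)
            self._workers[username] = (proc, time.monotonic())
            return proc

    def reap_idle(self):
        """Call periodically (e.g. from a timer) to shut down idle workers."""
        with self._lock:
            now = time.monotonic()
            for user, (proc, last_used) in list(self._workers.items()):
                if now - last_used > IDLE_TIMEOUT:
                    proc.terminate()
                    del self._workers[user]

def start_worker_as(username):
    # Placeholder: a real pool would hand the worker a listening socket and drop
    # privileges with setuid; running "su" requires the pool itself to be root.
    return subprocess.Popen(["su", "-", username, "-c", "sleep 3600"])
```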
Posted Nov 6, 2010 11:33 UTC (Sat)
by alankila (guest, #47141)
[Link] (3 responses)
I don't think anybody is going to actually do desktop apps in the web browser. The most important feature of a web application is probably still the fact that it's accessible anywhere and requires no installation from the user's point of view. A local application implemented as a web app is only available locally and needs to be installed, so that advantage is wholly gone.
Posted Nov 7, 2010 0:36 UTC (Sun)
by rqosa (subscriber, #24136)
[Link] (2 responses)
> The model of using CGI programs run by a central apache from user's home directory is probably not in the future.

Indeed, it isn't. Here's how it should be instead: when a person goes to use a remote app, they point their browser at the URL for the host where that app resides and the pathname on that host where the app is installed; for example "http://hostname/bin/my_app.py". Then the user enters their authentication credentials for that remote host (use HTTP digest or basic authentication for this). Any subsequent HTTP requests from that user will be forwarded (by SCGI or AJP or similar) to an instance of that app running under that user's UID. So, the Web app is installed in just one location, but there are multiple running instances of the app, one instance per user. (Think about what happens with, for example, a host running an SSH server where many users log in via SSH and then run various console apps and X apps. It's the same principle: apps are installed system-wide, and there's a separate running instance of each app for each user using it.)

> enable browsers to execute applications without a web server

I think this is already possible. (If you've got the Python documentation installed, try going to "/usr/share/doc/python/html/index.html" in a browser, type something in the search box, and press "Go".) But I wasn't talking about running web applications locally.

> I don't think anybody is going to actually do desktop apps in the web browser.

The trouble is, some people here are saying that we don't need the X Window System anymore, because we don't need X's network transparency anymore, because we have a better way to use apps remotely: Web apps. But with X, most apps that you can run locally (image editor, text editor, etc.) you can also run remotely, and lots of people use this feature. That won't be possible if those apps migrate to a non-networked UI system (e.g. Wayland). If we're really going to adopt HTTP + HTML5 + (whatever else) as the replacement for remote X, we've got to have these same kinds of apps available for it!
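For reference, the SCGI framing that such a front end would use is simple; here is a small encoder sketch (header names follow the usual CGI conventions, and nothing here is tied to a particular server module):

```python
# Sketch of the SCGI request framing: a netstring containing NUL-separated
# headers (CONTENT_LENGTH first, SCGI=1 required), followed by the request body.
def encode_scgi_request(headers, body=b""):
    items = [("CONTENT_LENGTH", str(len(body))), ("SCGI", "1")]
    items += [(k, v) for k, v in headers.items()
              if k not in ("CONTENT_LENGTH", "SCGI")]
    blob = b"".join(k.encode() + b"\0" + v.encode() + b"\0" for k, v in items)
    return str(len(blob)).encode() + b":" + blob + b"," + body

# e.g. forwarding a GET for the authenticated user "alice" (names are illustrative):
packet = encode_scgi_request({"REQUEST_METHOD": "GET",
                              "REQUEST_URI": "/bin/my_app.py",
                              "REMOTE_USER": "alice"})
```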
Posted Nov 7, 2010 0:38 UTC (Sun)
by rqosa (subscriber, #24136)
[Link]
s/requests from that user/requests from that user to that URL/
Posted Nov 9, 2010 0:29 UTC (Tue)
by alankila (guest, #47141)
[Link]
The most popular X-forwarded application I use personally is xterm and that's mostly because I'm too lazy to open local terminals and use separate ssh connections for them. If I had to choose between X-style vs. VNC-style, I guess I actually prefer VNC-style remoting because of the ability to leave the session running perpetually on the server. Unfortunately, in practice, VNC is not really such a stellar protocol, and I've seen RDP between 2 Windows systems perform better than VNC seems able to, for some reason.
Posted Nov 6, 2010 5:44 UTC (Sat)
by butlerm (subscriber, #13312)
[Link]

> The data is in fact usually written in SQL storage and there are no explicit security tags that the database server could check on behalf of the application: thus all data is available to every query, at least in principle.

There are good ways to fix that problem, namely "virtual private databases". You can implement them in any database that has updatable views that can filter on session variables.

I have an application that sets the database session state to match the application session when handling each page request. Until that state is set, all the "tables" return zero rows. After it is set, all the virtual tables contain only the rows the user is allowed to have access to, only those rows can be updated, and the application can only insert rows into the same range. Near perfect isolation. Any kind of attack can only affect the data of the logged-in user.
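A small illustration of that pattern using SQLite (the table, view, and helper function are invented for the example; a production system would use its database's own session variables and row-level policies):

```python
# Sketch of a "virtual private database": the application queries only a view
# that filters on the current session's user; until that is set, it returns
# zero rows. Table, view and function names are invented for the example.
import sqlite3

session = {"user": None}                       # per-request session state

conn = sqlite3.connect(":memory:")
conn.create_function("session_user", 0, lambda: session["user"])

conn.executescript("""
    CREATE TABLE notes (owner TEXT, body TEXT);
    INSERT INTO notes VALUES ('alice', 'secret plans'), ('bob', 'shopping list');
    CREATE VIEW my_notes AS SELECT body FROM notes WHERE owner = session_user();
""")

print(conn.execute("SELECT * FROM my_notes").fetchall())   # [] -- no session yet
session["user"] = "alice"
print(conn.execute("SELECT * FROM my_notes").fetchall())   # [('secret plans',)]
```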
Posted Nov 5, 2010 19:46 UTC (Fri)
by misiu_mp (guest, #41936)
[Link]
Posted Nov 5, 2010 3:27 UTC (Fri)
by davide.del.vento (guest, #59196)
[Link]
Posted Nov 5, 2010 3:57 UTC (Fri)
by Kit (guest, #55925)
[Link] (39 responses)
Having a tool be capable of what it's intended to do is important, but would you really want to use an oily screw driver with no grip? Or a hammer where the metal on the handle is flaking?
That's part of the idea behind this "dancing candy cram-ware", as you call it. Even modern netbooks have MORE than enough power to handle these animations and transitions; it's just that the current stack has a variety of issues (immature drivers, 2D operations that act as basically a worst-case scenario for the graphics accelerator, etc). Animations and transitions can work really well when _done_ well. Any that are showy, flashy, or long are prime examples of _bad_ ones... determining what's good is a bit harder, with subtlety generally being best.
Posted Nov 5, 2010 4:53 UTC (Fri)
by gmaxwell (guest, #30048)
[Link] (14 responses)
So, done well, most of them would be invisible. Unfortunately they aren't done nearly that well. My favorite peeve today is the combination of gnome-screensaver not reliably measuring idle status when I'm using the keyboard exclusively, and an uninterruptible several-second fade-out animation when it decides to blank.
It's not that I want an oily screwdriver. I want the nano-diamond-tipped, tungsten-carbide, rocket-powered screwdriver. I want amenities, but they ought to be ones I consider helpful rather than hindrances. If someone wants to paint it pretty colors that's fine, as long as it doesn't damage the atomically sharp pointy end. But absolutely no wind-load-adding spoilers, please.
I fully expect that different people will have different preferences in this regard. Unfortunately I don't feel that there are any good "power user" distros these days which don't leave me playing sysadmin over every piece of minutiae (e.g. Gentoo). And I feel like the reduction in sysadmin work I get from using Fedora vs. Gentoo is constantly decreasing due to "usability improvements", which seem to take the form of making the user's first hour 1% easier at the expense of adding a 10% cost to the user's next 20 years. Things like having to use some opaque "registry editor" in order to set distinct lock and save times, when almost 20 years ago xscreensaver gave me a perfectly accessible text file (or even a GUI!) with these settings.
Posted Nov 5, 2010 5:55 UTC (Fri)
by sladen (guest, #27402)
[Link] (3 responses)
Posted Nov 5, 2010 9:20 UTC (Fri)
by mjthayer (guest, #39183)
[Link] (2 responses)
I don't think things are quite as simple as you suggest. One, the animations aren't done by the GPU alone, they need support (think loading, preparing, scheduling) from the rest of the system. Two, the GPU draws power to do those animations, which is a cost. And three, perhaps most relevantly, they are not free from a developer time perspective. At least in theory (though this may not apply to software developed by volunteers and/or enthusiasts), that time could have been put into reducing your general setup and teardown time rather than creating animations.
Posted Nov 5, 2010 11:27 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
"One, the animations aren't done by the GPU alone, they need support (think loading, preparing, scheduling) from the rest of the system."
Negligible.
"Two, the GPU draws power to do those animations, which is a cost. And three, perhaps most relevantly, they are not free from a developer time perspective."
GPU draws power to draw stuff in any case. And most effects are so simple that from GPU's point of view they are essentially free.
Posted Nov 5, 2010 15:42 UTC (Fri)
by drag (guest, #31333)
[Link]
If you're interested in speed and battery life, then using the GPU to its full extent will get you both faster than trying to depend on the CPU alone.
Using the CPU to do things that the GPU can do faster just means you're wasting cycles and ruining your efficiency and performance.
The GPU is now as much a part of your computer as floating-point processing or DMA is. It's no longer possible to treat it like some sort of optional add-on or something you only use for games. It's a native part of the architecture and should be something application writers can easily take advantage of.
In PCs this has been true for a while, and in the mobile world it is more and more true. After all, look at the requirements for Windows Phone 7... they require a DirectX 9-capable GPU.
Posted Nov 5, 2010 9:15 UTC (Fri)
by mjthayer (guest, #39183)
[Link]
And instead of that you got the Tungsten Graphics powered one! (Sorry, couldn't resist there...)
Posted Nov 5, 2010 9:58 UTC (Fri)
by roc (subscriber, #30627)
[Link] (8 responses)
For example, even if the application can move a visual object from point A to point B instantly, an animation can still be a helpful cue to remind the user that motion has occurred. Our brains aren't designed to process objects teleporting around.
Posted Nov 5, 2010 11:47 UTC (Fri)
by orabidoo (guest, #6639)
[Link] (7 responses)
That's fine as a general case, by default. By all means provide a pretty animation, OpenGL-powered or otherwise, to make that window minimize to the taskbar or wherever.
BUT, it so happens that many of us, technically minded users, already know exactly what we expect from our computers when we press a key or click a button.
In those cases, having stuff visibly move around is just a plain distraction. The human eye, like that of most animals, is designed to follow stuff that moves and to pay much more attention to it than to the static background.
And if I *know* that the window is going to minimize, my brain is already onto what I want to do with that window out of the way. So to have an eye-catching animation at that point is not just harmless eye-candy. It's actively distracting me from where my mind wants to go.
For that reason, every power-user friendly GUI and desktop should have an option to disable all animations. Current GNOME/Metacity has a gconf key for that (Apps->metacity->general->reduced_resources), which is nice. I sure hope the future GNOME shell(s) also have an equivalent setting.
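For what it's worth, the same key can also be flipped from a script; a tiny sketch, assuming the GNOME 2-era python-gconf bindings are installed (gconftool-2 on the command line does the same job):

```python
# Disable Metacity's animations via the reduced_resources key mentioned above.
# Assumes the GNOME 2-era python-gconf bindings; adjust if your setup differs.
import gconf

client = gconf.client_get_default()
client.set_bool("/apps/metacity/general/reduced_resources", True)
```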
Posted Nov 5, 2010 13:07 UTC (Fri)
by flammon (guest, #807)
[Link] (6 responses)
Posted Nov 5, 2010 13:26 UTC (Fri)
by dskoll (subscriber, #1630)
[Link] (5 responses)
It depends on which button you clicked. I use XFCE without any animations and I'm never confused about what happens when I do things to windows.
I use network transparency all the time. Eye-candy is great for those who want it, I suppose, but please keep an escape hatch for those of us who like network transparency.
Posted Nov 5, 2010 13:38 UTC (Fri)
by Janne (guest, #40891)
[Link] (4 responses)
Well, duh. But people don't always know which button does what. If the UI can guide them with animations and such, that's only a good thing. An app window that minimizes into its button on the taskbar is a GOOD IDEA. If the window simply vanishes, users can be left confused as to what happened. Even you. What if your aim was a few pixels off, and you accidentally closed the window instead of minimizing it? With animations you would instantly know that you closed the window instead of minimizing it.
And I keep hearing comments about "technically minded people". You do know that those people are in the minority? Most people are NOT "technically minded"; they just want to get their stuff done. And if they can get it done elegantly, all the better.
And I find these comments about "eye-candy that freezes the desktop" strange. I have all kinds of animations and the like on my Mac, and the UI does not freeze.
Posted Nov 5, 2010 15:14 UTC (Fri)
by dskoll (subscriber, #1630)
[Link]
But people don't always know which button does what.
A UI that doesn't make that clear is fundamentally broken and no amount of animation can fix that.
To be clear: If people want to implement fancy animations, that's fine. I don't care. Even make it the default if you like. But make it possible to switch them off because I do care if animations are forced on me.
Posted Nov 5, 2010 15:42 UTC (Fri)
by tjc (guest, #137)
[Link] (1 responses)
Well, maybe the first time they don't know what happened. But if someone clicks a button five times, and the same thing happens every time-- and they still don't know what's going on-- then they have issues that can't be addressed by the UI. Everyone is confused from time to time, but it usually passes. There are very few people who live in a state of perpetual confusion, so why target a UI at some imaginary, gormless twit who doesn't even exist?
Posted Nov 6, 2010 10:22 UTC (Sat)
by Janne (guest, #40891)
[Link]
People are not computer wizards. It might be obvious to you and me how and why computers work the way they do, but the rest of the people have no idea. The computer should do everything in its power to help the user. But every time something like that is attempted in Linux, we get whining about "dumbing down" the UI or something. Only in Linux is complexity considered a good thing, and helping the user considered a sign of stupidity.
The end result is that Linux on the desktop is something that normal people do not want to use.
And sure, people will learn which button does what. But animations still help. When you have a dozen apps in the taskbar, it's useful to have an animation that shows you which of them is the app you just minimized. Sure, you could visually scan the taskbar, but you must admit that an animation is a lot faster way to do this.
And there are even studies about this. Researchers set up two functionally identical systems. The difference was that one system looked plain and basic, while the other had nice graphics ("useless eye-candy", as it's called in the Linux community). It was found that people were more productive on the system that looked better. People found the better-looking system more pleasant to use, and that in turn made them more productive. And happy users are a good thing.
Posted Nov 6, 2010 21:21 UTC (Sat)
by orabidoo (guest, #6639)
[Link]
Well duh right back. As I said above, I'm all for having such friendly animations on by default.
I'm just pointing out a good reason why a subset of users find them counterproductive, and pleading that every GUI should have an option to turn animations off. I don't mind if the knob is quite well hidden, like a gconf key. Just let those of us who like to think ahead of the computer save that 0.10 s of time, or feel like we did. Thanks.
Posted Nov 5, 2010 8:36 UTC (Fri)
by janpla (guest, #11093)
[Link] (1 responses)
"Having a tool be capable of what it's intended to do is important, but would you really want to use an oily screw driver with no grip? Or a hammer where the metal on the handle is flaking?
That's part of the idea behind this "dancing candy cram-ware", as you call it. Even modern netbooks have MORE than enough power to handle these animations and transitions ..."
- All this may be true, but there are some (I am one) who avoid this kind of thing because it is too intrusive and too much of a distraction. I am perfectly happy with graphics where relevant and useful, but in my view trying to work in the middle of an advanced light-show will only detract from the real enjoyment of computer programming.
Apart from that, I think it is deeply unfair to compare X to a broken tool. To take you up on the tool-analogy, you may prefer a sleek-looking electric drill with automatic cable roll-up, cool colour and some impressive graphics printed on the body, but if you want to drill a hole, all you need is a hand-cranked drill; and if you know how, you can normally do a much better job faster, because you have far better control over it.
X may be hand-cranked, but it is a very well-designed tool and there is nothing broken about it.
Posted Nov 5, 2010 8:57 UTC (Fri)
by marcH (subscriber, #57642)
[Link]
>X may be hand-cranked, but it is a very well-designed tool and there is nothing broken about it.
Some insiders do not agree: http://lwn.net/Articles/390389/
Posted Nov 5, 2010 8:57 UTC (Fri)
by codefisher (guest, #64993)
[Link] (19 responses)
Posted Nov 5, 2010 13:08 UTC (Fri)
by gmaxwell (guest, #30048)
[Link] (18 responses)
The way I see it, if _I_ need animations to tell what the system has done, then the system has already failed. The computer's behavior should consist mostly of deterministic direct responses to my actions, so that I should almost never need help figuring out what it has done.
In the rare cases of doubt (such as a cat jumping on the keyboard) I should not have to fuddle out what happened from my memories of the computer's graphical interpretive dance, instead a log/history should be provided which I can reference whenever I need to.
There are a great many operations that a computer can conduct which have no intuitive mapping to an animation. We would weaken our computers to uselessness if we constrained their easily accessible abilities to those which could be represented accurately as dance. I could possibly memorize a long list of animations "When the screen vibrates up and down, the system has unmounted a disk and it can be removed." but part of the reason for having _language_ is that you don't need to memorize a unique symbol for every possible idea. Textual communication can provide a precise, searchable, learnable, archivable, and accurate information channel between the computer and the user. Language is a tool which is equally available to all applications, including GUI ones.
Much of the Unix environment already works this way (certainly the CLI itself does), but it seems that many desktop tools come from a different culture, where they use things like focus-stealing animated popups with ideograms to inform the user about system activities. When users complain that messages sometimes disappear, never to be recovered, before they had a chance to see them, the 'desktop' culture seems to think "let's make the animation slower and more intrusive!". If that kind of thing makes someone happy, good for them, but it isn't something that I want.
Posted Nov 5, 2010 15:51 UTC (Fri)
by drag (guest, #31333)
[Link] (16 responses)
Lolz over Lolz. :)
Don't you know that, you know, text scrolling is an ANIMATION?
The way you make it sound, your computer is just something with a big red button on the front that you press and it says "DO WHAT I WANT" and then it plugs into your mind or something.
It's all about information feedback. There are lots of ways to provide information, and lots of different ways to receive it. If you want to live in a weird sort of Max Headroom-type universe where all that exists is just you and your PC, then that's an interesting idea, but I (and most people) want to interact with the real world.
This means things happening outside your control and interacting with you and your computer. GPS, temperatures, news feeds, message notifications, etc etc. All sorts of stuff is going on all the time. We want 'augmented reality', 'feedback', and that sort of thing. It's the dream to be able to someday say 'Hello, Computer' and have some sort of meaningful response.
Posted Nov 5, 2010 17:41 UTC (Fri)
by gmaxwell (guest, #30048)
[Link] (15 responses)
Please don't be silly. I think I made it amply clear that I don't expect to work with the computer without it communicating with me, but I want it to communicate with me only when required or requested, and I want it to use the high-bandwidth channel of _language_ to do that communication, except when communicating things which are intuitively and obviously graphical or when doing so is most efficient, instead of what I characterized as "interpretative dance" or cave drawings.
I certainly do want to interact with the outside world, but I also want to be able to control that interaction: a small status indicator, additional comments alongside some output the computer is already providing. Among humans we generally consider it impolite to interrupt someone unless it's urgent or you know that it's something they want to know about. My computer is far too stupid to reliably know when something meets that criterion, so it ought to be especially cautious in its interruptions unless I tell it otherwise.
It's not like you really have a choice in the matter. In environments where the computer is constantly presenting the user with a barrage of focus-stealing choices, users quickly learn to simply confirm everything that comes before them. "Install this?" "Yes." "Remove this?" "Yes." "Transfer your bank account to Nigeria?" "Yes. er. SHIT!". My bandwidth is finite and I'd much rather spend it on the interactions I have initiated.
Not always, but experienced users USUALLY do. Why should I pay the price of an animation every time just to provide a small benefit in a small minority of cases? Give me a session history, give me an undo, give me a "WTF just happened" button. These things would all be great. An animation? Without things like undo, an animation just lets you know how screwed you are a bit faster. Without a history/WTF button, an infrequently encountered animation is likely to be inscrutable.
The kinds of events which are likely to confuse me are also likely to not be representable by an animation. I'm not going to be confused by accidentally dropping a file in the wrong directory, I'm going to be confused by something like a glob having unintended consequences.
Posted Nov 5, 2010 18:20 UTC (Fri)
by drag (guest, #31333)
[Link] (14 responses)
Well you can avoid that just by using software that does not suck.
The only time I want my attention to be stolen from what I am working on is if it's something damn important. Then in those cases I WANT my attention to be stolen.
But really nobody is advocating that we should have constant big swooping animations that do nothing but get between you and what ever text box you happen to be interacting with at the time.
And these animations don't really cost you anything. If you think that having a translucent notification box pop up to tell you you've received an email is going to take away from whatever you're doing, you're probably very wrong.
Or at least you should be wrong.
We have had hardware around since the late 1990's that is perfectly capable of performing the necessary functions to get what people are trying to get at with things like Unity, Gnome-3, etc etc. Apple's first OS X desktop ran with no GPU acceleration at all!
It's just that graphics suck in Linux. This is what _may_ get fixed if we can break away from the tyranny of proprietary video drivers and everything-we-use-must-be-X.
Posted Nov 5, 2010 18:44 UTC (Fri)
by gmaxwell (guest, #30048)
[Link] (6 responses)
I'm sure that some other people, perhaps most other people, are completely fine with that sort of thing. I wish you luck in creating software for those people to use. Though I am somewhat skeptical that most people actually prefer this sort of thing: outside of computers, almost nothing else provides indications in that kind of intrusive, "interrupt-driven" way. (My car doesn't overlay a gigantic oilcan on my windshield when the oil pressure is low; it lights up a discreet check-engine light, and I can attach an OBD tool to find out the cause. When my office paper mailbox has a letter, it's left sticking out where I can see it; no one copies the letter onto a transparency and slams it in my face.)
But even if I really am in the crazy minority here, please don't think that you speak for everyone. You certainly don't speak for me, or for at least a few other curmudgeons like me, and I've been using computers long enough to have a pretty good idea what works for me. That kind of annoyance isn't how I work; it isn't what I want. I put up with this kind of behavior from my computer only in so far as putting up with it is less costly to my time and endurance than maintaining every part of my systems on my own. But as far as I'm concerned it is a step down from a system that provides no notification at all.
Posted Nov 6, 2010 16:08 UTC (Sat)
by andreashappe (subscriber, #4810)
[Link] (5 responses)
Time to cut back on the hyperbole...
The X protocol is currently getting in the way more than it is helping -- don't take my word for it; Keith Packard's should be enough. There are people trying to improve that: look at the quality of the X stack; they are a long way ahead of the things I had to use in the last millennium.
Animations might be added. So what? Scrolling is an animation, and tear-free window movement was made possible through that animation work. Who suffered from that? Wayland might be the way forward, but there's still X11 as a possible client on top of it.
If you don't like it: turn them off. Come on, that would have taken less time than the whining on this forum. So you don't like those transparent popups that disappear after 2-3 seconds and hide some 8 cm^2 in the top right corner of your screen, where most of your work seems to happen: disable them. You are able to disable them; some no-clue first-time Linux user surely isn't able to enable them. If you (for some reason) need to install your Linux distribution every year, automate it. Create a package that does all the magic for you -- other people might even like to use it.
Posted Nov 6, 2010 20:19 UTC (Sat)
by gmaxwell (guest, #30048)
[Link] (4 responses)
I run a distribution in order to outsource basic system maintenance. I have more pressing things to do with my time, and I'm willing to tolerate the consequences of system operation that I don't agree with, but that doesn't mean that I don't have preferences. I'm speaking up here because I believe it would be a disservice to everyone who shares these interests for me to sit quietly while the people pushing features which are harmful to those interests are so vocal.
You make it sound like it's so easy to disable these things. Sadly, it usually is not: in the interest of "usability" the mere option to disable these things is often completely eliminated, or if it remains at all it is deeply hidden (often inside some undiscoverable registry tool). Just because I am more capable than joe-random doesn't mean my time is less valuable, that I am more patient, or that I am infinitely capable. In cases where the functionality is eliminated, patching the software breaks updates and leaves me tracking development, which is the work I was hoping to avoid by using a distribution in the first place.
Going back to the subject that started this sub-thread: if network transparency is abandoned in the GNU/Linux desktop infrastructure, I can't simply turn a knob to bring it back! Remote X is functionality I use _every day_. I have three windows open on my laptop right now to a system with a large amount of RAM which is able to work on data sets that I can't reasonably work on locally. It works great. And the notion of it only working via shims or via arcane software which I have to maintain myself troubles me greatly.
I'm certainly not opposed to _performance improvements_. By all means, making it faster has my full support. The discussion here was about tossing functionality (which I find critical) in order to enable performance improvements which are mostly inconsequential to me. I am not comforted by the argument that this change is urgently needed in order to make "improvements" like increasingly intrusive animations.
Janne, I must admit that I'm not quite sure if you're trolling me or not but if you are I guess I'm going to fall for it.
Your market share strawman is not well supported by the evidence. Systems with clearly superior user experience have time and time again failed to capture really significant market share (Mac OS for the longest time and even today it's only at perhaps 7%, BeOS, etc).
You're also making the erroneous assumption that I care about having 7% market share (like OSX) vs 2% market share(numbers source). I don't. I care about having a usable _computer_ (as opposed to a home entertainment center, which has large orthogonal usability requirements). I care about having a good option to recommend to other technical people. I care about not having to build my own desktop software stack, even though I would probably be able to create one which met my needs I have other things that I'm working on. While I'd love to see most people running Free software, 7% wouldn't be much of an improvement against the 85% on windows for that purpose... even if I believed that we could solve the marketshare gap with UI improvements.
People use computers for different purposes. Even windows has a small market share if I count televisions and video game systems as "computers". I wonder if we're using 'desktop' market share numbers which are diluted by a great many use cases which would be better served by an appliance? If I were to care about market share I'd want to first care about getting 100% of uses which are best met by powerful computing systems rather than by media players or the like.
I am, and I am not alone. And I want a system which is useful for me to run. I also want other people to have systems which are useful for them, even if their needs are different from mine. I feel that none of the major distributions are catering to my interests; I think that's unfortunate and I hope it changes. The major distributions and major Linux desktop software suites are clearly prioritizing non-technical novice users today. They even say so explicitly. They may actually be failing to satisfy the needs of those users too, but failing to make your target happy isn't the same as having a target which includes other people.
Perhaps animations can play a useful role in a typical user's "next twenty years", but the animations that do probably won't be the same training-wheels animations that you'll create if you're optimizing for the initial impression. I found the example about minimizing pretty humorous. Why would I want that? If I care, it's because I either don't know what I did, or because I wish I hadn't done it. In either case what I need is an undo button, not an animation. An animation might make it a little easier to manually undo my mistake, but that's really a half-step... We have computers to eliminate manual processes. How many significant usability improvements are we missing because everyone focused on usability is primarily focused on newbies and the initial impression?
Posted Nov 6, 2010 21:22 UTC (Sat)
by dlang (guest, #313)
[Link] (3 responses)
It's really hard to put a couple hundred gig of RAM into a laptop, but trivial to remote the display from a server that has a couple hundred gig of RAM to a laptop that you can carry into a conference room.
You may try to argue that the app could be written to work that way through other means, but that misses the point that with X the app author doesn't have to decide whether the app should be network accessible or not. If app authors have to go to extra effort to make their stuff network accessible, most of them won't go to that effort (after all, nobody needs that anyway; that's why the feature was removed from Linux systems to start with, right?) and the only apps that will have the ability to be networked are the ancient X apps (that predate the change), or high-end 'enterprise' commercial apps where someone is paying for the feature.
This leaves out the huge middle ground where the app author never thought about the need to be networked, but that app ends up being the perfect thing to use when backed by the right hardware. Instead someone will have to fork or recreate the app in a networked version.
Posted Nov 7, 2010 9:31 UTC (Sun)
by roc (subscriber, #30627)
[Link] (2 responses)
Maybe true for simple apps, but complex apps are basically unusable over modest-latency links unless they've been significantly optimized to reduce round-trips to the X server. There are a lot of X APIs that you simply cannot use if you want to be fast over the network.
> this leaves out the huge middle ground where the app author never thought about the need to be networked, but that app ends up being the perfect thing to use when backed by the right hardware. Instead someone will have to fork or recreate the app in a networked version.
Or just run it under a modern screen-remoting tool.
Posted Nov 9, 2010 2:02 UTC (Tue)
by nix (subscriber, #2304)
[Link] (1 responses)
Posted Nov 9, 2010 6:40 UTC (Tue)
by dlang (guest, #313)
[Link]
Posted Nov 5, 2010 19:30 UTC (Fri)
by dskoll (subscriber, #1630)
[Link] (6 responses)
And these animations don't cost you anything really. If you think that having a translucent notification box pop up to tell you received a email is going to take away from whatever your doing your [sic] probably very wrong.
Are you kidding me? Those notification boxes drive me crazy. There I am, deep in an xterm or an emacs debugging session and some stupid box obscures my text? I want the computer to stay out of my face!
Posted Nov 5, 2010 20:06 UTC (Fri)
by drag (guest, #31333)
[Link] (2 responses)
For example, at my current job some of the emails that I get are going to be critical and far more important than anything I happen to be working on, unless I am working on an emergency... at which point I would have 3 phone lines blazing, people talking over everybody else, etc etc. Then a little pop-up in the corner of my window is going to be the last thing on my mind.
In my old job I couldn't care less. There was no communication that mattered enough to be answered right away.
But now I WANT to see that stuff. I WANT to be interrupted. That's a good thing. Because if I get a notification and act fast enough I can stop those above mentioned emergencies. :)
Posted Nov 6, 2010 10:25 UTC (Sat)
by modernjazz (guest, #4185)
[Link]
The problem is there just hasn't been enough effort put into open-source drivers until recently, and the quest for "bling" has really ramped up those efforts. Just like how the commodity/gaming market increased power and decreased the price of computing, for both "serious" and "fluffy" use-cases.
So I'm happy about where things have been going, even though it has made X a pain in the neck for the last couple of years. (Fortunately, it seems to be getting better, at least for me.) But I would bemoan the loss of network transparency in situations where I didn't need the absolute highest-performance graphics.
Posted Nov 11, 2010 17:13 UTC (Thu)
by cdmiller (guest, #2813)
[Link]
Posted Nov 6, 2010 22:31 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (2 responses)
Perfect example. KDE. I have a taskbar at the bottom of my screen that currently says "Konsole (2)" - ie I have two Konsoles (currently hidden). Let's say I put my mouse over it - it now displays what those two consoles are. All fine and dandy - UNTIL I actually want to select the upper of the two.
If I don't know which one I want, or I'm slightly hesitant, or I'm not good at moving my mouse, or or or ... the mouse hovers over the FIRST konsole description a tiny moment too long, and the information popup appears, COMPLETELY obscures the second Konsole button that I actually want, and JUST WON'T GO AWAY until I go back to "Konsole (2)", get rid of the whole damn lot, and have to start ALL OVER AGAIN.
Don't forget - these information popup bars have a habit of following the mouse. In other words, if you're slightly unsteady, or can't aim quite right, or anything else where the mouse is wobbly, there's a damn good chance the popup is going to pick a damn inconvenient place to appear.
Quite why the KDE people chose the place they did for the popup I'm moaning about I do not know - it is INCREDIBLY stupid, but hey, I'm sure they have some very clever people who thought it was a good idea ... :-)
Cheers,
Posted Nov 6, 2010 22:43 UTC (Sat)
by dlang (guest, #313)
[Link]
personally, I choose to have it never group and I set the taskbar to be tall enough to show enough rows to have a useable amount of text in each of the icons.
Posted Nov 7, 2010 1:32 UTC (Sun)
by boog (subscriber, #30882)
[Link]
Posted Nov 5, 2010 19:30 UTC (Fri)
by rgmoore (✭ supporter ✭, #75)
[Link]
That's only likely to be true for UI limited tasks. I expect my computer to be doing many things at once. Some of those tasks are things that don't and can't happen instantly because they require substantial processing or data retrieval time. Many of them are background tasks that are running without requiring my explicit instructions every time. I want feedback about what's happening with those tasks, and some kind of unobtrusive desktop effect can be a better way of providing it than yet another message window popping up.
And that's just for desktop notification type effects. There are other useful things you can do with graphics. For example, I find that I wind up with overlapping windows fairly regularly, even though I have a very large monitor with multiple virtual desktops. I like some of the eye-candy effects that are used to help out with that problem- translucent window borders, temporarily reconfiguring the desktop so I can see miniature versions of all windows, etc. Those kinds of effects may not be vital, but they make the system easier to use- which should be the single biggest goal of development.
Posted Nov 5, 2010 12:28 UTC (Fri)
by pboddie (guest, #50784)
[Link] (1 responses)
What a flawed analogy! It would be more appropriate to liken a plain, functional desktop to a working screwdriver and a fancy, animation-heavy desktop to a screwdriver with a diamond-encrusted gold handle with a fluffy comfort grip. The former is sufficient to get the job done whereas the latter looks great if you want to give a demo.
Posted Nov 5, 2010 21:58 UTC (Fri)
by Kit (guest, #55925)
[Link]
Your version only further illustrates the point I was making. _BADLY_ done transitions and animations are FAR easier to notice than _well_ done ones, because _well_ done ones you only really notice subconsciously, while badly done ones demand your attention.
Well done transitions must be VERY quick, and completely smooth. They must be over far faster than a person could actually react to them, because they're only supposed to provide a hint at what's going on. People that are used to a system operating a specific way might not like it, because people fear change... but to a user that has to learn both systems, the ones where transitions and animations are used well will be far easier to learn. And then, after using the system for a while, once they're used to how it operates, the one with the transitions will _continue_ to be the more pleasant one to use.
Animations and transitions can transform an interface from feeling like a computer, to feeling like an actual physical thing, operating under the normal physical properties.
---
There are other non-animation/transition effects that the system can use to improve usability, such as applying an effect to the windows of an application that appears to be frozen, or very subtly dimming background windows (but it needs to be subtle enough that you wouldn't notice, likely even if someone told you it was doing it). Humans notice far more than they're consciously aware of; interfaces should take advantage of that.
Posted Nov 5, 2010 8:08 UTC (Fri)
by laf0rge (subscriber, #6469)
[Link] (4 responses)
And as for making 'the Linux desktop' attractive to end-users in an office: I could not care less, personally. I am interested in making technology work for those people who have an interest in technology and want to understand it. People who use it so much that adapting the human being to the computer results in much more productivity than trying to adapt the computer to the human being.
Posted Nov 5, 2010 8:40 UTC (Fri)
by pheldens (guest, #19366)
[Link] (3 responses)
Posted Nov 5, 2010 9:11 UTC (Fri)
by Karellen (subscriber, #67644)
[Link] (2 responses)
WTF?
Since the Gnome 1.0 release in March '99, there has been *one* incompatible change, when 2.0 was released, in June 2002. Since then, no incompatible changes in over 8 years. Gnome 3 is (currently) scheduled for March 2011, and there's no reason to think that it will last for any less time than Gnome 2 did.
Similarly, KDE 1.0 was in July '98, 2.0 in October 2000, 3.0 in April 2002, and 4.0 in Jan 2008. While 1 and 2 were fairly short-lived, 3 was a lot more mature lasting for 6 years, and with a lot of the technologies in 4 still being built upon with newer minor releases at nearly 3 years in, I predict that 4 will be a longer-lasting base than 3 was.
And, of course, apps written for Gnome 1, and KDE 1, 2 & 3 should all still run fine on any current and future Gnome/KDE desktops. You don't have to rewrite your KDE3 app to KDE4 technology if you don't want to. You can keep developing it against the old libs for as long as you want.
Posted Nov 5, 2010 12:51 UTC (Fri)
by BeS (guest, #43108)
[Link]
Reading this blog post about the plans for Gtk4, I have the feeling that Gtk3 and GNOME3 will have a rather short life:
Posted Nov 8, 2010 18:19 UTC (Mon)
by jond (subscriber, #37669)
[Link]
Posted Nov 5, 2010 9:31 UTC (Fri)
by nhippi (subscriber, #34640)
[Link]
Back to seriousness, is there really much point in investing heavily in desktop window management? Most users end up switching mostly between browser tabs. Just look at how many people don't bother with Evolution or Thunderbird and just go with Gmail nowadays. Developers and HC users will still have some xterms open too, but they will probably use something tiling to manage them (like now).
Posted Nov 5, 2010 19:24 UTC (Fri)
by Simetrical (guest, #53439)
[Link] (15 responses)

Why do you imply GPU acceleration is only useful for "dancing candy cram-ware"? It's useful for mundane things like video playback, not to mention games. Plus for saving power, or so I've been told. Even browsers are all becoming GPU-accelerated these days. GPUs make things faster; it's bad interface design that makes things slower.
Posted Nov 5, 2010 20:42 UTC (Fri)
by gmaxwell (guest, #30048)
[Link] (14 responses)
(1) We have GPU acceleration already. Perhaps it could be made to work better. I'm all for that. I protest the idea that we must toss the very useful network transparency just to get the small incremental improvement that might come in cases where network transparency isn't being used.
(2) While I'll concede on the power savings bit I don't actually believe that we do need GPU acceleration. A fairly typical PC can compute and blit out 1080P video at 60FPS without any "gpu acceleration" at all. The system is already far beyond my reaction time at least until someone adds a bunch of fancy animations.
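Rough arithmetic behind that claim (assuming plain 32-bit RGBX frames and no scaling or overlays):

```python
# Rough check of the "1080p at 60 fps without a GPU" claim: how much pixel data
# has to be pushed per second, assuming uncompressed 32-bit frames.
width, height, bytes_per_pixel, fps = 1920, 1080, 4, 60
bandwidth = width * height * bytes_per_pixel * fps
print(f"{bandwidth / 1e6:.0f} MB/s of pixel data")   # ~498 MB/s
```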
Posted Nov 6, 2010 16:17 UTC (Sat)
by andreashappe (subscriber, #4810)
[Link]
Yeah, but the CPU utilization drop from 40% to 4-5% was kinda nice.. do you buy cpus to just get spammed by tasks that they are not suited to?
Posted Nov 6, 2010 16:26 UTC (Sat)
by Darkmere (subscriber, #53695)
[Link] (6 responses)
This may be because the current Linux-based software stack makes it impossible to deal with properly, but that doesn't really make a difference to the end user.
Posted Nov 6, 2010 19:16 UTC (Sat)
by gmaxwell (guest, #30048)
[Link] (2 responses)
Posted Nov 6, 2010 19:43 UTC (Sat)
by Darkmere (subscriber, #53695)
[Link]
Posted Nov 9, 2010 15:36 UTC (Tue)
by nye (subscriber, #51576)
[Link]
Posted Nov 6, 2010 19:58 UTC (Sat)
by sfeam (subscriber, #2841)
[Link]
Posted Nov 8, 2010 8:26 UTC (Mon)
by buchanmilne (guest, #42315)
[Link] (1 responses)
While my Atom-based netbook can handle some 720p content, it struggles with others. However, my Atom-based HTPC, which has an ION GPU, running XBMC on Linux, plays almost all 1080p H.264/5.1 content (with sound going through pulseaudio) without going over 20% CPU utilisation. But, the Nvidia ION chipset has VDPAU with H.264 decoding support, whereas the cheap intel chipset on my Atom-based netbook doesn't.
I don't think the problem is the rest of the software stack, it's probably that your GPU doesn't have accelerated decoding, or the driver doesn't support it.
Posted Nov 8, 2010 12:47 UTC (Mon)
by Darkmere (subscriber, #53695)
[Link]
Simply the amount of data to shuffle is enough to choke certain machines, and will remain so for quite a while.
(Especially with "slow" graphics memory/shared memory setups which remain common on laptops of the cheaper inclination)
Posted Nov 7, 2010 17:55 UTC (Sun)
by Simetrical (guest, #53439)
[Link] (5 responses)
Well, we aren't, are we? You can still run X on top of Wayland, and Wayland will presumably support other types of networked desktops. Every OS does, after all. Why do you think Wayland will wind up being less nice to use over the network than X in the end? I've found even NX to be almost unusably slow with even 50 ms latency, between uptown and downtown Manhattan. Regular old X forwarding didn't even work at that latency, practically speaking (taking minutes to even draw the window).
Posted Nov 7, 2010 18:57 UTC (Sun)
by dlang (guest, #313)
[Link] (4 responses)
Different people have different tolerance for the effects of latency. While you consider 50 ms unusable, other people have been reasonably happy with X over dialup (~300 ms latency).
Posted Nov 7, 2010 23:23 UTC (Sun)
by Simetrical (guest, #53439)
[Link] (3 responses)
The lag I saw with X forwarding is not a question of individual tolerance. When I tried regular X forwarding with Chromium, it took minutes just to draw the thing once at startup. It was not usable as an interactive application by any stretch.
With NX it was usable, but with lag of a couple of seconds on everything I did. This should not be necessary -- it should take exactly one round trip for my mouse click to get to the other computer and all changes to get back. NX was taking dozens of times that. We live in an era where latency, not bandwidth, is the scarce resource; the X way of doing things no longer makes sense. Pushing around bitmaps is a much better strategy, and will become ever more so with time, as network connections get faster and latency remains constant.
Unless I'm missing something, which is entirely possible, since I have only the vaguest idea of how anything related to graphics works. In that case, corrections appreciated. :)
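A back-of-the-envelope illustration of why round trips, not bandwidth, dominate here (the numbers are made up):

```python
# Made-up numbers, but they show why a chatty protocol dies on a high-latency
# link while shipping bitmaps (few round trips, more bytes) stays usable.
rtt = 0.050          # 50 ms round trip, e.g. across a city
round_trips = 40     # a toolkit that synchronously queries the display 40 times
print(f"chatty startup: {round_trips * rtt:.1f} s before the window appears")
print(f"one round trip: {rtt:.3f} s")
```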
Posted Nov 8, 2010 2:49 UTC (Mon)
by dlang (guest, #313)
[Link] (2 responses)
Posted Nov 11, 2010 18:03 UTC (Thu)
by Quazatron (guest, #4368)
[Link] (1 responses)
Posted Nov 11, 2010 18:41 UTC (Thu)
by gmaxwell (guest, #30048)
[Link]
Over slower links VNC will stay usable (if slow) while X becomes useless.
There are various x protocol compressing proxies available for these situations, but I haven't had cause to use them for years. Networks got faster.
Posted Nov 8, 2010 8:10 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Nov 5, 2010 9:06 UTC (Fri)
by modernjazz (guest, #4185)
[Link] (4 responses)
So, are the concerns that Wayland will break network transparency merely "short-term," meaning that the technology is feasible but simply not in place yet? Or are they "long-term," meaning the OpenGL path is basically not amenable to network transparency?
Posted Nov 5, 2010 10:07 UTC (Fri)
by dgm (subscriber, #49227)
[Link] (2 responses)
Posted Nov 5, 2010 22:05 UTC (Fri)
by Kit (guest, #55925)
[Link]
For a normal desktop environment, that shouldn't be a huge deal. The problem is, desktop apps seem a bit hell-bent on trying to make GPUs' lives as difficult as possible, and will frequently do the least efficient thing possible when it comes to drawing. Toolkits and applications are too insistent on throwing everything away and doing it all over again from scratch, instead of reusing what has _already_ been uploaded to the GPU, when that's what the application wanted to render anyway.
Posted Nov 6, 2010 10:35 UTC (Sat)
by modernjazz (guest, #4185)
[Link]
3D textures are of course worse, but one could restrict their use to cases where they are strictly necessary, so as not to kill network performance.
Posted Nov 8, 2010 18:37 UTC (Mon)
by daniel (guest, #3181)
[Link]
Posted Nov 5, 2010 9:59 UTC (Fri)
by dgm (subscriber, #49227)
[Link]
What's more, this way you don't force applications where network transparency makes no sense (like video players and games) through the X protocol. The desktop itself is probably a good example of an application that's better run close to the display hardware.
In the end, what you will get is a (much) simpler X server, focused on the network protocol and independent of the hardware. Good stuff, if you ask me.
Posted Nov 5, 2010 20:07 UTC (Fri)
by jonas.bonn (subscriber, #47561)
[Link]
ssh -L 5902:localhost:5901
> and network transparency are important for me.
I would say instead that these systems are so fast that they ought to have no time in which to display the bling. Every action should occur so fast that unless the animation is slowing it down you wouldn't be able to perceive the animation. If there are still cycles left over the system should be conserving battery (if it's battery powered) or pre-calculating possible next moves on my part (if it's not).
To quote my first message on this thread:

> Perhaps a distro fork will arise targeting people who are technically competent [and are] more interested in productivity.

Posted Nov 6, 2010 10:22 UTC (Sat) by Janne (guest, #40891):

> With attitude like this, it's no wonder that Linux on the desktop is perpetually stuck at under 1% market-share...
> People are not computer-wizards.
> The computer should do everything in it's power to help the user.

It seems to me that the people carrying the biggest "help people" banner often do the most harm. I too want the computer to help people, even non-technical people. I suspect we have very different ideas of what "help" means. I can assure you that adding more popups and interface-interrupting animations will not help _me_ in the slightest. Other folks, perhaps, but I don't intend to speak for anyone else.

> And sure, people will learn which button does what. [...] And there are even studies about this. Researchers set up two functionally identical systems. The difference was that one system looked plain and basic, while the other has nice graphics [...]. It was found that people were more productive on the system that looked better.

If you provided citations I would read them. But what concerns me with this is that it seems like an unhealthy obsession with the initial impression. Your first few hours with a system are entirely different from your next twenty years with it. Unless you are worried about every last fraction of a percent of market share, I believe you should optimize as much for the "next twenty years" as is possible without turning people off completely. (For example, I think Blender fails a bit too hard on the initial impression.)