
Akonadi – still alive and rocking

At his blog, Daniel Vrátil provides an extensive update on the status of Akonadi, the KDE project's personal information management (PIM) data service. He focuses on the changes made during the port to KDE Frameworks 5, starting with the switch from a text-based to a binary protocol. "This means we spent almost zero time on serialization and we are able to transmit large chunks of data between the server and the applications very, very efficiently." The ripple effects include changes to the database operations and, eventually, to the public API. Finally, he addresses the disappearance of the KJots note-taking application. "What we did not realize back then was that we will effectively prevent people from accessing their notes, since we don’t have any other app for that! I apologize for that to all our users, and to restore the balance in the Force I decided to bring KJots back. Not as a part of the main KDE PIM suite but as a standalone app."



Akonadi – still alive and rocking

Posted Jan 9, 2016 0:52 UTC (Sat) by robert_s (subscriber, #42402) [Link]

I can't even begin to talk about the perpetual thorn in my side that is Akonadi. KMail1, how I miss thee.

Akonadi – still alive and rocking

Posted Jan 9, 2016 2:29 UTC (Sat) by Gnurou (subscriber, #85058) [Link] (42 responses)

Single step to a decent KDE experience:

$ akonadictl stop

(also shut down Baloo, although it is not as terrible as Nepomuk)

I don't understand why the KDE project still bothers with this overengineered monster. Akonadi is what makes KDEPim unusable. I lost data more times than I can count before finally giving up on KMail (which would be a good email client otherwise). My system feels better without a MySQL process permanently running anyway (when everybody else seems to be able to handle SQLite).

I *love* KDE, and use it on all my systems, but seriously, kill that thing. A good testament to the state of this software is that all the comments on the blog post are about issues.

Akonadi – still alive and rocking

Posted Jan 9, 2016 4:46 UTC (Sat) by Elv13 (subscriber, #106198) [Link] (23 responses)

> I don't understand why the KDE project still bothers with this overengineered monster

While there are obvious downsides, there is also potential in such a system. "Modern" operating systems allow personal information to be centralized and shared between applications. This is true for iOS/OSX (iCloud, CalDav/CardDav server), Android (services), Gnome (EDS / GOA) and KDE (Akonadi). For us (Ring.cx ** developers), such a system allowed us to integrate our software into each OS without having to resort to synchronization and shared vCard folders*. For example, as a VoIP app developer, I can query the emails to get the voicemails sent over IMAP, share the contacts, use usage statistics to improve autocompletion precision, and so on. Is having something as large and complex as Akonadi running locally required to do this? No. Does it help? Yes. The features and APIs themselves are required. I would not go back to only having the KABC classes that KDE3 / KMail1 had. Re-creating information silos would be more harmful in the long run than fixing a good idea with a flawed design.

Remember that DCOP/DBus, Akonadi (simplified), Decibel (DOA), Zeitgeist (DOA) and Nepomuk (killed, replaced by simpler tech) are products of their era. Back then, SOA was still king. We all know how that ended. Nepomuk was a research project funded by the Open Semantic Collaboration Architecture Foundation, and a lot of research papers were produced. The idea was to create standards-compliant semantic interoperability. Of course, in 2016, the hopes of seeing RDF become the new magic data storage format are *slim* (read: I hope it never happens; this thing is as bad as CORBA in its own domain).

Akonadi and Decibel were the pillars of a vision of a future where your own data lives locally, where you run your own services and have interoperable, standards-compliant protocols and APIs. Given our current dependency on proprietary online services, can you really blame anyone for trying? Did it work? No, of course not; trying to convince anyone otherwise is a serious case of reality denial. Daniel's blog post should be seen for what it is: a path forward. There were issues (complexity, performance, reliability); here are the proposed fixes. If your issue is defaulting to a local MySQL server, that can be changed in the config somewhere.

** An encrypted, peer-to-peer and distributed SIP based communication platform

* I did that until Akonadi was released for KF5; it was very inconvenient

Akonadi – still alive and rocking

Posted Jan 9, 2016 12:45 UTC (Sat) by pboddie (guest, #50784) [Link]

Even arguably "unmodern" operating systems allow for the sharing of information, potentially even personal information, but it's the way it has been done that should be called into question. The services that support such sharing seem to have evolved from the application of desktop technology in certain circumstances, rather than the application of server technology (and traditional daemons).

So there were things like DCOP which was a handy way of "automating" desktop applications and performing inter-application communications at the GUI framework level, resulting in the emulation of things like COM automation on the Windows platform, which is a pretty poor approach when attempting to provide distinct and reusable components with their own responsibilities. Things like DCOP were attractive because it looked like GNOME was struggling with CORBA, but with a proper component architecture (and a well-performing ORB), there's not that much wrong with CORBA: it's just fashionable to criticise it (like it is with things like XML) when you find that it makes interprocess calls expensive in a badly-architected system (or same-process calls in an implementation that adds unnecessary overhead) where the components haven't been properly identified, and where everything is a soup of communicating applications.

Meanwhile, in the last twenty years, a lot of experience has been built up around server technology in the form of the Web applications domain. Sadly, it doesn't look like the fields of expertise tend to overlap that much, and despite the disdain for the dynamic language stacks that feature so often in Web applications, we see weighty daemons written in C++ instead, as well as somewhat hair-raising usage of things like MySQL (and odd middleware products that are largely neglected outside organisations with legacy requirements).

Akonadi – still alive and rocking

Posted Jan 9, 2016 15:21 UTC (Sat) by robert_s (subscriber, #42402) [Link] (21 responses)

"While there is obviously downsides, there is also potential in such system. "Modern" operating systems allow personal information to be centralized and shared between applications. This is true for iOS/OSX (iCloud, CalDav/CardDav server), Android (services), Gnome (EDS / GOA) and KDE (Akonadi)."

The problem is, if using this subsystem makes the application perform & behave like garbage (akonadi certainly does), the user's going to abandon the software for something that doesn't use this subsystem, and then you're not going to get access to this data anyway.

Akonadi – still alive and rocking

Posted Jan 9, 2016 18:19 UTC (Sat) by niner (subscriber, #26151) [Link] (20 responses)

Have you read the actual blog post? Sounds like the largest source of performance issues is going away soon.

Akonadi – still alive and rocking

Posted Jan 10, 2016 12:37 UTC (Sun) by pboddie (guest, #50784) [Link] (19 responses)

Maybe it's about time, after everyone criticising these things was told over and over that they "didn't get it" or were "haters" or whatever. Still, there are a few things that make me wonder a bit...

Human-readable formats are overrated
[...]
The libraries need to talk to the Server somehow. In KDE4 we were using a text-based protocol very similar to IMAP (it started as RFC-compliant IMAP implementation, but over the time we diverged a bit). The problem with text-based protocol and large amount of data is that serializing everything into string representation and then deserializing it again on the other end is not very effective.

The heading makes a sweeping statement but then reveals the real problem: a potentially expensive-to-parse "human-readable" format. The solution to this, involving what could be described as "printf to a socket", raises other issues.
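The serialization cost being discussed here can be sketched in a few lines. This is a toy illustration only, not Akonadi's actual wire format:

```python
import struct

# The same payload serialized as text vs. as fixed-width binary.
values = list(range(1000))

# Text protocol: every integer is formatted to a string on one end
# and must be parsed back on the other.
text_payload = " ".join(str(v) for v in values).encode("ascii")
decoded_text = [int(tok) for tok in text_payload.decode("ascii").split()]

# Binary protocol: one pack/unpack call, no per-item formatting or parsing.
binary_payload = struct.pack(f"!{len(values)}I", *values)
decoded_binary = list(struct.unpack(f"!{len(binary_payload) // 4}I",
                                    binary_payload))

assert decoded_text == decoded_binary == values
```

The binary form is also fixed-width (4 bytes per integer here), so the receiver can address items without scanning for delimiters.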

Pfff, who needs database indexes?
[...]
I sat down and look at EXPLAIN ANALYZE results of our biggest queries. Turns out we were doing some unnecessary JOINs (usually to get data that we already had in in-memory cache inside the Server) that we could get rid of. SQL planners and optimizers are extremely efficient nowadays, but JOINing large tables still takes time, so getting rid of those JOINs made the queries up to twice faster.

It's good to know that they're using the tools as they should, something that people I've worked with hadn't bothered to do when complaining about things being "slow". We've all written stuff that optimistically relied on the database to be clever - well, at least I have (for a bit of fun) before now - but this is all schema and query optimisation stuff that needs doing before it gets rolled out to everyone, and it also needs doing on non-trivial quantities of data.

That last bit about non-trivial data volumes is important, and it's arguably something that becomes routine in certain server technology domains, while developers in other areas might think that the database is doing its clever stuff admirably when it's really just working out of main memory the whole time.
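The workflow described (inspect the plan, add or remove what the planner needs) can be tried with SQLite's EXPLAIN QUERY PLAN as a stand-in for MySQL's EXPLAIN ANALYZE; the schema here is invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY,"
            " collection_id INTEGER, payload TEXT)")
con.executemany("INSERT INTO items (collection_id, payload) VALUES (?, ?)",
                [(i % 10, "x") for i in range(1000)])

query = "SELECT * FROM items WHERE collection_id = 3"

# Without an index, the planner has to scan the whole table.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

con.execute("CREATE INDEX idx_items_collection ON items (collection_id)")

# With the index, the plan becomes an index search instead of a full scan.
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
```

The detail column of `plan_before` reports a scan of `items`, while `plan_after` reports a search using `idx_items_collection`; the point generalizes to the JOIN-pruning described in the post, where the plan tells you which work the database is actually doing.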

(And that leads me to some observations about indexing solutions that also cause a lot of frustration, where Tracker immediately comes to mind: when trawling the user's disk for indexable stuff, these tools need to avoid saturating the I/O channels in their hurry to get everything indexed as fast as possible. In some data management environments, effective use of all hardware resources can be desirable - having the CPU at 100% while data is spoon-fed into the database system may be a sign of something not being quite right - but one has to know when to step back from doing so, especially if the user can't even log in because the daemons are all going at "full tilt".)

Akonadi – still alive and rocking

Posted Jan 10, 2016 21:55 UTC (Sun) by Wol (subscriber, #4433) [Link]

> That last bit about non-trivial data volumes is important, and it's arguably something that becomes routine in certain server technology domains, while developers in other areas might think that the database is doing its clever stuff admirably when it's really just working out of main memory the whole time.

:-)

And if you don't have enough main memory, don't use relational !!! :-)

(Actually, just don't use an FNF engine, full stop :-)

Cheers,
Wol

Akonadi – still alive and rocking

Posted Jan 11, 2016 9:16 UTC (Mon) by eru (subscriber, #2753) [Link] (17 responses)

The solution to this, involving what could be described as "printf to a socket" raises other issues.

Like endianness and data alignment, but even if you have to process your data into a "normalized" binary format, just in case the recipient is on a different CPU, it is far cheaper than turning your data into text and back all the time.
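The "normalized binary format" point can be illustrated with network byte order (a sketch of the general technique; Akonadi's actual encoding may differ):

```python
import struct

# Normalize a value to an agreed byte order before it goes on the wire,
# so a receiver with different native endianness still decodes it correctly.
value = 0x12345678

wire = struct.pack("!I", value)  # "!" = network byte order (big-endian)
assert wire == b"\x12\x34\x56\x78"

# The receiver decodes with the same agreed-upon order:
assert struct.unpack("!I", wire)[0] == value

# Decoding with the wrong (little-endian) convention gives garbage:
assert struct.unpack("<I", wire)[0] == 0x78563412
```

The single byte-swap per field is still far cheaper than formatting to decimal text and parsing it back.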

Akonadi – still alive and rocking

Posted Jan 11, 2016 12:01 UTC (Mon) by pboddie (guest, #50784) [Link] (16 responses)

Actually, printf to a socket might well be OK, but it was possibly more like a straight memory copy to a socket that was being suggested. Of course, on the same CPU that may well work just fine, and I'll agree that a neutral binary format can be more efficient: after all, we've all experienced the wave of XML-RPC and SOAP that washed away those nasty binary RPC protocols, resulting in much anguish about inefficiency.

What worries me more are the sweeping statements being made, but then I'm not likely to go digging in the Akonadi code to see what else was contributing to the poor performance. What I can say is that - from memory about IMAP, because I don't deal with it every day - there are probably a few levels of format complexity between IMAP and shoving bytes into a socket that might be explored.

Akonadi – still alive and rocking

Posted Jan 11, 2016 12:32 UTC (Mon) by jospoortvliet (guest, #33164) [Link] (15 responses)

Well, I'm sure more was contributing to poor performance, they've fixed dozens of small and large things and after every optimization something else became the biggest problem, of course. That's how it always works, isn't it?

One of the biggest issues remains, which is very fundamental to the Akonadi design: it is entirely data-agnostic and leaves specific handling of stuff to the client. One consequence is that filtering has to happen client-side: if you want to show the calendar items of today, you must retrieve ALL OF THEM and throw away what you don't want. Welcome, massive overhead...
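The client-side-filtering overhead described here can be sketched as follows; the table and column names are invented, and SQLite stands in for whatever store a resource uses:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY,"
            " day TEXT, summary TEXT)")
con.executemany("INSERT INTO events (day, summary) VALUES (?, ?)",
                [(f"2016-01-{d:02d}", f"event {d}") for d in range(1, 31)])

# Data-agnostic design: fetch everything, filter in the client.
all_rows = con.execute("SELECT day, summary FROM events").fetchall()
today_client = [r for r in all_rows if r[0] == "2016-01-11"]

# Data-aware design: push the filter into the store, transfer only matches.
today_server = con.execute(
    "SELECT day, summary FROM events WHERE day = ?",
    ("2016-01-11",)).fetchall()

assert today_client == today_server  # same answer...
assert len(all_rows) == 30           # ...but 30 rows crossed the "wire" above
```

With a month of toy data the waste is trivial; with years of calendar items and mail, the fetch-everything path is the "massive overhead" being described.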

I've seen some presentations on the design of Akonadi-Next which should fix it, in two ways. First, it does away with the server in between altogether, rather letting client apps do everything themselves by loading a library. Concurrency is handled by the database/storage used, which can be specific for each resource (e.g. SQLite or NoSQL solutions, even flat text or a binary format where it makes sense). Interestingly, this design is similar to the direction Baloo has taken when taking the lessons from Nepomuk and discarding the single large database, letting apps manage things themselves in specific, data-type-optimized databases.

Note that, in the time Akonadi was designed (2001-ish!) much tech simply wasn't around. I guess it was overengineered but it made sense at the time, just like Nepomuk. Better understanding of requirements and real world needs, as well as the emergence of new technology have resulted in the need for a new design...

Anyway, you can find more info if you want by googling.

Akonadi – still alive and rocking

Posted Jan 11, 2016 12:58 UTC (Mon) by aleXXX (subscriber, #2742) [Link] (2 responses)

Wasn't it more like 2005 ? I remember that at the KDE 4 Core meeting in Trysil in 2006 Akonadi already existed, but was still very fresh.

Akonadi – still alive and rocking

Posted Jan 11, 2016 22:42 UTC (Mon) by jospoortvliet (guest, #33164) [Link] (1 responses)

Well, I said designed; it was probably more that the first ideas were sketched around the 2001-2002 timeframe, if I am right. Code and the name came later, but the problems it was to solve, and the how, came early that decade.

Akonadi – still alive and rocking

Posted Jan 14, 2016 22:42 UTC (Thu) by cornelius (subscriber, #72264) [Link]

Akonadi was born pretty much exactly ten years ago at the fourth Osnabrück meeting in January 2006. That's where we came up with the name, the original architecture, and a plan for the implementation.

Akonadi – still alive and rocking

Posted Jan 11, 2016 14:35 UTC (Mon) by pboddie (guest, #50784) [Link] (1 responses)

Thanks for the overview!

Note that, in the time Akonadi was designed (2001-ish!) much tech simply wasn't around. I guess it was overengineered but it made sense at the time, just like Nepomuk. Better understanding of requirements and real world needs, as well as the emergence of new technology have resulted in the need for a new design...

Oh, lots of us dabbled with RDF back in the day, that's for sure, but for quite a bit of this kind of thing I'd dispute that "much tech simply wasn't around", although I'd agree that a more widespread understanding of the problems probably wasn't around back then.

Another thing that really needs fixing is the multiple levels of indirection on display in the user interface stuff. When I last looked at doing calendar-related stuff with Kontact, alongside the laundry list of obsolete groupware solutions was some magic Akonadi thing, if I remember correctly (honestly, I'd rather not look at it again), and although this might showcase some technical wizardry, it is just confusing even to people who have the patience, motivation and general background to hunt down the appropriate dialogue choice.

Akonadi – still alive and rocking

Posted Jan 11, 2016 17:49 UTC (Mon) by Wol (subscriber, #4433) [Link]

> Oh, lots of us dabbled with RDF back in the day, that's for sure, but for quite a bit of this kind of thing I'd dispute that "much tech simply wasn't around", although I'd agree that a more widespread understanding of the problems probably wasn't around back then.

The problem is that there aren't any old greybeards like us involved in the design - you know, people who were used to machines where CPU speeds were single-digit megahurtz OR SLOWER. Where RAM was measured in kilobytes and cost 100s or even 1000s of <insert currency unit here>.

People whose instinctive reaction to bloat is "is it necessary?". So much software today *needs* an overspecified, super-powered machine simply to provide reasonable performance. My triple-core Athlon with 16Gig of RAM struggles sometimes (well, I am a geek, I do try to run a high-spec machine). But at work I ran 30 or 40 online users on an R3000 machine (that's equivalent to a 386). And even more on an even older machine a few years before that. User response hasn't improved with faster machines ...

(And it doesn't help when the response to "my machine is so slow it takes over 24 hours to get to the login screen" is "you should be grateful to us. If you want us to help you you need to provide loads of debugging info". Hello? I can't even get to a prompt to try and get that info!!! And yes, I *did* have a system where I had to disable KDE for precisely that reason!)

Cheers,
Wol

Akonadi – still alive and rocking

Posted Jan 11, 2016 17:39 UTC (Mon) by drag (guest, #31333) [Link] (9 responses)

> First, it does away with the server inbetween altogether, rather letting client apps do everything themselves by loading a library.

That seems backwards. I never liked an approach that takes functionality that seems to make sense in daemons and stuffs it into a library.

> One of the biggest issues remains, which is very fundamental to the Akonadi design: it is entirely data-agnostic and leaves specific handling of stuff to the client. One consequence is that filtering has to happen client-side: if you want to show the calendar items of today, you must retrieve ALL OF THEM and throw away what you don't want. Welcome, massive overhead...

If the problem is that clients are required to do too much (filtering client-side), then it seems a better move is to put the filtering on the server side rather than to move 100% of the functionality to the clients.

I would feel a lot more comfortable with a daemon with a 'REST-ful' API and a simple key-value store back-end to take care of things. That way you can not only use Akonadi as a basis for desktop clients, but have something that actively manages the data, backs it up, and deals with differences between major revisions instead of leaving it up to individual app authors to figure it out, among other things.

Then that makes it also useful for building other services that take advantage of it. For example, create a 'sister daemon' that takes the binary representation of data from the Akonadi API, textualizes it, and encapsulates it in JSON (or whatever) for the consumption of remote clients over HTTPS. Thus the KDE desktop could serve as a basis for synchronizing many devices and/or have an easier time integrating with other services. When it comes time to introduce more PIM functionality, the bulk of the work can be done on the server side, and thus client changes are minimal, with a much easier time handling backwards compatibility.

Akonadi – still alive and rocking

Posted Jan 11, 2016 19:16 UTC (Mon) by pboddie (guest, #50784) [Link] (5 responses)

If the problem is that clients are required to do too much (filtering client-side), then it seems a better move is to put the filtering on the server side rather than to move 100% of the functionality to the clients.

I was going to write something about that, actually. Some performance problems might be a consequence of the need for lots of service calls to get individual items of data - I seem to recall that things like DCOP and D-Bus were motivated by some apparent need for making lots of calls - but in the traditional database development realm, this is typically a bad thing exhibited by programs that have a loop doing "select something from table where item = ?" over and over again for a list of values that has been obtained from a previous query. (Yes, code like this really does exist in the real world.)

The mention of IMAP got my attention. IMAP - again, if I remember correctly - is also supposed to allow querying, or maybe some extension of it does. So, in principle, the tools would be there to do this efficiently. Then again, if the IMAP stuff sits below some kind of data mapper, maybe the necessary tools just aren't exposed at the right level.

Akonadi – still alive and rocking

Posted Jan 11, 2016 20:16 UTC (Mon) by anselm (subscriber, #2796) [Link] (3 responses)

The IMAP functionality in question is called “SIEVE” and is optional. This makes it difficult to stipulate to its existence in a lowest-common-denominator framework such as Akonadi, which also accepts files in the local file system as a source of e-mail messages. You could probably put equivalent functionality into the local Akonadi component for the benefit of local files, but once you're done you'll have implemented most of an IMAP server, and there are very good free IMAP servers around already.

Akonadi – still alive and rocking

Posted Jan 11, 2016 20:50 UTC (Mon) by pboddie (guest, #50784) [Link] (1 responses)

I guess this is why solutions like Kolab try and put everything in an IMAP/SIEVE-accessible message store, then. (Kolab relies on various KDE-related libraries, too.)

I'll accept that if you want to provide some kind of protocol for accessing messages and similar things, then POP and IMAP are obvious things that potentially leverage compatibility, or at least familiarity, if the protocol resembles them. But ultimately, there may be no escaping a full message store, even though I do also read about people's performance issues with their IMAP infrastructure every now and again (but that's more likely to be related to having lots of users and to potentially misconfigured components).

Akonadi – still alive and rocking

Posted Jan 11, 2016 21:09 UTC (Mon) by anselm (subscriber, #2796) [Link]

POP3 isn't really very useful as a message store protocol (for one, it doesn't have anything resembling SIEVE), so if you want to build on something you're pretty much stuck with IMAP. Dovecot is a very good free high-performance IMAP server that does SIEVE (among other useful things) and sits comfortably on Maildirs (and its own more efficient message store format), so you're basically covered. The downside is that as layer-7 protocols go, IMAP is a very nasty specimen, and it is difficult to fault programmers for not wanting to have anything to do with it unless they really can't avoid it.

Akonadi – still alive and rocking

Posted Jan 12, 2016 11:57 UTC (Tue) by Wol (subscriber, #4433) [Link]

> You could probably put equivalent functionality into the local Akonadi component for the benefit of local files, but once you're done you'll have implemented most of an IMAP server, and there are very good free IMAP servers around already.

I get the feeling they are trying to index EVERYTHING, without bothering to ask the question "Is this WORTH indexing?". Which is where users get so frustrated - the system insists on spending a large chunk of its (and by extension the user's) time doing stuff the user considers *counter*productive*. The classic example is pre-loading the office suite into ram for when the user wants it - I used to have three office suites, all of which I rarely used but sometimes needed, and on a system that's short of ram ...

Cheers,
Wol

Akonadi – still alive and rocking

Posted Jan 12, 2016 5:02 UTC (Tue) by drag (guest, #31333) [Link]

> Then again, if the IMAP stuff sits below some kind of data mapper, maybe the necessary tools just aren't exposed at the right level.

I think so.

My reasoning:

IMAP itself didn't provide enough advantages, or at least the right type of advantages, over POP to make it really useful. People with a heavy technical focus on email generally just kept treating IMAP servers like POP servers, with the added bonus that they could easily have multiple email clients on different computers. It was still an advantage to have the email database locally on the machine for management. IMAP didn't provide the necessary features to win over that.

And, nowadays, it should be obvious that server-side processing was still the right way to go as evidenced by the dominance of webmail.

My fantasies about a solution:

IMAP + Sieve is closer to the right thing, but I still don't think it's enough. Lack of client support is a major problem, but even if that were not the biggest problem, I think Sieve/IMAP isn't the correct solution. I see things like people dealing with duplicate emails, missing emails, having to craft special rules to filter email into different folders, etc. These are symptoms that the general mentality of managing email is wrong. That mentality being that you copy around and delete and move email from one folder to another.

What I would love to see (other than everybody world-wide deciding email is so abhorrent that we should develop a secure replacement together) is a solution based around stuffing email into a single database and keeping it 'raw'.

No editing of the email, changing of the headers, adding tags, basing anything on file system dates, or moving it around folders, or anything like that.

That is, get rid of the 'mdir' format, or the shoving-everything-into-a-SQL-database concept, and return to something more closely resembling the original mbox storage format, but optimized. A log-structured mbox format, for lack of a better term/concept. Get rid of all the file-based locking and use a single-purpose service for managing reads/writes to that 'log-structured mbox'. One for each user. Make it as trivial as possible to trigger a robust backup through that service.

Journalctl, as terrible as it is sometimes, still has no problem reading the 3,500,000+ messages my system has logged on my desktop in less than a minute. Mail applications should have, at the very least, the same level of performance.
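The 'log-structured mbox' idea might be sketched roughly as follows. Every name here is invented, and a BytesIO stands in for the append-only file; the point is that raw messages are only ever appended, and the offset index is derived data that can always be rebuilt from the log:

```python
import io

class LogMbox:
    def __init__(self):
        self.log = io.BytesIO()  # stand-in for an append-only file
        self.index = []          # (offset, length) per message; rebuildable

    def append(self, raw_message: bytes) -> int:
        offset = self.log.seek(0, io.SEEK_END)
        self.log.write(len(raw_message).to_bytes(4, "big"))  # length prefix
        self.log.write(raw_message)                          # message, untouched
        self.index.append((offset + 4, len(raw_message)))
        return len(self.index) - 1                           # message id

    def read(self, msg_id: int) -> bytes:
        offset, length = self.index[msg_id]
        self.log.seek(offset)
        return self.log.read(length)

    def rebuild_index(self):
        # Disaster recovery: rescan the log; nothing else is authoritative.
        self.index, pos = [], 0
        end = self.log.seek(0, io.SEEK_END)
        while pos < end:
            self.log.seek(pos)
            length = int.from_bytes(self.log.read(4), "big")
            self.index.append((pos + 4, length))
            pos += 4 + length

box = LogMbox()
a = box.append(b"From: alice\n\nhello")
b = box.append(b"From: bob\n\nworld")
box.index = []       # simulate losing all derived metadata
box.rebuild_index()
assert box.read(a) == b"From: alice\n\nhello"
assert box.read(b) == b"From: bob\n\nworld"
```

The 'views' and 'search folders' the comment goes on to describe would then be layered on top as derived indexes over this log, never mutating it.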

Then integrate something like 'notmuch' into it somewhere, so every interaction a client has with the service is nothing but the result of searches performed on the original data. Maybe a separate mail-manage service that talks to the lmbox service, or maybe the same one. I would like to make it as trivial as possible for somebody to self-host, so the simpler the better; dealing with large numbers of users is a specialized problem. Be able to use just a cheap Linux server at home, or a Raspberry Pi-level machine, or a single cloud instance.

'True' edits and deletes should be special operations, and it shouldn't matter much if they are expensive. Due to spam and such things, some pre-processing of the mail is needed before it gets added to the main database, but it should be kept to a minimum.

Normal operations should just be performed by 'live filters' or 'views' or 'search folders'. You should be able to do things like 'Emails sent directly to me with low spam ratings shall be my inbox'. Then you can say 'emails sent to mailing list X is in Y folder', and then layer it further and have a folder that says 'emails sent to mailing list X and addressed to me directly is in Y+W folder'. So then the original data and format of each mail is preserved and you interact with the mail through various 'views'. Thus things like duplicate emails are not a sign of corruption, but just because they happen to meet multiple different criteria.

Thus you end up with a 'do no harm' approach to managing these messages. No action will normally be performed that risks corrupting the original data. No matter what sort of insane or batshit crazy filters and folders or whatever you place on the data, and even if clients conflict with each other and trigger bugs and crashes in your services, the worst thing that can happen is that you cause a self-inflicted denial-of-service attack. Then the only data loss you can suffer is a few missed messages or something corrupt tagged onto the end of your 'log-structured mbox'. Sure, the indexes and metadata databases can be jacked up and unrecoverable, but that is something that can be recreated, since all the original data it was derived from is still present. To recover from a disaster you blow away all the files except the main one, and then selectively re-apply your filters until they are back to where you want them.

Of course optimizations will be necessary. You can't expect a 'view/filter/search folder' to be active instantly after you create it. Indexes of email should be kept so that you don't have to go through your entire history of email every time you open your application. Results from searches should be available 'on demand' as much as possible. Try to set it up so that the mail-manage service just has to look through anything relatively recent and add it to the data it's already derived from your backend store. So to make it possible to have a nice UI some sort of timing information on operations should be accessible, I suppose, so people can have a reasonably accurate idea of how long expensive/long running processing is going to take.

I doubt IMAP could provide a rich enough API to access the 'mail-manage' service, but a close approximation could probably be made for current-generation clients. Probably through some sort of IMAP gateway service.

Akonadi – still alive and rocking

Posted Jan 12, 2016 10:47 UTC (Tue) by jospoortvliet (guest, #33164) [Link] (2 responses)

Note that we're on the edge of what I understand about this, technically.

But from my understanding, there's no longer a central server, yet not ALL work is done by the clients: only the reading, which is implemented in a library. Resources (e.g. an IMAP client) are independent processes which retrieve and cache data.

Let me quote:
> Basically the new design absents central server in favor of using
> per-resource storage. Each resource would maintain it's own key-value store
> (allowing each resource to store data in a way that is efficient for them
> to work with) and would provide an implementation of standardized interface
> to access this storage directly. Clients would be allowed direct read-only
> access to these storages, while resources are the only one with write
> access. Internally flatbuffers would be used to provide access to the data
> in super-efficient way (lazy-loading, memory mapping, all the fancy stuff).

> Resources would implement pipelines allowing some pre-processing of newly
> incoming data before storing them persistently (think mail filtering,
> indexing, new mail notifications etc.). Inter-resource communication (for
> example to perform inter-resource move or copy), and client->resource
> communication would be done through a binary protocol based on what we have
> now in Akonadi. This design also gives us on-demand start/stop of resources
> for free. Something that requires ridiculous amount of work to make work
> with the current design.

> API-wise, while we can't completely get rid of the "imperative" API of
> having jobs, the core method to provide access to data would be through
> models. Making use of storage data versioning, on update the model simply
> requests changes between current and last revision of the stored data. This
> should prevent us from ending up with overcomplicated beast-models like
> ETM.

See
https://community.kde.org/KDE_PIM/Akonadi_Next/Design
https://community.kde.org/KDE_PIM/Akonadi_Next#Design

https://cmollekopf.wordpress.com/2015/02/08/progress-on-t...
https://cmollekopf.wordpress.com/2015/08/29/bringing-akon...
https://kolab.org/blog/mollekopf/2015/10/22/progress-prot...
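The quoted design (a per-resource key-value store that only the owning resource may write, read-only clients, and revision-based change feeds so models can ask "what changed since revision N?") can be sketched very roughly as below. All names are invented for illustration; this is not the real akonadi-next API.

```python
# Hypothetical sketch of the design quoted above: each resource owns its
# key-value store, clients get read-only access plus a revision-based
# change feed instead of re-reading everything on every update.

class ResourceStore:
    """Per-resource store: the owning resource writes, clients read."""

    def __init__(self):
        self._kv = {}          # key -> latest value
        self._log = []         # append-only change log: (revision, key)
        self._revision = 0

    # --- write side: only the resource process calls these ---
    def put(self, key, value):
        self._revision += 1
        self._kv[key] = value
        self._log.append((self._revision, key))
        return self._revision

    # --- read side: exposed to clients ---
    def get(self, key):
        return self._kv.get(key)

    def changes_since(self, revision):
        """Keys touched after `revision` -- what a model fetches on update."""
        return sorted({k for rev, k in self._log if rev > revision})


# A toy "IMAP resource" filling its own store:
store = ResourceStore()
seen = store.put("INBOX/1", {"subject": "hello"})   # model remembers revision
store.put("INBOX/2", {"subject": "re: hello"})
store.put("INBOX/1", {"subject": "hello", "flags": ["read"]})

print(store.changes_since(seen))   # -> ['INBOX/1', 'INBOX/2']
```

The real design additionally puts flatbuffers underneath for zero-copy, lazy-loading access to the values; the sketch only illustrates the ownership and change-feed idea.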

Akonadi – still alive and rocking

Posted Jan 12, 2016 16:06 UTC (Tue) by drag (guest, #31333) [Link]

thank you.

Akonadi – still alive and rocking

Posted Feb 9, 2016 4:49 UTC (Tue) by daniel (guest, #3181) [Link]

"from my understanding, there's no more central server..."

And, blessedly, no more relational database. It looks like the new maintainer has a sensible attitude, and the experimental refactoring that is Akonadi might yet prove to be useful. Too bad about sacrificing the entire Kmail user community in the process, myself included. The big lesson here is that there is never a valid reason to perform trapeze without a safety net: kmail2 should have been deployed in parallel with kmail1 until it proved to be a complete and viable replacement, just as Apache 2 lived beside Apache 1 for years.

Akonadi – still alive and rocking

Posted Jan 9, 2016 10:49 UTC (Sat) by Wol (subscriber, #4433) [Link]

My MySQL is currently borked. I can't be bothered to investigate properly, but I think the reason is that, when I upgraded, the on-disk format changed and I was *supposed* to run some conversion utility.

Now I seem to be in a catch-22: the MySQL service crashes on start because the disk format is wrong, and the conversion utility won't run because it needs MySQL to be running ...

I'll get it fixed some time - when I've got a day free to spend troubleshooting my PC ... and loads of time to read up on all the possible fixes on web sites that are slow as molasses because of all the ads ...

Cheers,
Wol
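For what it's worth, that catch-22 usually has a standard escape. The sketch below assumes the crash really is InnoDB rejecting the old on-disk format after a major-version upgrade, and that the data directory has been backed up first; paths and socket names are illustrative only.

```shell
# A sketch only -- back up /var/lib/mysql before trying anything.

# 1. Start a throwaway instance in forced-recovery mode so it comes up at all
#    (innodb-force-recovery=1 is the mildest level; raise it only if needed):
mysqld --innodb-force-recovery=1 --skip-networking \
       --socket=/tmp/mysql-rescue.sock &

# 2. With a server now running, the conversion utility can do its work:
mysql_upgrade --socket=/tmp/mysql-rescue.sock

# 3. Shut the rescue instance down and restart the normal service:
mysqladmin --socket=/tmp/mysql-rescue.sock shutdown
```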

Akonadi – still alive and rocking

Posted Jan 9, 2016 13:28 UTC (Sat) by jospoortvliet (guest, #33164) [Link] (16 responses)

Baloo isn't bothering me at all, Nepomuk was heavily over-engineered but Baloo seems great and works, being helpful and all.

KDE PIM as it is works for me - most of the time. It is fast - most of the time. But it continues to eat a LOT of ram and requires kicking (as in, an hour or two of fiddling with akonadiconsole and stuff) at least once or twice a month. For normal users, this is a no-go, sadly.

So yes, I believe Akonadi is certainly over-engineered as it took forever to get it mostly right and it still eats ram like cookies. Then again, I think the vision/idea behind it was great, just... too complicated.

Luckily, it seems its successor (akonadi-next) is going to be a lot more clever, as in, simpler and less error-prone as well as faster - let's hope it works out.

Akonadi – still alive and rocking

Posted Jan 9, 2016 15:29 UTC (Sat) by robert_s (subscriber, #42402) [Link] (15 responses)

"So yes, I believe Akonadi is certainly over-engineered"

I think this is in danger of being almost interpreted as a tangential compliment. The problem really is that it's over- and crappily-engineered. It seems to have been designed very naively and to have serious problems with concurrency: the filtering system commonly creates duplicate emails or loses them entirely, and several times a day I'm told "resource KMail folders is broken"... it's one of those things that just makes me want to cry when I think of the state of software in 2015 (for software which we could get right in 1998).

For those who find it an ok experience, I'm guessing you're using IMAP and I ask you to try using it with local (maildir) folders.

Akonadi – still alive and rocking

Posted Jan 9, 2016 19:58 UTC (Sat) by Wol (subscriber, #4433) [Link] (14 responses)

imho, you could be describing thunderbird, not kmail, here ...

Cheers,
Wol

Akonadi – still alive and rocking

Posted Jan 9, 2016 21:03 UTC (Sat) by petur (guest, #73362) [Link] (1 responses)

yay for gratuitous bashing....

(I have yet to see a single mail lost or a single hiccup in the last (many) years of using Thunderbird)

Akonadi – still alive and rocking

Posted Jan 9, 2016 23:59 UTC (Sat) by Wol (subscriber, #4433) [Link]

I don't think I've actually LOST email ...

But there's a reason I've got the "remove duplicate messages" thunderbird add-on - it gets very regular use :-(
(and that's not because I get sent multiple copies ...)

Cheers,
Wol

Akonadi – still alive and rocking

Posted Jan 10, 2016 20:49 UTC (Sun) by h2 (guest, #27965) [Link] (11 responses)

Thunderbird is the only email client that's managed to grow with me, from windows to gnu/linux, same data now for 15 years roughly, moved from one location to another, one disk, one partition.

kmail, on the other hand, has broken every major kde upgrade.

The concept of a near sacred respect for email data integrity, through changes of internal data handling, seems to be totally and utterly absent from the kmail/kpim project.

Since I rely on email for work, as of the qt 5.x changes, aka, kde plasma 5.x, I totally gave up on kmail, and sadly, also kde itself, except for a few programs that are still very good.

Since in my world the fundamental purpose of my email client is to handle my email and not mess it up, kmail, which I had used for some secondary lightweight purposes, became such a royal pain to use, a total time sink every time the systems it relies on were 'improved', that I decided to finally dump kmail as well. Claws Mail seems to get the idea of a basic, reliable email client for simpler uses, and Thunderbird I've trusted for ages and continue to trust. Its updates don't break things, and it's not trying to be clever.

I think there's a point in projects where the ideas and cleverness involved, sadly, exceed the man-hours and skill levels of the programmers involved, and to my eyes KDE has hit this point, which I find sad, since it was a good desktop. GNOME and KDE seem to be suffering the same issues in this regard, and I think they are a significant part of the reason GNU/Linux desktop marketshare has not risen at all, and may in fact be dropping.

Maybe it's best to stop trying to pretend that you can be the next OS X or Windows. OS X is loved by its users, at least in theory, because it doesn't break things release to release. I know that's not always achieved, but in users' minds it's one reason they love Apple. The attempt to somewhat emulate OS X desktops by breaking things fundamentally for users, then explaining why the break was 'good' and 'an improvement', does nothing to fix the break, and does nothing to retain users or expand the user base.

It's quite noteworthy that KDE in Debian sid/testing is a total mess; apparently some package maintainers in some distros are throwing up their hands and giving up. This is a new thing.

I like reliable, consistent desktops, and apparently xfce4 is the only relatively full featured project out there that shares this view of how my work space and machines should act long term.

Akonadi – still alive and rocking

Posted Jan 10, 2016 23:31 UTC (Sun) by smadu2 (guest, #54943) [Link] (3 responses)

"Thunderbird is the only email client that's managed to grow with me, from windows to gnu/linux, same data now for 15 years roughly, moved from one location to another, one disk, one partition."

Same for me (I cannot stress how good Thunderbird has been for me in this regard)! I am still using the same ~/.thunderbird since about 2007. I have moved it through various laptops, distros, and companies and it just works. (My du -sh ~/.thunderbird is 37G.) Well done, Mozilla!

Akonadi – still alive and rocking

Posted Jan 11, 2016 17:55 UTC (Mon) by Wol (subscriber, #4433) [Link] (1 responses)

Well, I still miss Turnpike. If you weren't a Demon customer in the 90s, you probably won't know it, but the people who wrote it were anal about getting things right. And it had all sorts of wonderful little tweaks that made life so much simpler (like automatically sorting threads based on whether you had posted a news/email to it). Like having multiple email addresses per person. Like opening a search folder from the address book with all emails to/from that person (on all their addresses).

Etc etc. I guess a lot of that could be scripted into Thunderbird, but that's yet another system and scripting language to learn ...

Cheers,
Wol

Akonadi – still alive and rocking

Posted Jan 13, 2016 12:31 UTC (Wed) by mpr22 (subscriber, #60784) [Link]

Turnpike is the only newsreader I've tried that was more agreeable than trn3. Sometime I must try trn4 properly.

Akonadi – still alive and rocking

Posted Jan 11, 2016 21:23 UTC (Mon) by dany (guest, #18902) [Link]

Same for me: a great tool that I've been using since 2002.

Akonadi – still alive and rocking

Posted Jan 11, 2016 13:30 UTC (Mon) by Rehdon (guest, #45440) [Link]

I would upvote you if I could. I know it's a very young DE project, but Cinnamon seems to be developed according to the same "if it works don't fix it" philosophy, with gradual enhancements.

Rehdon

Akonadi – still alive and rocking

Posted Jan 12, 2016 6:53 UTC (Tue) by riteshsarraf (subscriber, #11138) [Link] (5 responses)

I think GNOME is in a much better position than the KDE project.

I was a KDE user myself for more than 10 years. For a good while, I kept my patience and worked through the transition from KDE3 to KDE4. It wasn't really a transition, but more like: forget most of the old data/formats and move to the new ones. The same applied to bug reports.

Back to GNOME: it is good to see how they have progressed. I started recently with GNOME 3.14 and am now on 3.18. Overall I'm happy with how they steer the project. There are some nitpicks on how they envision GNOME, but that is fine. Not all fingers are equal.

On the note of email, now that I'm on GNOME, I wanted to use something native. Thunderbird (though not native) and Evolution were the first two choices. I chose Evolution over Thunderbird, as it gave me Maildir support. And I'm impressed that Evolution is still an amazing email client.

Akonadi – still alive and rocking

Posted Jan 12, 2016 23:33 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link] (4 responses)

I think you're giving GNOME a free pass because you were using KDE during GNOME's earlier major version changes. Both the transition from 1 to 2 and the one from 2 to 3 were messy and involved lots of anguish for users. I'm generally happy with GNOME 3; I think many of the drastic changes that were so frustrating during the transition were actually good ideas, and most of the ones that were bad ideas can be plastered over with extensions. But I would never claim that the major version changes have been pleasant.

Akonadi – still alive and rocking

Posted Jan 13, 2016 6:50 UTC (Wed) by riteshsarraf (subscriber, #11138) [Link] (3 responses)

I would agree with your comments. But if you look on the KDE side, ideas like Akonadi, Nepomuk, Decibel, and Solid (the pillars of KDE) have mostly either faded away or become over-engineered products. It has been, what, like 8 years since KDE4 was released? And still, overall, it doesn't work as a usable desktop.

I just hope such ugly transitions never happen again. The GNU/Linux desktop userbase has now increased. Both GNOME and KDE should realize that transitions don't mean you rip everything apart and go back to the drawing board.

Akonadi – still alive and rocking

Posted Jan 13, 2016 7:12 UTC (Wed) by MattJD (subscriber, #91390) [Link] (1 responses)

>I would agree with your comments. But if you look on the KDE side, ideas like Akonadi, Nepomuk, Decibel, Solid - The Pillars of KDE; Most of those ideas either faded away, or are overly engineered products. It has been what, like 8 years, since KDE4 was released.

Only one has really faded away; the rest are all going strong. Akonadi is being rewritten, true, but its central idea is still there. Decibel the name has faded away, but Telepathy is still making progress in KDE and will provide much of the same idea. Solid hasn't gone anywhere; it is still used for all the API-independent hardware parts. Nepomuk is the only dead product. Parts of it are kept alive in replacements (Baloo), but its main purpose isn't being kept.

I don't think the problem with any of them was over-engineering; it's just that they tried to do something new and large. I'm pretty sure that if the technology we have today had existed when some of those products were being designed, KDE would look much different.

I still enjoy using KDE as my desktop of choice. Yes, some releases have been painful, but I've found they have only improved since KDE 4.0.

Akonadi – still alive and rocking

Posted Jan 13, 2016 15:37 UTC (Wed) by Wol (subscriber, #4433) [Link]

> I still enjoy using KDE as my desktop of choice. Yes, some releases have been painful, but I've found they have only improved since KDE 4.0.

Well, every time I've tried GNOME, I've run kicking and screaming BACK to KDE. That said, the transition from KDE3 to KDE4 was so bad it forced me to install and use XFCE and LXDE. I never had any real problem on my "big" machines (Athlon X3), but on my old machines ("fast" but outdated CPU, stuffed to the gills with all the megabytes of RAM that would fit) early KDE4 was unusable. I don't know how long it took to go from power-on to the login screen; it never got that far before it was time to shut it down again ...

Cheers,
Wol

Akonadi – still alive and rocking

Posted Jan 13, 2016 18:53 UTC (Wed) by rgmoore (✭ supporter ✭, #75) [Link]

Both GNOME and KDE should realize that transitions don't mean you rip everything apart and go back to the drawing board.

I'm not sure I agree. Making radical changes, including radical improvements, is often disruptive. Even if it's possible to maintain backward compatibility so people aren't forced to try the new stuff, there's considerable maintenance pain in keeping two ways of doing things. Even worse, a lot of people who are afraid of change will never try the new approach, so the pain of changeover is only delayed until the old way is deprecated and removed.

Akonadi – still alive and rocking

Posted Jan 11, 2016 12:11 UTC (Mon) by sebas (guest, #51660) [Link] (1 responses)

What concerns me when reading through the comments here is that many of the commenters completely ignore what Dan wrote, and are also unaware of the new developments in akonadi-next. The short version of the outlook is that Dan is improving the current Akonadi implementation, and that akonadi-next learns a lot from the experience gained and improves the architecture itself. (A Google search for akonadi-next gives some interesting links to blogs if you're interested.)

Most of the comments bashing Akonadi ignore the actual work being done. Surely there are problems with Akonadi, but I think many commenters here are just trigger-happy: reading Akonadi in the title and then going on a rant about how it sucks and doesn't work for them.

Dan's work is actually commendable, since he took over Akonadi's maintenance and is improving it for the relatively short term, especially the client and server libraries and the communication between them. He has already made some really good progress, while the long-term strategy is a redesigned akonadi-next, without MySQL, based on a key-value store.

Some people may want the kmail of KDE 3.5 back, thinking that it was all good. The problem is that it wasn't, and simply going back is not an option either. The code has evolved a lot in the past, and I highly doubt that going back and porting the KDE 3.5 code to Qt 5 is really a viable option; it might take at least as long as akonadi-next, and we'd still suffer from the deficiencies of kmail 3.x's architecture (blocking UI, problems with IMAP, no IMAP IDLE, just to name a few), not to mention the manpower problem of doing it.

Honestly, I'd have expected the general akonadi-bashing on other forums, but not LWN. I come to LWN for informed comments and technical discussion, and that entails reading the actual post, putting it into context and actually understanding what's being done here (and often providing good insights about it). If we don't pay that kind of respect to the people who work on actually improving the software and working on the deficiencies which are pointed out here (and frankly, which are well-known), we might as well all read slashdot, or something similar. I, for one, applaud Dan and the Akonadi team's efforts for actually listening, working on improving things and telling us about its progress.

Akonadi – still alive and rocking

Posted Jan 14, 2016 2:14 UTC (Thu) by hirnbrot (guest, #89469) [Link]

>but I think many commenters here are just trigger-happy

While that might be true to a degree, the title (which is taken from the post verbatim) _really_ does not help. While I appreciate the ideas and hard, thankless work behind akonadi, I believe it's safe to say that the implementation has never been "rocking".

Akonadi – still alive and rocking

Posted Mar 6, 2016 23:10 UTC (Sun) by dcrobertson01 (guest, #107524) [Link]

A lot of this discussion is a bit above my head - all I can say is I've used KDE since the mascot was Kandalf, but I recently put Mint on my laptop, and have just wasted most of the morning on this *&^#$% Akonadi on my desktop machine.

No sooner do we finally get the sound to work on KDE (after being told not to worry about the lack of audio, look how technically superior it is) than we get the same thing with Akonadi. This is a new install, and it won't work. It may be the greatest technical breakthrough since the moon landings, but it fails.

I have been googling the problem and I get a lot of results but no solution. So good bye rainbows and unicorns. I'm going for something less sophisticated that actually works.


Copyright © 2016, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds