One thing to consider when looking at persistent data stores is how much effort the developers have put into maintaining data integrity. This was driven home to me when I was administering a computer running OpenLDAP on Fedora. The machine would occasionally be hard-booted, and that, combined with some (apparent) version incompatibilities in the Fedora repos, would cause the Sleepycat (Berkeley DB) backend for the LDAP server to corrupt itself.
Our eventual (horrible, Rube-Goldbergian) workaround was to store the actual information in MySQL/InnoDB and have a script nuke the LDAP database and re-inject the data whenever it fell over.
I decided after that experience that if the data has even a chance of being important, then the most important property of any datastore is that the information should always be there and never be corrupt, regardless of whether the computer is hard-booted, or the disk drive lies, or Ted Ts'o decides that his ideology is more important than your data.
A bit later I heard an interview with Richard Hipp in which he discussed how much effort the SQLite team puts into making sure data is "in the oxide" -- going so far as to simulate hard boots during writes in their testing procedures. Since then, whenever data might be important and isn't already in an RDBMS, SQLite has been my default data store, and I'd hesitate to move to something else without some assurance that its developers take a similar interest in data integrity.
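To make that concrete, here's a minimal sketch of what durable SQLite usage looks like from Python's stdlib `sqlite3` module. The table and key names are invented for illustration; the pragmas are real SQLite settings: `synchronous = FULL` fsyncs on every commit, and WAL journaling keeps the main database file consistent even if a write is interrupted mid-way.

```python
import os
import sqlite3
import tempfile

# Hypothetical key-value store on disk (path chosen just for this demo).
path = os.path.join(tempfile.mkdtemp(), "store.db")

conn = sqlite3.connect(path)
# Trade some write speed for crash safety: fsync on every commit.
conn.execute("PRAGMA synchronous = FULL")
# Write-ahead logging: an interrupted write never leaves the main
# database file half-updated.
conn.execute("PRAGMA journal_mode = WAL")
conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

with conn:  # the 'with' block commits atomically (or rolls back on error)
    conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                 ("uid=jdoe", "posixAccount"))
conn.close()

# Reopen with a fresh connection, simulating a restart: the row survives.
conn2 = sqlite3.connect(path)
row = conn2.execute("SELECT v FROM kv WHERE k = ?", ("uid=jdoe",)).fetchone()
print(row[0])
conn2.close()
```

This doesn't prove power-loss safety, of course -- that's what SQLite's own crash-simulation test suite is for -- but it shows the knobs an application has to leave alone (or deliberately turn) to get the durability the project tests for.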
That said, memcached is awesome for caching, and some of these things do sound interesting for storing unimportant data like search indices.