Actually, Pick wouldn't run foul of the patent (aiui) because it doesn't store time-stamps with records, and so couldn't automatically expire records.
I want to re-implement a Pick-style datastore, actually, and storing timestamps and expiring data was something I planned to do - I thought it was blatantly obvious! However, I would have done it as part of writing a record, because that's the obvious point at which to compact a bucket.
The original Pick model didn't include disk storage as a separate concept - it viewed disk as permanent virtual memory, and everything was based on hashed files, including RAM! This goes back to about 1967 - Pick is pretty much the same age as Unix :-)
Going forward to Prime INFORMATION, that had 2k buckets which could easily hold several records, depending on how big they are (stuff that was larger than - by default - 1600 bytes got pushed to a secondary store). So, every time I add a record to a bucket, it's possible - even probable - that I need to compact the bucket to make room inside the 2k. (Deleting records, I would probably just flag them - not worth compacting at that point.)
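To sketch that "flag on delete, compact on write" idea in Python - this is purely my own illustration, with invented names; the 2k bucket size and 1600-byte threshold come from the paragraph above, everything else is assumed:

```python
# Hypothetical sketch: a fixed-size bucket holding variable-length records.
# Deleting just sets a flag; dead space is only reclaimed when a write
# actually needs the room, since the write already has the bucket in hand.

BUCKET_SIZE = 2048  # Prime INFORMATION's 2k bucket

class Bucket:
    def __init__(self):
        self.records = []  # list of (record_id, data, deleted_flag)

    def used(self):
        # Flagged records still occupy space until the bucket is compacted.
        return sum(len(data) for _, data, _ in self.records)

    def delete(self, rec_id):
        # Cheap delete: flag it, don't shuffle any bytes around.
        self.records = [(i, d, dead or i == rec_id)
                        for i, d, dead in self.records]

    def write(self, rec_id, data):
        if self.used() + len(data) > BUCKET_SIZE:
            # Compact now: drop the flagged records to reclaim their space.
            self.records = [r for r in self.records if not r[2]]
        if self.used() + len(data) > BUCKET_SIZE:
            # Still no room: this record would go to overflow/secondary store.
            raise OverflowError("no room even after compaction")
        self.records.append((rec_id, data, False))
```

So a delete costs almost nothing, and the compaction cost is folded into a write that was going to rewrite the bucket anyway.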
As for your surprise at the way of handling overflow buckets, you're thinking in terms of memory. And you badly misunderstand the hashing technique if you think expanding buckets will avoid collisions! :-) In a Prime technical paper I've got, they give the example of "number of buckets equals number of records": the record ids are sequential and unique, BUT the records 0 to 16 are added in random order. At the end, with 17 records in 17 buckets, there are no collisions. But remember the file grows as records are added - so if, say, the first two records added are both odd, 1 and 3 for example, while the file is still only two buckets big, they will both hash to bucket 1 and bucket 0 will be empty.
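To make that concrete, here's a minimal linear-hashing sketch in Python. The class and method names are my own invention, and real Pick/INFORMATION file structures differ in detail - but it shows the mechanism: while only two buckets exist, keys 1 and 3 both land in bucket 1; as the file grows, buckets split one at a time and the keys separate.

```python
# Minimal linear hashing: hash with key % N; when a bucket overflows its
# capacity, split the bucket at the "split pointer" (not necessarily the
# one that overflowed) by rehashing its keys with key % (2*N).

class LinearHashFile:
    def __init__(self, initial_buckets=2, capacity=2):
        self.n0 = initial_buckets   # bucket count at the start of this level
        self.split = 0              # next bucket to split
        self.capacity = capacity    # records per bucket before we grow
        self.buckets = [[] for _ in range(initial_buckets)]

    def _addr(self, key):
        i = key % self.n0
        if i < self.split:          # that bucket already split: wider hash
            i = key % (2 * self.n0)
        return i

    def insert(self, key):
        b = self._addr(key)
        self.buckets[b].append(key)
        if len(self.buckets[b]) > self.capacity:
            self._grow()

    def _grow(self):
        # Split the bucket at the split pointer into itself and a new
        # bucket at index split + n0, by rehashing with the wider modulus.
        old = self.buckets[self.split]
        self.buckets.append([])
        self.buckets[self.split] = []
        for k in old:
            self.buckets[k % (2 * self.n0)].append(k)
        self.split += 1
        if self.split == self.n0:   # finished a full round: double the level
            self.n0 *= 2
            self.split = 0
```

With two buckets, `1 % 2` and `3 % 2` are both 1 - a collision, and bucket 0 sits empty. Once the file has grown past four buckets, `1 % 4` and `3 % 4` put them in separate buckets. The expansion redistributes keys; it doesn't prevent collisions at any given size.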
Linear hashing doesn't guarantee your file won't be lumpy. But the maths I've seen says that - absent a pathological situation - given a known key you need to access on average less than 1.05 buckets to find your record (or, more importantly, to know that it doesn't exist!) If every bucket access is a disk read, then that's damn fast! I don't know of any non-hashing technique that can come close. That's why, for pretty much ALL database sizes, a Pick system will leave pretty much any other database in the dust for speed.
My favourite war story is the company that ported their database from UniVerse (a Pick derivative) to Oracle. The Oracle consultants, after spending six months tweaking a particularly nasty SQL query, proudly announced to management that their new system was now ten percent faster than the system they were replacing. Unfortunately, they did it within earshot of the guy maintaining the old system, who scornfully responded "you're proud that your twin Xeon 800 is only 10% faster than my Pentium 90?" !!!
And that speed discrepancy is typical. I've seen loads of comments from people who can't believe how fast Pick systems are. We ran 32 users on a MIPS R3000 (equivalent to a 386) with 16MB of RAM. System response was pretty good, although I don't think we hammered the system the way some people did. And watching the hard disk light, that machine was thrashing like mad ... !!! (Management didn't want to pay for a RAM upgrade - until a RAM chip failed and I wangled an upgrade with the support people :-)