
Future storage technologies and Linux

By Jonathan Corbet
April 6, 2011
The opening plenary session at the second day of the 2011 Linux Filesystem, Storage, and Memory Management workshop was led by Michael Cornwall, the global director for technology standards at IDEMA, a standards organization for disk drive manufacturers. Thirteen years ago, while working for a hardware manufacturer, Michael had a hard time finding the right person to talk to in the Linux community to get support for his company's hardware. Years later, that problem still exists; there is no easy way for the hardware industry to work with the Linux community, with the result that Linux has far less influence than it should. His talk covered the changes that are coming in the storage industry and how the Linux community can get involved to make things work better.

The International Disk Drive Equipment and Materials Association works to standardize disk drives and the many components found therein. While some may say that disk drives - rotating storage - are on their way out, the fact of the matter is that the industry shipped a record 650 million drives last year and is on track to ship one billion drives in 2015. This is an industry which is not going away anytime soon. Who drives this industry? There are, he said, four companies which control the direction the disk drive industry takes: Dell, Microsoft, HP, and EMC. Three of those companies ship Linux, but Linux is not represented in the industry's planning at all.

One might be surprised by what's found inside contemporary drives. There is typically a multi-core ARM processor similar to those found in cellphones, and up to 1GB of RAM. That ARM processor is capable, but it still has a lot of dedicated hardware help; special circuitry handles error-correcting code generation and checking, protocol implementation, buffer management, sequential I/O detection, and more. Disk drives are small computers running fairly capable operating systems of their own.

The programming interface to disk drives has not really changed in almost two decades: drives still offer a randomly-accessible array of 512-byte blocks addressed by logical block address. The biggest problem on the software side is trying to move the heads as little as possible. The hardware has advanced greatly over these years, but it is still stuck with "an archaic programming architecture." That architecture is going to have to change in the coming years, though.
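
To make that interface concrete: from user space, a drive is nothing more than an array of fixed-size blocks read and written at computed byte offsets. Here is a minimal sketch; the device path and sector number are arbitrary examples, and reading a raw device requires root:

    /* Read one sector from a block device by logical block address. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define SECTOR_SIZE 512

    int main(void)
    {
        unsigned char buf[SECTOR_SIZE];
        off_t lba = 2048;                    /* arbitrary example sector */
        int fd = open("/dev/sda", O_RDONLY); /* illustrative device */

        if (fd < 0) { perror("open"); return 1; }
        /* The byte offset is simply the LBA times the sector size. */
        if (pread(fd, buf, SECTOR_SIZE, lba * SECTOR_SIZE) != SECTOR_SIZE) {
            perror("pread");
            return 1;
        }
        printf("read sector %lld\n", (long long)lba);
        close(fd);
        return 0;
    }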

The first significant change has been dubbed the "advanced format" - a fancy term for 4K sector drives. Christoph Hellwig asked for the opportunity to chat with the marketing person who came up with that name; the rest of us can only hope that the conversation will be held on a public list so we can all watch. The motivation behind the switch to 4K sectors is greater error-correcting code (ECC) efficiency. By using ECC to protect larger sectors, manufacturers can gain something like a 20% increase in capacity.
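
The arithmetic behind that gain can be sketched with toy numbers; the ECC and per-sector overhead figures below are invented round values, not vendor data, but they show why one big sector beats eight small ones:

    /* Compare format efficiency: eight 512-byte sectors each carry
     * their own ECC field, gap, and sync mark; one 4K sector needs
     * only one of each. All overhead sizes here are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        double ecc_512 = 40, ecc_4k = 100;  /* assumed ECC bytes */
        double gap_sync = 30;               /* assumed gap + sync bytes */
        double legacy = (8 * 512) / (8 * (512 + ecc_512 + gap_sync));
        double advanced = 4096 / (4096 + ecc_4k + gap_sync);

        printf("512-byte sectors: %4.1f%% of the track is data\n",
               100 * legacy);
        printf("4K sectors:       %4.1f%% of the track is data\n",
               100 * advanced);
        return 0;
    }

With these made-up overheads the model yields roughly a ten-percent improvement; the larger figure quoted in the talk presumably reflects the real, bigger per-sector costs.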

The developer who has taken the lead in making 4K-sector disks work with Linux is Martin Petersen; he complained that he has found it to be almost impossible to work on new technologies with the industry. Prototypes from manufacturers worked fine with Linux, but the first production drives to show up failed outright. Even with his "800-pound Oracle hat" on, he has a hard time getting a response to problems. "Welcome," Michael responded, "to the hard drive business." More seriously, he said that there needs to be a "Linux certified" program for hardware, which probably needs to be driven by Red Hat to be taken seriously in the industry. Others agreed with this idea, adding that, for this program to be truly effective, vendors like Dell and HP would have to start requiring Linux certification from their suppliers.

4K-sector drives bring a number of interesting challenges beyond the increased sector size. Windows XP systems will not properly align partitions by default, so some manufacturers have created drives with off-by-one alignment to compensate. Others have stuck with normal alignment, and it's not always easy to tell the two types of drive apart. Meanwhile, in response to requests from Microsoft and Dell, manufacturers are also starting to ship native 4K drives which do not emulate 512-byte sectors at all. So there is a wide variety of hardware to try to deal with. There is an evaluation kit available for developers who would like to work with the newer drives.
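
Detecting misalignment from the Linux side is straightforward: compare a partition's starting offset against the physical block size the drive reports, which is roughly what modern partitioning tools do. A sketch, using sysfs attributes available in recent kernels and an example device name:

    /* Check whether sda1 starts on a physical-sector boundary. */
    #include <stdio.h>

    static long read_long(const char *path)
    {
        long v = -1;
        FILE *f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%ld", &v) != 1)
                v = -1;
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        /* The partition start is reported in 512-byte units. */
        long start = read_long("/sys/block/sda/sda1/start");
        long phys  = read_long("/sys/block/sda/queue/physical_block_size");

        if (start < 0 || phys < 0) {
            fprintf(stderr, "could not read sysfs attributes\n");
            return 1;
        }
        printf("sda1 is %saligned for %ld-byte physical sectors\n",
               (start * 512) % phys ? "mis" : "", phys);
        return 0;
    }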

The next step would appear to be "hybrid drives" which combine rotating storage and flash in the same package. The first generation of these drives did not do very well in the market; evidently Windows took over control of the flash portion of the drive, defeating its original purpose, so no real performance benefit was seen. There is a second generation coming which may do better; they have more flash storage (anywhere from 8GB to 64GB) and do not allow the operating system to mess with it, so they should perform well.

Ted Ts'o expressed concerns that these drives may be optimized for filesystems like VFAT or NTFS; such optimizations tend not to work well when other filesystems are used. Michael replied that this is part of the bigger problem: Linux filesystems are not visible to the manufacturers. Given a reason to support ext4 or btrfs, the vendors would do so; it is, after all, relatively easy for the drive to look at the partition table and figure out what kinds of filesystem(s) it is dealing with. But the vendors have no idea of what demand may exist for which specific Linux filesystems, so support is not forthcoming.
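
That inspection really is simple, at least for MBR-partitioned disks: the type byte of each partition table entry gives a strong hint (0x83 for a native Linux partition, 0x07 for NTFS, and so on). A sketch of the host-side equivalent of what a drive could do internally, with an illustrative device path:

    /* Print the partition type bytes from a disk's MBR. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint8_t mbr[512];
        int fd = open("/dev/sda", O_RDONLY);

        if (fd < 0 || pread(fd, mbr, sizeof mbr, 0) != sizeof mbr) {
            perror("mbr");
            return 1;
        }
        close(fd);

        if (mbr[510] != 0x55 || mbr[511] != 0xaa) {
            fprintf(stderr, "no MBR signature\n");
            return 1;
        }
        /* Four 16-byte entries start at offset 446; the type
         * byte is the fifth byte of each entry. */
        for (int i = 0; i < 4; i++) {
            uint8_t type = mbr[446 + 16 * i + 4];
            if (type)
                printf("partition %d: type 0x%02x\n", i + 1, type);
        }
        return 0;
    }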

A little further in the future is "shingled magnetic recording" (SMR). This technology eliminates the normal guard space between adjacent tracks on the disk, yielding another 20% increase in capacity. Unfortunately, those guard tracks exist for a reason: they allow one track to be written without corrupting the adjacent track. So an SMR drive cannot just rewrite one track; it must rewrite all of the tracks in a shingled range. What that means, Michael said, is that large sequential writes "should have reasonable performance," while small, random writes could perform poorly indeed.
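
The performance asymmetry follows directly from that rewrite requirement: in one common formulation, a small update means rewriting every track from the target to the end of its shingled band. A toy calculation, with invented band and track sizes:

    /* Bytes moved to service one small write, as a function of where
     * in the shingled band the target track sits. The 64-track bands
     * and 1MB tracks are made-up round numbers. */
    #include <stdio.h>

    #define TRACKS_PER_BAND 64
    #define BYTES_PER_TRACK (1 << 20)

    static long long rmw_cost(int track_in_band)
    {
        /* Every track from the target to the band's end is rewritten. */
        return (long long)(TRACKS_PER_BAND - track_in_band) * BYTES_PER_TRACK;
    }

    int main(void)
    {
        printf("4KB write to a band's first track moves %lld bytes\n",
               rmw_cost(0));
        printf("4KB write to a band's last track moves  %lld bytes\n",
               rmw_cost(TRACKS_PER_BAND - 1));
        return 0;
    }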

The industry is still trying to figure out how to make SMR work well. One possibility would be to create separate shingled and non-shingled regions on the drive. All writes would initially go to a non-shingled region, then be rewritten into a shingled region in the background. That would necessitate the addition of a mapping table to find the real location of each block. That idea caused some concerns in the audience; how can I/O patterns be optimized if the connection between the logical block address and the location on the disk is gone?

The answer seems to be that, as the drive rewrites the data, it will put it into something resembling its natural order and defragment it. That whole process depends on the drive having enough idle time to do the rewriting work; it was said that most drives are idle over 90% of the time, so that should not be a problem. Cloud computing and virtualization might make that harder; their whole purpose is to maximize hardware utilization, after all. But the drive vendors seem to think that it will work out.
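
In miniature, such a scheme is little more than an indirection table sitting in front of the media. In the following sketch everything (sizes, layout, names) is invented for illustration, and the background rewriting is left out:

    /* Toy logical-to-physical remapping for an SMR drive that stages
     * incoming writes in a non-shingled region. */
    #include <stdint.h>
    #include <stdio.h>

    #define NBLOCKS 1024             /* logical blocks (toy size) */
    #define STAGE0  NBLOCKS          /* staging area follows main area */

    static uint32_t map[NBLOCKS];    /* logical -> physical block */
    static uint32_t next_stage = STAGE0;

    static void write_block(uint32_t lba)
    {
        /* Redirect the write to the staging area so no shingled band
         * needs rewriting in the I/O path; a background task would
         * later fold the block back and update the table. */
        map[lba] = next_stage++;
    }

    static uint32_t read_block(uint32_t lba)
    {
        return map[lba];             /* reads chase the mapping */
    }

    int main(void)
    {
        for (uint32_t i = 0; i < NBLOCKS; i++)
            map[i] = i;              /* start identity-mapped */

        write_block(7);
        printf("logical block 7 now lives at physical block %u\n",
               read_block(7));
        return 0;
    }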

Michael presented four different options for the programming interface to SMR drives. The first was traditional hard drive emulation with remapping as described above; such drives will work with all systems, but they may have performance problems. Another possibility is "large block SMR": a drive which does all transfers in large blocks - 32MB at a time, for example. Such drives would not be suitable for all purposes, but they might work well in digital video recorders or backup applications. Option three is "emulation with hints," allowing the operating system to indicate which blocks should be stored together on the physical media. Finally, there is the full object storage approach where the drive knows about logical objects (files) and tries to store them contiguously.

How well will these drives work with Linux? It is hard to say; there is currently no Linux representation on the SMR technical committee. These drives are headed for market in 2012 or 2013, so now is the time to try to influence their development. The committee is said to be relatively open, with open mailing lists, no non-disclosure agreements, and no oppressive patent-licensing requirements, so it shouldn't be hard for Linux developers to participate.

Beyond SMR, there is the concept of non-volatile RAM (NV-RAM). An NV-RAM device is an array of traditional dynamic RAM combined with an equally-sized flash array and a board full of capacitors. It operates as normal RAM but, when the power fails, the charge in the capacitors is used to copy the RAM data over to flash; that data is restored when the power comes back. High-end storage systems have used NV-RAM for a while, but it is now being turned into a commodity product aimed at the larger market.

NV-RAM devices currently come in three forms. The first looks like a traditional disk drive, the second is a PCI-Express card with a special block driver, and the third is "NV-DIMM," which goes directly onto the system's memory bus. NV-DIMM has a lot of potential, but is also the hardest to support; it requires, for example, a BIOS which understands the device, does not trash its contents with a power-on memory test, and does not interleave cache lines across NV-DIMM devices and regular memory. So it is not something which can just be dropped into any system.
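
The cache-line point matters because, once storage sits on the memory bus, durability becomes a cache-management problem: a store is only persistent after it has been flushed out of the CPU caches. A sketch of the x86 flush-and-fence idiom follows; the /dev/mem mapping and the base address are purely hypothetical stand-ins for whatever platform support eventually emerges:

    /* Make a range of (hypothetically NV-DIMM-backed) memory durable. */
    #include <emmintrin.h>           /* _mm_clflush, _mm_sfence (SSE2) */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define CACHELINE 64

    static void persist(const void *addr, size_t len)
    {
        const char *p = addr;
        for (size_t off = 0; off < len; off += CACHELINE)
            _mm_clflush(p + off);    /* push each line out of the cache */
        _mm_sfence();                /* order the flushes before returning */
    }

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR);
        if (fd < 0)
            return 1;
        /* Assumed physical base address for the NV-DIMM region. */
        char *nv = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0x100000000ULL);
        if (nv == MAP_FAILED)
            return 1;

        strcpy(nv, "survives power loss");
        persist(nv, strlen(nv) + 1); /* now safe against power failure */

        munmap(nv, 4096);
        close(fd);
        return 0;
    }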

Looking further ahead, true non-volatile memory is coming around 2015. How will we interface to it, Michael asked, and how will we ensure that the architecture is right? Dell and Microsoft asked for native 4K-sector drives and got them. What, he asked, does the Linux community want? He recommended that the kernel community form a five-person committee to talk to the hard disk drive industry. There should also be a list of developers who should get hardware samples. And, importantly, we should have a well-formed opinion of what we want. Given those, the industry might just start listening to the Linux community; that could only be a good thing.



Future storage technologies and Linux

Posted Apr 6, 2011 2:28 UTC (Wed) by neilbrown (subscriber, #359) [Link]

> What, he asked, does the Linux community want?

That's easy. We want well documented access to the low level device features. Any address-translation-layer should be optional at most.

The hybrid drive that is part SMR and part non-shingled sounds like a neat idea. But if I had one I would want to be able to address the two parts directly, and know how much and at what alignment I need to write to the SMR section.

Then we can do whatever mapping is needed in Linux, maybe in the filesystem, maybe in a virtual block device, maybe both. And different people can try out different approaches.

i.e. we want freedom (is that a surprise?).

Future storage technologies and Linux

Posted Apr 6, 2011 13:51 UTC (Wed) by njwhite (subscriber, #51848) [Link]

> We want well documented access to the low level device features.
> Any address-translation-layer should be optional at most.

Indeed. To rely entirely on weird vendor firmware to decide what to do with my data is to be locked in to the vendor for updates. Even very good firmware can't predict new situations which will come up. And the freedom to try new ideas and models is a great thing.

Also, to put on my paranoid hat for a moment, not having a large translation layer also reduces the surface for malicious stuff.

Future storage technologies and Linux

Posted Apr 7, 2011 6:15 UTC (Thu) by JoeBuck (subscriber, #2330) [Link]

But if you write your Linux driver to bypass the processor on your high-end disk, you're almost guaranteed to have a lower-performance system than the Windows competition, because you're forcing the main CPU to do all the work.

Now, if there were a way to write free firmware to run on that high-end dedicated ARM processor with 1GB memory, that would be cool. So maybe the way to go is to start by assuming that you'll do that, then figure out what the interface should be between the host and the dedicated disk processor. Then both the FLOSS people and the proprietary people could program to that interface.

Future storage technologies and Linux

Posted Apr 7, 2011 21:19 UTC (Thu) by jhhaller (subscriber, #56103) [Link]

One of the reasons the drive firmware is faster is that on startup, the drive can start figuring out where all the data is before the OS boots. The advantage is bigger for solid state drives, where the controller doesn't have to wait for the drive to spin up. If the OS controlled everything, it couldn't start reading that data until the OS got to the point of reading the data. Plus, the "native" drives couldn't be used as a boot drive, as the BIOS wouldn't know how to read it.

Future storage technologies and Linux

Posted Apr 7, 2011 19:14 UTC (Thu) by eds (guest, #69511) [Link]

Every SSD in the world does far worse: data gets placed in some physical location that the host will never know, gets relocated periodically as part of wear-leveling and reclamation of partially-used flash blocks, and perhaps due to eventual ECC degradation.

Shingled HDD media, hybrid storage, and solid-state drives based on unreliable media will all force you into a situation where you will be unable to know the physical layout of your data, and you'll be at the mercy of the device firmware.

Learn to love the bomb?

Future storage technologies and Linux

Posted Apr 7, 2011 20:36 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

umm, 'normal' hard drives have been transparently relocating data for the last 15 years or so, this is nothing new.

Future storage technologies and Linux

Posted Apr 8, 2011 21:28 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

umm, 'normal' hard drives have been transparently relocating data for the last 15 years or so

Hardly ever. If you assume block N is always next to N+1, your performance won't be noticeably different from optimum.

I'm lost in this thread, though. In response to a comment giving a few reasons that it's better for Linux to control a drive than for a controller bundled with the drive to do it, eds writes, "Every SSD in the world does far worse." Far worse than what?

I thought this might be a misplaced comment meant to respond to the issue the article raises of these new disk drives having unpredictable layout. But the unpredictable layout of SSD blocks isn't the same problem: on a disk drive, unlike an SSD, it matters a lot for speed in what physical order you access the blocks.

Future storage technologies and Linux

Posted Apr 6, 2011 18:38 UTC (Wed) by cmccabe (subscriber, #60281) [Link]

> That's easy. We want well documented access to the low level device
> features. Any address-translation-layer should be optional at most.

Drive manufacturers could ship drives with a dual interface. The first interface would be a Windows-centric SATA interface, with a block remapping layer and the NTFS and FAT-specific hacks. The second would be a raw interface that let the OS access erase blocks (for flash) or different kinds of storage (for SMR).

The second interface would be popular with people using Linux servers (or desktops), companies building NAS devices, and people writing drive diagnostic or defragmentation tools. Those markets aren't as big as the Windows desktop market, but they do exist.

It would be nice to see Red Hat, or Oracle, or some other company with the needed resources engage with the drive manufacturers about this. Even Apple has a dog in this fight, since they would benefit from such an interface. If nobody comes forward, I think we're in danger of going down a WinModem-like path where every SSD or next-generation drive will be optimized for NTFS and Win7, and other filesystems will perform poorly.

Future storage technologies and Linux

Posted Apr 6, 2011 3:09 UTC (Wed) by drdabbles (subscriber, #48755) [Link]

Very nice write-up. I saw a video not too long ago about coming developments in storage devices that would require a significant but possible re-working of the VFS and perhaps the block layer of the Linux kernel. And it looks like, from this talk, that may _have_ to happen in chunks as time progresses.

Object storage is interesting, but difficult without vendor buy-in of your platform. I know a few object storage systems work by emulating a file system, but actually using a translation mechanism between the actual on-device file system and the emulated file system. One such example would be the Data Robotics Drobo devices. I'd also be wary of future patent troubles in the object storage arena, since lots of vendors seem to do it behind the scenes.

It would be fantastic to see representatives of Linux organizations working closely with hardware manufacturers! If Linux environments got to shape future devices rather than just be compatible with them, I think we'd start to see a significant paradigm shift among other hardware vendors. And since Linux has a solid foothold in the enterprise arena, it's as good a place as any to start!

Future storage technologies and Linux

Posted Apr 6, 2011 6:24 UTC (Wed) by butlerm (subscriber, #13312) [Link]

It is not just random writes - without multiple track read heads, random reads will require three to five rotations, setting latency back perhaps thirty years. See here (pdf).

Without a persistent non-volatile cache and extensive operating system support, it is hard to see how such drives could perform well even for general purpose desktop usage. For database and general purpose server applications, they would probably be out of the question unless the operating system could do far more than give the drive "hints" about what should be stored in low read latency and high read latency zones. However, for applications like DVRs and nearline storage, 5-10x the capacity seems like more than a worthwhile trade off for substantially increased access times.

5-10 times?

Posted Apr 6, 2011 9:44 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

Is this 5-10x the same as the 20% increase in the article? Or are they unrelated?

5-10 times?

Posted Apr 6, 2011 15:32 UTC (Wed) by butlerm (subscriber, #13312) [Link]

The first 20% figure in the article refers to the gain from using 4K sectors. The second one I don't know about, but I assume it refers to the gain from using some form of first generation SMR, as opposed to the long term gains possible.

Of course I can't imagine why anyone would actually purchase (or market) an SMR drive with only a 20% capacity increase anytime soon given all the other problems.

TDMR vs SMR

Posted Apr 6, 2011 23:52 UTC (Wed) by akanaber (subscriber, #23265) [Link]

The multiple passes (or multiple track heads) issue is with Two-Dimensional Magnetic Recording, which seems to be the next thing they want to do after "ordinary" SMR. The presentation goes on to admit "We really need that multi-element reader!" (page 52).

That was an interesting link, thanks.

TDMR vs SMR

Posted Apr 8, 2011 22:27 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

The multiple passes (or multiple track heads) issue is with Two-Dimensional Magnetic Recording [TDMR], which seems to be the next thing they want to do after "ordinary" SMR.

I think the point is that SMR isn't much use without TDMR because inter-track interference otherwise prevents you from realizing most of the density improvement that SMR is capable of giving. So it's right to lump them together and say the implications of TDMR are the implications of SMR.

Future storage technologies and Linux

Posted Apr 6, 2011 11:51 UTC (Wed) by NAR (subscriber, #1313) [Link]

relatively easy for the drive to look at the partition table and figure out what kinds of filesystem(s) it is dealing with

Is it? I mean the drive is manufactured in 2012 and even if it can look at the partition table and see that there's btrfs in there, when the drive is actually used in 2017, the btrfs of 2017 could behave significantly differently from the btrfs of 2012...

Future storage technologies and Linux

Posted Apr 6, 2011 14:14 UTC (Wed) by clugstj (subscriber, #4020) [Link]

This is easily the worst idea I've seen lately. I realize that the drive manufacturers are stuck interfacing w/ Microsoft, but guessing about how to handle the data is really a bad idea.

Future storage technologies and Linux

Posted Apr 6, 2011 15:48 UTC (Wed) by foom (subscriber, #14868) [Link]

SSDs do that already for NTFS... The drive will wait until the disk is idle for a while, then parse the partition table, and then the actual filesystem on the partition itself, to find out which blocks are no longer pointed to by any files. And then, it will erase those blocks. That seems like a really really bad idea to me, but then, nobody asked me.

All this, just so that the OS doesn't need to support TRIM...

Future storage technologies and Linux

Posted Apr 7, 2011 19:18 UTC (Thu) by eds (guest, #69511) [Link]

Can you link to some place with evidence of this? (Drives parsing filesystems to auto-trim blocks)

Future storage technologies and Linux

Posted Apr 8, 2011 1:30 UTC (Fri) by foom (subscriber, #14868) [Link]

It was from a paper exploring the data forensics ramifications. This one, I think:
http://www.jdfsl.org/subscriptions/JDFSL-V5N3-Bell.pdf

Quick google search also finds:
"Samsung controllers with recent firmware take an entirely different approach. Without any interaction from the OS, the Samsung firmware is able to 'read' the drive data in the background. When the drive has been idle for a period of time, the firmware reads any NTFS volume bitmaps present and automatically frees up flash blocks not allocated by that partition."

http://www.pcper.com/comments.php?nid=7488

Future storage technologies and Linux

Posted Apr 6, 2011 16:43 UTC (Wed) by iabervon (subscriber, #722) [Link]

Easy: the host OS installs the ARM build of btrfs-tools on the drive's userspace at boot time. That is, have some optional, per-partition firmware with a stable, standardized ABI that filesystem authors can write; without the firmware, the drive doesn't try to do anything clever specific to the filesystem. The filesystem-specific bits could probably even take care of reordering writes, since they could have access to both the physical locations of all of the sectors and the ordering requirements for filesystem consistency. Many things suddenly get easier if you can count on your device having CPU cores with a standard ISA, so that you can run some software on it instead of on the other side of the ATA bus.

Future storage technologies and Linux

Posted Apr 6, 2011 20:12 UTC (Wed) by drdabbles (subscriber, #48755) [Link]

When you say "easy", I think you're saying the concept is easy. The actual process of getting this to work is far from easy. Settling on an ABI and making sure it's future-proof for a hard drive that could live 10+ years in service isn't an easy thing!

What's worse, when you look at it from a manufacturer's point of view, is allowing custom firmware to be loaded to your drive. Manufacturers publish all kinds of error rates, defect rates, and other reliability metrics so enterprises can gauge how a drive will perform for a period of time. If you throw extra software into the mix, there is NO level of predictability! What happens if you aggressively park heads too often, and they mis-align or do damage? What happens if you perform complex functions for a long period of time and cause thermal issues with the CPU? These problems go on and on.

The repercussions of allowing custom firmware onto devices go far beyond lifespan issues, too! Imagine if your hard drive's firmware crashed as much as [insert crashy OS you don't like here]! Data loss and/or corruption is a massive problem for storage devices when a manufacturer makes a mistake in firmware (Seagate has done this a couple times lately). It turns out, people expect hard drives to Just Work (tm).

I think the better alternative to creating on board software loads that do creative things is to work with manufacturers to build a generally well-rounded interface for all operating systems, and to leave the creativity up to the block and VFS layers.

One creative thing that could be done is to use the SSD portion of a hybrid drive as a large journal area, and use the filesystem's ability to journal data as well as metadata. Delay a write briefly to this area, making it a sequential write and thereby speeding it up. When the filesystem flushes the journal, the data is committed to rotational media in the background. Obviously, the longer you delay a write, the more chance there is for data to be lost, so it's not perfect but it's a spit-balled idea.

Future storage technologies and Linux

Posted Apr 6, 2011 21:50 UTC (Wed) by iabervon (subscriber, #722) [Link]

If the firmware loaded to the drive by some OS corrupted data or parked the heads frequently, it wouldn't be all that different from the OS corrupting data or parking the heads itself, particularly since the firmware would only be present while that particular OS was running. I'd much rather trust that my OS implemented the filesystem right twice than trust that both my OS and my hard drive manufacturer implemented the filesystem right. Of course, it would only run the OS's firmware in the drive processor's userspace, and the drive's kernel (and ASIC portions) would take care of the important stuff.

Future storage technologies and Linux

Posted Apr 6, 2011 15:49 UTC (Wed) by Darkmere (subscriber, #53695) [Link]

I'm looking at the hybrid drives and starting to wonder. If for example metadata/inode tables and such were on the flash part, and "Linear" data on the magnetic part.

Then there's the metadata-journal, as well as maybe data journals, acting as an intermediate cache in order to allow both data-realignment and "online defrag" on the metal.

And of course, OS-hinted prefetch.

Future storage technologies and Linux

Posted Apr 6, 2011 16:42 UTC (Wed) by southey (subscriber, #9466) [Link]

Thirteen years ago, while working for a hardware manufacturer, Michael had a hard time finding the right person to talk to in the Linux community to get support for his company's hardware. Years later, that problem still exists

One would have thought that they would have actually stepped up and paid someone to do that some time ago!

Future storage technologies and Linux

Posted Apr 7, 2011 13:22 UTC (Thu) by wookey (subscriber, #5501) [Link]

Does IDEMA have any membership from the NAND flash device manufacturers (SD/MMC etc)? On the whole the disk drive people have done a reasonable job of not letting their firmware/hardware get in the way, and exposing the hardware usefully, but the flash card people have done a hopeless job of just letting us use the flash in a reliable and appropriate way. Linux has known how to deal with NAND for a long time, but the block interface presented by SD and the like doesn't let us do it properly, and the secret sauce inside routinely does a terrible job with anything that isn't FAT.

We really badly need to be able to talk to those people and get more suitable code in their tiddly little controller chips (also probably ARM), or at least get them to tell us what they are doing so we can deal with their 'optimisations'. Preferably we'd put our own free software on them, of course, but one thing at a time.

The merging of flash and rotating storage may help with this process, or it may make it worse if we don't get to stick our oar in.

I've long wondered who to hassle about this, and still have no idea.

Future storage technologies and Linux

Posted Apr 7, 2011 16:12 UTC (Thu) by ernest (subscriber, #2355) [Link]

Good grief! Why do I buy an expensive NAS containing an ARM processor and 128 MB ram to control a hard disk which itself contains a perfectly functioning ARM with a gig of ram !!

Just add a network controller to a hard disk (easier than adding a PCI controller I would assume) and I could buy a NAS for the price of hard disk! (dead cheap)

Ernest.

Future storage technologies and Linux

Posted Apr 8, 2011 22:48 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

Good grief! Why do I buy an expensive NAS containing an ARM processor and 128 MB ram to control a hard disk which itself contains a perfectly functioning ARM with a gig of ram !!

My guess is that the disk drive uses all of its CPU and memory -- it's not normal to build more capacity into an embedded device than it can use. So to add NAS functions to it and end up with the same NAS capacity, you'd need to add the extra processor and 128M of memory, plus everything else which is now in the NAS unit.

Want open source? Publish open documents!

Posted Apr 7, 2011 16:17 UTC (Thu) by roman (subscriber, #24157) [Link]

If IDEMA would like participation from the free software community, they might try making their specifications publicly available at no charge. $300 for an individual membership in order to get access is a substantial disincentive for noncommercial developers (for example, college students like Linus was when he started writing the kernel).

Future storage technologies and Linux

Posted Apr 7, 2011 17:03 UTC (Thu) by pr1268 (subscriber, #24648) [Link]

> While some may say that disk drives - rotating storage - are on their way out, the fact of the matter
> is that the industry shipped a record 650 million drives last year and is on track to ship one billion drives in 2015.

Doesn't the concept of a "drive" include any persistent storage medium, including SSDs? If that's the case, then certainly it's understandable that "drive" shipments would be on the increase. I'm just unsure about the increase in spinning drive sales.

I'll openly admit to being a little unclear as to what the definition of a disk drive is in this day and age. Assuming any storage device which presents itself to the underlying OS as a block (or character) device (i.e., /dev/hd? or /dev/sd? in Linux) is a drive, then perhaps even the MicroSD card in my BlackBerry qualifies as a "drive".

Future storage technologies and Linux

Posted Apr 8, 2011 23:10 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

While some may say that disk drives - rotating storage - are on their way out, the fact of the matter is that the industry shipped a record 650 million drives last year and is on track to ship one billion drives in 2015.

Doesn't the concept of a "drive" include any persistent storage medium, including SSDs?

In this case, it is extremely clear, to me, that "drive" is an abbreviated reference back to "disk drives - rotating storage" in the previous sentence.

Exactly what a "disk drive" is depends on the level at which you look. At one level, a disk drive is a volume of storage, so e.g. a ramfs filesystem shows up in a desktop GUI as a "drive." At a lower level, it's something that behaves like a block device, so the ramfs filesystem doesn't make the cut, but soldered-on flash memory with a block driver does. Going lower, it's something with an ATA or SCSI interface, so the soldered-on flash isn't a disk drive, but a logical volume on a RAID array is. At another level, a disk drive is something you plug into a disk drive slot, so a logical RAID volume isn't, but a 2.5" Seagate SATA SSD is. And finally, if you're a disk drive developer, a disk drive is something with rotating disks in it.

I don't know Blackberry, but my pocket computer has no disk drive legacy (never uses the word "drive"), so the SD card in it is not a disk drive, but if I put the same card into my Ubuntu netbook, it could be one.

The "D" is SSD was originally "disk" and SSD was intended to be the oxymoron it is, to make the point that its a disk drive for every purpose except actually being one. Now, people often say "device" or "drive" to avoid the contradiction, but it doesn't work, because a storage device that doesn't emulate a disk drive -- at some level -- isn't an SSD. And by the way, there isn't anything to "drive" in an SSD, so that's still an oxymoron.

Future storage technologies and Linux

Posted Apr 11, 2011 18:22 UTC (Mon) by jmorris42 (subscriber, #2203) [Link]

> I'm just unsure about the increase in spinning drive sales.

Believe it. Most of the increase is almost certainly embedded in DVRs. Every cable company and sat operator pushes them now. They get a small monthly fee for the feature. Since the cable box already knows how to demodulate digital cable from the wire/dish and decode MPEG2/4 and offer a complex menu and guide data, the only additional hardware is the hard drive. So they are renting you a $50 (quantity 1, from your choice of online vendor) product for $10/mo. How can they lose?

Future storage technologies and Linux

Posted Apr 15, 2011 23:40 UTC (Fri) by guest (guest, #2027) [Link]

What I would like to see are hard drives with independent read- and write heads so that you can read one track and write to a different one at the same time.

Future storage technologies and Linux

Posted Apr 17, 2011 20:01 UTC (Sun) by landley (guest, #6789) [Link]

Is it just me or does "large block SMR" sound a whole lot like flash? As in the erase block size is enormous and you want a log structured filesystem, of which we now have bunches?

A 32 meg block size for 64 gig flash sounds pretty reasonable, actually...

Future storage technologies and Linux

Posted Apr 19, 2011 19:54 UTC (Tue) by geek (guest, #45074) [Link]

"Disk drives are small computers running fairly capable operating systems of their own." - is it Linux?
