LogFS merged into the mainline kernel
Posted Mar 9, 2010 1:55 UTC (Tue) by leemgs (guest, #24528) [Link]
       
     
Posted Mar 9, 2010 16:58 UTC (Tue) by blitzkrieg3 (guest, #57873) [Link] (20 responses)
       
     
    
Posted Mar 9, 2010 17:45 UTC (Tue) by joern (guest, #22392) [Link] (19 responses)
       
One difference may be that NILFS had a little more corporate funding and more than one person working on it.  Also I wasn't aware of Valerie's law (a filesystem takes five years) when I started.  This is one of those moments when I hate her for being right. 
And was NILFS2 really designed to run on flash?  I was not aware of that. 
     
    
Posted Mar 9, 2010 18:15 UTC (Tue) by blitzkrieg3 (guest, #57873) [Link] (18 responses)
       
As for the differences, I was mostly hoping for a comparison of architecture and features.
Thanks for your work, and for finally getting it merged. :) The more SSD filesystems the better.
     
    
Posted Mar 9, 2010 18:27 UTC (Tue) by dwmw2 (subscriber, #2063) [Link] (15 responses)
       
LogFS is designed to work on real flash, directly. Flash != SSD.
      
           
     
    
Posted Mar 9, 2010 18:49 UTC (Tue) by blitzkrieg3 (guest, #57873) [Link]
       
     
Posted Mar 9, 2010 19:36 UTC (Tue) by nix (subscriber, #2304) [Link] (12 responses)
       
 
     
    
Posted Mar 11, 2010 4:02 UTC (Thu) by pj (subscriber, #4506) [Link]

[...] us will ever see. Alas.
     
Posted Mar 11, 2010 9:06 UTC (Thu) by joern (guest, #22392) [Link] (10 responses)
       
In a way I'm doing the same.  The SSD you can buy at your local electronics store has a SATA interface and is hiding its flashiness behind a translation layer.  It is trying to mimic a hard disk.  Between your operating system and the actual medium is a fairly large number of bridge chips, translating from CPU bus to PCI or PCIe, to SATA, to some internal SSD bus and to the actual NAND interface.  Each step adds latency. 
The SSD you should be able to buy simply has fewer steps in between.  Rip out the translation layer.  Skip SATA and attach to PCIe.  Skip PCIe and attach to a hypertransport socket on your mainboard.  Simply move it as close to the CPU and RAM as possible. 
Support for block devices came as an afterthought, because I too have to live in the present and it is a lot harder and more expensive to create my own SSD than to change my filesystem.  But that doesn't mean I have to like the current situation.  Nor should you. ;)
     
    
Posted Mar 11, 2010 13:21 UTC (Thu) by nix (subscriber, #2304) [Link]
       
But still, this is a really cool project that I can't use sensibly. I hope you're right and that this should really be written 'can't use sensibly *yet*'!
     
Posted Mar 11, 2010 14:49 UTC (Thu) by tack (guest, #12542) [Link] (5 responses)
       
     
    
Posted Mar 11, 2010 15:08 UTC (Thu) by dwmw2 (subscriber, #2063) [Link] (4 responses)

If we can get docs on how to talk to the card, then yes. I believe that all their silly translation layer stuff to make it pretend to be spinning rust is done in software.
     
    
Posted Mar 11, 2010 17:48 UTC (Thu) by dlang (guest, #313) [Link] (3 responses)
       
I really doubt that it's all done in software. At the speeds they are working at, I would expect it to be in firmware, if not in a custom ASIC or FPGA.
     
    
Posted Mar 19, 2010 11:21 UTC (Fri) by dwmw2 (subscriber, #2063) [Link] (2 responses)
       
     
    
Posted Mar 19, 2010 12:42 UTC (Fri) by dlang (guest, #313) [Link] (1 response)

I don't know about docs, but the driver code itself is available.
     
    
Posted Mar 19, 2010 12:45 UTC (Fri) by dwmw2 (subscriber, #2063) [Link]

"I don't know about docs, but the driver code itself is available."

For the FusionIO boards? Where?
     
Posted Mar 11, 2010 18:11 UTC (Thu) by linusw (subscriber, #40300) [Link] (1 response)
       
If we instead twist the question and ask: suppose we can have raw NAND access to the flash, what kind of hardware accelerators would LogFS love to see in order to do its job properly? Is there some loop in your code that you would just love to hand over to a piece of hardware, say "here is buffer A, here is buffer B, here are parameters X, Y, Z, do the stuff (even DMA it from source to destination etc.) and then IRQ me when you're done"?
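To make that concrete, here is a minimal C sketch (simulated in userspace) of the kind of descriptor-based offload being asked for; flash_desc, flash_submit and all other names here are invented for illustration, not an existing kernel API:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical descriptor: "here is buffer A, here is buffer B,
     * here are parameters X, Y, Z, do the stuff and IRQ me when done." */
    struct flash_desc {
        const uint8_t *src;     /* buffer A: data to program */
        uint8_t       *oob;     /* buffer B: out-of-band/ECC area */
        uint32_t       page;    /* parameter X: target flash page */
        uint32_t       len;     /* parameter Y: bytes to transfer */
        uint32_t       flags;   /* parameter Z: e.g. erase-first */
        void         (*complete)(struct flash_desc *d, int status);
    };

    /* Real hardware would start a DMA engine here and return at once;
     * the completion callback would run from the IRQ handler.  This
     * simulation just completes synchronously. */
    static void flash_submit(struct flash_desc *d)
    {
        d->complete(d, 0);      /* simulated "IRQ": status 0 = success */
    }

    static void on_done(struct flash_desc *d, int status)
    {
        printf("page %u done, status %d\n", (unsigned)d->page, status);
    }

    int main(void)
    {
        static const uint8_t data[4096];
        static uint8_t oob[128];
        struct flash_desc d = {
            .src = data, .oob = oob, .page = 42,
            .len = sizeof(data), .flags = 0, .complete = on_done,
        };
        flash_submit(&d);       /* fire and, on real hardware, forget */
        return 0;
    }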
     
    
Posted Mar 12, 2010 14:27 UTC (Fri) by joern (guest, #22392) [Link]
       
If you have done all the above, you are in excellent shape.  A few additional bells and whistles are possible, but nothing really fundamental. 
     
Posted Mar 12, 2010 16:16 UTC (Fri) by giraffedata (guest, #1954) [Link]

"The SSD you should be able to buy simply has fewer steps in between. Rip out the translation layer. Skip SATA and attach to PCIe. Skip PCIe and attach to a hypertransport socket on your mainboard."
So stop confusing people by referring to this as an SSD.  This thing you should be able to buy is not an SSD, it's an alternative to an SSD.  SSD means it emulates a disk drive.  It's a very important category of storage device because you can use it with existing non-flash-aware system components.  As we all know, there is a price to pay for that.
      
           
     
Posted Mar 9, 2010 19:52 UTC (Tue) by joern (guest, #22392) [Link]
       
One example involves TRIM.  Lacking TRIM, one way to tell the SSD that certain blocks are unused is simply never to use them.  So a filesystem should, for example, leave all free space at the end while reusing the front again and again.  However, on another breed of SSD with less useful wear leveling, this behaviour will wear out the front while the end never gets used at all.
But overall, LogFS will also work on SSDs.
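As an illustration of that allocate-from-the-front policy, here is a toy C sketch; the names and the bitmap are invented, nothing here comes from the LogFS sources:

    #include <stdbool.h>
    #include <stdio.h>

    #define NSEGS 8

    static bool seg_used[NSEGS];    /* invented free-segment map */

    /* Always reuse the lowest-numbered free segment.  Under a sane
     * FTL the high-numbered segments are then never written, which
     * marks them as unused even without TRIM support. */
    static int alloc_segment(void)
    {
        for (int i = 0; i < NSEGS; i++) {
            if (!seg_used[i]) {
                seg_used[i] = true;
                return i;
            }
        }
        return -1;                  /* medium full */
    }

    int main(void)
    {
        seg_used[0] = seg_used[2] = true;   /* segments already in use */
        printf("next segment: %d\n", alloc_segment());  /* prints 1 */
        return 0;
    }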
     
Posted Mar 9, 2010 20:25 UTC (Tue) by joern (guest, #22392) [Link] (1 response)
       
LogFS supports raw flash, which requires a number of tricks that NILFS lacks: wear leveling, journal relocation, erase-before-write, that sort of thing.
Used/free space tracking on a naive LFS requires a write, which changes used/free space and would require yet another write, ad infinitum.  NILFS solves this problem by pre-calculating the state after the write.  LogFS solves it by writing the old state and caching small updates in the journal.  I believe UBIFS solves it by storing the state in a separate UBI volume.
Surely there are more differences.  I just don't know NILFS well enough to speak on its behalf.  So beware of any mistakes in the above. :) 
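To illustrate the journal trick described above, here is a toy C sketch; the layout and all names are invented for illustration and are not the actual LogFS on-disk format:

    #include <stdio.h>

    #define NSEGS 4

    /* State as written to the medium at the last journal commit. */
    static int committed_free[NSEGS] = { 100, 100, 100, 100 };

    /* Small cached updates living in the journal, not yet folded in. */
    static int journal_delta[NSEGS];

    /* A data write only adjusts the cached delta, so tracking it does
     * not itself trigger another free-space write -- which is what
     * breaks the "every write needs another write" regress. */
    static void note_write(int seg, int blocks)
    {
        journal_delta[seg] -= blocks;
    }

    /* Current view = old committed state + cached journal updates. */
    static int free_blocks(int seg)
    {
        return committed_free[seg] + journal_delta[seg];
    }

    int main(void)
    {
        note_write(2, 10);
        note_write(2, 5);
        printf("segment 2 free blocks: %d\n", free_blocks(2));  /* 85 */
        return 0;
    }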
     
    
Posted Mar 9, 2010 22:41 UTC (Tue) by blitzkrieg3 (guest, #57873) [Link]
       
     
Posted Mar 11, 2010 14:06 UTC (Thu) by NRArnot (subscriber, #3033) [Link] (11 responses)
       
In particular, I imagine that one could make a pretty standard Linux system boot blisteringly fast if it had its /usr on a few GB of (cheap!?) raw flash.
 
     
    
Posted Mar 11, 2010 14:52 UTC (Thu) by gkarabin (guest, #16189) [Link] (10 responses)
       
In any case, you buy them through distribution channels, usually in bulk quantities.  Digikey is an example of a supplier that works in lower quantities.  Here's an example part: 
http://search.digikey.com/scripts/DkSearch/dksus.dll?Deta... 
 
 
 
     
    
Posted Mar 11, 2010 22:32 UTC (Thu) by NRArnot (subscriber, #3033) [Link] (9 responses)
       
What did the LogFS developers develop on? 
     
    
Posted Mar 12, 2010 5:37 UTC (Fri) by Necronom (guest, #22645) [Link] (5 responses)
       
You'd have to write a driver, but there is Intel Turbo Memory.  
     
    
Posted Mar 12, 2010 12:25 UTC (Fri) by rvfh (guest, #31018) [Link] (4 responses)
       
Why did nobody get interested in using that? It would be cool to stick / on it!
     
    
Posted Mar 12, 2010 14:47 UTC (Fri) by joern (guest, #22392) [Link]
       
     
Posted Mar 12, 2010 17:49 UTC (Fri) by etienne_lorrain@yahoo.fr (guest, #38022) [Link] (2 responses)
       
     
    
Posted Mar 15, 2010 15:41 UTC (Mon) by fragmede (guest, #50925) [Link] (1 response)
       
    04:00.0 Memory controller [0580]: Intel Corporation Turbo Memory Controller [8086:444e] (rev 11)
        Subsystem: Intel Corporation Device [8086:444b]
        Flags: bus master, fast devsel, latency 0, IRQ 11
        Memory at f2600000 (32-bit, non-prefetchable) [size=1K]
        I/O ports at 2000 [size=256]
        [virtual] Expansion ROM at 80000000 [disabled] [size=64K]
        Capabilities: [48] Power Management version 3
        Capabilities: [50] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable-
        Capabilities: [68] Express Legacy Endpoint, MSI 01
(from http://lkml.indiana.edu/hypermail/linux/kernel/0812.0/018...)
     
    
Posted Mar 16, 2010 9:36 UTC (Tue) by etienne_lorrain@yahoo.fr (guest, #38022) [Link]
       
     
Posted Mar 12, 2010 14:43 UTC (Fri) by joern (guest, #22392) [Link] (2 responses)
       
That being said, if a large number of people say that they would buy a raw flash device for, say, 20% more than a SATA SSD, it might help persuade potential manufacturers.  Not to mention, I am rather curious how many people share my preferences. 
What did the LogFS developers develop on?  Well, a large number of devices: a real kernel running on hardware, a virtualized kernel running in QEMU or KVM, or a standalone port of the Linux VFS to userspace.  Devices range from regular files and virtualized flash through OLPC prototypes and similar embedded boards to standard PCs with crap SSDs and hard disks.  At least that covers the ones I can talk about in public. ;)
     
    
Posted Mar 21, 2010 19:43 UTC (Sun) by robert_s (subscriber, #42402) [Link] (1 response)
       
In fact, it may even be the simplest possible 'hello world' PCIe device to design. An entry level PCIe FPGA development card and some NANDs soldered onto a prototype board would get things going. It really isn't beyond "amateur" electronic engineers' capabilities, so there must not be much interest in the idea. 
It's certainly easier than creating an open graphics card. 
     
    
Posted Mar 24, 2010 16:14 UTC (Wed) by joern (guest, #22392) [Link]
       
     