
Red Hat acquires Inktank

Red Hat has announced that it has signed a deal to acquire Inktank, the company formed around the Ceph distributed storage system, for $175 million. "Combined with Red Hat's existing GlusterFS-based storage offering, the addition of Inktank positions Red Hat as the leading provider of open software-defined storage across object, block and file system storage."


Red Hat acquires Inktank

Posted Apr 30, 2014 14:51 UTC (Wed) by nix (subscriber, #2304) [Link] (7 responses)

Seems a bit overlapping to me, but maybe I'm missing something.

I hope we don't lose one of those filesystems in the merger.

Red Hat acquires Inktank

Posted Apr 30, 2014 15:28 UTC (Wed) by drag (guest, #31333) [Link] (5 responses)

The way I look at it they are complementary. Similar to how Amazon offers both EBS and S3.

A large-scale object store versus an easy distributed file system.

Red Hat acquires Inktank

Posted Apr 30, 2014 15:50 UTC (Wed) by fandingo (guest, #67019) [Link]

> S3

I think that's why it's such a good acquisition. There's a lot of appeal to using object storage, and I think we'll see lots of applications leverage it over the next few years. File-based data adds a lot of complexity that most application developers would do better to offload to an external component. It's nice to have that built directly into the storage system rather than needing to run something separate like OpenStack's Swift (plus Keystone plus a DB and MQ).
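
To make the appeal concrete, here's a minimal sketch (not from the original comment; endpoint and credentials are made-up placeholders) of an application storing an object through Ceph's RADOS Gateway, which speaks the S3 protocol, using the boto library:

    # Minimal sketch: store and fetch an object via the RADOS Gateway's
    # S3-compatible API using boto. Endpoint and credentials are placeholders.
    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',        # placeholder
        aws_secret_access_key='SECRET_KEY',    # placeholder
        host='rgw.example.com',                # assumed RADOS Gateway host
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    bucket = conn.create_bucket('app-data')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('stored straight in the cluster')
    print(key.get_contents_as_string())

That's the same storage path an application would otherwise need a separate Swift deployment (plus Keystone) to get.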

Red Hat acquires Inktank

Posted Apr 30, 2014 20:06 UTC (Wed) by rodgerd (guest, #58896) [Link] (3 responses)

Exactly. Gluster is very easy to set up and gives me resilience. It solves a lot of the same problems as Windows' DFS. I've been underwhelmed by RH trying to sell it to me as a SAN replacement when all I want is the ability to run a few redundant fileservers.

Ceph is harder to get set up in the first place, but it could let you replace SANs and gives you S3 in your datacentre. It certainly doesn't solve the problem of "I just want some redundant file servers". Not easily, anyway.

Red Hat acquires Inktank

Posted Apr 30, 2014 23:05 UTC (Wed) by drag (guest, #31333) [Link] (2 responses)

> It certainly doesn't solve the problem of "I just want some redundant file servers". Not easily, anyway.

Well I envision Ceph with something like this:

http://www.quantaqct.com/en/01_product/02_detail.php?mid=...

It's an Open Compute rack built around the concept of combining a bunch of 2U systems (4 nodes each) with a bunch of JBOD arrays: 28 directly attached SAS/SATA drives per chassis, for a total of 420 drives in a rack.

With Ceph you don't want to use any sort of RAID, hardware or otherwise. It's aware of individual disks and their locations, and it will manage the redundancy itself and try to keep data close to the node that needs it. Properly configured, you shouldn't have any problems with losing an entire JBOD array.

Out of that storage, on top of Ceph, you'd carve out your block devices, file systems, and/or object stores.
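
To give a rough idea of what that carving-out looks like, here's a sketch only (assuming the librados/librbd Python bindings that ship with Ceph, with made-up pool and image names):

    # Rough sketch: create a pool, write a raw object, and carve out an RBD
    # block device using Ceph's librados/librbd Python bindings.
    # Pool name, image name, and size are made-up examples.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    if not cluster.pool_exists('fileserver'):
        cluster.create_pool('fileserver')

    ioctx = cluster.open_ioctx('fileserver')

    # The object-store layer underneath: a plain RADOS object.
    ioctx.write_full('greeting', b'hello ceph')

    # A 10 GiB block device a server could format and export over NFS/Samba
    # (actually mapping it is done with "rbd map", outside this script).
    rbd.RBD().create(ioctx, 'share0', 10 * 1024 ** 3)

    ioctx.close()
    cluster.shutdown()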

So if you want a nice and redundant file server you then create a big file system on Ceph, mount it on your Linux servers, give them different points on your external network to hook into, and share it out with NFS or Samba or whatever you want.

Doesn't seem too difficult... Provided, of course, everything works as advertised. Which remains to be seen.

For smaller businesses you just do the same thing, but with a bunch of conventional 4U or tower systems stuffed full of hard drives, with a separate physical network for Ceph alongside the 'production' network that all your desktops or web servers or whatever else run on.

Or just run it with OpenStack: software-defined machines on software-defined networks, sharing your software-defined storage out to your software-defined whatever else. Because, you know, software is always so awesome.

Red Hat acquires Inktank

Posted Apr 30, 2014 23:17 UTC (Wed) by kloczek (guest, #6391) [Link] (1 responses)

> With Ceph you don't want to use any sort of raid, hardware or otherwise. It's aware of individual disks and locations and will manage the redundancy and try to make sure that the data is close to the node that needs it. Properly configured you shouldn't have any problems with losing a entire JBOD array.

So it is something like AFS+ZFS? Uff .. good to know :)

Red Hat acquires Inktank

Posted May 1, 2014 2:03 UTC (Thu) by drag (guest, #31333) [Link]

> So it is something like AFS+ZFS? Uff .. good to know :)

It's like AFS in the same way that WebDAV is like AFS.

So, yes, it involves files and networks.

Red Hat acquires Inktank

Posted Apr 30, 2014 16:09 UTC (Wed) by wazoox (subscriber, #69624) [Link]

Gluster is noticeably fragile; though it's better known at the moment, I'm pretty sure it's doomed to die a slow death and be superseded by Ceph, which is much more robust and built on better foundations (a kernel driver versus FUSE, and a carefully designed architecture versus a design grown out of a quick hack, are the most salient points).

Red Hat acquires Inktank

Posted May 1, 2014 0:39 UTC (Thu) by cyperpunks (subscriber, #39406) [Link]

Can Ceph be combined with LVM (where Red Hat seems to have full control) to be something useful?

Red Hat acquires Inktank - opening up proprietary components

Posted May 3, 2014 0:08 UTC (Sat) by csamuel (✭ supporter ✭, #2624) [Link]

One important fact that's not been mentioned is that Red Hat will be opening up previously closed-source add-on components of the stack from Inktank. As Sage Weil writes:

http://ceph.com/community/red-hat-to-acquire-inktank/

# One important change that will take place involves Inktank’s
# product strategy, in which some add-on software we have
# developed is proprietary. In contrast, Red Hat favors a pure
# open source model. That means that Calamari, the monitoring
# and diagnostics tool that Inktank has developed as part of
# the Inktank Ceph Enterprise product, will soon be open sourced.


Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds