
Speeding up Linux disk encryption (Cloudflare)

Posted Mar 26, 2020 11:52 UTC (Thu) by geuder (subscriber, #62854)
Parent article: Speeding up Linux disk encryption (Cloudflare)

That's a great blog post to read. My main thoughts were:

That's what you get with one size fits all. The kernel is supposed to support everything from spinning disks and tiny 32-bit systems up to NVMe and hundreds of gigs of RAM. It would be a miracle if all scenarios performed ideally. Still, it performs well enough in many cases.

There is certainly a lot of bit rot happening in the kernel. The resources I have had at work have always been orders of magnitude smaller than what Cloudflare seems to have. Still, we have identified similar problems, where 10+ year old code just doesn't work very well. With small resources, all you can do is go away (not from Linux, but from a certain fs, for example) or make a really dirty hack that you don't dare to show anybody else, even if it happens to work on your system.

Anyway, with Linux all these options exist, at Cloudflare scale and for the 0.3-person kernel teams. If you don't like it, go out, get/buy/write a kernel and report back when you are happier :) I'm willing to listen, but I won't hold my breath until then.


Speeding up Linux disk encryption (Cloudflare)

Posted Mar 27, 2020 17:12 UTC (Fri) by geuder (subscriber, #62854) [Link] (4 responses)

> you don't dare to show anybody else

Sorry, sloppy wording. I did not mean to suggest violating the GPL here. I just meant writing blog posts or posting it to a kernel list. With your tarball there is always hope that nobody ever looks at it :) Although, as a developer, I prefer a complete git history over tarballs...

Speeding up Linux disk encryption (Cloudflare)

Posted Mar 29, 2020 17:05 UTC (Sun) by rillian (subscriber, #11344) [Link]

Don't worry. There's plenty of hope no one will look at your git repo either. :)

Speeding up Linux disk encryption (Cloudflare)

Posted Apr 3, 2020 15:27 UTC (Fri) by paulj (subscriber, #341) [Link] (2 responses)

I think it's pretty clear that, for any project kept in git, the full git repo is the preferred source for that project. And generally, for any project kept in an SCM where checkouts imply the full history is distributed, a checkout with the full history will be the preferred form of source access for its developers.

Seems pretty obvious, except to those invested in it not being obvious.

Speeding up Linux disk encryption (Cloudflare)

Posted Apr 4, 2020 2:07 UTC (Sat) by pabs (subscriber, #43278) [Link] (1 response)

That depends on your available bandwidth and the size of the repo; some people, in some places, for some repos, are likely to prefer `git clone --depth=1` plus remote history interactions over a full copy of the history.
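A minimal sketch of the bandwidth-saving approach described above: clone only the most recent commit, then deepen the history on demand later. The repository URL here is a placeholder, not a real project.

```shell
# Shallow clone: fetch only the tip commit of the default branch.
git clone --depth=1 https://example.com/project.git
cd project

# Only one commit is present locally.
git rev-list --count HEAD    # prints 1

# History can be pulled in later, incrementally...
git fetch --deepen=10

# ...or converted into a full clone when bandwidth allows.
git fetch --unshallow
```

The trade-off is that a shallow clone cannot serve as a standalone archive of the project's history; operations like `git log`, `git bisect`, or `git blame` stop at the shallow boundary until the history is deepened.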

Speeding up Linux disk encryption (Cloudflare)

Posted Apr 4, 2020 5:23 UTC (Sat) by mathstuf (subscriber, #69389) [Link]

Not to mention reproducible build and offline build folks might like reproducible inputs. Yeah, you can use the commit id, but then you have to clone the whole thing (`git clone --depth 1 $repo $sha` is quite unreliable; you need to clone a refname, but then you need to guess at your `--depth` for how far back you want).
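For the reproducible-inputs case above, one workaround (a sketch, not something the comment itself proposes): `git clone` does not accept a commit id, but `git fetch` can request one directly, provided the server enables `uploadpack.allowReachableSHA1InWant` (or `allowAnySHA1InWant`) — many hosting services do, plain git servers may not. `$REPO_URL` and `$COMMIT_SHA` are placeholders.

```shell
# Fetch exactly one known commit at depth 1, without guessing a
# refname or a --depth value.
git init checkout
cd checkout
git remote add origin "$REPO_URL"
git fetch --depth=1 origin "$COMMIT_SHA"   # fails if the server forbids SHA fetches
git checkout --detach FETCH_HEAD
```

Because the input is pinned to a single commit id, the same command yields the same tree on every run, which is what reproducible and offline builds want.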


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds