
Composefs for integrity protection and data sharing

By Jake Edge
December 7, 2022

A read-only filesystem that will transparently share file data between disparate directory trees, while also providing integrity verification for the data and the directory metadata, was recently posted as an RFC to the linux-kernel mailing list. Composefs was developed by Alexander Larsson (who posted it) and Giuseppe Scrivano for use by podman containers and OSTree (or "libostree" as it is now known) root directories, but there are likely others who want the abilities it provides. So far, there has been little response, either with feedback or complaints, but it is a small patch set (around 2K lines of code) and generally self-contained since it is a filesystem, so it would not be a surprise to see it appear in some upcoming kernel.

Features

There is a lengthy introduction to composefs in the cover letter and more information in the documentation patch. Unlike many filesystems, composefs is not backed by a block device but instead by a set of regular read-only files: an image file that contains the directory structure and file metadata and a directory of content-addressed objects that are the file contents. Since the files themselves are in an object store, they are effectively deduplicated at the file level; files that have identical content are only stored once even if the metadata (e.g. owner, permissions, extended attributes) is different. In addition, when a file is read from composefs, the backing file is used—and cached in the page cache—so any read of another composefs file with the same content will use that page-cache entry.

The filenames used in the object store correspond to the fs-verity digests of the contents, so composefs can detect any changes to files using the fs-verity mechanism. The image file can contain the digests for the files to ensure they are not changed out from under composefs and the image file itself can be protected with fs-verity as well. The fs-verity digest of the image file could be retrieved from a secure location (e.g. a signed kernel command line) and passed to the mount command, allowing composefs to ensure that the entire filesystem is unchanged from its expected state.

The files needed for a composefs instance are created using the mkcomposefs tool:

    $ mkcomposefs --digest-store=objects rootfs/ rootfs.img

That command would process the rootfs directory, storing the image file that corresponds to its directory structure and file metadata in rootfs.img, and creating an object store with the file contents in the objects directory. As with Git, the objects directory has subdirectories named for the first two hex characters of the digest hash, each of which contains files named with the remaining characters of the hash.
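
For illustration, a populated object store might look something like this (the digest values here are hypothetical):

    $ ls objects
    0a  1f  4c  9e  d3  ...
    $ ls objects/4c
    81a52edb...   (the remaining hex digits of each file's fs-verity digest)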

A filesystem created that way could then be mounted as follows:

    $ mount -t composefs rootfs.img -o basedir=objects,verity_check=2 /mnt

The verity_check option governs whether to do fs-verity checking on the image and files; 0 disables fs-verity checking, 1 only checks images that specify it, and 2 requires fs-verity. There is also a digest option that can be used to pass the fs-verity digest hash for the image file to the mount command, which allows composefs to verify it. That provides end-to-end verification as noted in the cover letter: "So, given a trusted set of mount options (say unlocked from TPM), we have a fully verified filesystem tree mounted, with opportunistic fine-grained sharing of identical files."
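
Putting those pieces together, an end-to-end verified setup might look like the following sketch. It assumes the image file lives on a filesystem with fs-verity support and that the fsverity tool from fsverity-utils is available; <image-digest> is a placeholder for the measured value:

    $ fsverity enable rootfs.img
    $ fsverity measure rootfs.img
    sha256:<image-digest> rootfs.img
    $ mount -t composefs rootfs.img \
          -o basedir=objects,verity_check=2,digest=<image-digest> /mnt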

Use cases

Multiple containers on a system often share many files, but that file data is typically replicated for each container image. Composefs can be used to create multiple, different, read-only container image files that all refer to the same object store, so that all of the files are only stored once. In addition, files that are being used by multiple containers simultaneously (e.g. shared libraries) will likely reside in the page cache, saving a slower retrieval from persistent storage.
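
As a sketch of that sharing, reusing the mkcomposefs and mount invocations shown above (the directory and image names are illustrative):

    $ mkcomposefs --digest-store=objects container-a/ container-a.img
    $ mkcomposefs --digest-store=objects container-b/ container-b.img
    $ mount -t composefs container-a.img -o basedir=objects /mnt/a
    $ mount -t composefs container-b.img -o basedir=objects /mnt/b

Any file with identical content in both trees is stored once under objects/ and, once read, is served to both mounts from the same page-cache entries.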

The second use that Larsson and Scrivano envision is to replace the "link farm" that gets created for OSTree-based root filesystems with a composefs mount. Currently, OSTree uses hard links into its content-addressed object store, but that structure is not protected against changes because it does not get validated at run time, only when it is being created. By using fs-verity on the image file and object store, though, a composefs-based root filesystem would have its integrity protected automatically.

Beyond those two immediate uses for composefs, there are others on the horizon:

[...] there seems to be a wealth of other possible uses. For example, many systems use loopback mounts for images (like lxc or snap), and these could take advantage of the opportunistic sharing. We've also talked about using fuse [Filesystem in Userspace] to implement a local cache for the backing files. I.e. you would have a second basedir be a fuse filesystem, and on lookup failure in the first basedir the fuse one triggers a download which is also saved in the first dir for later lookups. There are many interesting possibilities here.

The reaction to the patches has been almost non-existent so far, other than some documentation updates suggested by Bagas Sanjaya. The facilities provided by composefs seem useful, however, and build on the fs-verity feature that is, after some fits and starts, already present in the kernel. There is also work in progress to support composefs in OSTree, but there is something of a chicken-and-egg problem until the filesystem lands in the kernel. With luck, and a continued lack of any serious opposition to composefs, that problem could be addressed—perhaps even early next year.


Index entries for this article
Kernel: Data integrity
Kernel: Filesystems/composefs



Composefs for integrity protection and data sharing

Posted Dec 7, 2022 22:53 UTC (Wed) by Karellen (subscriber, #67644) [Link] (7 responses)

I'm getting a bit confused about the repeated mention that underlying files are only stored once and shared by composefs in the kernel.

If there are multiple users on a system, who each create/run their own containers, do they share a basedir? If so, how do the permissions work? Or don't they need to, because of the content addressing?

Or would each user have their own basedir, with their own copies of "the same" file?

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 0:52 UTC (Thu) by gray_-_wolf (subscriber, #131074) [Link] (5 responses)

I think that depends on you. If my (very quick) skim through the linked
documentation patch is correct, you set the basedir during the mount:

mount -t composefs image.cfs -o basedir=/dir /mnt

So I think figuring out how to share it between users (if desired) and how to
fill it is left as an exercise to the user. I would imagine at least container
runtimes will have one-per-user, same way I have
~/.local/share/containers/storage for storing podman images.

The basedir can even contain multiple, comma-separated paths, which is pretty
cool. The described download-on-demand model using FUSE looks interesting.

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 1:44 UTC (Thu) by hsiangkao (guest, #123981) [Link] (4 responses)

> The basedir can even contain multiple, comma-separated paths, which is pretty
> cool. The described download-on-demand model using FUSE looks interesting.

If download-on-demand needs another FUSE, why not use a FUSE fs entirely?

I'd suggest people look at what EROFS has done over the past year, rather than **completely** ignore those recent efforts.

BTW, EROFS could support page cache sharing in v6.3 if nothing strange happens.

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 10:16 UTC (Thu) by smcv (subscriber, #53363) [Link] (3 responses)

> If download-on-demand needs another FUSE, why not use a FUSE fs entirely?

I would guess: so that the fast path (no download, load from cache) happens entirely in-kernel and doesn't pay the performance penalty of FUSE, even though the slow path (download required) does.

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 10:27 UTC (Thu) by hsiangkao (guest, #123981) [Link] (2 responses)

EROFS over fscache has been able to do all of that in the kernel since 5.19.

Composefs for integrity protection and data sharing

Posted Dec 10, 2022 18:15 UTC (Sat) by gscrivano (subscriber, #74830) [Link] (1 responses)

How does it _all_ happen in the kernel if fscache makes requests to userspace to retrieve data from the cache?

Using a FUSE filesystem as a secondary store, composefs would need a request to userspace only for files that are not available locally; everything else is fully resolved within the kernel. EROFS ends up making requests to userspace even for files that are already in the local cache but unknown to fscache.

Additionally composefs can use multiple sources transparently for each mount instead of forcing a single userspace handler for /dev/cachefiles.

Composefs for integrity protection and data sharing

Posted Dec 11, 2022 4:58 UTC (Sun) by hsiangkao (guest, #123981) [Link]

> Using a FUSE filesystem as a secondary store, composefs would need a request to userspace only for files that are not available locally; everything else is fully resolved within the kernel. EROFS ends up making requests to userspace even for files that are already in the local cache but unknown to fscache.

That is not true. EROFS only makes a request to userspace for files that are _not_ in the local cache and unknown to fscache. If a file is already in the local cache, EROFS won't call into userspace again.

Please take a look at this: https://d7y.io/blog/2022/06/06/evolution-of-nydus/

> Additionally composefs can use multiple sources transparently for each mount instead of forcing a single userspace handler for /dev/cachefiles.

It's just an interface. Fscache itself also has plans for multiple daemons and multiple directories.

In addition, as I have said many times in this thread, if you want to access files directly and such an overlay model is considered _safe_ (which I don't think it is), EROFS could support this mode in a few days as well.

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 3:42 UTC (Thu) by xanni (subscriber, #361) [Link]

According to the description, the metadata (including permissions) is in the filesystem image provided at mount time, not in the object store. This is how the same object store can be shared between multiple filesystems, each with their own metadata and thus also different permissions.

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 3:46 UTC (Thu) by hsiangkao (guest, #123981) [Link] (11 responses)

> Unlike many filesystems, composefs is not backed by a block device but instead by a set of regular read-only files: an image file that contains the directory structure and file metadata and a directory of content-addressed objects that are the file contents.

I will soon make it clearer in the documentation, in the next pull request for Linux v6.2, that EROFS now supports block-based referenced blobs (since v5.16 [1]) and file-based referenced blobs via fscache (since v5.19 [2] [3]; one of the main reasons to use fscache is that we learned from the Incremental FS discussion [4], and we also need a local cache-management framework but don't want to entirely reinvent another wheel and duplicate the codebase; it seems the mainline fscache can fit this). These features can always be used _without_ EROFS compression enabled.

An EROFS image can contain several blobs (since v5.16; actually, we initially announced the feature in the summer of 2021 [5] [6] and on some IRC channels, then months later I noticed that composefs had suddenly appeared somehow on the Internet). The primary blob contains the directory structure and file metadata, so that each file can refer to parts or the whole of the other blobs (chunk-based data deduplication, just like what casync does). IOWs, one blob (other than the primary blob) can hold a part of a file _or_ an entire file (just like what OSTree now does with per-file sharing) _or_ several files.
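
As a rough sketch of building such a multi-blob image (assuming an erofs-utils version where mkfs.erofs supports the --chunksize and --blobdev options; exact spellings and defaults may vary between releases):

    $ mkfs.erofs --chunksize=4096 --blobdev=shared.blob primary.img rootfs/

Here primary.img would hold the directory structure, metadata, and chunk indexes, while the deduplicated chunk data goes to shared.blob.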

Now EROFS can be used together with the Nydus image service [7] [8] to do data deduplication, data-integrity protection, and lazy pulling (also called on-demand read, or download-on-demand, whatever) of the Nydus/OCI/(e)stargz/tar/EROFS (and more) image formats. The Nydus image service already supports the containerd, harbor, buildkit, and podman ecosystems.

As the next step, EROFS will support page cache sharing, self-contained integrity protection, and convergent encryption no matter whether the underlying medium is block-based or file-based, along with integration into more boot loaders (U-Boot already supports EROFS), providing a unified approach as a generic image-based read-only solution for all open-source communities.

Finally, I'd like to repeat here why we introduced EROFS initially. One of the main reasons was that the Squashfs on-disk format had not been developed for a decade, with many improvements getting no response (at that time or until now, [8][9][10] and more) or a NACK [11]. It became inactive despite more and more complaints [12] [13]. We've also shown why EROFS outperforms other read-only filesystems, at ATC '19 [14] and LSF/MM 2019 (but sadly I didn't see any explicit report on LWN, unlike composefs, which comes without any state-of-the-art comparison). EROFS is still very actively developed, driven by many vendors (as you can see in the commit messages) and individuals, with new features.

> The filenames used in the object store correspond to the fs-verity digests of the contents, so composefs can detect any changes to files using the fs-verity mechanism. The image file can contain the digests for the files to ensure they are not changed out from under composefs and the image file itself can be protected with fs-verity as well. The fs-verity digest of the image file could be retrieved from a secure location (e.g. a signed kernel command line) and passed to the mount command, allowing composefs to ensure that the entire filesystem is unchanged from its expected state.

One concern about this is that it cannot do so if the underlying filesystem doesn't support fs-verity, such as XFS, FAT, NTFS, other local filesystems, or network filesystems, or whatever else. It's basically not self-contained.

[1] https://lwn.net/Articles/874683/
[2] https://lwn.net/Articles/896140/
[3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/...
[4] https://lore.kernel.org/r/CAOQ4uxhDYvBOLBkyYXRC6aS_me+Q=1...
[5] https://lore.kernel.org/linux-fsdevel/20210730194625.9385...
[6] https://lore.kernel.org/r/20211009061150.GA7479@hsiangkao...
[7] https://nydus.dev/
[8] https://github.com/dragonflyoss/image-service
[9] https://lore.kernel.org/all/af77c1f80e2725c4cf1bf106d6add...
https://lore.kernel.org/all/975b0f7acbb65445551ee374a2dd3...
https://lore.kernel.org/all/1702a314dc9de4626fbefc788213a...
https://lore.kernel.org/all/15428d5047390927114ad49d7721b...
https://lore.kernel.org/all/d6cbe74944ad1a6be21cc74b99b30...
[10] https://lore.kernel.org/all/20190717114151.10508-1-zbesta...
https://lore.kernel.org/all/20190717120644.11128-1-zbesta...
https://lore.kernel.org/all/20190719020653.8396-1-zbestah...
[11] https://lore.kernel.org/all/81a996d7-ba4c-e5a0-d0ce-11951...
[12] https://lore.kernel.org/all/20170922215508.73407-1-drosen...
[13] https://forum.snapcraft.io/t/squashfs-performance-effect-...
[14] https://www.usenix.org/system/files/atc19-gao.pdf

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 8:29 UTC (Thu) by diconico07 (guest, #117416) [Link] (8 responses)

I'm not sure the composefs authors will see your comments here; this should be posted in the RFC thread, so that it may trigger a discussion and maybe shed light on why they chose to do composefs rather than building over EROFS (maybe they just don't know about the features of EROFS you highlight here).

Concerning the lack of coverage of the newest EROFS features, maybe you can write an article about it; it is not unusual to have guest writers on LWN (please note that I'm not part of LWN and don't know their policy for such articles) and I'd love to hear more about these features and how to use them.

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 10:27 UTC (Thu) by hsiangkao (guest, #123981) [Link] (7 responses)

> I'm not sure the composefs authors will see your comments here; this should be posted in the RFC thread, so that it may trigger a discussion and maybe shed light on why they chose to do composefs rather than building over EROFS (maybe they just don't know about the features of EROFS you highlight here).

I could do that, yet one reason I haven't done it yet, despite all of the above, is that I haven't seen any useful comments (either negative or positive) from experienced filesystem developers (as well as the overlayfs folks) about such an overlay model (especially from the point of view of security concerns when fs-verity on the underlying filesystem is unusable). And if introducing another brand-new filesystem, why not use a finer-grained deduplication approach like the chunk/block-based deduplication of casync (which already covers per-file deduplication), instead of insisting on per-file, hardlink-like deduplication?

Also, months ago, one of the composefs authors already asked me on a Slack channel about "building over EROFS", just after I noticed the composefs work. Overall I think it's not hard for EROFS to support such a per-file model, but my personal question is still the one above (does the model really sound reasonable? If it does, I'm also quite happy to develop this in a few days if they don't have the time). I asked him to consult the Linux filesystem community, but they seem to still insist on doing this.

Then, I also noticed another filesystem called ostreefs [1], yet I haven't had time to look into it much.

> Concerning the lack of coverage of the newest EROFS features, maybe you can write an article about it; it is not unusual to have guest writers on LWN (please note that I'm not part of LWN and don't know their policy for such articles) and I'd love to hear more about these features and how to use them.

Thank you! If LWN's experienced writers ignore the EROFS/Nydus work even as EROFS grows more powerful and popular these years (and almost all mainstream Linux distributions have landed EROFS), then, although I'm not a native English speaker, I will try my best to show all the potential use cases! Thank you again!

[1] https://github.com/ostreedev/ostreefs

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 16:54 UTC (Thu) by gscrivano (subscriber, #74830) [Link] (6 responses)

Let me try to add some more context. I contacted you on Slack because I was curious to see whether there was a way to achieve with EROFS what I was playing with in composefs; it would have been easier to do it with something already present in the upstream kernel, but it immediately seemed very different from what I had in mind.

I don't see composefs competing with EROFS. They can be used for similar use cases but I think they are very different.

You've started EROFS to improve over squashfs, while I was looking at how we could improve overlayfs, and especially how it is used with OCI containers: overlayfs puts together directories and composefs puts together files.

composefs would have probably been forgotten as another toy project if Alex hadn't added support for fs-verity and fixed the image format to be fully reproducible.

It is much simpler than EROFS and does very few things. Its simplicity is reflected in the source code:

$ wc -l ~/composefs/kernel/*.[hc] | tail -1
2287 total
$ wc -l ~/linux/fs/erofs/*.[hc] | tail -1
9098 total

We are mostly about putting together already existing pieces. We do not implement any deduplication or encryption, we just use what is already in the kernel.

Overall, I think composefs is a big improvement on what we have today. With just a few features, it solves a list of long-standing issues that exist with containers, from more serious ones, like the lack of file-integrity checks, to more mundane ones, like users having to worry about how they order their ADD and RUN statements in a Dockerfile to optimize the reuse of layers/files. It is also useful without containers: it extends fs-verity to entire directories!

Another point is that a long-term goal/wish is to be able to use composefs from a user namespace; would that ever be possible with EROFS and cachefiles?

You've pointed out the lack of a finer granularity for the deduplication. That is a conscious tradeoff: having to work at the file level simplifies the implementation. Now, composefs needs only to open the file from the underlying file system and delegate any operation to it. It doesn't have to worry about how chunks are glued together.
I've nothing against this feature though, we just didn't need it for our use cases. If a finer granularity is useful, then it can be added in the future.

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 17:16 UTC (Thu) by hsiangkao (guest, #123981) [Link] (5 responses)

> Let me try to add some more context. I contacted you on Slack because I was curious to see whether there was a way to achieve with EROFS what I was playing with in composefs; it would have been easier to do it with something already present in the upstream kernel, but it immediately seemed very different from what I had in mind.
> I don't see composefs competing with EROFS. They can be used for similar use cases but I think they are very different.

Please give more details first about the cases in which they behave differently.

> You've started EROFS to improve over squashfs, while I was looking at how we could improve overlayfs, and especially how it is used with OCI containers: overlayfs puts together directories and composefs puts together files.

So that is why I'd like to hear from the overlayfs folks about this overlay model as well before I kick off a thread on the mailing list.

> composefs would have probably been forgotten as another toy project if Alex hadn't added support for fs-verity and fixed the image format to be fully reproducible.

Nope, composefs doesn't even consider endianness now, just like the very first version of Squashfs. I'm not sure how it could be _fully_ reproducible.

As I said above, I don't think fs-verity actually works if the underlying filesystem doesn't support fs-verity, which also applies to one of the composefs examples: FUSE.

> It is much simpler than EROFS and does very few things. Its simplicity is reflected in the source code:
> $ wc -l ~/composefs/kernel/*.[hc] | tail -1
> 2287 total
> $ wc -l ~/linux/fs/erofs/*.[hc] | tail -1
> 9098 total

Please take the initial 4.19 EROFS version as the starting point (and exclude the EROFS compression part, since composefs doesn't support compression); EROFS aims to be a generic filesystem for all backends, like block, file, or later mtd, with a lot of features.
Also, let's make a wild guess: if composefs finally merges and you'd like to add more features, can its codebase stay at the same size?

> Overall, I think composefs is a big improvement on what we have today. With just a few features, it solves a list of long-standing issues that exist with containers, from more serious ones, like the lack of file-integrity checks, to more mundane ones, like users having to worry about how they order their ADD and RUN statements in a Dockerfile to optimize the reuse of layers/files. It is also useful without containers: it extends fs-verity to entire directories!

EROFS will support self-contained data integrity later (I assume also by using fs-verity), not like what composefs does, which is just calling fsverity_get_digest().

> You've pointed out the lack of a finer granularity for the deduplication. That is a conscious tradeoff: having to work at the file level simplifies the implementation. Now, composefs needs only to open the file from the underlying file system and delegate any operation to it. It doesn't have to worry about how chunks are glued together.

I don't know why you think chunk-based indexes are complex; see [2] [3].

> I've nothing against this feature though, we just didn't need it for our use cases. If a finer granularity is useful, then it can be added in the future.

and then it's very much like EROFS.

[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/...
[3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/...

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 18:06 UTC (Thu) by gscrivano (subscriber, #74830) [Link] (4 responses)

> Please give more details first about the cases in which they behave differently.

Given a directory with a bunch of files, how do you set up an EROFS mount that contains the metadata for the filesystem but refers to the files for their actual payload, without requiring the image blob to be transformed into a different format first?

This is the use case I am interested in.

> Nope, composefs doesn't even consider endianness now, just like the very first version of Squashfs. I'm not sure how it could be _fully_ reproducible.

What version are you looking at? It does since https://github.com/containers/composefs/pull/24/commits/5...

> Also, let's make a wild guess: if composefs finally merges

We posted an RFC, after working on it for quite some time, to gather feedback and see if people find it useful, but you have treated it as if it were an attack on EROFS. It is not.

> if you'd like to add more features, can its codebase stay at the same size?

From the discussion we just had, it seems EROFS is still missing page cache sharing and data-integrity checking, so it is likely that EROFS will grow more as well?

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 18:39 UTC (Thu) by hsiangkao (guest, #123981) [Link] (2 responses)

> Given a directory with a bunch of files, how do you set up an EROFS mount that contains the metadata for the filesystem but refers to the files for their actual payload, without requiring the image blob to be transformed into a different format first?
> This is the use case I am interested in.

Why wouldn't EROFS work like this? It would, if you consider each EROFS blob as per-file blob data (currently it's identified by a 16-bit blob ID, but that can be extended if you really need it, as with OSTree's massive per-file blobs), and if each EROFS file has only _one_ chunk pointing to one blob ID.

Does it behave any differently? You only change the integer blob ID into a string and restrict it to one file per chunk.

The only difference is that EROFS uses fscache to manage its cache, but that is partially due to our lazy-pulling requirement (I also tend to manage such blobs with a unified in-kernel framework rather than directly accessing random files on the underlying filesystem without a permission check. Take one example, in my opinion: a composefs file "/bin/su" whose backing file is suddenly replaced by a malicious root shell. If fs-verity is disabled, how do you prevent that? On the other side, overlayfs doesn't have this issue since it doesn't keep a second set of permissions); you could refer to the Incremental FS discussion [1]. Also, EROFS already has an in-house version that accesses files directly for our special uses [2].

> > Also, let's make a wild guess: if composefs finally merges
> We posted an RFC, after working on it for quite some time, to gather feedback and see if people find it useful, but you have treated it as if it were an attack on EROFS. It is not.

I just want to say that composefs is very, very similar to EROFS.

> From the discussion we just had, it seems EROFS is still missing page cache sharing and data-integrity checking, so it is likely that EROFS will grow more as well?

Jingbo Xu is working on page cache sharing for Linux 6.3.
Data-integrity checking and encryption for confidential containers will be discussed on the mailing list right after page cache sharing lands.

[1] https://lore.kernel.org/all/20190502040331.81196-1-ezemts...
[2] https://github.com/alibaba/cloud-kernel/commit/6654d200b4...

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 21:24 UTC (Thu) by gscrivano (subscriber, #74830) [Link] (1 responses)

thanks for the useful information.

> The only difference is that EROFS uses fscache to manage its cache, but that is partially due to our lazy-pulling requirement

So that is a significant difference. If I understand it correctly, we will need to either set up fscache and populate its cache, or have a different daemon, before we can use this mechanism.

Would it ever be possible to use fscache from a user namespace?

Composefs for integrity protection and data sharing

Posted Dec 9, 2022 2:30 UTC (Fri) by hsiangkao (guest, #123981) [Link]

> So that is a significant difference. If I understand it correctly, we will need to either set up fscache and populate its cache, or have a different daemon, before we can use this mechanism.

Sorry, I had just gone to sleep. Bytedance's folks have already developed an fscache failover feature and a fully daemonless mode for their cloud production, and it's also useful for all network filesystems. Basically, we have already developed a lot of features for fscache; they just need time to be upstreamed.

Overall, I am just trying to say that composefs is currently very similar to EROFS; even though it has some differences (such as directly accessing files), EROFS could be adapted without any difficulty.

> Would it ever be possible to use fscache from a user namespace?

I missed this part at the time, sorry. I think EROFS has the same security model as all on-disk filesystems with an on-disk permission model (no matter whether it's block-based or file-based), so the question is no different than for other on-disk filesystems, including composefs.

Composefs for integrity protection and data sharing

Posted Dec 8, 2022 18:45 UTC (Thu) by hsiangkao (guest, #123981) [Link]

> What version are you looking at? It does since https://github.com/containers/composefs/pull/24/commits/5...

I'm sorry, I hadn't followed the recent version; glad to know it's already improved.

Squashfs on-disk format and "lack of development"

Posted Dec 9, 2022 18:44 UTC (Fri) by plougher (guest, #21620) [Link] (1 responses)

> One of the main reasons was that the Squashfs on-disk format had not been developed for a decade, with many improvements getting no response (at that time or until now, [8][9][10] and more) or a NACK [11]. It became inactive despite more and more complaints [12] [13].

If you wish to throw mud, please do it on a forum that I use.

The reason why the Squashfs on-disk format has "not been developed" for a decade is that it was a condition of mainlining in 2009.

Also, you seem to have deliberately cherry-picked emails which show me in a bad light, concentrating on 2018/2019 when I was *extremely* busy in my paid work as a kernel maintainer for Red Hat, dealing with the fallout of Spectre/Meltdown, etc.

My Squashfs work is *unpaid* and *voluntary*, and it will take second place to paid work.

I don't think someone being paid to work on a filesystem is in a position to criticise.

Phillip Lougher (Squashfs author and maintainer).

Squashfs on-disk format and "lack of development"

Posted Dec 10, 2022 0:12 UTC (Sat) by hsiangkao (guest, #123981) [Link]

> If you wish to throw mud, please do it on a forum that I use.

I don't think it needs a real discussion here; it just shows a fact about Squashfs work during 2018-2020 (maybe).

> My Squashfs work is *unpaid* and *voluntary*, and it will take second place to paid work.
> I don't think someone being paid to work on a filesystem is in a position to criticise.

My work at Red Hat was _only_ XFS, and EROFS currently still takes very little of my time (we still need a lot of time to maintain our kernel storage stack).

