btrfs: compression: refactor and enhancement preparing for subpage compression support
From: Qu Wenruo <wqu-AT-suse.com>
To: linux-btrfs-AT-vger.kernel.org
Subject: [PATCH v4 0/9] btrfs: compression: refactor and enhancement preparing for subpage compression support
Date: Thu, 17 Jun 2021 13:14:41 +0800
Message-ID: <20210617051450.206704-1-wqu@suse.com>
There are quite a few problems in the compression code:

- Weird compressed_bio::pending_bios dance
  If we just don't want the compressed_bio to be freed halfway through,
  there are saner methods, just like btrfs_subpage::readers.

  So we fix it by introducing compressed_bio::pending_sectors to do the
  job (a minimal sketch of the counting scheme can be found at the end
  of this mail).

- BUG_ON()s inside btrfs_submit_compressed_*()
  Even if they are just ENOMEM, we should handle them properly.

  With pending_sectors introduced, we have a way to finish the
  compressed_bio all by ourselves, as long as we haven't submitted the
  last bio.
  If the last bio has been submitted, then the endio handler will take
  care of it.

- Duplicated code for compressed bio allocation and submission
  A small refactor can handle it.

- Stripe boundary is checked every time one page is added
  This is overkill.
  Just learn from the extent_io.c refactor, which uses bio_ctrl to do
  the boundary check only once for each bio.

  In the compression context we don't need the extra checks done in
  extent_io.c, thus we don't need the bio_ctrl structure and can afford
  to do the check locally (also sketched at the end of this mail).

- Dead code removal
  One dead comment and a new zombie function,
  btrfs_bio_fits_in_stripe(), can be removed now.

Changelog:
v2:
- Rebased to latest misc-next
- Fix a bug in btrfs_submit_compressed_write() where zoned writes were
  not taken into consideration
- Reuse the existing chunk mapping from btrfs_get_chunk_map()

v3:
- Fix a bug where a zoned device can't even pass btrfs/001
  This is because at endio time, bi_size for a zoned device is always 0.
  We have to use bio_for_each_segment_all() to calculate the real bio
  size instead.

  In theory it should happen even more frequently for non-zoned devices,
  but no test case (with "-o compress") caught it except btrfs/011.

- Fix a btrfs/011 hang when tested with "-o compress"
  This is caused by checking two atomic values without protection;
  checking two values together is no longer an atomic operation.

  In fact, with compressed_bio::io_sectors introduced, pending_bios is
  only used to wait for any pending bio to finish in the error path.
  Thus dec_and_test_compressed_bio() only needs to check whether
  io_sectors is zero.

- Fix an error in the error handling path, where we may hang due to a
  missing wake_up() in dec_and_test_compressed_bio()

v4:
- Use more formal wording for the BUG_ON() removal patch titles
- Remove compressed_bio::pending_bios
  As compressed_bio::pending_sectors can replace it completely
- Remove unnecessary comments and BUG_ON()s
- Use the wait_var_event() APIs to reduce the memory overhead
- Update comments to follow the same schema for moved comments

Qu Wenruo (9):
  btrfs: remove a dead comment for btrfs_decompress_bio()
  btrfs: introduce compressed_bio::pending_sectors to trace compressed
    bio more elegantly
  btrfs: handle errors properly inside btrfs_submit_compressed_read()
  btrfs: handle errors properly inside btrfs_submit_compressed_write()
  btrfs: introduce submit_compressed_bio() for compression
  btrfs: introduce alloc_compressed_bio() for compression
  btrfs: make btrfs_submit_compressed_read() to determine stripe
    boundary at bio allocation time
  btrfs: make btrfs_submit_compressed_write() to determine stripe
    boundary at bio allocation time
  btrfs: remove unused function btrfs_bio_fits_in_stripe()

 fs/btrfs/compression.c | 587 +++++++++++++++++++++++------------------
 fs/btrfs/compression.h |   4 +-
 fs/btrfs/ctree.h       |   2 -
 fs/btrfs/inode.c       |  42 ---
 4 files changed, 336 insertions(+), 299 deletions(-)

--
2.32.0
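
Below is a minimal, illustrative C sketch of the pending_sectors counting
scheme described above; it is not the code from this series. The struct
name compressed_bio_sketch and both helper names are made up for
illustration, while wait_var_event()/wake_up_var(), atomic_sub_and_test()
and atomic_read() are the stock kernel APIs the idea relies on:

#include <linux/atomic.h>
#include <linux/wait_bit.h>	/* wait_var_event() / wake_up_var() */

/* Sketch only; field and helper names are illustrative. */
struct compressed_bio_sketch {
	/* One count per sector covered by the compressed extent. */
	atomic_t pending_sectors;
	/* First error hit while submitting or completing bios. */
	int status;
};

/*
 * Drop @nr_sectors from the pending count.  Returns true when the
 * caller is the last user and must finish the compressed bio.  Always
 * wakes up possible waiters, so the error path can sleep on the counter
 * without a dedicated waitqueue.
 */
static bool dec_and_test_pending_sectors(struct compressed_bio_sketch *cb,
					 unsigned int nr_sectors)
{
	bool last = atomic_sub_and_test(nr_sectors, &cb->pending_sectors);

	/* Pairs with wait_var_event() in wait_pending_sectors(). */
	wake_up_var(&cb->pending_sectors);
	return last;
}

/* Error path: wait until pending_sectors drops to zero. */
static void wait_pending_sectors(struct compressed_bio_sketch *cb)
{
	wait_var_event(&cb->pending_sectors,
		       atomic_read(&cb->pending_sectors) == 0);
}

With a single counter there is no need to check two atomic values at
once, which is the kind of race that caused the btrfs/011 hang mentioned
in the v3 changelog.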
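
And a similarly hedged sketch of the "determine the stripe boundary only
once, at bio allocation time" idea. btrfs_stripe_len_left() is a
hypothetical stand-in for the chunk-mapping lookup (the series reuses
btrfs_get_chunk_map() for this), while bio_alloc(), BIO_MAX_VECS and
SECTOR_SHIFT are regular block layer APIs:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/err.h>
#include <linux/gfp.h>

/* Hypothetical helper: bytes left in the stripe containing @disk_bytenr. */
u64 btrfs_stripe_len_left(u64 disk_bytenr);

/* Allocate a bio and record where the current stripe ends. */
static struct bio *alloc_compressed_bio_sketch(u64 disk_bytenr,
					       unsigned int opf,
					       u64 *next_stripe_start)
{
	struct bio *bio;

	bio = bio_alloc(GFP_NOFS, BIO_MAX_VECS);
	if (!bio)
		return ERR_PTR(-ENOMEM);
	bio->bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT;
	bio->bi_opf = opf;

	/*
	 * The boundary is computed once here, so adding pages later only
	 * needs a cheap comparison instead of a per-page chunk lookup.
	 */
	*next_stripe_start = disk_bytenr + btrfs_stripe_len_left(disk_bytenr);
	return bio;
}

/* Would appending @len more bytes make the bio cross the stripe? */
static bool bio_would_cross_stripe(struct bio *bio, u64 next_stripe_start,
				   unsigned int len)
{
	u64 cur = ((u64)bio->bi_iter.bi_sector << SECTOR_SHIFT) +
		  bio->bi_iter.bi_size;

	return cur + len > next_stripe_start;
}

This mirrors what the bio_ctrl-based refactor does in extent_io.c, just
done locally in compression.c as described above.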