I don't see the need for a journal on each chunk as a critical problem. It does mean that the plan of having 1G chunks doesn't work, but for many larger filesystems that was already questionable (I try to split my _files_ into 1G chunks if I can, but if they are < 10G I may not try too hard).
I have a use case where I want a ~140 TB array that will essentially be a single directory of files. I was thinking in terms of chunk sizes in the multi-TB range; at that point, the overhead of a separate journal per chunk isn't that bad.
Yes, 1G chunks with many small files can be reasonable, but in most cases where you are talking about extremely large logical drives, you aren't dealing with small text files; you are dealing with large files as well.
The fact that this would have allowed filesystem checks on a single chunk while the logical drive is online would have been a very useful thing.