Digging a big hole is difficult (if, like me, you aren't very fit), but it is hardly complex. Solving sudoku is certainly complex, but I do not find it particularly difficult (fun, though).
You could think of 'complexity' as meaning 'room for errors to creep in'. It is certainly easy to make mistakes in sudoku. Less so in hole digging.
The complexity is not in the code, but in the need for the code. It means that I cannot simply archive each file in isolation, but need to interpret it in a larger context. It means that, to extract a single file from an archive, I either need two passes, or I need to remember where every linked file was and rewind to read it.
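A minimal sketch of that larger context, in Python rather than tar's actual C code (the function and variable names here are hypothetical): an archiver keys on (st_dev, st_ino) to decide whether an entry carries the file's data or only a hard-link reference to an earlier entry.

```python
import os

def archive_names(paths):
    """Decide, for each path in archive order, whether to store the
    file's data or only a hard-link reference to an earlier entry."""
    seen = {}   # (st_dev, st_ino) -> path first archived with that inode
    plan = []
    for path in paths:
        st = os.lstat(path)
        key = (st.st_dev, st.st_ino)
        if st.st_nlink > 1 and key in seen:
            # Same inode seen before: record a link, not a second copy.
            plan.append(("link", path, seen[key]))
        else:
            if st.st_nlink > 1:
                # More links exist somewhere; remember this inode.
                seen[key] = path
            plan.append(("data", path, None))
    return plan
```

Note that `seen` must persist across the whole run: no file with a link count above one can be handled in isolation.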
It means it is imperative that the filesystem provide a unique inode number for every file, one that is not re-used during the time that tar runs. This is not always as easy as it sounds.
Suppose that, while tar is running, it finds a file '/long/path/foo' with a link count of 2. Immediately thereafter I remove both links and create a new file with two links, one at '/other/path/foo', and it happens to get the same inode number. When tar gets to that other foo, what does it do? Is it the other link to the first foo, which happens to have been changed in the meantime, so it is probably best to record the link and not the file? Or is it a brand-new file, so it is best to archive it and forget about finding the second link to the first foo?
Even if you think the answer to the above is obvious, the fact that I had to ask the question is a complexity.
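The race can be shown deterministically by feeding synthetic stat values to the kind of (dev, ino) bookkeeping described above; the helper, the paths, and the inode number 42 are all hypothetical:

```python
def plan_entry(seen, path, dev, ino, nlink):
    """One step of (dev, ino) bookkeeping, fed synthetic stat values so
    the inode-reuse race can be reproduced deterministically."""
    key = (dev, ino)
    if nlink > 1 and key in seen:
        return ("link", path, seen[key])   # treated as a hard link
    if nlink > 1:
        seen[key] = path                   # remember for later links
    return ("data", path)

seen = {}
# tar sees the first foo, link count 2: archive data, remember inode 42.
print(plan_entry(seen, "/long/path/foo", dev=1, ino=42, nlink=2))
# → ('data', '/long/path/foo')

# Both links are now removed and a brand-new file reuses inode 42.
# tar records it as a link to data it no longer corresponds to.
print(plan_entry(seen, "/other/path/foo", dev=1, ino=42, nlink=2))
# → ('link', '/other/path/foo', '/long/path/foo')
```

The second entry is wrong in one of the two readings, and nothing in the stat data alone tells the archiver which.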
So no: it isn't difficult to fix the glaringly obvious issues. But it still adds complexity which we might be better off without.
Copyright © 2018, Eklektix, Inc.