
splitting the large CVE list in the security tracker

Posted Dec 12, 2018 9:02 UTC (Wed) by mjthayer (guest, #39183)
In reply to: splitting the large CVE list in the security tracker by JoeBuck
Parent article: Large files with Git: LFS and git-annex

> Perhaps it would be possible to use some kind of wrapper so that the file could be maintained as a large file, but git would store it as many pieces. If the file has structure, the idea would be to split it before checkin and reassemble it on checkout.

Taking this further, what about losslessly decompiling certain well-known binary formats? Not sure whether it would work for e.g. PDF. Structured documents could be saved as folders containing files. Would the smudge/clean filters Antoine mentioned work for that?
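As a minimal sketch of the split-before-checkin, reassemble-on-checkout idea (the fixed chunk size and the function names here are my own illustration; a real clean/smudge pair would cut at structural boundaries of the format rather than at fixed offsets):

```python
import os

CHUNK = 4096  # arbitrary piece size for illustration only


def split_blob(data: bytes) -> list[bytes]:
    """Cut a large blob into pieces, to be stored as separate files at check-in."""
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]


def join_blob(pieces: list[bytes]) -> bytes:
    """Reassemble the original blob on checkout."""
    return b"".join(pieces)


# The round trip must be lossless: splitting then joining reproduces
# the file byte-for-byte.
blob = os.urandom(10_000)
assert join_blob(split_blob(blob)) == blob
```

The point of splitting on structural boundaries, rather than fixed offsets as above, is that an edit to one part of the document then dirties only a few pieces, so git's delta storage stays effective.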

On the other hand, I wonder how many binary files that really need versioning lack some accessible source format that could be checked into git instead. I would imagine that e.g. most JPEGs would be successive versions that did not have much in common with each other from a compression point of view. The question is just whether one needs all versions in the repository. And if one does, well, not much to be done.



splitting the large CVE list in the security tracker

Posted Dec 15, 2018 0:59 UTC (Sat) by nix (subscriber, #2304)

The LZMA compression system already does some of this, with a customizable filter system. At the moment, though, the only non-conventional-compression filters are ones for a number of ISAs that absolutize relative jumps to increase the redundancy of executables. :)
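For instance (a sketch; the preset chosen here is arbitrary), the xz tool can be told to run its x86 branch-converting filter ahead of LZMA2 in the filter chain, and the transformation round-trips losslessly even on input that is not machine code:

```shell
# Put the x86 BCJ filter in front of LZMA2, then decompress again;
# the filter chain is lossless, so the original bytes come back intact.
printf 'hello' | xz --x86 --lzma2=preset=6 | xz -dc

# On real executables, converting relative jump targets to absolute
# addresses creates repeated byte sequences, which LZMA2 then compresses
# better than it would the raw binary.
```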


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds