The Compact C Type Format in the GNU toolchain
Posted Aug 7, 2019 19:41 UTC (Wed) by madscientist (subscriber, #16861)
In reply to: The Compact C Type Format in the GNU toolchain by fuhchee
Parent article: The Compact C Type Format in the GNU toolchain
Hm... we must be talking about something different. Let me be more clear.
I compile a program on my build system (I use a sysroot to ensure that it links against sufficiently old system libraries that it can run "anywhere"). I send my program out to run tests on some other system running some random distribution completely different from the one it was built on, which is using a different GNU libc, etc. Maybe Travis, or AWS, or just a local test farm.
It fails and a core is generated. To debug that core I need my program, the debuginfo for my program (if the program is stripped), the core file, and the system libraries from the system it was running on when the core was generated.
I can't see any way that a buildid compiled into my binary can be sufficient to retrieve the runtime system libraries.
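(For reference, the buildid in question is the NT_GNU_BUILD_ID note that the linker embeds in each binary. Below is a minimal, illustrative sketch of reading it with elfutils' libelf; compile with -lelf.)

/* build-id.c: print the GNU build ID of an ELF file.
   Illustrative sketch using elfutils libelf; compile with -lelf. */
#include <elf.h>
#include <err.h>
#include <fcntl.h>
#include <gelf.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        errx(2, "usage: %s <elf-file>", argv[0]);
    if (elf_version(EV_CURRENT) == EV_NONE)
        errx(1, "libelf initialization failed");

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        err(1, "open %s", argv[1]);
    Elf *elf = elf_begin(fd, ELF_C_READ, NULL);
    if (elf == NULL)
        errx(1, "elf_begin: %s", elf_errmsg(-1));

    /* Scan the note sections for an NT_GNU_BUILD_ID note. */
    Elf_Scn *scn = NULL;
    while ((scn = elf_nextscn(elf, scn)) != NULL) {
        GElf_Shdr shdr;
        if (gelf_getshdr(scn, &shdr) == NULL || shdr.sh_type != SHT_NOTE)
            continue;
        Elf_Data *data = elf_getdata(scn, NULL);
        if (data == NULL)
            continue;
        GElf_Nhdr nhdr;
        size_t name_off, desc_off, off = 0, next;
        while ((next = gelf_getnote(data, off, &nhdr,
                                    &name_off, &desc_off)) > 0) {
            if (nhdr.n_type == NT_GNU_BUILD_ID && nhdr.n_namesz == 4
                && memcmp((char *)data->d_buf + name_off, "GNU", 4) == 0) {
                const unsigned char *id =
                    (const unsigned char *)data->d_buf + desc_off;
                for (size_t i = 0; i < nhdr.n_descsz; i++)
                    printf("%02x", id[i]);
                putchar('\n');
            }
            off = next;
        }
    }
    elf_end(elf);
    close(fd);
    return 0;
}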
Posted Aug 7, 2019 19:52 UTC (Wed) by fuhchee (guest, #40059)
Posted Aug 7, 2019 20:22 UTC (Wed) by madscientist (subscriber, #16861)
I think it would also be helpful if the client interface provided separate lookup and download methods rather than forcing both into a single method (there could be a simplified "do both" method as well, if wanted). I can easily imagine situations where we want to know whether a given buildid exists on the server without actually downloading it.
For example, suppose I have a suite of test servers running random environments, and during test runs a core is generated. I want to know whether the program under test and/or the system libraries for that system already exist on the debug server: I just want to look them up, not download them. If they don't exist, perhaps I'll include them along with the core file when I bundle up the build results. If they do exist, I don't need to add them.
Or perhaps I have an automated way for the test system to upload binaries and/or system libraries that aren't already on the debug server (I understand that upload is not in scope for this project and would need some other process), but I don't want to bother uploading things the server already has, so I need to be able to check.
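To make the request concrete, here is one possible shape for such a split interface. This is purely illustrative: none of these declarations are the actual dbgserver client API, and every name below is an assumption.

/* Purely illustrative sketch; these declarations do not exist in
   dbgserver, they only show the proposed lookup/download split. */

/* Lookup only: ask whether the server has an artifact for this
   build ID.  Returns 0 if present, a negative error code if not;
   nothing is downloaded. */
int dbgserver_lookup_debuginfo(const unsigned char *build_id,
                               int build_id_len);

/* Download: fetch the artifact, returning an open file descriptor
   and, if path is non-NULL, the name of the locally cached file. */
int dbgserver_fetch_debuginfo(const unsigned char *build_id,
                              int build_id_len, char **path);

/* Simplified "do both" convenience wrapper. */
int dbgserver_find_debuginfo(const unsigned char *build_id,
                             int build_id_len, char **path);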
A simple program that uses the client interface to look up and/or download files would be very useful, as an example if nothing else (and probably also for people who want to add scripting to systems where it's not so simple to recode them to use the library).
Cheers!
Posted Aug 7, 2019 21:09 UTC (Wed) by fuhchee (guest, #40059)
Yes.
> I think it would also be helpful if the client interface provided separate lookup and download methods
Will consider that ... though there may be better ways to service the needs you outline. Deduplication at upload time should be easy too. Re. optimizing packaging of core dumps ... not sure how much sense that makes. The core dump recipient could consult the same debuginfo servers too; or you could preemptively package all the files. Will think on it more.
> A simple program that uses the client interface to look up and/or download files would be very useful
It just appeared in the repo! We employ only the most talented psychics and keyboard monks.
Posted Aug 7, 2019 21:29 UTC (Wed) by madscientist (subscriber, #16861)
If you mean deduplication by the server, that's probably helpful, but it's a lot of wasted effort to upload tens or hundreds of MB of libraries, binaries, etc., only to have them tossed on the floor as duplicates. Consider a build farm with 200 systems, which are upgraded via apt-get update or whatever at random intervals so they have different system libraries, different program instances, etc.: having every system upload all its files for every core, even though the system libraries might change only once every few weeks, seems like overkill.
> Re. optimizing packaging of core dumps ... not sure how much sense that makes. The core dump recipient could consult the same debuginfo servers too; or you could preemptively package all the files.
For this I wasn't thinking that the dbgserver code would do it; I was thinking about the scripting users wrap around their test clients to bundle up the results of failures so they can be uploaded to a test server for further investigation. Our current scripting already preemptively packages all the files: what I'd like is to be able to detect when some or all of those items aren't needed and skip them (a sketch of such a check follows below), to reduce the size of the uploaded artifacts.
When you're talking about moving content into/out of AWS or other cloud providers, the amount of data sent over the network directly equates to $$ spent and reducing it is always welcome.
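As a sketch of what that check could look like from such bundling scripts: the probe below assumes the server exposes the GET /buildid/&lt;ID&gt;/&lt;type&gt; webapi that the later debuginfod documentation describes (an assumption, not confirmed in this thread), and issues an HTTP HEAD request so nothing is downloaded. Compile with -lcurl.

/* probe.c: ask a debug server whether it has an artifact for a
   build ID, without downloading it.  Illustrative sketch assuming
   the /buildid/<ID>/<type> webapi; compile with -lcurl. */
#include <curl/curl.h>
#include <stdio.h>

/* Returns 1 if the server answers 200 for this build ID. */
static int probe(const char *server, const char *hex_build_id,
                 const char *type /* "executable" or "debuginfo" */)
{
    char url[1024];
    snprintf(url, sizeof url, "%s/buildid/%s/%s",
             server, hex_build_id, type);

    CURL *c = curl_easy_init();
    if (c == NULL)
        return 0;
    curl_easy_setopt(c, CURLOPT_URL, url);
    curl_easy_setopt(c, CURLOPT_NOBODY, 1L);          /* HEAD, no body */
    curl_easy_setopt(c, CURLOPT_FOLLOWLOCATION, 1L);

    long code = 0;
    if (curl_easy_perform(c) == CURLE_OK)
        curl_easy_getinfo(c, CURLINFO_RESPONSE_CODE, &code);
    curl_easy_cleanup(c);
    return code == 200;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <server-url> <hex-build-id>\n", argv[0]);
        return 2;
    }
    curl_global_init(CURL_GLOBAL_DEFAULT);
    /* Bundle the artifact into the upload only if the server lacks it. */
    int present = probe(argv[1], argv[2], "executable");
    puts(present ? "present: skip upload" : "missing: include in bundle");
    curl_global_cleanup();
    return present ? 0 : 1;
}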
Thanks for working on this, it'll be very cool!
The Compact C Type Format in the GNU toolchain
I see. So dbgserver_find_executable() is intended to be used with shared libs as well? Or is this part not quite complete?
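(For illustration, a guess at how such a lookup might be driven for a shared library mapped into a core. Only the function name comes from the thread; the extern declaration's signature is an assumption modeled on the analogous debuginfo lookup.)

/* Illustrative only: the signature below is an assumption; the open
   question above is whether this call also resolves shared libraries
   by build ID. */
#include <stdlib.h>
#include <unistd.h>

extern int dbgserver_find_executable(const unsigned char *build_id,
                                     int build_id_len, char **path);

/* Given the raw build ID of a library mapped into the core, ask the
   server for the matching ELF file. */
static void fetch_mapped_lib(const unsigned char *id, int id_len)
{
    char *path = NULL;
    int fd = dbgserver_find_executable(id, id_len, &path);
    if (fd >= 0) {
        /* path (if set) names the locally cached copy of the file */
        free(path);
        close(fd);
    }
}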