IMHO there is much more than ABI that matters for tracing.
For a trace to be useful, not only must the ABI be compatible, but also the
call trace, the timing, and the global behaviour.
If the tracing infrastructure is really heading for common deployment it will
be used a lot! Not just a few scripts here and there, but many scripts running
in parallel on a production system.
Tracing is not simply about dumping a few tracepoints plus some additional
information to userspace; it is about getting a picture of what's going on.
Just dumping data to userspace, even in a format compatible between different
kernel versions but with ever-changing internals (not C structures but
implementations, features, timing, ...), is NOT going to work very well in the
long run.
BUT from the requirement "compatible ABI, call trace, timing and global
behaviour" it is clear that this is not doable with the current approach, as
it would mean completely locked-down and stable kernel internals. Thus one
approach which might work is to define "virtual tracepoints" for events that
are not dependent on the current kernel implementation.
E.g. for the process/scheduler part, such tracepoints could also be provided
by some future new scheduler implementation. All of these "virtual
tracepoints" may be grouped by section, so one group for processes, one for
block i/o, one for network, ... These "virtual tracepoints" should define a
stable ABI and should never change, neither in their arguments nor in their
behaviour.
Then there may be the normal tracepoints (maybe even called dirty probes *g*)
which ARE implementation dependent. Even those tracepoints are _required_,
because sometimes we have to deal with problems of a particular
implementation (e.g. cfq causing some problems). No stable ABI can ever be
defined to fully cover every aspect of cfq and guarantee compatibility with
every future i/o scheduler, so why bother to define one? There may be a set
of probes in the block i/o group to get some scheduler information, but only
what is common to all schedulers (e.g. latencies of all i/o requests by this
pid, or by process name matching this pattern).
Maybe even provide an abstract custom language to filter information and let
it run on a VM in the kernel itself. The language should be designed in a way
that does not let the user crash the system or hang it in an infinite loop.
All this infrastructure should be usable on a heavily loaded production
system without even the tiniest chance of disrupting operations.
Some future considerations:
- A proper tracing infrastructure might supersede many kernel <-> userspace
interfaces, as they are all special cases of a programmable information
gathering framework within the kernel.