
That newfangled Journal thing

Posted Nov 21, 2011 9:04 UTC (Mon) by nim-nim (subscriber, #34454)
In reply to: That newfangled Journal thing by hjh
Parent article: That newfangled Journal thing

> One obvious separation into a frontend, for receiving, annotating and
> generating log records, integrated into systemd, and a configurable back
> end would make good on the biggest concerns mentioned over and over.

This has been tried with GConf, and it didn't work out well.

The developers wrote (and rewrote) the frontend so it was fast and easy to integrate into apps (their pet peeve), but the user-friendly backend that anyone could use without reading the source code never materialised.

Developers don't want to worry about the backend. If you let them sell you an unfinished backend ("we'll do better later, promise"), you'll never see the replacement before the app dies.



That newfangled Journal thing

Posted Nov 21, 2011 11:56 UTC (Mon) by hjh (guest, #4352)

> The developers wrote(and rewrote) the frontend so it was fast and easy
> to integrate in apps (their pet peeve) and the user-friendly backend that
> anyone could use without reading the source code never materialised.

Not if there is a simple, documented streaming interface to the backends, which could just as well stream into a file. Something with log messages like:

#time: <date> #id: org.kernel.subsys.facility.xyz #...
#uuid: <UUID> #other: <other> #future1: <future1>
_ mesg_line_1
_ mesg_line_2
_ ...
_ mesg_line_n

#time: <date> # .....
...

Such a format should be possible for developers to commit to, as it can be extended with new record elements in the future and does not require committing to a more complicated backend storage design. Binaries like core dumps shouldn't go into the log anyway, only references to a file location. Admins with concerns can opt out of the provided backend and use plain files or a database to store the messages, at the cost of some extra storage space. I even think that using a database for indexing purposes is a better solution than yet another special-purpose logging database whose only point is better compression.
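A backend consuming the stream format sketched above could be quite small. This is a rough illustration in Python; the `#key: value` header convention and the `_` message-line prefix come from the example, while the function name and blank-line record separator are my assumptions:

```python
import io
import re

def parse_records(stream):
    """Yield (fields, message_lines) pairs from the hypothetical stream
    format: '#key: value' headers, '_'-prefixed message lines, and a
    blank line (assumed here) between records."""
    fields, lines = {}, []
    for raw in stream:
        line = raw.rstrip("\n")
        if not line.strip():
            # blank line ends the current record
            if fields or lines:
                yield fields, lines
                fields, lines = {}, []
        elif line.startswith("_"):
            lines.append(line[1:].strip())
        else:
            # header line: one or more '#key: value' elements
            for key, value in re.findall(r"#(\w+):\s*([^#]*)", line):
                fields[key] = value.strip()
    if fields or lines:
        yield fields, lines
```

Because unknown `#key:` elements are simply stored, a backend written this way keeps working when new record elements appear, which is the expandability argument above.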

The best would be to separate the backend into a second process that handles the compression, hashing and indexing. That second process could then easily be located on a different host, and receive the records synchronously or in batch mode.
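The two-process split can be sketched with a socketpair and a thread standing in for the backend process. All names here are hypothetical, and gzip stands in for whatever compression, hashing and indexing a real backend would do:

```python
import gzip
import io
import socket
import threading

def run_backend(conn, sink):
    """Hypothetical backend process: read newline-delimited records from
    the socket and write them, gzip-compressed, into sink."""
    with conn.makefile("rb") as records, \
         gzip.GzipFile(fileobj=sink, mode="wb") as gz:
        for line in records:
            gz.write(line)
    conn.close()

def demo():
    # The socketpair stands in for a local or cross-host transport.
    frontend, backend_end = socket.socketpair()
    sink = io.BytesIO()
    worker = threading.Thread(target=run_backend, args=(backend_end, sink))
    worker.start()
    for i in range(3):
        record = f"#time: t{i} #id: org.example.demo\n_ message {i}\n\n"
        frontend.sendall(record.encode())
    frontend.close()  # EOF tells the backend the stream is finished
    worker.join()
    return sink.getvalue()
```

Because the frontend only ever writes a byte stream to a socket, pointing it at a different host, a file, or a database importer is a deployment choice rather than a code change, which is the point of the separation.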


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds