
Toward more reliable logging

By Jonathan Corbet
April 13, 2012
Messages from the kernel are created by humans, usually using one of the many variants of the printk() function. But, increasingly, those messages are read by machines in the form of log file parsers, automated management systems, and so on. The machines have, for some time, struggled to make sense out of those human-created messages which, often as not, are unpredictable in their organization, lacking important information, and subject to change. So it is not surprising that there has been ongoing interest in adding some structure to kernel log messages; the subject was recently raised by the audience at the Collaboration Summit kernel panel. Even so, almost every attempt to improve kernel logging has failed to make much (if any) headway.

The same fate seemed to be in store for Lennart Poettering and Kay Sievers when they presented their ideas at the 2011 Kernel Summit; in particular, their concept of attaching 128-bit unique ID numbers to each message was met with almost universal disdain. Lennart and Kay have not given up, though. The latest form of their work on the kernel side of the problem can be seen in the structured printk() patch recently posted by Kay.

The patch does a few independent things - a cause for a bit of complaining on the mailing list. The first of these is to change the kernel's internal log buffer from a stream of characters into a series of records. Each message is stored into the buffer with a header containing its length, sequence number, facility number, and priority. In the end, Kay says, the space consumed by messages does not grow; indeed, it may shrink a bit.
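The exact layout is the patch's business, but conceptually each record carries a small header followed by the message text; something like this illustrative structure (field names and sizes are guesses, not the actual layout from Kay's patch):

    struct log_record {
        u16  len;        /* length of the record, header included */
        u64  seq;        /* monotonically increasing sequence number */
        u8   facility;
        u8   level;      /* priority */
        char text[];     /* the message itself */
    };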

The record-oriented internal format has a number of advantages. If messages are being created faster than they can be consumed by user space, it is necessary to overwrite the oldest ones. Current kernels simply write over the character stream, with the result that truncated messages can find their way into the log. The new implementation drops entire messages at once, so the stream, while no longer complete, is not corrupted. The sequence numbers allow any consumer to know that messages have been dropped - and exactly how many it missed. The record-oriented format also enables improved handling of continuation lines at the cost of making the use of the KERN_CONT "priority" mandatory for such lines.

The second change is to allow the addition of a facility number and a "dictionary" containing additional information that, most likely, will be of interest to automated parsers. The dictionary contains "KEY=value" pairs, separated by spaces; these pairs will contain, for example, device and subsystem names to unambiguously identify the device that generated the message. Kernel code that wants to attach a facility number and/or dictionary to a message will use the new function printk_emit() to do so:

    int printk_emit(int facility, int level, const char *dict,
                    size_t dictlen, const char *fmt, ...);

Regular printk() turns into a call to printk_emit() with a facility of zero and a null dict.

Creation of the dictionary string itself is left as an exercise for the caller; it is not something one is likely to see done in most places where printk() is called. In fact, the only full user of printk_emit() in the patch is dev_printk() which uses it to add a dictionary with SUBSYSTEM and DEVICE fields describing the device of interest. If some form of this patch is merged, one can expect this usage pattern to continue; the creation of dictionaries with ancillary information will mostly be done with subsystem-specific print functions.
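As an illustration of that pattern (the helper here is invented for the purpose; kasprintf() is a real kernel utility, but the actual dev_printk() code in the patch differs):

    /* Sketch: build a dictionary, then emit a message with it attached. */
    static int report_device_event(struct device *dev, const char *msg)
    {
        char *dict;
        int ret;

        dict = kasprintf(GFP_KERNEL, "SUBSYSTEM=%s DEVICE=+%s:%s",
                         dev->bus ? dev->bus->name : "",
                         dev->bus ? dev->bus->name : "", dev_name(dev));
        if (!dict)
            return -ENOMEM;
        ret = printk_emit(0, 6 /* info */, dict, strlen(dict),
                          "%s: %s\n", dev_name(dev), msg);
        kfree(dict);
        return ret;
    }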

Finally, the patch changes the appearance of log messages when they reach user space. After some complaints from Linus, the format has evolved to look something like this:

    7,384,6193747;sr 1:0:0:0: Attached scsi CD-ROM sr0
     SUBSYSTEM=scsi
     DEVICE=+scsi:1:0:0:0

The comma-separated fields at the beginning are the message priority (7, in this case), the sequence number (384), and the system time in microseconds since boot. The rest of the line is the message as passed to printk(). If the message includes a dictionary, it appears in the following lines; following the style set in RFC 821, continuation lines begin with white space. The result, it is hoped, is an output format that is simultaneously easy for humans to read and machines to parse.
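Pulling those fields back apart in user space is straightforward; a minimal sketch (not code from the patch):

    /* Parse the "priority,sequence,time;" prefix of an exported record. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[] = "7,384,6193747;sr 1:0:0:0: Attached scsi CD-ROM sr0";
        unsigned int prio;
        unsigned long long seq, usec;
        char *text = strchr(line, ';');

        if (text && sscanf(line, "%u,%llu,%llu;", &prio, &seq, &usec) == 3)
            printf("prio=%u seq=%llu usec=%llu msg=%s\n",
                   prio, seq, usec, text + 1);
        return 0;
    }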

The behavior of the /dev/kmsg device changes somewhat as well. In current kernels, this device is only useful for injecting messages into the kernel log stream. Kay's patch turns it into a device supporting read() and poll() as well, with multiple concurrent readers supported. If messages are overwritten before a particular reader is able to consume them, the next read() call will return an EPIPE error; subsequent reads will start from the next available message. This device thus becomes a source of kernel log data that is easy to work with and that reliably delivers log messages or ensures that the recipient knows something was lost.
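A reader following those semantics might look like this sketch (error handling trimmed; the one-record-per-read() behavior is as described above):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[8192];
        int fd = open("/dev/kmsg", O_RDONLY);
        ssize_t n;

        if (fd < 0)
            return 1;
        for (;;) {
            n = read(fd, buf, sizeof(buf) - 1);  /* one record per read() */
            if (n < 0 && errno == EPIPE) {
                /* messages were overwritten; the next read() resumes
                   at the oldest record still available */
                fprintf(stderr, "lost messages\n");
                continue;
            }
            if (n <= 0)
                break;
            buf[n] = '\0';
            fputs(buf, stdout);
        }
        close(fd);
        return 0;
    }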

Modulo some quibbling over the log format, the response to the patch seems to be mostly positive. The biggest exception is arguably Ingo Molnar, whose suggestion that tracepoints and perf should be used instead does not appear to have received a lot of support. Even Linus seems mostly happy; the absence of the 128-bit unique ID perhaps has a lot to do with that. But, beyond that, a more robust log buffer with sequence numbers has some clear advantages; Linus suggested that, if that part were split out, he might even consider merging it for the 3.4 release. That seems unlikely to happen at this point in the cycle, but it wouldn't be surprising to see something come together for the 3.5 development cycle. If that happens, Linux will still lack a true structured logging mechanism, but it would have something with more structure and reliability than it has now.


Toward more reliable logging

Posted Apr 13, 2012 21:06 UTC (Fri) by slashdot (guest, #22014) [Link] (3 responses)

The dictionary API seems a totally wrong design, because it has redundant information, is incredibly cumbersome to use and thus is not likely to be used by anyone.

MUCH better API:
printk(KERN_EMERG "%[subsys]s: %[dev]s: device caught fire", subsys_name, dev_name);

Then just output the format string and arguments separately instead of doing formatting in the kernel (and of course, emulate the old interface by doing it in the kernel on demand).

For info that shouldn't be in the human-readable output, add a special syntax such as %{subsys}s that doesn't output anything.

Toward more reliable logging

Posted Apr 14, 2012 18:48 UTC (Sat) by smurf (subscriber, #17840) [Link] (2 responses)

This works until you happen to come across a substituted string that happens to contain a colon.
Your proposal is also much more complicated, for no tangible gain AFAICS.

Toward more reliable logging

Posted Apr 15, 2012 0:40 UTC (Sun) by slashdot (guest, #22014) [Link] (1 responses)

You don't parse the colons...

The kernel sees "%[subsys]s" and adds the argument as the value of the "subsys" key, using the same format (or alternatively, just prints the unformatted format string, and userspace does the work).

This would make the API easy to use, compared to the patch which requires horrible code like this:

char *buf = kmalloc(64 + strlen(subsys_name) + strlen(dev_name), GFP_KERNEL);
sprintf(buf, "SUBSYSTEM=%s DEVICE=%s", subsys_name, dev_name);
printk_emit(0, 0 /* KERN_EMERG */, buf, strlen(buf), "%s: %s: device caught fire", subsys_name, dev_name);
kfree(buf);

Compare that to just:
printk(KERN_EMERG "%[subsys]s: %[device]s: device caught fire", subsys_name, dev_name);

It's clear that the API in the patch is totally inferior and that almost nobody will use it except dev_printk.

Toward more reliable logging

Posted Apr 15, 2012 14:56 UTC (Sun) by abacus (guest, #49001) [Link]

Does that mean that you're not familiar with kasprintf()?
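With it, the allocation and formatting above collapse into a single call; a sketch:

char *buf = kasprintf(GFP_KERNEL, "SUBSYSTEM=%s DEVICE=%s",
                      subsys_name, dev_name);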

Toward more reliable logging

Posted Apr 13, 2012 22:00 UTC (Fri) by dlang (guest, #313) [Link] (6 responses)

generating a sequence number will require a global lock around this value, and can lead to cache ping-pong between processors. I thought this type of thing was being frowned upon.

text buffers can (conceptually anyway) be per-cpu and combined only when read.
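(Concretely, the shared state in question would be a single global counter along these lines; a sketch, not the actual patch code:)

static atomic64_t log_seq = ATOMIC64_INIT(0);

/* every message from every CPU bounces this one cache line around */
u64 seq = atomic64_inc_return(&log_seq);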

Toward more reliable logging

Posted Apr 13, 2012 22:12 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

You'll need sync points to write to log buffers anyway.

And if you spend a significant amount of time on locking the log buffer, then you probably have MUCH bigger problems.

Toward more reliable logging

Posted Apr 13, 2012 22:21 UTC (Fri) by slashdot (guest, #22014) [Link] (3 responses)

Well, in theory if you have synchronized TSCs you can write the TSC to a local buffer, and then merge the buffers, generating sequence numbers only for the final log.

While the log merging thread does indeed need to synchronize, it doesn't add latency to the log writers, at the cost of multiplying the memory consumption by the number of CPUs.

But this is an implementation detail, and isn't likely to be worth bothering since logging is supposed to be rare.
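(A toy user-space illustration of that merge step, with a plain counter standing in for the TSC:)

#include <stdio.h>
#include <stdint.h>

struct rec { uint64_t tsc; const char *text; };

int main(void)
{
    /* two "per-CPU" buffers, each already ordered locally */
    struct rec cpu0[] = { { 10, "a" }, { 30, "c" } };
    struct rec cpu1[] = { { 20, "b" }, { 40, "d" } };
    size_t i = 0, j = 0;
    unsigned long long seq = 0;

    while (i < 2 || j < 2) {
        struct rec *r;
        if (j >= 2 || (i < 2 && cpu0[i].tsc <= cpu1[j].tsc))
            r = &cpu0[i++];
        else
            r = &cpu1[j++];
        /* sequence numbers are assigned only here, in the merger */
        printf("%llu,%llu;%s\n", seq++,
               (unsigned long long)r->tsc, r->text);
    }
    return 0;
}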

Toward more reliable logging

Posted Apr 14, 2012 15:08 UTC (Sat) by Ben_P (guest, #74247) [Link] (2 responses)

Using the TSC seems like it would cause some serious issues if the IDs need to increase with time. It is pretty common for some cores to get clocked down or completely turned off in the course of normal operation.

Toward more reliable logging

Posted Apr 17, 2012 5:25 UTC (Tue) by Jannes (subscriber, #80396) [Link] (1 responses)

A modern TSC keeps counting, at the same constant rate.

Toward more reliable logging

Posted Apr 20, 2012 12:37 UTC (Fri) by jengelh (guest, #33263) [Link]

Which not everybody has.

Toward more reliable logging

Posted Apr 15, 2012 0:35 UTC (Sun) by jzbiciak (guest, #5246) [Link]

Something tells me that the amount of cacheline ping-ponging this would cause wouldn't be measurable in the least. The context switch for the logger to receive the message ought to be at least an order of magnitude more expensive.

They should be paying attention to the lumberjack project

Posted Apr 13, 2012 22:09 UTC (Fri) by dlang (guest, #313) [Link] (59 responses)

When considering the format for the output, they should be paying attention to the lumberjack project, which has developers from syslog-ng, rsyslog, nxlog, RedHat representatives and others (including me) discussing how to standardize structured logs across the board.

It's disappointing that Lennart and Kay are choosing not to participate in these discussions and going their own way.

So far, Lumberjack seems to be standardizing on a JSON representation as the least-common-denominator format for passing structured logs around.

They should be paying attention to the lumberjack project

Posted Apr 13, 2012 23:33 UTC (Fri) by ebiederm (subscriber, #35028) [Link] (6 responses)

What is wrong with the structured format in RFC5424?

Call me dense, but it looks to me like the problem of structured logging has already been solved and standardized.

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 1:08 UTC (Sat) by dlang (guest, #313) [Link] (4 responses)

RFC5424 lets you do flat name/value pairs, but it turns out that it's extremely valuable to be able to have more structure than that; thus the move to JSON.

They should be paying attention to the lumberjack project

Posted Apr 16, 2012 6:21 UTC (Mon) by quotemstr (subscriber, #45331) [Link] (3 responses)

> RFC5424 lets you do flat name/value pairs, but it turns out that it's extremely valuable to be able to have more structure than that.

Having more structure than that adds too much complexity to be worthwhile. You can approximate structure using keys.with.dots or numbers --- e.g., filename1=foo filename2=bar. Hierarchy adds far more complexity to indexing and browsing, which are very common cases, than it saves for the very rare log message with repeated units.

Can you give me an example of a situation in which hierarchical logging is clearly better than key-value pairs?

They should be paying attention to the lumberjack project

Posted Apr 16, 2012 6:30 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Flattening hierarchy is extremely easy. So why not do it if you want?

And I don't really see how hierarchical data makes logging more complex. If you don't care about it, then simply ignore it.

They should be paying attention to the lumberjack project

Posted Apr 17, 2012 16:34 UTC (Tue) by k8to (guest, #15413) [Link] (1 responses)

It's really hard to describe transactions with simple keyval pairs.

They should be paying attention to the lumberjack project

Posted Apr 17, 2012 16:36 UTC (Tue) by quotemstr (subscriber, #45331) [Link]

Can you give me a concrete example?

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 23:27 UTC (Sat) by dlang (guest, #313) [Link]

For what it's worth, Rainer Gerhards (the author of RFC5424) is a participant in both the lumberjack project and the CEE standardization project (http://cee.mitre.org/, which is working to standardize log messages).

They should be paying attention to the lumberjack project

Posted Apr 13, 2012 23:52 UTC (Fri) by aliguori (subscriber, #30636) [Link] (46 responses)

JSON cannot reliably represent integers. It's a poor choice for something that will log large integers often.

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 1:16 UTC (Sat) by dlang (guest, #313) [Link] (5 responses)

I may be missing something, but as I understand it, JSON does let you represent integers; it doesn't force all numbers to be floating point.

I raised the issue of very large integers (>32 bits) and the feeling is that they are rare enough that we can live with them being treated as strings.

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 4:20 UTC (Sat) by nybble41 (subscriber, #55106) [Link] (3 responses)

Even if it did force all numbers to be floating-point, the range of integers which can be represented precisely in double-precision (eight-byte) IEEE floating point format is a superset of both signed and unsigned 32-bit fixed-point notation.

Floating-point has a certain reputation, but much of it is undeserved, at least for plain integer math. Lua, for example, treats all numbers as double-precision floating point by default on platforms with an FPU; there is no separate integer type. You only run into problems when you exceed 52 significant bits. (Note that any fraction with a factor larger than two in the denominator requires infinite significant bits, which is obviously much more than 52. This is where most of the reputation originates. However, it doesn't affect integers.)
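(A quick demonstration of where exactness ends:)

#include <stdio.h>

int main(void)
{
    double d = 9007199254740992.0;  /* 2^53 */
    printf("%.0f\n", d - 1.0);      /* 9007199254740991: exact */
    printf("%.0f\n", d);            /* 9007199254740992: exact */
    printf("%.0f\n", d + 1.0);      /* 9007199254740992 again: 2^53+1 is not representable */
    return 0;
}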

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 10:58 UTC (Sat) by liljencrantz (guest, #28458) [Link]

JavaScript does the same.

They should be paying attention to the lumberjack project

Posted May 30, 2012 21:38 UTC (Wed) by scientes (guest, #83068) [Link] (1 responses)

> (Note that any fraction with a factor larger than two in the denominator requires infinite significant bits, which is obviously much more than 52. This is where most of the reputation originates. However, it doesn't affect integers.)

Not really. It's rather any fraction with a denominator that is not in the set 2^n (or, with decimal floating point, 2^n * 5^m).

They should be paying attention to the lumberjack project

Posted May 31, 2012 0:19 UTC (Thu) by nybble41 (subscriber, #55106) [Link]

>> (Note that any fraction with a factor larger than two in the denominator requires infinite significant bits, which is obviously much more than 52. This is where most of the reputation originates. However, it doesn't affect integers.)

> Not really. It's rather any fraction with a denominator that is not in the set 2^n (or, with decimal floating point, 2^n * 5^m).

I was referring to the prime factors; any number of the form 2^n * 5^m with naturals n and m (!= 0) has prime factors 2 and 5. A number with a prime factor of 3 or 7, for example, would not terminate in either binary or decimal. However, your description is equivalent.

They should be paying attention to the lumberjack project

Posted Apr 27, 2012 15:42 UTC (Fri) by mirabilos (subscriber, #84359) [Link]

JSON definitely does not put any upper or lower boundaries on integers, so the limits of the language or parser/encoder that operate on it apply.

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 1:25 UTC (Sat) by dlang (guest, #313) [Link] (39 responses)

Do you have a suggestion for a serialization format to use that would be better?

among the selection criteria are (not in order)

1. size of the resulting serialized string (XML struggles here)

2. availability of libraries in all languages to deal with the serialization format.

3. ability to be transported through traditional logging mechanisms (syslog for example)

4. ability for existing logging tools that can deal with ad-hoc formats to be able to interact with the serialized data

5. human readability of the resulting string

6. ability to represent hierarchical structures.

7. avoiding people saying 'why did you use X' (JSON obviously fails here as well, but it does have a fairly large mindshare)

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 2:46 UTC (Sat) by slashdot (guest, #22014) [Link] (7 responses)

Are hierarchical structures really needed in logging? Why?

How do you plan to index and search a set of records consisting of hierarchical structures?

A list of key/value pairs seems much better, and can support indexing and SQL queries trivially.

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 23:21 UTC (Sat) by dlang (guest, #313) [Link] (5 responses)

> Are hierarchical structures really needed in logging? Why?

I was not involved with the discussions, but a few things come to mind

1. the data you want to log may include structures

2. you may need to log multiple data items of the same type (like a list of filenames)

you can always choose to flatten a hierarchical structure if you need to, but it's much harder to re-create the structure after you have flattened it

not all storage is SQL.

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 23:35 UTC (Sat) by man_ls (guest, #15091) [Link] (2 responses)

Nice! Now all we need is a MongoDB database inside the kernel. And it could be used for all kinds of things: device trees, file structures, even memory management come to mind.

Just joking. I think I get the least-common-denominator motivation. But isn't getting JSON into any kind of logging facility going to raise immediate "designed-by-committee" fears among kernel developers, and therefore lead them to ignore the lumberjack project?

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 0:01 UTC (Sun) by dlang (guest, #313) [Link] (1 responses)

remember that what we are talking about is not anything consumed inside the kernel, just the format of a one-way feed of output from the kernel to user space

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 1:01 UTC (Sun) by man_ls (guest, #15091) [Link]

OK, that makes sense. JSON is trivial to generate.

They should be paying attention to the lumberjack project

Posted Apr 17, 2012 16:38 UTC (Tue) by k8to (guest, #15413) [Link] (1 responses)

To be fair, a list of say filenames isn't hard to do in kv pairs.

filename:foo filename:bar filename:baz

Sure, that's more trouble to parse than assuming you can't have repeats, but it's not much work.

However, it starts getting tedious if you need to say something like:

severity:fatal message="corruption in filesystem regarding following items" filename:foo inode:3 filename:bar inode:6 filename:baz inode:8

At this point you really want more structure, or else the consuming end has to intuit to group things.

They should be paying attention to the lumberjack project

Posted Apr 21, 2012 8:19 UTC (Sat) by neilbrown (subscriber, #359) [Link]

.... filename-1:foo inode-1:3 filename-2:bar inode-2:6 filename-3:baz inode-3:8

Explicit structure embedded in the names of name:value pairs.

Yes, it's ugly. But it's simple and if most cases don't need any real structure, then the ugliness will hardly be noticed.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 1:56 UTC (Sun) by dlang (guest, #313) [Link]

to some extent, hierarchical structures can be simulated by item names

for example, a firewall log message needs to have the following items in it

the device generating the log message
the device that is the source of the traffic
the device that is the destination of the traffic

each of these device definitions may include more than one piece of information (hostname, FQDN, IP address, port number)

you could have

loghost: hostname, logip: 1.1.1.1, sourcehost: hostname2, sourceIP: 2.2.2.2, sourceport: 1234, destinationhost: hostname3, destinationIP: 3.3.3.3, destinationport:1234

or you could have
logsource { name: hostname, ip: 1.1.1.1 }, source { name: hostname2, ip: 2.2.2.2, port: 1234 }, destination { name: hostname3, ip: 3.3.3.3, port: 1234 }

personally, I find the second arrangement better and less likely to get confused by people adding extra information to a particular component

as another example, think of all the contexts in which a userid can appear, including which user the application writing the log message is running as. Should all these different possible contexts use a different tag? Or are we better off using the same tag everywhere and using the hierarchy information to determine the context?

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 3:20 UTC (Sat) by drag (guest, #31333) [Link] (3 responses)

This may come off as a bit abusive and is probably full of fail, but what I'd like to see in a log format is null-delimited strings.

And it would look something like this:
log_version\0 machine_ident\0 machine_fqdn\0 timestamp\0 service_ident\0 service_string\0 process_id\0 severity\0 data\0 checksum\0\0\n

something simple like that. The *ident fields are UUID and are completely arbitrary.

The 'machine_ident' would be generated when the syslog-like daemon first starts up like ssh keys are. When the logging daemon connects to a service or starts a new log file it just pukes out a log entry with various useful system identification strings that can be easily picked up by any logging parsing software. Like how browsers do when they connect to a web server. That way it makes it easy to identify the machine by UUID. As long as you can read the first log entry in any file or any time it connects to a network logging daemon then you can figure out what it is pretty easily.

Timestamps are just x.xxxx seconds from the Unix epoch, GMT. They can be as fine-grained a time stamp as the application warrants and the system can deliver.

Severity level is similar to how Debian does their apt-pinning. Just a number, like 0-1000. And that number maps to different severity levels:
0-250 - debug
250-500 - info
500-750 - warning
750-1000 - error

That way application developers have a way of saying "well, this error is more of an error than that error", which seems important.
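(In code, the bucketing would be trivial; an illustrative sketch:)

static const char *severity_name(unsigned int sev)
{
    if (sev <= 250) return "debug";
    if (sev <= 500) return "info";
    if (sev <= 750) return "warning";
    return "error";
}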

The actual data field can be whatever you want. Any data as long as no nulls. Probably more structuring can be layered on later, but this makes it easy to incorporate legacy logging data into this format. Just take the string as delivered by the application/server, stuff the entire thing into <data> and wrap it in those other fields as well as can be done. <data> being JSON would be fine by me and the fact that it's JSON or whatever would be recorded as part of the version string.

I know something like that would make my job a lot easier. :)

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 23:23 UTC (Sat) by dlang (guest, #313) [Link] (2 responses)

The biggest problem with your approach is that it requires throwing away all existing logging and log-processing tools. Since you aren't going to get everyone to buy into the new scheme at once and modify every program in the world to use it, the probable result is that nothing will happen instead.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 3:38 UTC (Sun) by drag (guest, #31333) [Link] (1 responses)

I guess so.

I figured it would be the logging daemon's job to put in all the fields as well as it can, but shovel the log from the application into the 'data' section. If it leaves the 'severity' section empty or whatever, then that would be legal. It's a 'best effort' type thing rather than requiring strict compliance.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 4:03 UTC (Sun) by dlang (guest, #313) [Link]

The idea here (lumberjack and CEE) is to support and encourage the applications (including the kernel) to create structured logs so that the data that you are referring to as the 'data' section is easier (and thus more reliable) to deal with.

the first step is to have the normal message just stuck in the 'data' section, and the libumberlog library ( http://algernon.github.com/libumberlog/umberlog.html ) is designed to do just that. It can be LD_PRELOADed for any application, and it modifies the syslog() call to log a structured log (a JSON structure with added metadata). It then allows the application programmer to change syslog() calls to ul_syslog() calls and add additional name-value pairs.

the next step is to create a more complete logging API that allows the application programmer to more easily create structured logs. Debate over how that could/should work is ongoing.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 0:45 UTC (Sun) by jzbiciak (guest, #5246) [Link] (18 responses)

YAML, perhaps? I guess YAML is a superset of JSON anyway these days.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 1:08 UTC (Sun) by dlang (guest, #313) [Link] (16 responses)

what is it about YAML that would make it preferable? What can you encode in YAML that you can't do in JSON? (remember we are talking about logs here, not arbitrary documents) What can you encode in YAML better (more clearly, more simply) than you can in JSON?

Doing a quick Google search for YAML vs JSON, I am not seeing anything that strikes me as being drastically better about YAML, and the fact that the JSON spec is considerably simpler seems to be an advantage.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 1:27 UTC (Sun) by jzbiciak (guest, #5246) [Link] (15 responses)

Nothing in particular. It was the only other format that seemed to fit all your selection criteria. I wasn't too surprised to find out that YAML 1.2 is a strict superset of JSON. JSON probably wins on simplicity.

Some additional things in YAML that might be helpful that I don't think are in JSON (but could be mistaken): Explicit typecasting, and the ability to have internal cross-references.

I'm not entirely certain internal cross-references would be useful, although maybe they're useful to refer back to a component of an earlier log message. (Flip side: There's value in redundancy in logs, especially when records go missing.) Explicit typecasting might be useful if there's ever a case where a given value looks like a number but really ought to be treated as a string.

All that said, those are dubious benefits, and JSON probably wins on simplicity. I only mentioned YAML because it was the only other format I could think of offhand that survives the selection criteria fairly well.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 1:46 UTC (Sun) by dlang (guest, #313) [Link] (14 responses)

Given that filtering criteria may mean that prior log entries are not available, references to them can't count on working, and I don't see much likelihood of them being useful within a single log message (a full document yes, a single log message no)

My query about different serialization protocols was serious. I don't pretend that I know all of them and the advantages of each, so it is very possible that there is something out there that's better.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 2:41 UTC (Sun) by jzbiciak (guest, #5246) [Link] (13 responses)

The fact that very few standard formats survive the selection criteria illustrates the challenge, too. Good luck!

I mentioned YAML because I've found it very lightweight for the things I've used it for, and it is very human-friendly. I didn't realize that JSON is a proper subset of YAML until I looked up some comparisons. So, JSON wins similarly in the human-friendly department, and its simpler spec makes it easier to adopt.

Simple Declarative Language looks interesting. It appears to be a modest step up from JSON, adding explicit types to containers and the ability to add attributes to the type. Sure, you can capture that in a JSON serialization by adding explicit fields, but making it a first class aspect of the syntax has a certain economy to it. I hadn't heard of SDL before today. It looks interesting. Unfortunately, the list of languages that have SDL APIs seems out of line with my usual requirements of C and Perl.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 3:00 UTC (Sun) by jzbiciak (guest, #5246) [Link] (5 responses)

Expanding on my SDL comment... You could easily imagine capturing many repeated aspects of a log entry in the entry type and attributes, rather than fields within the entry record itself. eg:

Example record from my /var/log/messages:

Apr  8 14:23:44 elysium kernel: [9234662.980516] r8169 0000:03:00.0: eth0: link up

One possible way to split between attributes and keys within the container:

entry date=1333913564 host=elysium source=kernel level=info timestamp=9234662.980516 \
     { message="r8169 0000:03:00.0: eth0: link up" }

Or something...

Honestly, I go back and forth between the value of attributed types vs. just embedding the information as fields within the structure. What color do I want my bikeshed today?

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 6:33 UTC (Sun) by lindi (subscriber, #53135) [Link] (4 responses)

You'd also want to have a way to extract that "eth0" in a programmatic way.

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 21:18 UTC (Fri) by giraffedata (guest, #1954) [Link] (3 responses)

You'd also want to extract the "0000:03:00.0", "r8169" (device driver name), and possibly "up".

And the date, host, and source values aren't from the kernel, so they wouldn't be in there.

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 21:29 UTC (Fri) by dlang (guest, #313) [Link] (2 responses)

the information may not be from the kernel, but by the time anything other than the log transport sees the data, it will need to be there (and arguably the timestamp should be put there by the kernel)

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 22:28 UTC (Fri) by giraffedata (guest, #1954) [Link] (1 responses)

Aren't we talking about in what form the kernel should produce log messages?

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 22:30 UTC (Fri) by dlang (guest, #313) [Link]

I had wandered a bit from that, but yes, that's where we started.

And the kernel should put the timestamp on the messages it generates, you don't know how long it's going to be before some other process picks them up and could add a timestamp to them.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 3:03 UTC (Sun) by dlang (guest, #313) [Link] (6 responses)

I agree about SDL, but whatever is used needs to be supported in every language (or be something that's easy enough to create manually)

Keep in mind that JSON is just the least common denominator, the 'everything must support this' option. It is expected that most logging libraries, and the logging transports (i.e. syslog daemons) will support additional options. At the moment the other options expected for later are

BSON (more efficient transport with type information)

XML (because someone will want it, it's hard to do structured stuff and ignore XML ;-)

but others can be added as/if needed.

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 7:54 UTC (Fri) by man_ls (guest, #15091) [Link] (5 responses)

What about protocol buffers? Have they fallen out of grace already?

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 15:11 UTC (Fri) by dlang (guest, #313) [Link] (4 responses)

protocol buffers are good for some things, but they serialize into a binary format, which is not compatible with existing logging tools.

Also (as I understand them) protocol buffers require absolute agreement between the sender and the receiver on the data structures to be passed. This is hard to do for logging libraries that will be written in many different languages, multiple log transport tools, and the vast array of log analysis/storage tools.

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 17:23 UTC (Fri) by smurf (subscriber, #17840) [Link] (3 responses)

No, they're backwards-compatible. From the documentation:

>> You can add new fields to your message formats without
>> breaking backwards-compatibility; old binaries simply
>> ignore the new field when parsing

https://developers.google.com/protocol-buffers/docs/overview

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 17:31 UTC (Fri) by dlang (guest, #313) [Link] (2 responses)

Ok, but in any case, they won't work with the existing (text based) logging tools.

Yes, any change to the message being logged 'breaks' existing tools that depend on exact matches of known log messages, but as long as the new log format is still text-based, all the existing tools can be tweaked (new regex rules) to handle the log messages.

If you switch to something other than text streams for your messages, you will require that all logging tools be rewritten to handle your new format. Since this is unlikely to happen, there is a very large emphasis on being compatible with the existing tools.

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 21:10 UTC (Fri) by man_ls (guest, #15091) [Link] (1 responses)

Protocol buffers is a binary protocol, like BSON. If binary formats are being considered (as I deduced from your message) then protocol buffers should be considered. (I myself think that BSON has a much brighter future, but I was just wondering.)

They should be paying attention to the lumberjack project

Posted Apr 20, 2012 21:37 UTC (Fri) by dlang (guest, #313) [Link]

nxlog already has a binary transport, but it can only be used from nxlog to nxlog. There is thought of having a binary transport, but that's still a bit out, as the discussion is still focusing on the right way to generate the data and what tags are going to be used.

CEE is supposed to be releasing a 1.0beta spec, and the initial fields planned are documented at https://fedorahosted.org/lumberjack/wiki/FieldList#Unifie...

for the API, the initial focus is on trying to get a good C API that can replace the syslog() call. RedHat has a largish project that they've been calling ELAPI (Enhanced Logging API https://fedorahosted.org/ELAPI/wiki/WikiPage/Architecture ) that they are now realizing largely overlaps with the capabilities of the modern syslog daemons, so they are going through the code they wrote for that and ripping out lots of it to keep only what's needed. There is some question of whether the result is still in the 'sledgehammer to swat a fly' category, and so you have libumberlog working from the other direction.

NOT YAML

Posted Apr 15, 2012 2:54 UTC (Sun) by dskoll (subscriber, #1630) [Link]

No, not YAML! YAML used to be a nice simple language, but the spec has become bloated and byzantine. It includes features that are completely inappropriate for structured logging like the ability to represent self-referential data structures.

I recently went through a large amount of pain converting our (commercial) product to use JSON for serialization rather than YAML just because YAML was becoming such a PITA.

Compare the one-page JSON spec with the monstrous YAML 1.2 spec.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 0:54 UTC (Sun) by aliguori (subscriber, #30636) [Link] (4 responses)

I really don't have a suggestion. The closest thing I can think of is S-Expressions or YAML.

JSON is woefully underspecified. There is no guidance in the spec about how implementations should handle numerics. It defers to ECMAScript for any specification ambiguity, and ECMAScript is pretty clear that all numbers are represented as IEEE double-precision floating point numbers.

AFAICT, a conforming implementation must truncate any number > 2^52 or at least treat two numbers as equivalent if they truncate to the same number.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 1:36 UTC (Sun) by dlang (guest, #313) [Link] (1 responses)

what is the numeric range in YAML?

2^52 is a rather large number; how likely are you to need larger numbers in log messages? Are those cases distinct enough that you could just use the 'string' type for the number?

S-Expressions seem even more underspecified than JSON, from what I see from a quick Google search.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 2:10 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

2^64-1 is quite likely to appear at least in some log entries.

They should be paying attention to the lumberjack project

Posted Apr 15, 2012 3:02 UTC (Sun) by dskoll (subscriber, #1630) [Link]

According to the JSON specification, the following is a perfectly valid number:

18446744073709551615 (AKA 2^64-1)

It may be that some (most?) JSON libraries can't handle such a large number correctly, but it's valid according to the JSON grammar.

They should be paying attention to the lumberjack project

Posted Apr 27, 2012 15:45 UTC (Fri) by mirabilos (subscriber, #84359) [Link]

Only floats. Integers are not limited.

They should be paying attention to the lumberjack project

Posted Apr 16, 2012 16:48 UTC (Mon) by jhoblitt (subscriber, #77733) [Link] (2 responses)

The use of JSON (and/or YAML and/or a serialization format that handles nested data types) is absolutely the right approach, as "sooner or later" someone will want to log data that doesn't fit well into $whatever_new_format_we_just_came_up_with.

Why wasn't the author of logstash included in the lumberjack discussions?

They should be paying attention to the lumberjack project

Posted Apr 16, 2012 18:20 UTC (Mon) by dlang (guest, #313) [Link]

> Why wasn't the author of logstash included in the lumberjack discussions?

I don't know. I found out about the Lumberjack discussions via announcements (I don't remember exactly where; I think I remember seeing an announcement here on LWN). It's an open list: lumberjack-developers@lists.fedorahosted.org

RedHat had some sort of meeting back in February where they had several of the syslog developers present, and the lumberjack project was announced right about the beginning of March.

the volume on the mailing list varies greatly, we had several days of 40-50 messages/day and then there have been no messages for over a week now.

They should be paying attention to the lumberjack project

Posted Apr 26, 2012 12:27 UTC (Thu) by b0ti (guest, #81465) [Link]

> Why wasn't the author of logstash included in the lumberjack discussions?
He was.

They should be paying attention to the lumberjack project

Posted Apr 14, 2012 23:38 UTC (Sat) by dlang (guest, #313) [Link] (2 responses)

There appear to be a lot of people working on 'improved logging' nowadays who are not aware of what is currently available.

Besides the repeated statement that syslog does not support structured logging, there are a lot of features that people list and say "since syslog can't do this, we have to create a new logging infrastructure". Every list along these lines that I have seen has resulted in answers of "modern syslog daemons have supported items 1,2,3,4,5 for years, you can do #2 this way, we do something very similar to #6 already so it'll be implemented this weekend, and #7 is already on the roadmap but will take a little more time to get finished"

There are three modern syslog daemons that I am aware of, syslog-ng, rsyslog, and nxlog, all of these can do a lot of fancy stuff with logs (filtering, sending to multiple destinations, reformatting, inserting into databases, etc)

Anyone who is looking at doing a lot with logging should really look at these and see what they will do before going off and creating a new logging infrastructure.

They should be paying attention to the lumberjack project

Posted Apr 17, 2012 16:46 UTC (Tue) by k8to (guest, #15413) [Link] (1 responses)

Well, this is more about generating useful log messages, which is the application side. Handling those messages reasonably is the featureset you're referring to.

Although I think handling/transferring is really just the basic work, and where we are headed is to searching/reporting/alerting. Good logs and a good infrastructure that you feed them to can give you a very large amount of knowledge about your systems relatively easily.

They should be paying attention to the lumberjack project

Posted Apr 17, 2012 19:00 UTC (Tue) by dlang (guest, #313) [Link]

Things work better if everyone generating structured messages uses the same structure; otherwise you end up having to use heuristics to guess which structure the incoming log is in, and that path leads us right back to the current mess where every app (and frequently every log message in every app) defines its own 'structure'.

They should be paying attention to the lumberjack project

Posted May 25, 2012 21:55 UTC (Fri) by alan (guest, #4018) [Link] (1 responses)

Lennart works for RedHat, so I would expect that he would be well informed/represented in those discussions.

They should be paying attention to the lumberjack project

Posted May 25, 2012 22:34 UTC (Fri) by dlang (guest, #313) [Link]

remember that RedHat is now a fairly large organization, so there is room for it to be fragmented.

The fact is that they were not involved with the discussions. They may have been informed about them, but for the people who claim to have the answer to all logging issues to be unavailable to discuss those issues with the people who have been doing logging for years is not good (especially since so many of their claims that the new logging stuff is needed because syslog is so primitive and can't do X have been shown to be wrong).

