What's coming in glibc 2.10: XML
What's coming in glibc 2.10: XML
Posted Apr 21, 2009 4:24 UTC (Tue) by bvdm (guest, #42755)In reply to: What's coming in glibc 2.10: XML by jordanb
Parent article: What's coming in glibc 2.10
Posted Apr 21, 2009 21:56 UTC (Tue)
by jordanb (guest, #45668)
[Link] (5 responses)
The JSON spec is terse but it also rigorously defines the syntax of a very simple data serialization format. There are no omissions that make it incomplete. This is the advantage of JSON for simple data serialization tasks.
Posted Apr 22, 2009 4:15 UTC (Wed)
by dlang (guest, #313)
[Link] (4 responses)
Posted Apr 22, 2009 16:09 UTC (Wed)
by jordanb (guest, #45668)
[Link] (3 responses)
"JavaScript Object Notation (JSON) is a text format for the
The standard is so small because it only has six data
Another advantage of JSON is that if you have simple mostly
[{
Posted Apr 22, 2009 16:11 UTC (Wed)
by jordanb (guest, #45668)
[Link]
Posted Apr 22, 2009 18:51 UTC (Wed)
by bronson (subscriber, #4806)
[Link] (1 responses)
"hlt_bug" : false,
Even so, I'd still much rather work with JSON than XML!
Posted Apr 22, 2009 18:56 UTC (Wed)
by jordanb (guest, #45668)
[Link]
What's coming in glibc 2.10: XML
What's coming in glibc 2.10: XML
What's coming in glibc 2.10: XML
market it for web development as a way to send data out to your
client-side JS by simply serializing JSON objects into <script>
tags in the webpage. That's the only reason for the focus on
Javascript -- not to rely on the ECMAScript standard. In fact the
only reference to ECMAScript is informational:
serialization of structured data. It is derived from the object
literals of JavaScript, as defined in the ECMAScript Programming
Language Standard, Third Edition [ECMA]."
types ('object' (essentially associative array), list, number,
boolean, string, and 'null'). It has no ability to do node
attributes and is only tree-based to the extent that objects and
arrays can contain more of them. It is certianly not a rival to
XML for dealing with large scale or very complicated or nuanced
data. But it is an excellent alternative when you have small
amounts of structured data that you wish to serialize.
tabular datasets they can be serialized (with judicious use of
whitespace) in a manner that's both machine and
human-readable. For fun and as an example I decided to 'encode'
my /proc/cpuinfo in JSON:
processor : 0,
vendor_id : "GenuineIntel",
cpu_family : "6",
model : "8",
model_name : "Pentium III (Coppermine)",
stepping : 1,
cpu_MHz : 498.283,
cache_size : [ 262144, "256 KB" ],
fdiv_bug : false,
hlt_bug : false,
f00f_bug : false,
coma_bug : false,
fpu : true,
fpu_exception : true,
cpuid_level : 2,
wp : true,
flags : [ "fpu", "vme", "de", "pse", "tsc", "msr", "pae", "mce", "cx8",
"sep", "mtrr", "pge", "mca", "cmov", "pat", "pse36", "mmx", "fxsr", "sse", "up" ],
bogomips : 997.69,
clflush_size : 32
}]
What's coming in glibc 2.10: XML
What's coming in glibc 2.10: XML
"f00f_bug" : false,
"coma_bug" : false,
What's coming in glibc 2.10: XML