Insecurity and Python pickles
Posted Mar 13, 2024 22:05 UTC (Wed) by Wol (subscriber, #4433)
In reply to: Insecurity and Python pickles by NYKevin
Parent article: Insecurity and Python pickles
Maybe because a two-dimensional table is unusual, unnatural, and constricting?
Is SQLite capable of storing a 4th-normal-form structure in a single row?
Cheers,
Wol
Posted Mar 13, 2024 22:31 UTC (Wed) by NYKevin (subscriber, #129325) [Link] (7 responses)
Posted Mar 14, 2024 1:09 UTC (Thu) by intelfx (subscriber, #130118) [Link] (6 responses)
I don't quite see how any of this is useful in the context of a data _interchange_ file format. All of these files are written exactly once and then read or distributed. If something happens while writing such a file, the partial result is simply discarded and recomputed, because it has no meaning.
Posted Mar 14, 2024 3:55 UTC (Thu) by NYKevin (subscriber, #129325) [Link] (5 responses)
With SQLite: import sqlite3, then write a few lines of SQL. Done.
Without SQLite: You have to write out this JSON stuff by hand, make sure your format is unambiguous, parse it back in, etc., and probably you also want to write tests for all of that functionality.
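A minimal sketch of the SQLite side of that, for concreteness (the "runs" table and its columns are invented for the example):

    import sqlite3

    # Open (or create) the results file; everything below is stock sqlite3 usage.
    con = sqlite3.connect("results.db")
    con.execute("CREATE TABLE IF NOT EXISTS runs (name TEXT, temperature REAL, energy REAL)")

    # Parameterized insert: no hand-rolled escaping, no format ambiguity.
    con.execute("INSERT INTO runs VALUES (?, ?, ?)", ("run-1", 300.0, -1.234))
    con.commit()

    # Reading it back is a query, not a custom parser.
    for row in con.execute("SELECT name, temperature, energy FROM runs"):
        print(row)
    con.close()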
Posted Mar 14, 2024 3:58 UTC (Thu) by NYKevin (subscriber, #129325) [Link]
Posted Mar 14, 2024 7:36 UTC (Thu) by Wol (subscriber, #4433) [Link]
You're assuming your JSON doesn't have a schema/definition.
There's a whole bunch of JSON-like stuff (XML/DTD, Pick/MultiValue) where having a schema is optional but enforceable.
If you *declare* that JSON/XML/MV etc without a schema is broken, then all this stuff can be automated extremely easily.
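As a rough sketch of what "no schema means broken" can look like on the JSON side, using the third-party jsonschema package (the schema and document here are made up for illustration):

    from jsonschema import validate, ValidationError  # pip install jsonschema

    # Hypothetical schema: a document must be an object with a numeric "energy"
    # field; "tags", if present, must be a list of strings.
    schema = {
        "type": "object",
        "properties": {
            "energy": {"type": "number"},
            "tags": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["energy"],
    }

    doc = {"energy": -1.234, "tags": ["baseline"]}
    try:
        validate(instance=doc, schema=schema)  # raises if the document doesn't match
    except ValidationError as err:
        print("rejecting input that doesn't satisfy the schema:", err.message)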
Cheers,
Wol
Posted Mar 14, 2024 9:14 UTC (Thu) by atnot (subscriber, #124910) [Link] (2 responses)
That's not quite it. You first need to laboriously map all of the objects you have onto an SQL model. Then you need to learn about prepared statements and so on, if you don't already know all of this stuff, which the average scientist doesn't. That's easily a dozen lines of code.
> Without SQLite: You have to write out this JSON stuff by hand, make sure your format is unambiguous, parse it back in, etc., and probably you also want to write tests for all of that functionality.
All of this needs to be done for SQL too. You don't just magically get back the valid Python objects you put in. Even if you use a third-party ORM-like thing, what about third-party objects that were never designed with this in mind? And tests are needed for all of this stuff too.
It's not like Rust and friends, where there's a de facto standard for serialization/deserialization that everything implements; all of this is real work.
Meanwhile, with pickle: you import pickle and just give it the Python object you want to save, and it works. One line. And it's built into the language. Sure it's insecure, but maybe you'll fix that once this paper is out.
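For comparison, the pickle version really is about that short - a sketch (object and file name are made up), with the usual caveat that loading a pickle runs whatever the file tells it to:

    import pickle

    model = {"weights": [0.1, 0.2], "labels": ["cat", "dog"]}  # any picklable object

    with open("model.pkl", "wb") as f:
        pickle.dump(model, f)      # one call to save

    with open("model.pkl", "rb") as f:
        restored = pickle.load(f)  # one call to load, which is exactly where the
                                   # arbitrary-code-execution problem lives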
Posted Mar 14, 2024 10:32 UTC (Thu) by aragilar (subscriber, #122569) [Link]
Posted Mar 14, 2024 16:58 UTC (Thu) by NYKevin (subscriber, #129325) [Link]
This is a game of "don't read the thread." I made that comment in response to an assertion that some data could not be mapped into SQL because it was not 2D. In that case, you already have to turn it into bytes anyway (e.g. with numpy.ndarray.tofile() into a BytesIO object, which was already being done in the code I was commenting on in the first place). My point is that you can put metadata and other such stuff into "real" SQL columns, store anything that doesn't easily map to SQL types as a single TEXT (or BLOB) value, and then skip the nonsense with JSON. You have not meaningfully responded to that assertion; you've simply talked past me.
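A sketch of what that approach can look like; the table layout is invented, and I've used tobytes() and a BLOB column rather than tofile() and TEXT so the example stays self-contained:

    import json
    import sqlite3
    import numpy as np

    arr = np.arange(24, dtype=np.float64).reshape(2, 3, 4)  # some non-2D data

    con = sqlite3.connect("results.db")
    con.execute("CREATE TABLE IF NOT EXISTS arrays"
                " (name TEXT, dtype TEXT, shape TEXT, data BLOB)")

    # Metadata goes into "real" columns; the raw array bytes go into one BLOB column.
    con.execute("INSERT INTO arrays VALUES (?, ?, ?, ?)",
                ("cube", str(arr.dtype), json.dumps(arr.shape), arr.tobytes()))
    con.commit()

    # Reading it back: metadata from the columns, array rebuilt from the bytes.
    name, dtype, shape, data = con.execute("SELECT * FROM arrays").fetchone()
    restored = np.frombuffer(data, dtype=dtype).reshape(json.loads(shape))
    con.close()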
Posted Mar 14, 2024 9:28 UTC (Thu) by aragilar (subscriber, #122569) [Link] (3 responses)
Posted Mar 14, 2024 16:54 UTC (Thu) by Wol (subscriber, #4433) [Link] (2 responses)
Actually, I would argue that tabular data does NOT make sense in a database. It starts with the very definition of data.
Relational data is defined as "coming in rows and columns". If it doesn't fit that definition, it's not data. Relational dictates what is acceptable.
My definition (the one I use in my databases) is "data is what the user gives you". I'm happy with whatever I'm given.
Now let's define metadata. I don't know what the Relational definition of metadata is, probably "it's the schema". My definition (that I use in my databases) is "metadata is data I have inferred about the data the user gave me". And one of my cardinal rules is NEVER EVER EVER MIX data and metadata in the same table !!!!!!
But that's exactly what the job of a relational data analyst is - to infer metadata about the user data, then promptly mix both data and metadata up in the same table. How else would you represent a list in a 2D table?
> Naturally, higher dimensional data requires different tools (e.g. HDF5).
No. Higher dimensional data requires a multi-dimensional database. Preferably with a query language that is multi-dimensional-aware.
SQL queries contain huge amounts of database functionality, because SQL was designed to query unstructured data, so the query has to be able to express the structure itself. Get a database where the schema can describe the structure, and the query language can be simultaneously more expressive, more powerful, and simpler, because you've pushed a lot of the complexity into the database where it belongs.
SQL puts huge amounts of complexity in the wrong place. Some complexity is unavoidable, but dealing with it in the wrong place causes much AVOIDABLE complexity.
Just look back at my rants here about relational. Just don't set off another one, the regulars here will have their heads in their hands :-)
The best way to show you roughly where I'm coming from is that I see everything as similar to an XML/DTD pair. That is EASY to manipulate with automated tools, and those tools are heavily optimised for fast, efficient processing. Okay, that's not an exact description of MultiValue, but it's close. Oh - and if I store one object per XML table, the tool makes it dead easy to link different objects together.
Cheers,
Wol
Posted Mar 15, 2024 8:43 UTC (Fri) by aragilar (subscriber, #122569) [Link] (1 responses)
I think the data you're talking about is more graph-like, right (and it feels like the kind of thing where you want to talk about the structure of how the data is related)? That feels different in kind to both of the above, so naturally tools designed for other types of data don't match?
My understanding of ML/AI is that it's generally pushed into one of the two bins above, but that may be a bias based on the data I encounter.
Posted Mar 15, 2024 9:52 UTC (Fri) by Wol (subscriber, #4433) [Link]
No surprise ...
> * arrays of records (and collections of these arrays): generally having a db makes it easier and faster to do more complex queries over these vs multiple files (or a single file with multiple arrays), and formats designed for efficient use of "tabular" data (e.g. parquet) are better than random CSV/TSV.
So are your records one-dimensional? That makes your "arrays of records" two-dimensional - what I think of as your typical relational database table.
And what do you mean by "a complex query"? In MV that doesn't make sense. Everything that makes SQL complicated, belongs in an MV Schema - joins, case, calculations, etc etc. Pretty much ALL MV queries boil down to the equivalent of "select * from table".
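For the parquet-vs-CSV part of the bullet quoted above, a minimal sketch assuming pandas with a Parquet engine (pyarrow or fastparquet) installed; the columns are invented for the example:

    import pandas as pd

    # An "array of records" as a DataFrame.
    records = pd.DataFrame({
        "object_id": [1, 2, 3],
        "ra": [10.68, 83.82, 201.37],
        "dec": [41.27, -5.39, -43.02],
    })

    records.to_parquet("catalogue.parquet")      # typed, compressed, column-oriented
    same = pd.read_parquet("catalogue.parquet")  # round-trips without CSV parsing guesswork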
> * n-dimensional arrays: this represent images/cubes/higher moments of physical data (vs metadata), and so are different in kind to the arrays of records. This is is where HDF5, netCDF, FITS (if you're doing observational astronomy) come in.
And if n=2? That's just your standard relational database aiui.
It's strange you mention astronomy. Ages back there was a shoot-out between Oracle and Cache (not sure whether it was Cache/MV). The acceptance criterion was to hit 100K inserts/hr or whatever - I don't have a feel for these speeds, as I'm generally limited by the speed people can type. Oracle struggled to hit the target, needing all sorts of optimisations like disabling indices on insert and running an update later. Cache won, went into production, and breezed through 250K within weeks ...
> I think the data you're talking about is more graph-like right (and feels like the kind of thing where you want to talk about the structure of how data is related)? That feels different in kind to both the above, and so naturally tools designed for other types of data don't match?
Graph-like? I'm not a visual person so I don't understand what you mean (and my degree is Chemistry/Medicine).
To me, I have RECORDs - which are the n-dimensional 4NF representation of an object, and the equivalent of a relational row!
I then have FILEs, which are sets of RECORDs, and the equivalent of relational tables.
All the metadata your relational business analyst shoves in the data, I shove in the schema.
With the result that all the complexities of a SQL query, and all the multiple repetitions across multiple queries, just disappear because they're in the schema! (And with a simple translation layer defined in the schema, I can run SQL over my FILEs.)
I had cause to look up the "definition" of NoSQL recently. Version 1 was the name of a particular database. Version 2 is the use I make of it, as defined by the MV crowd - "Not only SQL": databases that can be queried with SQL, but where SQL isn't their native language (in MV's case because it predates relational). Version 3 is the common one now: web stuff like JSON that doesn't really have a schema, and what schema there is is embedded with the data.
So I understand talking past each other with the same language is easy.
But all I have to do is define N as two (if my records just naturally happen to be 1NF), and I've got all the speed and power of your HDF5-whatever, operating like a relational database. But I don't have the complexity, because 90% of SQL has been moved into the database schema.
Cheers,
Wol