
Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

UCLA has a report on "software obfuscation" research by computer science professor Amit Sahai. Essentially, code can be encrypted in such a way that it still operates correctly but cannot be reverse engineered. "According to Sahai, previously developed techniques for obfuscation presented only a "speed bump," forcing an attacker to spend some effort, perhaps a few days, trying to reverse-engineer the software. The new system, he said, puts up an "iron wall," making it impossible for an adversary to reverse-engineer the software without solving mathematical problems that take hundreds of years to work out on today's computers — a game-change in the field of cryptography. The researchers said their mathematical obfuscation mechanism can be used to protect intellectual property by preventing the theft of new algorithms and by hiding the vulnerability a software patch is designed to repair when the patch is distributed."


Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 1, 2013 21:14 UTC (Thu) by smoogen (subscriber, #97) [Link] (1 responses)

I am pretty sure that is going to impact performance somewhere...

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 1, 2013 21:19 UTC (Thu) by Tara_Li (guest, #26706) [Link]

Yeah, this is sounding something like those claims of lossless compression to 1% of the source size for any input.

Ultimately, the computer steps from a to b to c to d. A lot of work has gone into compilers to make sure the steps from a to b to c to d are as small as possible, and arranged carefully to match the processor. And this work is claiming that it can find a way to re-rig everything, and still have the attacker not understand what is going on?

I'm suspecting a Denmark-sized piece of limburger cheese - it's just not passing the smell test to me.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 1, 2013 21:22 UTC (Thu) by theophrastus (guest, #80847) [Link] (17 responses)

If it "cannot be reverse engineered" how can it ever be (reassembled) to run on the CPU? (the chip's frame of reference of "the analog hole")

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 1, 2013 21:54 UTC (Thu) by and (guest, #2883) [Link] (5 responses)

> If it "cannot be reverse engineered" how can it ever be (reassembled) to run on the CPU? (the chip's frame of reference of "the analog hole")

that was the question which immediately came to my mind when reading this. programs can *always* be reverse engineered (in principle), simply because the CPU has to execute them correctly for all inputs. this story stinks...

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 1, 2013 22:29 UTC (Thu) by zlynx (guest, #2285) [Link] (2 responses)

Right. You could always rig up a CPU simulator like Bochs to output every instruction executed. Any program that does anything real has to use system calls eventually and from those you can trace the instruction stream backward to find the instructions that actually do something. You can do a data flow analysis back from the system call also to find what instructions actually contribute to real data.
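In miniature, the same dynamic-tracing idea can be sketched with Python's own trace hook (a toy illustration of tracing what actually executes, not a CPU simulator; the `mystery` function is made up for the example):

```python
import sys

def trace_executed_lines(func, *args):
    """Run func under a trace hook, recording every line it executes."""
    executed = []

    def tracer(frame, event, arg):
        if event == "line":
            executed.append((frame.f_code.co_name, frame.f_lineno))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, executed

# An "obfuscated" function: dead branches and junk arithmetic around
# a computation that is really just x + 1.
def mystery(x):
    junk = (x * 17) ^ 42        # executed, but its result is unused
    if junk == junk + 1:        # never true: dead branch
        return -x
    return -~x                  # two's-complement identity for x + 1

result, lines = trace_executed_lines(mystery, 5)
print(result)                   # 6
print(lines)                    # only the lines that actually ran
```

The dynamic trace skips the dead branch entirely, which is exactly why tracing backward from real effects (system calls, outputs) cuts through static obfuscation.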

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 1, 2013 23:32 UTC (Thu) by dlthomas (guest, #89935) [Link] (1 responses)

Of course, then you have the inefficiency of a simulator times the (doubtless tremendous) inefficiency this technique adds...

On the flip side, you only need to do it once of course.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 0:27 UTC (Fri) by zlynx (guest, #2285) [Link]

You may be able to run it under a debugger trace as well. These can either single-step the code or insert breakpoints at the next conditional jump.

Although I've heard of code defeating this by intentionally triggering a segfault, capturing it in a signal handler, modifying something, then resuming execution.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 12:44 UTC (Fri) by dakas (guest, #88146) [Link] (1 responses)

I remember some obfuscation code messing with undefined opcodes/flags that resulted in the instruction counter running backwards.

Stuff like that makes it very hard to reasonably work with a disassembler. However, that is not a mathematical method, and of course it might fail with future processors supporting the same instruction set.

Something similarly distracting would be a pointer running in parallel with the executed code and decrypting it just-in-time. With current CPU architectures, you would not actually want to do this kind of thing since it totally messes up the prefetch pipelines.

So this sort of "decrypt on the fly" approach only makes moderate sense if it is actively supported by the CPU.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 14:34 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

There's an analysis[1] of an NES copy protection scheme which used all kinds of hardware-specific nastiness to hide the actual behavior from disassemblers. I found it a fascinating read about the lengths some have gone to in order to make their code harder to understand. I could see someone implementing this for x86: code which disassembles to one thing but is entered at a one-byte offset, hiding the real instructions from a straight dump (and refusing to run if being ptrace'd or run under valgrind).

[1]http://blog.kevtris.org/blogfiles/EWJ2PROT.TXT

The "analogue hole" is the wrong metaphor here

Posted Aug 1, 2013 22:32 UTC (Thu) by tialaramex (subscriber, #21167) [Link] (4 responses)

So far as I can see from the explanation the code still works, the problem is that you can't figure out what it's for.

The purpose of reverse engineering is not to duplicate a piece of code, _that_ is called copying. The reverse engineer wants to understand what the code does, and why. This might be as part of a "Chinese Wall" cloning project, but it might also be in order to develop a countermeasure, to refine the technique used, simple curiosity or many other reasons.

If you doubt that this could ever be tricky, take a moment to examine the remaining candidate 6 state binary Busy Beavers. Remember, all you have to do is figure out which of these very simple programs halts last, or produces the most output before halting, if indeed they halt at all. Nothing more than that.

Six states, two symbols. Unsolved. How big do you suppose the real world algorithms that these UCLA scientists played with were? A hundred states, a thousand? And with at least hundreds and more likely billions of distinct symbols depending on how you want to think about it. It's no surprise that such obfuscation is possible in principle, it just remains to be seen how effective their particular approach is.

The "analogue hole" is the wrong metaphor here

Posted Aug 1, 2013 23:35 UTC (Thu) by dlthomas (guest, #89935) [Link]

I have no problem with assuming this is *theoretically* possible - it's sort of like homomorphic encryption. I have much skepticism that this is *practically* possible. The program has to run at sufficient speed to be practical, and the CPU has to execute some stream of instructions to do it.

The "analogue hole" is the wrong metaphor here

Posted Aug 2, 2013 8:18 UTC (Fri) by and (guest, #2883) [Link] (2 responses)

> If you doubt that this could ever be tricky, take a moment to examine the remaining candidate 6 state binary Busy Beavers.

There are undecidable problems, sure. But reverse engineering prevention techniques are different, because the code which is to be executed *is* decidable (or something has gone seriously wrong) and in the end the CPU just follows it instruction by instruction. You may make the paths which go from input to output (much) more convoluted, but in the end you always need to go from A to B if the obfuscated code still claims to be correct. So if you know A and B and you can follow the path from A to B, why should it be impossible to understand the path, given enough determination?

The "analogue hole" is the wrong metaphor here

Posted Aug 2, 2013 10:22 UTC (Fri) by tialaramex (subscriber, #21167) [Link]

Sure, that sounds reasonable doesn't it. Let's just simulate all the A->B transitions, keeping track of each one, and then stare at it really hard and we'll achieve enlightenment. First of all, we begin with more values of A than there are particles in the observable universe, and then we ... oh dear, this is starting to sound as though we're going to need to buy more RAM. More RAM than has ever existed or could ever exist. Oops.

The reason I selected the Busy Beavers is _not_ per se because S and Σ are non-computable functions, but because our actual experience when we try to analyse this stuff (arbitrary machines/ programs which we stumble upon when examining the exhaustive set) is so different from analysing programs which are the result of human engineering. It's like the discovery that /almost all/ real numbers are irrational. The numbers we tend to think about are all nicely rational, like five, nine seventeenths, 65536. We can so easily convince ourselves that the irrational numbers are rare exceptions to be mostly ignored. But in fact they're the overwhelming majority of all reals, our intuition about "numbers" is just wrong and our intuition about understanding programs is wrong too.

Some of the simple six state machines whose behaviour /is/ understood after a great deal of research effort are _fiendish_ and they're far smaller than any program that someone's going to want to protect with obfuscation. Try something easy to start, what's going on here?

A: B1R B0L
B: C1L E0R
C: E1R D0L
D: A1L A1L
E: A0R F0R
F: E1R Z1R
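For anyone who wants to stare at it themselves, here is a minimal simulator sketch (my own; I am reading an action like "B1R" as: go to state B, write 1, move right, with Z as the halt state):

```python
def simulate(machine, start="A", halt="Z", max_steps=1_000_000):
    """Run a two-symbol Turing machine. `machine` maps a state to a pair
    of actions (on reading 0, on reading 1); an action like "B1R" means:
    go to state B, write 1, move right. Returns (halted, steps, ones)."""
    tape = {}                          # sparse tape, default symbol 0
    pos, state, steps = 0, start, 0
    while state != halt and steps < max_steps:
        action = machine[state][tape.get(pos, 0)]
        tape[pos] = int(action[1])
        pos += 1 if action[2] == "R" else -1
        state = action[0]
        steps += 1
    return state == halt, steps, sum(tape.values())

# The six-state machine quoted above (columns: on 0, on 1):
machine6 = {
    "A": ("B1R", "B0L"),
    "B": ("C1L", "E0R"),
    "C": ("E1R", "D0L"),
    "D": ("A1L", "A1L"),
    "E": ("A0R", "F0R"),
    "F": ("E1R", "Z1R"),
}
halted, steps, ones = simulate(machine6)
print(halted, steps, ones)   # run it yourself and see how far you get
```

The step cap is there because, of course, there is no guarantee the thing halts in any reasonable time, which is rather the point.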

The "analogue hole" is the wrong metaphor here

Posted Aug 2, 2013 13:19 UTC (Fri) by ballombe (subscriber, #9523) [Link]

> because the code which is to be executed *is* decidable (or something has gone seriously wrong)

You are abusing the word decidable in a way that makes your statement meaningless.

(I do agree that the article is misleading)

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 1:40 UTC (Fri) by ras (subscriber, #33059) [Link] (5 responses)

There are a couple of misconceptions here.

Firstly, what they refer to as "software" is a circuit, or more precisely a function constructed entirely of boolean operations: ANDs, ORs and NOTs. The function may not have loops. "Function" means it has no state; it is a pure function whose output depends purely on its inputs, i.e. F(A,B,...) = Z.

Clearly for any such function you can brute force it by just feeding it all possible inputs and noting its output. But as with all brute forcing, once the number of bits in the input gets past a certain point (128 bits or so), this becomes computationally infeasible in both time and space.
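As a toy illustration of that brute-force argument (the circuit here is made up, not from the paper):

```python
from itertools import product

# Gates as pure functions on bits (0/1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return a ^ 1

# A loop-free boolean "circuit" built only from gates: F(a, b, c).
def F(a, b, c):
    return OR(AND(a, NOT(b)), AND(b, c))

# Brute force: with n input bits, 2**n queries recover the entire function.
truth_table = {bits: F(*bits) for bits in product((0, 1), repeat=3)}
print(truth_table)
# With 3 bits that is 8 queries; with 128-bit inputs it would be 2**128,
# which is computationally infeasible -- the point made above.
```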

As an example of how this might be useful, let's say F(E) decrypts an AES stream with a known key. So by distributing F, you have distributed something that decrypts an AES data stream while never revealing the key.

Their abstract says how they define obfuscation:

> Indistinguishability obfuscation requires that given any two equivalent circuits C_0 and C_1 of similar size, the obfuscations of C_0 and C_1 should be computationally indistinguishable.

The full paper can be read here: http://eprint.iacr.org/2013/451

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 3, 2013 5:41 UTC (Sat) by kleptog (subscriber, #1183) [Link]

Yeah, once I opened the paper it was clear it was talking about circuits rather than software, which is a world of difference. For circuits I can imagine that you could transform gates to produce equivalent circuits while making it much harder to work out what it's doing.

Applying this to software would be much more difficult, since your input is defined by the CPU and your output program must be valid for that CPU. This limits the number of possible transformations, but you could get pretty creative. For example: replace (in C) i+1 with -~i.

In a sense there's an overlap here with compiler optimisation, where the compiler is trying to find transformations which are faster. For example, in a loop where i is used as an index into an array, the resulting ASM code might have i prescaled by the word size to save time.

If you allowed the compiler to do transformations which made it slower, you'd be halfway there. For example, storing a pointer as the sum of two variables, doing half the manipulations on one variable and half on the other, and adding them only when dereferencing. Fairly cheap on x86 but a pain to reverse engineer. (Although a compiler smart enough to do that would probably be smart enough to undo it if fed its own output.)
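Both transformations can be sketched in Python, where -~i is also i+1 (since ~i == -i - 1) and the split-pointer trick becomes additive sharing of an index (a toy sketch, not the paper's construction):

```python
import random

def inc_plain(i):
    return i + 1

def inc_obfuscated(i):
    return -~i                    # -~i == -(-i - 1) == i + 1

def lookup_obfuscated(table, index):
    """'Storing a pointer as the sum of two variables': keep the index
    as two additive shares and combine them only at the dereference."""
    a = random.randrange(1 << 30)
    b = index - a                 # invariant: a + b == index
    a += 7                        # arbitrary bookkeeping may touch the
    b -= 7                        # shares separately; invariant holds
    return table[a + b]           # combined only at the moment of use

table = list(range(100, 110))
print(inc_obfuscated(41))           # 42
print(lookup_obfuscated(table, 3))  # 103
```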

Anyway, nice in theory, whether it gets any practice, we'll see...

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 3, 2013 11:42 UTC (Sat) by Karellen (subscriber, #67644) [Link] (3 responses)

"lets say F(E) decrypts an AES stream with a known key. So by distributing F, you have distributed something that decrypts an AES data stream while never revealing the key."

I'm sure I'm missing something, but I don't see how that is useful. If you reveal F, why would anyone ever need you to reveal the key? No-one needs the key, as everyone already has F!

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 3, 2013 13:09 UTC (Sat) by ras (subscriber, #33059) [Link] (2 responses)

It was the simplest illustration I could think of that demonstrates what is possible. But yeah, it may not be directly useful.

Here is one that is.

Say I store some data on a server that I don't trust, so I encrypt it with a key the server doesn't know. Assume there is a lot of the data, and I want to search it. For example, it is an imap store, and I am searching my gigabytes of emails from my phone.

I create a F(EMAIL), where EMAIL is an email message. It outputs 1 if the message matches, 0 if not. I send F to the server, and instruct it to send every message that matches back to me. It has no idea what question I asked, nor has it any idea what messages it is sending back because they are still encrypted. In fact, it doesn't know it is sending email messages, because it doesn't know it is an email store.
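The protocol flow (though emphatically not the cryptography) can be sketched like this; the "encryption" here is a toy XOR stream and F merely holds the key in a closure, which is exactly what real obfuscation would have to prevent an attacker from extracting:

```python
from itertools import cycle

KEY = b"not-a-real-key"   # toy key; a real scheme would use AES etc.

def toy_encrypt(data: bytes, key: bytes = KEY) -> bytes:
    """Toy XOR 'cipher' standing in for real encryption."""
    return bytes(x ^ k for x, k in zip(data, cycle(key)))

toy_decrypt = toy_encrypt   # XOR with the same keystream is its own inverse

def make_predicate(query: bytes):
    """Build F(EMAIL) -> 0/1. In the obfuscation setting F would be
    shipped in a form from which KEY cannot be recovered; this closure
    only illustrates the protocol flow."""
    def F(blob: bytes) -> int:
        return int(query in toy_decrypt(blob))
    return F

# The untrusted server holds only ciphertext blobs:
mailbox = [toy_encrypt(m) for m in (b"lunch friday?",
                                    b"invoice attached",
                                    b"re: invoice overdue")]

F = make_predicate(b"invoice")                    # client builds and sends F
matches = [blob for blob in mailbox if F(blob)]   # server filters blindly
print(len(matches))                               # 2: the server learns only the count
```

The server applies F to opaque blobs and returns the ones that match; it never sees the query, the key, or the plaintext.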

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 4, 2013 16:04 UTC (Sun) by zblaxell (subscriber, #26385) [Link]

If F is secure, then F can also sign and encrypt its own output, as well as decrypt its input. This would prevent the entity executing F from knowing even the 1/0 result of F's computation on encrypted data. If F was tampered with while it executed, the signature it generated would probably be wrong.

It enables people to force other people to either execute their code exactly as written, or not at all--but only those two alternatives.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 5, 2013 1:25 UTC (Mon) by foom (subscriber, #14868) [Link]

Earlier work (indeed, cited by the present paper) examines exactly that use case, and was very interesting reading as well.

Dawn Xiaodong Song, David Wagner, and Adrian Perrig, "Practical Techniques for Searches on Encrypted Data". http://www.cs.berkeley.edu/~dawnsong/papers/se.pdf

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 1, 2013 21:50 UTC (Thu) by allesfresser (guest, #216) [Link] (1 responses)

Personally I would call Prof. Sahai a traitor to his field, since the only purpose for his "invention" is to make life harder for his peers, for the benefit of the pathologically selfish, paranoid and power-hungry.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 13:20 UTC (Fri) by drag (guest, #31333) [Link]

Seems par for the course for me.

It's been my understanding for a long while that most major universities support IP, and see patents and copyrights as just a way of getting money back from the business community that may profit from their research.

On the funny note

Posted Aug 1, 2013 23:14 UTC (Thu) by bojan (subscriber, #14302) [Link] (1 responses)

I had this for years. Works like this. Write a piece of code. Revisit it several years later. No idea what it does... :-)

PS. Seriously, this doesn't look like progress to me.

On the funny note

Posted Aug 2, 2013 19:51 UTC (Fri) by dakas (guest, #88146) [Link]

> I had this for years. Works like this. Write a piece of code. Revisit it several years later. No idea what it does... :-)
I once disassembled some Reversi program written in Z80 assembly. That guy could code. Registers used logically and consistently, scoring tables straightforward, alpha/beta pruning straight out of the textbook... Everything made perfect sense.

I once recoded an old arcade game of mine, written in Z80 assembly, into C. I wanted to go C++, object-oriented for the monsters and so on. The comments were pretty much exclusively restricted to clock cycle tallies (no timer chip, and those are bad for deterministic game play anyway).

Everything still made sense. So much so that there was just nothing to be gained by trying to recode the logic into some kind of OO programming.

I agree that most programs nowadays are an exercise in obfuscation right from the get-go. Real programs are like a song. It doesn't matter whether it's a music style you heard before or not: you notice whether it's good, whether its tools and principles make for a cohesive whole, whether things fall into place everywhere.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 0:08 UTC (Fri) by shmerl (guest, #65921) [Link] (3 responses)

> The researchers said their mathematical obfuscation mechanism can be used to protect intellectual property

Sounds like they already figured out the sickest application of this idea: unbreakable DRM.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 4:18 UTC (Fri) by proski (subscriber, #104) [Link] (2 responses)

This approach can hide the decryption algorithm, but it cannot hide the output. Unless the monitor is a part of the copy protection, the output would be unencrypted. And if the monitor is participating in the DRM scheme, there is no need to decrypt anything on the host system, just send the encrypted data to the monitor.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 15:00 UTC (Fri) by jzbiciak (guest, #5246) [Link] (1 responses)

Isn't it typically the case that the encrypted bitstream is compressed, while the bitstream going to your monitor is not? The two forms of encryption are rather different. The incoming stream is encrypted with a cipher like AES or DVB-CSA. The stream going to your monitor is encrypted with HDMI's encryption scheme.

Furthermore, the keys associated with your DRMed content are probably associated with you[*], whereas the keys used to communicate with your monitor or TV are associated with the display vendor.

If you want to put the entire DRM job in the monitor, you're also putting a sekrit key manager that will now have some association with your identity (since you need to give it your keys so it can unlock the content you're renting) into the display itself, along with some number of video codecs. If your monitor goes out, do you lose access to all that "content"?


[*] The "royal you" referring to those who subject themselves to DRM, not necessarily you personally. Somehow I don't expect to find too many DRM fans around here, although we're all subject to it to varying degrees whether we like it or not.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 17:07 UTC (Fri) by hummassa (subscriber, #307) [Link]

Every time I have to explain for a layman friend *why* DRM does not work and mathematically speaking does not exist, I get to this point:

even after all that work, the "pirate" will set up an analog-hole rig where he fixes some cameras and mikes around the monitor and speakers, and with some calibration and filtering (to eliminate watermarks) gets something that is indistinguishable from the original bitstream. Said stream will find its way to the torrents.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 1:02 UTC (Fri) by sitaram (guest, #5959) [Link] (1 responses)

I don't understand a word of the actual paper, but these guys have been doing this sort of stuff for some years now. I can't say it passes or fails the smell test for me simply because I know I am incapable of making that determination.

Homomorphic encryption, and its follow-ons like functional encryption have been holy grails for people wanting to *truly* leverage cloud computing, where you can send off your data *and* computation without worrying that the cloud operator snarfs it all.

Of course, no one writing about these things mentions that if you can protect your computation on the cloud, malware can protect itself on your computer just as well! Just remember: Stuxnet took months (years???) to analyse even without all this... :-)

Fun times for people trying to protect systems... at least when this algorithm moves from "Greek" to <some computer language>

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 22:11 UTC (Fri) by ssmith32 (subscriber, #72404) [Link]

Stuxnet did not take years. Weeks at most.

For a Wired (a bit over-dramatized) account:

http://www.wired.com/threatlevel/2011/07/how-digital-dete...

Sure, over a month... but note the guys doing this were reversing it on top of their normal jobs, so it wasn't 100% time. And they were wringing out every detail.

And also it had novel functionality. If they see a repeat threat, I'm sure they'd have it reversed quite quickly.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 1:59 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (9 responses)

Here are some bullet points based on my skimming of the comments[1] it garnered on Reddit (specifically, /r/programming) recently:

- It's been noted that the article stinks, but the paper has merits[2];
- It uses homomorphic encryption, so don't expect wonders from it[3]; and
- It seems to allow one to encrypt with a secret key such that only a specific public key may decrypt the contents. To me, this sounds like something that could be used in broadcast anonymity setups, where it is known who made the encrypted content but not necessarily who the recipient(s) are, since only those with the proper public key would actually have access.

[1]https://pay.reddit.com/r/programming/comments/1jckzt/comp...
[2]https://pay.reddit.com/r/programming/comments/1jckzt/comp...
[3]https://pay.reddit.com/r/programming/comments/1jckzt/comp...
[4]https://pay.reddit.com/r/programming/comments/1jckzt/comp...

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 2:03 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (1 responses)

Oh, can't forget the best response[1]:

> > have designed a system to encrypt software so that it only allows someone to use a program as intended while preventing any deciphering of the code behind it
> I thought the Perl people had already solved that.

[1]https://pay.reddit.com/r/programming/comments/1jckzt/comp...

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 3, 2013 9:10 UTC (Sat) by jengelh (guest, #33263) [Link]

And if Perl alone was not enough, there is Acme::EyeDrops.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 10:02 UTC (Fri) by Karellen (subscriber, #67644) [Link] (6 responses)

"only those with the proper public key would actually have access."

The thing with public keys is, they're public. Anyone could have a given public key. Do you have your keys backwards?

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 13:53 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (4 responses)

They're public in the sense that the key isn't a secret to all but the owner(s). Maybe "protected" key is a better term? (Of course, I still have yet to actually read the paper, so maybe I'm misled or there's a name given in the paper.)

I think it would be nice for multiple people to verify that, yes Alice is the source of some data, but not be able to know what that data is.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 3, 2013 11:46 UTC (Sat) by Karellen (subscriber, #67644) [Link] (3 responses)

So you're talking about a shared private key? But in that case, it would still be the *public* half of the shared key used to perform the encryption, would it not?

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 3, 2013 14:42 UTC (Sat) by mathstuf (subscriber, #69389) [Link] (2 responses)

No, because the private key could make new public keys. As I understand the comment, data could be encrypted such that *anyone* can verify that a specific private key is the origin of the data, but only able to be decrypted with a *subset* of "public" keys. This is important since private keys have uses beyond public keys (new subkeys, web of trust, etc.). I see this having use in dead drop and whistleblower delayed release where some data is encrypted with a trusted key to verify that it is from the source, but not accessible. I suppose you could encrypt with an unknown key then sign with a known key, then release the public key for the unknown key later, but it'd be interesting if only one key were needed for that.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 3, 2013 21:50 UTC (Sat) by Karellen (subscriber, #67644) [Link] (1 responses)

"No, because the private key could make new public keys."

Uh, what? Public/private keys come in pairs. They're generated together at key creation time, and they're intimately linked to each other. You can't make new public keys for a private key.

Or, have I completely misunderstood public-key crypto?[0] If so, do you have any useful links I could read?

[0] http://en.wikipedia.org/wiki/Public-key_cryptography

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 3, 2013 22:04 UTC (Sat) by mathstuf (subscriber, #69389) [Link]

Oops, that was a mistake I forgot to remove (I got mixed up with new subkeys and assumed things based on the `-y` option to ssh-keygen, but that returns the same data all the time). It can, however, be used to, say, sign a malicious key as trusted, so you don't necessarily want to share the actual private key.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 16:24 UTC (Fri) by rsidd (subscriber, #2582) [Link]

Indeed, "encrypt with a secret key and decrypt with a public key" sounds more like digital signing than encryption. (I haven't read TFA.)

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 19:23 UTC (Fri) by marcH (subscriber, #57642) [Link] (5 responses)

"preventing the theft of new algorithms"... How low will we go?

What next, the theft of mathematical proofs?

http://www.wto.org/english/docs_e/legal_e/27-trips_04_e.htm

If the Western World does not start soon taking control of its destiny back from retarded lawyers and out of control, casino finance then Asia will increasingly push us into irrelevance.

Time to make sure my kids learn Chinese or Korean...

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 20:35 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (1 responses)

Well, one difference here is that it's a technical solution, not a political solution. Technical solutions exist whether we want them to or not and don't have an inherent "good" or "bad" about them; it's the use that matters there. If you'd like to push for outlawing uses of this technology, go ahead, but outlawing the technology itself is unlikely to get you very far.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 21:57 UTC (Fri) by marcH (subscriber, #57642) [Link]

I generally don't mind the technology and even less this one: very far from a new atomic bomb or chemical weapon...

I was only reacting to this specific sentence in the sales pitch. The fact that such a sentence can be casually inserted in such a pitch tells IMHO a lot about how much lawyers have managed to pervert minds.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 3, 2013 2:18 UTC (Sat) by lipak (guest, #43911) [Link]

> What next, the [prevention of] theft of mathematical proofs?

This concept already exists and is called "Zero knowledge proof".

In case you did not know, this is a proof developed by A in such a way that A can convince B that A knows the proof without allowing B to actually replicate the proof.

It looks as if what is being developed is a "zero knowledge algorithm": an algorithm developed by A in such a way that A can distribute a program based on it to B, such that B can run the program but cannot replicate the algorithm.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 7, 2013 9:49 UTC (Wed) by branden (guest, #7029) [Link] (1 responses)

"If the Western World does not start soon taking control of its destiny back from retarded lawyers and out of control, casino finance then Asia will increasingly push us into irrelevance."

No, they'll just do more of the same, bigger and perhaps even more brutally. And our out-of-work baka gaijin "intellectual property protection" specialists will be happy to help them do it.

Macao's "casino finance" puts Las Vegas and Monte Carlo to shame.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 8, 2013 21:50 UTC (Thu) by marcH (subscriber, #57642) [Link]

> Macao's "casino finance" puts Las Vegas and Monte Carlo to shame.

Wall Street and unreal high frequency trading were (among others) what I had in mind.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 22:29 UTC (Fri) by ssmith32 (subscriber, #72404) [Link] (1 responses)

I was trying to wrap my head around this thing yesterday. I still don't get the original paper (my set theory classes are lost in the cobwebs of old age), but this:

http://link.springer.com/content/pdf/10.1007%2F978-3-540-...

is, as far as I could tell, the clearest explanation of the type of "obfuscation" they are talking about. It seems that while it would be computationally infeasible to recover the original code, that does not mean it would be computationally infeasible to discover the functionality of the transformed code.

So if you literally wanted to "copy" the code, you're out of luck, but if you just want to understand what the code is accomplishing in the end, you're not.

YMMV

The most interesting quote to me from the paper linked to in the article was:

"One issue is that despite the substantial investment, the obfuscated program may be learnable in that its behavior can be completely determined by a (large) polynomial number of input queries. If this is the case, then simply giving out the program in the clear would technically count as valid obfuscation from the cryptographic definition even though this strategy would be very disappointing in a real sense. We would like to be able to bridge this gap between theory and practice and figure out ways to rigorously define and argue about meaningful obfuscation notions for learnable programs. Getting a better understanding of this gap appears critical for using obfuscation in non-cryptographic applications. We emphasize that an important initial step is to simply understand this problem better"

If someone could explain how the sections that follow this quote clarify the "gap" they mention, that would probably be the best thing.

Computer scientists develop 'mathematical jigsaw puzzles' to encrypt software (UCLA)

Posted Aug 2, 2013 23:35 UTC (Fri) by luto (guest, #39314) [Link]

The kind of obfuscation they're using essentially gives the property that you can't learn anything from the obfuscated algorithm that you couldn't also learn from any other algorithm that does the same thing. So, to the extent that there are multiple ways to do something, you can't learn anything about the particular way it's done.

How much protection this gives is mostly a matter of conjecture for the time being, except for the functional encryption bit.


Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds