
No, and ...

Posted Jun 5, 2009 1:14 UTC (Fri) by bojan (subscriber, #14302)
In reply to: No, and ... by welinder
Parent article: Donald Knuth: Mathematical Ideas, or Algorithms, Should Not Be Patented (Groklaw)

This (like any other) program terminates. One would require some kind of perpetual motion machine to execute it forever.

The end state of it will depend on how exactly we reached the end of processing: signal, crash, reboot, electricity switched off, hardware failure, end of civilisation, etc. But for each of these events, we will have some end-state of that program.

More specifically, consider a Linux program compiled from your code. If you send it a SIGKILL, it will stop. Obviously, it then has a well-defined end-state.
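
A minimal sketch of that scenario (illustrative code of my own, with error handling kept to a minimum): a child process spins in an infinite loop, the parent SIGKILLs it, and the parent still observes a perfectly well-defined termination status via waitpid().

#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0)
        return 1;               /* fork failed */
    if (pid == 0)
        for (;;);               /* child: the "non-terminating" loop */

    sleep(1);                   /* parent: let the child spin briefly */
    kill(pid, SIGKILL);         /* terminate it from the outside */

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))    /* the end-state is observable here */
        printf("child killed by signal %d\n", WTERMSIG(status));
    return 0;
}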



No, and ...

Posted Jun 5, 2009 2:47 UTC (Fri) by sepreece (guest, #19270) [Link] (1 responses)

Uh, no. If you send the process a SIGKILL, the end state is not well-defined, because you don't know what state the program will be in when it gets the signal. Nor is the finiteness of the universe sufficient - an algorithm has to define its halting condition (e.g., difference from the desired result less than sigma). Make the program slightly more complicated - have it increment an uninitialized variable on each iteration. Now it's non-deterministic (because the initial state is undefined) and its end state is also undefined. Not an algorithm.
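
For contrast, here is what a defined halting condition looks like in practice (a sketch of my own choosing, using Newton's method for square roots): the loop stops precisely when successive approximations differ by less than a chosen epsilon.

#include <math.h>
#include <stdio.h>

/* Newton's method for sqrt(a): halts when successive guesses
 * differ by less than eps -- an explicit, defined halting condition. */
double newton_sqrt(double a, double eps) {
    double x = a > 1.0 ? a : 1.0;   /* initial guess */
    for (;;) {
        double next = 0.5 * (x + a / x);
        if (fabs(next - x) < eps)   /* the halting condition */
            return next;
        x = next;
    }
}

int main(void) {
    printf("%.10f\n", newton_sqrt(2.0, 1e-12));
    return 0;
}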

A program can implement an algorithm, but it is not itself the algorithm. Nor is every program an algorithm.

No, and ...

Posted Jun 5, 2009 3:05 UTC (Fri) by bojan (subscriber, #14302) [Link]

> If you send the process a SIGKILL, the end state is not well-defined, because you don't know what state the program will be in when it gets the signal.

Of course it will be well defined. It will be one of its possible states. In fact, you can even find out what that state was, although that's not even necessary to have a well defined state (reality doesn't care what you can and cannot see).

> Now it's non-deterministic

Just because you didn't set the variable doesn't mean it was not set to a particular value. Which then gives the program another possible state.

Try this: have an undefined variable in your program, then run it under GDB. Of course, you'll get "random" garbage in the variable, which is still a value, which gives your program a well-defined state. For example, if this was a 32-bit integer variable, the initial state could be any value that such a variable can hold.
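
A minimal version of that experiment (with one caveat worth stating: reading an uninitialized automatic variable is formally undefined behavior in C, so "any value the variable can hold" describes the typical unoptimized behavior, not a language guarantee):

#include <stdio.h>

int main(void) {
    int x;  /* deliberately left uninitialized; formally this read is
               undefined behavior, but compiled with -O0 and run under
               GDB you will typically see whatever bytes happened to be
               left in that stack slot */
    printf("x = %d\n", x);
    return 0;
}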

Maybe your flowchart didn't take that into account. Irrelevant for the purposes of having a state.

No, and ...

Posted Jun 5, 2009 4:35 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (8 responses)

bojan, don't invent new terminology. From a computational theory standpoint, an infinite loop doesn't halt (i.e. terminate). We aren't discussing the heat death of the universe, we are discussing algorithms.

No, and ...

Posted Jun 5, 2009 5:20 UTC (Fri) by bojan (subscriber, #14302) [Link] (7 responses)

I'm not saying that an infinite loop terminates. I'm saying that every program terminates. When it does, it will have a state which will be determined by whatever happened until that point.

Similarly, when you push the reset button on your computer, that program (kernel + userspace) will terminate and have an end-state. (In the olden days of the Atari, you could even save your ramdisk across resets, if I remember correctly.)

We may think that a computer program is only what we wrote in that piece of source code. But the fact is that there is much more to it once it actually becomes a program running on a computer. Just remember the ext4 v. POSIX episode: kernel crashes were producing an end-state that we didn't particularly like, so the algorithm was changed to produce a different end-state in the case of a crash.
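
For reference, the pattern at the centre of that episode was the write-to-temporary-then-rename idiom (sketched below with illustrative names; a fully robust version would also fsync() the containing directory):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* The crash-safe "replace a file" idiom from the ext4 vs. POSIX
 * debate: write a temporary file, fsync() it, then rename() it over
 * the original, so a crash leaves either the old or the new
 * contents -- never a zero-length file. */
int replace_file(const char *path, const char *tmp,
                 const char *data, size_t len) {
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return rename(tmp, path);   /* atomic on POSIX filesystems */
}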

It is easy to pretend that these events are not the input for the logic of the algorithm that is running on our computers. (Un)Fortunately, the reality is different.

No, and ...

Posted Jun 5, 2009 6:30 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (6 responses)

If you suggest that all programs must necessarily halt, then what is your definition of computation? And does your definition form an argument for or against the patentability of algorithms?

Just as an example, the "Halting Problem" doesn't exist by your definition of computational theory, and yet it is one of the most fundamental results in everyone else's version.

No, and ...

Posted Jun 5, 2009 7:03 UTC (Fri) by bojan (subscriber, #14302) [Link] (5 responses)

A computer program executes on a computer. Hence, it eventually terminates. Hence, it will have an end-state. These are all very trivial facts.

The halting problem (http://en.wikipedia.org/wiki/Halting_problem) deals with an idealized scenario, much like the one depicted in that forever loop. It is, of course, useful for finding theoretical solutions to theoretical problems.

However, for such scenarios to be realised in practice, one needs a computer that runs forever. Such computers do not exist, and programs do not run on nonexistent computers.

No, and ...

Posted Jun 5, 2009 9:18 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (4 responses)

Sigh. This is unrewarding. Have it your way: computers are not used for computation, just for running programs. Because computations may either halt, or not.

No, and ...

Posted Jun 5, 2009 10:37 UTC (Fri) by bojan (subscriber, #14302) [Link] (3 responses)

I understand your point. The theory says what it says, there is no question about that.

The original notion put forward was that computer programs are not algorithms, because they do not halt, are not deterministic and are not finite. Sure, when you write your source, it may seem that way (and in theory it is that way). Once you put that program on a computer with finite resources and execute it, memory gives it an initial state (whether set by you or not), which then takes your code through a set of well-defined states (whether written in your source or not), which eventually ends in some final state (whether your program ended execution itself or not). The fact that the source may not be sufficient to describe all of these states doesn't diminish the fact that they all happened in a well-defined way.
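
The finite-resources point can be made concrete with a toy example of my own: a deterministic program whose entire state fits in one byte must, by the pigeonhole principle, revisit a state within 257 steps.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Any deterministic update on an 8-bit state can pass through at
 * most 256 distinct states, so it must repeat one within 257 steps. */
static uint8_t step(uint8_t x) { return (uint8_t)(x * 5 + 17); }

int main(void) {
    bool seen[256] = { false };
    uint8_t x = 0;
    for (int i = 0; i <= 256; i++) {
        if (seen[x]) {
            printf("state %d repeats after %d steps\n", (int)x, i);
            return 0;
        }
        seen[x] = true;
        x = step(x);
    }
    return 0;   /* unreachable: a repeat is guaranteed */
}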

As for your question of whether such beasts should be patentable or not, I am of the opinion that they should not be. Computer programs deal exclusively with information processing and are in that respect very similar to maths. Whether the number three is printed on the screen or three sticks are spat out of the machine at the end of processing, the purpose of the computer program was to determine that number, which is just a piece of information.

No, and ...

Posted Jun 5, 2009 18:45 UTC (Fri) by chad.netzer (subscriber, #4257) [Link]

"The original notion put forward was that computer programs are not algorithms"

No, it was not! It was suggested that you *could* create a program that would not halt, and that such a program would not be an algorithm, if you go by that definition. It was clearly discussing a notion of computational *theory*. You completely sidetracked that discussion for your own banal point, for what purpose I don't know.

/*
* The following is *not* an executing program, it is a
* text representation of an endless loop. It represents
* a computation that doesn't halt, and could be turned
* into a program that is bounded by the computational
* constraints of the universe.
*
* Is it an "algorithm?" bojan says yes, because it can be
* converted into an execution that *must* 'halt' by the
* constraints of physics.
* I say an algorithm is a CONCEPT, and that conceptually,
* this represents a non-halting computation. It's not
* just a "program" executing on a specific computing device.
*/
int main(void) { for (;;); }

The legal/patent aspect of all this is that the law cares about definitions (whether they intersect with reality or not), and so an unfortunate definition could have bad consequences, particularly in the patent realm, which can intertwine with the realm of concepts as well as physical realities.

No, and ...

Posted Jun 7, 2009 14:19 UTC (Sun) by malor (guest, #2973) [Link]

When thinking about algorithms and programs, the hardware is irrelevant; it's treated as an abstraction. A while(1) loop runs on both ARM and x86, on mainframes and on pocket calculators. It could probably even run on some computational substrate of the Universe itself, a machine that couldn't fail even in principle.

You're arguing about physical implementations OF algorithms, not algorithms themselves. In the context of this discussion, it's unimportant wankery.

No, and ...

Posted Jun 11, 2009 11:27 UTC (Thu) by malor (guest, #2973) [Link]

Another way of putting it: your claim that hardware realities mean that all algorithms will exit is true, but it's not usefully true. If the hardware fails and crashes, that means it didn't run the algorithm properly, not that the algorithm actually exits.

