What, again?
Posted May 3, 2007 14:13 UTC (Thu) by jschrod (subscriber, #1646)
In reply to: What, again? by ncm
Parent article: The Rise of Functional Languages (Linux Journal)
Your rabid polemics simply show that you don't know Common Lisp -- or O'Caml, to preempt the "you are a stupid Lisp fanatic who doesn't know a thing about Real Programming(tm)" bashing that you have so eagerly thrown at the GP and at several other posters.
If one wants them, CLOS has explicit constructors with full control over object lifetime and the ability to manually code resource management as needed. That's what MOP is for.
There are quite a few technical and management problems when one wants to use CL or O'Caml for application development, but automatic or explicit resource management ain't one of them. Nor is it the reason such languages aren't widely used -- the choice of a programming language is most often driven by non-technical factors.
Posted May 3, 2007 18:47 UTC (Thu) by ncm (guest, #165)
Nobody has said Lisp makes constructors impossible, or even difficult. Nobody has said Lisp makes manual resource management impossible, or difficult. What is impossible in Lisp, as in other GC-dependent languages, is any equivalent to destructors, and (therefore) the abstraction capabilities they enable. It's a fundamental language-design choice: provide tools so users can build resource management in libraries, or build exactly one kind of resource management into the language core. The latter approach is favored in academic languages meant to impress professors; the former has become essential in a language meant for industrial use.
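For illustration, here is a minimal C++ sketch of the library-built approach (the LogFile type and path are invented for the example, not taken from the discussion): the destructor releases the resource, so the caller never writes a close or free.

    #include <cstdio>

    // Illustrative library type: the handle is private and the destructor
    // releases it, so callers never call fclose() themselves.
    class LogFile {
    public:
        explicit LogFile(const char* path) : f_(std::fopen(path, "a")) {}
        ~LogFile() { if (f_) std::fclose(f_); }       // runs on every exit path
        LogFile(const LogFile&) = delete;
        LogFile& operator=(const LogFile&) = delete;
        void write(const char* msg) { if (f_) std::fprintf(f_, "%s\n", msg); }
    private:
        std::FILE* f_;
    };

    void work() {
        LogFile log("/tmp/app.log");   // acquired here
        log.write("doing work");
    }                                  // released here, on every return path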
For easy problems, any language will do -- perhaps badly, but few notice. For large, hard problems, language choice can make the difference between success and failure. Lisp has had very few industrial successes despite a head start measured in decades. The reasons are worth investigating if you want to be responsible for industrial successes. If you don't, you have lots of company.
Posted May 4, 2007 9:13 UTC (Fri) by dcoutts (guest, #5387)
You're saying that if we have some library function which returns an object, and that object embeds but does not expose some expensive resource (a DB connection or whatever), then in a normal GC language we can't guarantee that the DB connection gets released in a timely manner.
This is kind of true. Note though that in C++ it relies on the user, the recipient of the object, to free that object in a timely manner. That is, it relies on manual memory management everywhere. If the caller is using a local, stack-allocated variable that's probably going to be OK, but if they're dynamically allocating objects then we're relying on the user to know when to free them.
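A minimal C++ sketch of that contrast (DbConnection is a made-up stand-in for whatever the expensive resource is):

    // Stand-in for an object owning an expensive resource.
    struct DbConnection {
        DbConnection()  { /* open the connection */ }
        ~DbConnection() { /* close the connection */ }
    };

    void stack_case() {
        DbConnection c;                      // closed automatically when c
    }                                        // goes out of scope

    void heap_case() {
        DbConnection* c = new DbConnection;  // now the caller must remember
        // ... use c ...
        delete c;                            // to release it on every path
    }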
In a sense we cannot abstract this anyway, since the caller really does need to know that the resource is expensive and that they should not hold onto it for too long or have too many allocated at once. Putting expensive objects inside other structures and claiming it's abstract doesn't really help; it's still an expensive object that somehow needs to be treated carefully.
In the same way, in a mostly GC'd language, where for this expensive resource we have to use more careful block scoping or explicit resource-release methods, we get exactly the same lack of abstraction. Any object which embeds this expensive object also has to use more careful resource-allocation techniques. In either case the cost is imposed on the caller: you can't be ignorant of the high cost of the object and still get a well-performing program.
The difference is that in a GC'd language it looks more explicit, because for all the normal objects that do not embed expensive resources we get to use a less obtrusive mechanism. In C++ we have to pay the cost of manual memory management everywhere, so dealing with the expensive-object case doesn't look much worse.
Posted May 7, 2007 0:59 UTC (Mon) by ncm (guest, #165)
In fact, in C++, the recipient of the object does not need to free it in a timely manner. Libraries do not rely on manual memory management everywhere. As I have noted elsewhere in this thread, it has been many years since I coded a "delete" statement. Where is this "cost of manual memory management" I am supposed to be paying everywhere? C++ coders don't pay any such cost.
In fact you can, in C++, abstract management of scarce resources (anyway). It may be necessary for the caller to know an expensive resource is contained, but not for the caller to explicitly call a "close" or "free" function to release it. In fact, most such objects' lifetimes are bounded by some scope, but (and this is important; stop and consider carefully) not typically the scope in which they were created or claimed. It is trivial to arrange for the equivalent of a close to happen automatically in C++, but it remains entirely impossible in any typical GC language, particularly including Lisp and all its descendants.
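A minimal sketch of what that can look like (the Connection type and path are invented for the example): the object is created inside one function, but its release is tied to the scope of whichever caller ends up owning it, with no explicit close anywhere.

    #include <cstdio>
    #include <memory>

    // Illustrative handle: the destructor releases the resource.
    struct Connection {
        std::FILE* f;
        explicit Connection(const char* path) : f(std::fopen(path, "r")) {}
        ~Connection() { if (f) std::fclose(f); }
    };

    // Created here, but its lifetime is not bounded by this scope:
    // ownership moves out to the caller.
    std::unique_ptr<Connection> open_config() {
        return std::make_unique<Connection>("/etc/app.conf");
    }

    void caller() {
        auto cfg = open_config();   // lifetime now bounded by caller()'s scope
        // ... use cfg ...
    }                               // closed automatically here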
Let me repeat that: Turing-completeness notwithstanding, this common need in industrial programming is one that Lisp cannot address at all, as a direct result of a fundamental weakness in the core language design. That fundamental weakness cannot be resolved so long as CONS and GC remain in the core language. Other languages that have adopted Lisp's CONS and GC (or their equivalent) suffer the same weakness, whatever their other strengths.
"Rabid polemics"? I don't recall using words like "rabid", "stupid", or "fanatic". Maybe I'm being confused with somebody else, or maybe somebody is taking objective analysis of computer language features personally.What, again?
I think I see the point you've been trying to make and we've mostly all been missing.
Speaking of running code on object destruction, SBCL includes an extension called sb-ext:finalize, which does the obvious thing. I would be surprised if only SBCL provided something like that (indeed, a quick Google search tells me that there are all sorts of implementations of this, summed up in clocc). Of course, not being part of the standard means not being part of the language, and as you also point out, it's an additional hassle.

Posted May 7, 2007 1:06 UTC (Mon) by ncm (guest, #165)
No, finalization is no substitute. In fact it is actively harmful. You have no way to know when a finalization routine will happen, whether it will ever happen, what thread it will happen in, what order a series of them will be called in, or what else may be going on when they fire off. Competent Java shops typically forbid finalization except for debugging purposes, to help catch missed manual close operations.
No, I'm afraid you have completely missed the point.