gcobol: a native COBOL compiler
There's another answer to Why: because a free Cobol compiler is an essential component to any effort to migrate mainframe applications to what mainframe folks still call "distributed systems". Our goal is a Cobol compiler that will compile mainframe applications on Linux. Not a toy: a full-blooded replacement that solves problems. One that runs fast and whose output runs fast, and has native gdb support.
The developers hope to merge back into GCC after the project has advanced
further.
Posted Mar 15, 2022 17:04 UTC (Tue)
by NYKevin (subscriber, #129325)
[Link] (14 responses)
Posted Mar 15, 2022 20:05 UTC (Tue)
by k3ninho (subscriber, #50375)
[Link] (9 responses)
k3n.
Posted Mar 16, 2022 19:56 UTC (Wed)
by smurf (subscriber, #17840)
[Link] (8 responses)
There are rumors of two levels of emulation out there (in production code!), but I won't really believe that until I see it.
Posted Mar 16, 2022 20:54 UTC (Wed)
by mpr22 (subscriber, #60784)
[Link] (5 responses)
Posted Mar 16, 2022 21:37 UTC (Wed)
by smurf (subscriber, #17840)
[Link] (4 responses)
Apple Macs are on their fourth CPU architecture by now; too bad nobody's running their M68k emulator on their PowerPC emulator on their x86 emulator on their M1 Mac.
Posted Mar 17, 2022 13:21 UTC (Thu)
by scientes (guest, #83068)
[Link] (3 responses)
Posted Mar 18, 2022 12:30 UTC (Fri)
by khim (subscriber, #9252)
[Link] (2 responses)
Apple never had 64-bit PowerPC laptops, which means there was never a situation where someone could release a purely 64-bit app; everyone had to provide dual-mode apps. That's what allowed Apple to switch, relatively painlessly, to 32-bit Intel. And Apple has done this on every transition: 68K Macs never got the 68060, so PowerPC wasn't a slow-down even for emulated apps; PowerPC Macs never got 64-bit laptops, so Apple could go back to 32-bit Intel… Intel Macs never got a modern GPU, so the switch to “Apple silicon” was not a regression. That's why Apple was able to transition from one CPU architecture to another, while for the PC or Android that's impossible. Where you have full control over the platform, such transitions happen regularly. Just look at consoles.
Posted Mar 19, 2022 0:42 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Apple supported NVidia and AMD graphics cards, including in MBP laptops.
Posted Mar 19, 2022 11:18 UTC (Sat)
by khim (subscriber, #9252)
[Link]
AMD only recently. And yes, this caused them problems: that part doesn't look as impressive as the CPU. Thankfully the GPU architecture was never exposed to end-user programs, so they don't need emulators, just drivers. But with CPUs they did their usual trick: they stopped at Cascade Lake, which was much less impressive than what AMD offered at the time, and then transitioned to ARM two years later. I'm not saying the M1 is a bad CPU or that Rosetta is less impressive this time. They are good. But the important ingredient of a smooth transition is refusing to use the latest CPUs from the old family, which makes the performance drop from emulation acceptable. To pull that off, you need firm and full control over the platform; otherwise someone else would use a faster CPU and make you look bad.
Posted Mar 17, 2022 23:42 UTC (Thu)
by jschrod (subscriber, #1646)
[Link]
FTR: Once I programmed Cobol, but I didn't compile. ;-)
Posted Mar 18, 2022 1:32 UTC (Fri)
by ncm (guest, #165)
[Link]
The countervailing force would be that at some point one may expect an emulator to have been coded in a portable language, and the source code for that emulator not yet misplaced; then, an emulation could be moved sideways to a new host rather than itself being wrapped.
The source code for the original program, inner emulations, and specs for the machines and OSes emulated are of course all lost in time, along with the will to read them if ever found.
Posted Mar 16, 2022 8:01 UTC (Wed)
by professor (subscriber, #63325)
[Link]
Posted Mar 16, 2022 13:31 UTC (Wed)
by immibis (subscriber, #105511)
[Link] (2 responses)
I think modern mainframes are server clusters running cluster-aware operating systems that emulate old mainframes, but don't tell the admins that...
Posted Mar 18, 2022 17:41 UTC (Fri)
by khim (subscriber, #9252)
[Link] (1 responses)
> I think modern mainframes are server clusters running cluster-aware operating systems that emulate old mainframes, but don't tell the admins that...
Nope. Not even close. The z15 is fully compatible with the IBM/360. Why do you think the Linux s390x port is alive and well-supported? No clusters there, just monster CPUs with almost a gigabyte (960MiB) of L4 cache.
Posted Aug 25, 2022 23:21 UTC (Thu)
by scientes (guest, #83068)
[Link]
Posted Mar 15, 2022 19:12 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (40 responses)
Yup, decent Cobol on PCs is a goal worth having, but if the hardware isn't up to snuff (as I understood it, mainframe CPUs are woefully underpowered compared to PCs), "real programs" won't be ported across because they'll be too slow.
"whose output runs fast" ... the whole point of a mainframe is it's *hard* to overwhelm the i/o bus. Port your mainframe program to a PC and chances are the i/o bus will collapse under the load.
Cheers,
Posted Mar 15, 2022 19:25 UTC (Tue)
by acarno (subscriber, #123476)
[Link]
Posted Mar 15, 2022 19:53 UTC (Tue)
by pwfxq (subscriber, #84695)
[Link] (6 responses)
As acarno mentioned, looking at the specs of the Z15 CPU, it isn't a slouch (e.g. 5.2GHz max frequency).
Posted Mar 15, 2022 22:00 UTC (Tue)
by bartoc (guest, #124262)
[Link]
Posted Mar 15, 2022 22:11 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (4 responses)
(AMD chips many moons ago were both more powerful and lower-clocked than the equivalent Intel. So Intel emphasized how fast their chips were clocked to try and counter the fact that AMD's were better. Have you heard of "pipeline stall"? AMD chips might have been clocked slower, but they wasted far less effort on mispredictions ...)
Okay, I would expect IBM chips to be state-of-the-art, but reading that article it looks like the processors can be assigned to all sorts of tasks. Including driving i/o. I think it still stands that porting programs from mainframe to PC is going to hit trouble on the i/o bus ...
Cheers,
Posted Mar 16, 2022 0:12 UTC (Wed)
by kenmoffat (subscriber, #4807)
[Link] (2 responses)
I used to earn my living as an IBM mainframe application programmer using mostly COBOL. Compared to a modern PC, the processors appear slow - but they could handle a massive amount of I/O. I see claims that farms of NVMe drives can do massive I/O, so perhaps such setups could use this. That depends, of course, on which variant of COBOL you wished to migrate from - ISTR IBM had a lot of extensions, and of course much of the fun stuff was weird:
I recall that we needed to copy files from a VSAM package, and the application had to read the -1 subscript (word-size) of a file definition (probably an ESDS, but it's more than 30 years ago) to find out how long a particular record was.
But tell that to the youth of today and they won't believe you ;-)
Posted Mar 16, 2022 17:26 UTC (Wed)
by Wol (subscriber, #4433)
[Link]
Us old hands cut our teeth when you had to put smarts into the logic, not throw brute force at it. And that technique still pays dividends today ... Why do I spend so much time today watching Excel send a query to Oracle while I'm pining for Pick ...
"We can solve any problem by introducing an extra level of indirection."
Cheers,
Posted Mar 17, 2022 13:21 UTC (Thu)
by scientes (guest, #83068)
[Link]
Posted Mar 16, 2022 19:24 UTC (Wed)
by Sesse (subscriber, #53779)
[Link]
(This is not criticism; IBM is simply targeting a different design space from most other chip manufacturers.)
Posted Mar 16, 2022 7:35 UTC (Wed)
by anton (subscriber, #25547)
[Link] (29 responses)
I have listened to several presentations (and I think for more than one project) about porting IBM mainframe programs to cheaper hardware by translating the binary to, e.g., C and then compiling that C program on a cheaper machine.
The advantage of this approach is that the machine language of IBM mainframes is a simpler language than Cobol and that it can be used for programs where the source has been lost, or where the original program was written in assembly language. The advantage of having a Cobol compiler is that you can maintain the result at the Cobol level (if you have Cobol programmers) instead of having to deal with whatever the Cobol compiler in combination with the translator produced.
My impression from the presentation(s?) at the project end was that the customer(s?) were satisfied with the result.
It may be that current IBM mainframes offer capabilities that your run-of-the-mill PC or even PC-based server does not have (although I think that many claims about mainframe superiority are pure bullshit, as is evidenced by the fact that these claims are rarely (if ever) supported by hard benchmark numbers), but a legacy program from the last century does not need the capabilities of current mainframes. And every PC now runs rings around last-century mainframes, including in I/O.
Posted Mar 17, 2022 8:39 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (28 responses)
The big problem here is "Where are the industry benchmarks?". And that is a SERIOUS problem.
As I know from the Pick space, everybody wants their database benchmarks to include JOINs - a concept Pick *does not have*, because joins are both expensive and, for Pick, completely unneeded.
Any benchmark which asks "how fast can you do this expensive operation" will unfairly cripple a system which has no need for that particular operation.
(For Pick, the answer to "how fast can you do a join?" is "I don't. I do an indexed lookup. It's physically impossible to do it any faster!")
Cheers,
Posted Mar 17, 2022 12:54 UTC (Thu)
by nix (subscriber, #2304)
[Link] (27 responses)
"I optimize all my queries by hand". OK, OK, that tells me everything I need to know about how you do complex queries (you mostly don't: it's too hard). Meanwhile SQL databases let you say what you want rather than how you want it done, and have the machine do the boring bits of picking the data out. Most of the time, even at enormous scale, this is good enough. I've seen people whip out queries that combine a hundred tables to get results needed just once and do it in a couple of hours. Doing that with Pick sounds like it would take weeks or simply be more or less impossible.
Pick had semi-competitors in the same era that worked the same way, like DBase. They died for a reason: the same reason people don't write whole systems in assembler any more. Computers can now do a good enough job. Not as good as could be done by the whole thing being written by hand, perhaps, but doing that is *so much harder* and takes so much more of the actually expensive human time rather than the nearly-free computer time that it's *obvious* that moving to getting the machine to optimize physical reads for you is the right approach.
Posted Mar 17, 2022 13:50 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (26 responses)
> "I optimize all my queries by hand".
NO I DON'T.
Do you use calculated fields in SQL? I write a one-line calculated field in the table definition, and the database converts it into an indexed lookup.
Which means, in effect, I write the join ONCE, and it's there for any query that wants to use it. Contrast that to SQL, where I have to rewrite the join for every single view, and as I'm learning the hard way, it gives me ample opportunity to screw up every time.
In my new job, I am regularly writing horrible, complex SQL queries. That would be so much easier with a decent query language running on a proper database :-)
Yes it's ages ago, the story refers to a P90, but if it takes a team of consultants six months to get Oracle on a twin Xeon to run faster than Pick on a P90, there's something damn wrong there ...
> I've seen people whip out queries that combine a hundred tables to get results needed just once and do it in a couple of hours. Doing that with Pick sounds like it would take weeks or simply be more or less impossible.
Well, from my example above, I guess it would be more like ten minutes - and from an ORDINARY developer, not some whizz-kid genius (I notice you didn't say YOU could do that ...)
Another poster said here that once people start porting Cobol to PCs, they realise that just maybe a mainframe might be the right tool for the job. Most of the stories I hear about people porting Pick to relational are about how companies went bankrupt because they suddenly discovered that their IT department - despite having plenty of resource to manage the Pick system - was just way too small to provide the resources needed to feed the hungry relational monster. And I think pretty much every study I've heard about (admittedly very few) showed that - for the same size company - Pick needed maybe a third (or less?) the resources of the equivalent relational system.
And a few years back some university or other bought a Pick-style system for their astronomical catalog. Okay, the alternative was Snoracle, but Snoracle had to seriously cheat - disabling indexes, doing batch updates, whatever - to achieve the target 100K insertions/min. When the Pick-style system - I think it was Caché - went live, it just breezed through 250K without breaking a sweat.
What's that quote I mentioned elsewhere "just adding another level of indirection will solve any problem ... except the problem of too many levels of indirection".
I'm hoping we'll soon announce an industrial-grade Free Pick system that people can play with, and if you truly approach it with an OPEN mind, I'm sure you'll be blown away by the simplicity and power.
Cheers,
Posted Mar 17, 2022 20:15 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (23 responses)
You mean, like this: https://www.postgresql.org/docs/14/ddl-generated-columns....
Or this: https://www.postgresql.org/docs/14/rules-materializedview...
Or this: https://www.postgresql.org/docs/14/sql-createtrigger.html
?
Posted Mar 17, 2022 22:42 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (22 responses)
> You mean, like this: https://www.postgresql.org/docs/14/ddl-generated-columns....
Exactly. Except in Pick they're all virtual.
Bear in mind, in Pick the table definition includes, in the definition of a column, the offset into the row. (In Pick, a row is an array.) Relational hides this from the user.
So instead of defining the location as an offset into the row, I can define it as a calculation that is evaluated on viewing. Let's assume I have two normal columns called PRICE and VAT.RATE, I can just define a third column VAT with the location "PRICE * VAT.RATE". Or if I have a column ITEM.CODE, I can define a column ITEM.NAME as "TRANS( ITEMS, ITEM.CODE, ITEM.NAME)" which means "read the table items, look up the item.code, return the item.name". And because it's key/value, it's a single request to disk ... there's absolutely no searching of the ITEMS file. That's why Pick doesn't have joins - anywhere SQL would use a join, Pick just uses a TRANS.
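For comparison, here is a minimal PostgreSQL sketch of the same two columns, along the lines of the links above. All table and column names below are invented for illustration, and note one difference: PostgreSQL generated columns are stored, whereas the Pick ones are virtual.

    -- Hypothetical lookup table
    CREATE TABLE items (
        item_code text PRIMARY KEY,
        item_name text NOT NULL
    );

    -- The PRICE * VAT.RATE calculation as a generated column
    CREATE TABLE line_items (
        item_code text REFERENCES items (item_code),
        price     numeric NOT NULL,
        vat_rate  numeric NOT NULL,
        vat       numeric GENERATED ALWAYS AS (price * vat_rate) STORED
    );

    -- The TRANS(ITEMS, ITEM.CODE, ITEM.NAME) lookup written once, as a view;
    -- queries can then treat item_name as just another column of line_items
    CREATE VIEW line_items_v AS
        SELECT li.*, i.item_name
        FROM line_items li
        JOIN items i USING (item_code);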
> Or this: https://www.postgresql.org/docs/14/rules-materializedview...
So my Pick table IS your materialised view, with up-to-date data, because I've predefined all items of interest as virtual generated columns. Of course, I can't update them through this view ... but postgresql has the same limitation, and I believe many RDBMSs have difficulty updating views ...
> Or this: https://www.postgresql.org/docs/14/sql-createtrigger.html
Ummm. Pick copied triggers from relational, so it did nick plenty of good ideas :-)
Cheers,
Posted Mar 19, 2022 0:35 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (18 responses)
Perhaps you should just dedicate some time to studying the modern databases?
Posted Mar 19, 2022 1:39 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (17 responses)
Why? (actually, I'd agree that it CAN lead to an unmaintainable mess, because it doesn't enforce structure by default, but ...)
> Perhaps you should just dedicate some time to studying the modern databases?
Did you leave out the word "relational" there? Which is actually the same age as Pick give or take a year.
Or did you mean "study databases that NEED a hundred-line query to do what my database can do in one line"?
Relational maths is good, but as I was arguing with nix, it's maths. You can NOT draw any conclusions about reality from maths. It's easy to prove mathematically that Newton's Laws of Motion are correct. The problem is that the Universe begs to differ. It's easy to prove things in maths that are just plain impossible in reality.
And what I find so utterly frustrating is that - as soon as I make any evidential claims about Pick - that are grounded in reality not maths - nobody is prepared to even try to provide "the exception that proves the rule".
Take that invoice I mentioned to nix - two addresses and ten line items - can you see ANY WAY that a database could retrieve that information in less than thirteen requests to the underlying storage? Apart from my obvious blunder in letting an immutable table (invoices) refer to a mutable table (addresses). And the fix for that would be store the addresses in the invoice table, reducing the needed accesses to 11.
If you can't come up with any theoretical way possible to retrieve that information from a database in fewer accesses, then you have to accept my claim that Pick is "shit off a shovel fast". And no, storing it in a document where it's not structured is not storing it in a database :-) "The exception proves the rule" - and if you can't come up with an exception to my claim, you have to admit its accuracy.
And did you see my refutation of nix's claim that the Relational abstraction was better than the Pick abstraction? I'd never thought of it that way before, but I regularly mention Einstein's statement "make things as simple as possible BUT NO SIMPLER". The second law of thermodynamics proves that the Pick abstraction is superior. It has lower entropy.
Which is why nix's hundred-line query is so easy to express in one line in Pick. Pick is more powerful, more expressive, yes more complex than relational. But because relational is TOO SIMPLE, expressing a simple query like "get me the latest version" is made excessively complex by relational's lack of even the *concept* of ordinal numbers. How do you even express a requirement like "latest" in a language that doesn't have ordinals? Because Pick has the concept of order, these queries are dead easy to express.
Or in other words, because I can reach BELOW the relational concept of First Normal Form, I can express things far more powerfully. Going back to your comment about an "unmaintainable mess", because I've got more power, I have a far greater opportunity to screw things up. BUT because I have more power, I also have the ability to express things with a much greater TOTAL simplicity. For the same problem, I can come up with a far simpler solution that you are denied.
Think 1984 and Newspeak. If you control the language, you control what the proles are capable of thinking. First Normal Form has no concept of order, of the ordinal numbers. It has no concept of objects, of nouns. I've always thought it weird that it has attributes (adjectives), but it has nothing for those attributes to belong to (nouns). If the concept doesn't exist in your language, you don't realise what you're missing.
So I'd counter your claim that I should "study modern databases" by saying you should remove your blinkers and try to appreciate what other databases (primarily NoSQL) bring to the table. From what I can make out, most of the genuine NoSQL databases are very similar in many ways to Pick.
(And as I said to nix, if you insist on accessing Pick as a First Normal Form database, any properly designed Pick Schema can present you with a FNF view just by *discarding* the excess information. You can't go the other way - you can't magic that information back.)
Cheers,
Posted Mar 19, 2022 1:46 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
To the extent I'd describe your typical SQL query as Baroque. Yes, baroque is beautiful. Many of these SQL queries are beautiful. But much of Bach's music is beautiful because he applied all sorts of unnecessary rules and constraints to his composing. Making his music very complex. SQL is the same - it's beautiful and complex because it's hide-bound by all these unnecessary constraints.
Cheers,
Posted Mar 19, 2022 2:14 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (15 responses)
Because you're mixing logic and data. Ideally you would like to keep the data separate, in a normalized form.
> Or did you mean "study databases that NEED a hundred-line query to do what my database can do in one line"?
I'm sorry, but so far I haven't seen you provide such an example. Moreover, I'm actually 100% certain that any modern RDBMS can emulate all Pick's features easily.
Sure, SQL syntax might be a bit weird, but modern RDBMS allow writing extensions in friendlier languages like Python or even C#.
Posted Mar 19, 2022 11:13 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (14 responses)
> Because you're mixing logic and data. Ideally you would like to keep the data separate, in a normalized form.
Ideally you want to keep logic and data separate, true. And in Pick there's nothing to stop you (apart from your own incompetence). The difference is that Relational *tries* to stop you; Pick gives you plenty of rope to hang yourself.
But I ALSO want to keep data and metadata separate (I define metadata as anything that can be logically derived from the data. So ordinality - first, second, last etc is NOT data).
The act of First Normal-isation muddles data and meta-data such that it is impossible to keep them separate.
> > Or did you mean "study databases that NEED a hundred-line query to do what my database can do in one line"?
> I'm sorry, but so far I haven't seen you provide such an example. Moreover, I'm actually 100% certain that any modern RDBMS can emulate all Pick's features easily.
Well, nobody's provided me with a hundred-line SQL query to emulate. But my personal experience as a Pick programmer was that all my queries were two or three lines at most. If in all my decades of programming Pick I've never met a query like that, yet in six months of programming SQL pretty much every query I've ever worked with is 20, 30, 50 lines long, then I think the *evidence* is on my side.
As for emulation, it's noticeable I'm far more into the Science / Reality side of things than the Maths / Theory side. Maths can prove anything it likes. If the Universe begs to differ, sorry I'm on the side of the Universe. Going back to the topic of the article :-) let me pose a question - given hardware of EQUAL ability, which is FASTER? To run your IBM/370 Cobol executable on an IBM/370, or on an IBM 370 machine code emulator? OF COURSE running it on the emulator will be a lot slower.
And I'm not going to argue with your statement that Relational can emulate Pick. It's maths, of course it can be done. But did you see my "Proof by Physics" that - actually - it CAN'T be done EASILY.
My Pick abstraction is a mathematical Proper Superset of your First Normal Form. If I provide a layer (I hesitate to call it an emulation layer) that THROWS AWAY my primary-key, ordinality META-data, I'm not *emulating* FNF, I'm *providing* FNF. And I've been forced to increase my entropy to do so. And so the second law of thermodynamics says that if you want to emulate Pick it requires Work. And lots of it. Because in order to reduce the entropy back to the Pick system's natural level you have to expend work to reduce entropy. (Notice that, in the Physics sense, I did ABSOLUTELY NO WORK AT ALL to give you your Relational model.)
Likewise when I threw my invoice model at you. Let me give you that challenge again. You have an invoice with two addresses and ten line items. What is the *minimum* number of data accesses (in SQL terms rows, in Pick terms RECORDS, in data theory terms I'll call it molecules rather than atoms - atoms bound together such that removing one will destroy the information) required?
Note that you can't be clever and say "I'll group all my line items for the same invoice together on invoice number", because they're part of the General Ledger, and that will want to use different access logic.
What's the speed limit of the data Universe? Can you conceive of ANY POSSIBLE THEORY WHATSOEVER that would allow you to access those 13 molecules of data in less than 13 accesses? Can you see any way of re-organising them into less than thirteen molecules (and yes, I did fix a bug to reduce it to 11 :-)? Because I can do a Gedanken experiment that proves with Pick I need (on average) between 13 and 14 accesses. I can do a real experiment to test that. Because Relational says "don't worry about the man behind the curtain" you can't even do a Gedanken experiment.
I'll state my case very simply - THE SECOND LAW OF THERMODYNAMICS says Relational is - MUST BE - slower and more complicated than Pick in real life.
> Sure, SQL syntax might be a bit weird, but modern RDBMS allow writing extensions in friendlier languages like Python or even C#.
And Pick has its own version of BASIC since before I started programming four decades ago. And it's a very friendly language. :-)
Cheers,
Posted Mar 19, 2022 12:09 UTC (Sat)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
This is good.
But so much of the SQL I write is not "fetch this record".
It's "for the client's customers whose renewal dates are in a certain range, find me all of their payments by a certain method, in a certain status, which were created, modified, or finalized in a certain timestamp range".
Posted Mar 19, 2022 17:22 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
Great. A challenge.
Firstly, I'm going to assume processing costs are much less than disk access costs so I'm going to ignore them. They're lost in the noise. And I'm going to optimise my Pick data layout, but I think the rules I apply will be pretty much the same rules you use. Accessing the same record/row multiple times will be counted as one disk access. I'll only call it out as a win for Pick if I can show that one record access for me equals multiple row accesses for you ...
And I'll assume, as it appears to be a subscription model, that I have a list of all those payments stored in the client record. The physical implementation of which will be an index into the ledger, on client id.
And bear in mind my SELECT command does not return a record set, it returns a list of IDs for me to process further. Which means I can actually save, and retrieve the list again later, for further processing without the cost of having to do a new SELECT and all the costs involved. That doesn't apply here though.
Okay, let's go. My first two commands could easily be combined into one, but it's easier to explain as two. I'm assuming RENEWAL.DATE has been predefined as an index.
select CUSTOMERS where RENEWAL.DATE >= date1 and where RENEWAL.DATE <= date2
This would be a complete index scan. Curiously enough, the more customers there are, the more likely the key and value are to get separated which will increase the efficiency of the scan.
select CUSTOMERS saving LEDGER_ID list.required
The "list.required" keyword will cause the command to terminate if the previous select didn't return a list of customers. Otherwise you'll get a mess reminiscent of screwing up your join type in relational :-)
But this command will now read the CUSTOMER index on the LEDGER file. For each customer renewing, it will take one disk access to retrieve a list of all the ledger entries of interest. LEDGER_ID is defined on the CUSTOMER file as a calculated field which does that retrieval.
select LEDGER where METHOD = this and STATUS = that list.required
I'll actually stop here because I think "timestamp range" is a SQL construct? It splits a huge table into a whole bunch of smaller separate tables which share the same table definition? Pick has a similar construct called "distributed files".
I'll assume it's not feasible to index METHOD and STATUS because the key range is small and number of values is huge, but if I could it would be a simple "read two values from index, intersect all three lists of keys". My worst case is having to go through the list of keys, read every item, and select by comparing the actual values. That's one disk read per ledger id.
list LEDGER list.required
If I haven't already pulled all the records into ram, this pulls all of the ledger records in, so I match the fact that SQL actually creates a record set. Without this, I've merely created a list of all the records I want, and I'm cheating :-)
As for the timestamp stuff, if I've understood this correctly, Pick requires me to store this information in the record key, so I can select the key list and select the records of interest with an in-memory scan of the key list.
So!
As I understand relational, you can't even do that analysis of how much work your query is going to involve? You can't ask "the wizard behind the curtain"?
And let's forget about the database entirely. Use information theory. Is it even possible to retrieve that information with less work? As I said in a previous comment, for every "request for data" Pick makes to the disk, it has an approx 5% miss rate.
Cheers,
Posted Apr 6, 2022 22:59 UTC (Wed)
by nix (subscriber, #2304)
[Link]
This really really does sound like absolute crankery. We know the relevant natural law in this area where computers and information processing are concerned. It's the Bekenstein bound, and we are so *so* far from it that there is literally no way it can possibly apply to anything you have ever done (not unless you're using a black hole at absolute zero as a computer and you hadn't mentioned it).
Posted Apr 6, 2022 23:14 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (10 responses)
No, it doesn't. All the Pick features (multi-dimensional arrays and computed views) are available in stock PostgreSQL and they are implemented in a similar fashion. There's literally nothing in Pick that can't be done better, faster and more flexibly in Postgres.
I'm pretty sure this holds for other large databases like MSSQL and Oracle.
Posted Apr 7, 2022 9:19 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (9 responses)
It's also false, now that I've investigated the theory in depth. Chris Date has shown in some of his textbooks on database systems that, assuming that you get the data into 4NF or higher (for which I've not yet found evidence that you can automate it - DDL translation appears not to be solved here), two things hold true:
So, unless your use case is trivial, a mechanical translation of your DML/DQL to SQL is going to save you work in the long run - you do a small amount up-front to translate to SQL via a compiler, but your SQL database can then do fewer disk accesses on average than Pick does for the same results.
The fun result comes in with domain-key normal form (DKNF); it is possible to represent certain data models in a relational database in DKNF form without duplication, where an MV database (which is equivalent to 4NF) has to duplicate the data and rely on external processes keeping the duplicates in sync on edit.
Posted Apr 7, 2022 12:07 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (8 responses)
How on earth does a SQL database do fewer disk accesses?
Each disk blob is associated with a primary key - which belongs to what I would call an entity. Which contains a 3NF/4NF/whatever representation of ALL my attributes. Those attributes include the primary key(s) of other relevant entities, for example my wife, my car.
Let's assume you want to look me up. The FILE is keyed on NI-number. You run the query on my name and yes, Pick needs an index scan to convert name to key. Then, by requesting JUST ONE disk block (assuming all this data fits in said block) you have my address, my wife's key, all my phone numbers, a list of my employer(s), a list of all my car(s), etc etc.
It takes just one more request for one disk block, and you have the same information about my wife.
(That's assuming I haven't chosen to split any bulky data, eg photo, into a separate FILE)
If this data is stored in FNF in your relational database there is just NO WAY you can be that efficient. If you're relying on AI in your database, are you sure it's going to get it right? With Pick, one disk request for one disk block gets ALL the attributes associated with any entity's primary key (unless it's deliberately been configured otherwise).
And if you want the data on my three cars, you've now already got ALL THREE primary keys - that's three more requests to disk for three more blobs. Try doing THAT with a 1NF data store!
> The fun result comes in with domain-key normal form (DKNF); it is possible to represent certain data models in a relational database in DKNF form without duplication, where an MV database (which is equivalent to 4NF) has to duplicate the data and rely on external processes keeping the duplicates in sync on edit.
Looking at Wikipedia's definition of DKNF, it's clear you don't understand. I would probably store someone's status in a virtual field.
https://en.wikipedia.org/wiki/Domain-key_normal_form
IF WEALTH LT 1,000,000 THEN "" ELSE IF WEALTH LT 1,000,000,000 THEN "Millionaire" ELSE "Billionaire"
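(In SQL terms, one way to derive that status without storing it would be a view; the table and column names below are invented for illustration:)

    CREATE VIEW wealthy_person_v AS
        SELECT name,
               wealth,
               CASE
                   WHEN wealth < 1000000    THEN ''
                   WHEN wealth < 1000000000 THEN 'Millionaire'
                   ELSE                          'Billionaire'
               END AS status
        FROM wealthy_person;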
But it's easy to do in many ways. But try doing THIS without storing duplicate data ... going back to the bookstore example, and apologies for the formatting ... bear in mind I'm defining line-item as an *attribute* of order, and not as an entity in its own right ...
ORDER_NO  CUST_NAME         BOOK                                 QUANTITY  PRICE  TOTAL.PRICE  TOTAL.COST
1234      "Liza Doolittle"  "The Importance of Being Earnest"       4       £5       £20          £34
                            "Swallows and Amazons"                  2       £2        £4
                            "War and Peace"                         1      £10       £10
Only the first five columns actually exist as data (the last two are calculated, or virtual, columns). And that is ALL the data that is stored - there is only ONE copy of ORDER_NO or CUST_NAME.
Put that into 1NF and you've instantly got three extra copies of ORDER_NO, as an absolute minimum. Yes you could put BOOK, QUANTITY and PRICE into arrays inside cells inside your order table, but does SQL know how to handle it without some horrendous complicated expression? And if you split it into a separate table, then order is "meaningless but preserved" - what extra crud in the query do you need to handle that?
And if I suddenly decide to treat line item as an entity in its own right I could re-organise the data, or I could just treat it as the sub-table it currently is, and access it like any other table.
Cheers,
Posted Apr 7, 2022 14:50 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (7 responses)
You really need to read something like An Introduction to Database Systems by Chris Date to make your arguments.
In the case you've described, both systems indeed do the same number of accesses. However, you're doing point lookups - lookups on a single entity. Where relational wins is when I want to do a more complex query - e.g. how, given the database layout you have shown, do I get all people that have been paid by a given list of employers without rearranging the data as stored? In a multivalue database, I have to do more work than in 4NF.
And I understand perfectly - Wikipedia's definition of DKNF is incomplete as compared to the one you'll find in a textbook on database design, and you're not actually putting your data into DKNF. Your description of putting it into 1NF is correct, but that's not how you are supposed to use relational - at a minimum, you need BCNF for optimal results, and there are cases where BCNF is insufficient and you need 4NF or 5NF.
To convince me, you need to demonstrate why Mr Date's cited proof in the textbook given above that relational with 4NF is equivalent to or better than multivalue is wrong. So far, all you've been able to show is that you don't understand relational to the level of a student who's taken a course in database systems, which is why you're coming across as a crank - of course multivalue beats relational if you hobble relational by insisting on only going to 1NF, just as relational would beat multivalue if you insisted that you cannot have more than one value for a single attribute (hence hobbling multivalue by taking away the thing it does well).
Posted Apr 8, 2022 0:36 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (6 responses)
Why on earth would I want to re-organise the database? I don't remember any mention of an employee database (my favourite example is invoices and I did that bookstore one). If I have an employer database I would have all the employer data in one FILE, all the people data in another. What would I want to move, and where would I want to move it? This shows you clearly have no clue about the Pick data structure, sorry.
And I don't know what you mean by "where relational wins is when I want to do a more complex query". So far, "more complex" seems to me to mean "using a join", which simply shows you have no clue WHY JOINS ARE UNNECESSARY in Pick. The whole point of Pick, is EVERYTHING is a point lookup on a single entity, apart from the initial search. And if you happen to know the initial primary key you don't even need that search. That's why it's so fast.
Let's take your example. I guess you have a table "payments". I'll have a FILE "payments". Chances are my FILE will be in 1NF because I think all the attributes are single-value. Random GUID as primary key, at least two attributes "payer" and "payee" which are the primary keys of the "employer" and "person".
SELECT PAYMENTS WITH EMPLOYER_ID EQ {list of employers} SAVING UNIQUE PERSON_ID
LIST PERSON
Is that REALLY more complicated than your SQL?
If the PAYMENTS file has an index on EMPLOYER_ID I haven't at any point done a table scan - EVERYTHING has been a point lookup. Even if I was mad enough to store the payments in the person table, all I need is an index on EMPLOYER_ID. If I stored the payments in the employer file I think it would be even easier.
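(For reference, a minimal sketch of the equivalent SQL, again with invented table and column names and an example list of employer keys:)

    -- Hypothetical payments(employer_id, person_id, ...) table
    SELECT DISTINCT person_id
    FROM payments
    WHERE employer_id IN ('E1', 'E2', 'E3');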
> To convince me, you need to demonstrate why Mr Date's cited proof in the textbook given above that relational with 4NF is equivalent to or better than multivalue is wrong. So far, all you've been able to show is that you don't understand relational to the level of a student who's taken a course in database systems, which is why you're coming across as a crank - of course multivalue beats relational if you hobble relational by insisting on only going to 1NF, just as relational would beat multivalue if you insisted that you cannot have more than one value for a single attribute (hence hobbling multivalue by taking away the thing it does well).
So you point me to a £150 text book, and tell me Wikipedia is wrong ...
Oh - and at no point whatsoever have you provided any evidence that you understand what's going on "under the bonnet" of your favourite relational system. If the data is not stored on disk in 1NF, how is it stored? I'll freely admit I don't know, but I get the impression you don't either.
And I'm left with the impression that pretty much any half-decent EAR analysis makes all your normal forms just fall out in the wash. If I store all of an entity's attributes together in a blob, with a DICTionary that makes breaking it down into 1NF easy (think an XML DTD), then you can build all your normal forms on top of it. But you cannot beat Pick for actually getting that data off of backing storage into the database.
I'm left with the impression that you know a lot about relational theory, but your knowledge stops when it hits the database. And it also stops with Relational, without understanding the advantages and disadvantages of other databases and theories. Whereas I know far more about what - Pick at least - actually DOES with that data to stash it away and retrieve it. Relational explicitly hides how it stores the data "in case we find a better way". Pick says "no better way is possible", and I'm afraid I agree with Pick.
Cheers,
Posted Apr 8, 2022 8:13 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (5 responses)
I'm not telling you Wikipedia is wrong - I'm telling you that, as an encyclopedia, it's abbreviated and incomplete compared to two chapters of one of the definitive texts on database theory.
And data is not stored in 1NF in a relational database - it's stored in the tables that you've defined. If you model the data in 1NF, then those tables are in 1NF; if you model it in 6NF, then those tables are in 6NF. See the PostgreSQL documentation on storage for how PostgreSQL stores that data on disk. In 4NF, where the proof of beating relational exists, I'd have multiple tables to match your Pick file.
At this point, I'm done. You're refusing to read the definitive text on databases, which covers multivalue as well as relational, you're ignoring 50 years of well-studied theory, and you're making assertions about how relational works that are simply dishonest and false.
Posted Apr 8, 2022 8:41 UTC (Fri)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
Awkward typo there :)
Posted Apr 8, 2022 9:46 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
You're right - I meant to write "proof of relational beating multivalue". Still annoyed by this thread, which Wol has chosen to ignore, in which I showed that my mental model of PostgreSQL was accurate, and it beat Pick on Wol's chosen metric.
But, of course, Wol chose to dismiss it on the basis that having made a prediction, I validated it against reality, and my prediction turned out to be accurate - it's only "scientific" if it fits his prejudices, not if it can be verified against what the database engine does.
Posted Apr 8, 2022 11:53 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (2 responses)
I'm REALLY confused now ...
I have just one FILE.
You have multiple tables.
The documentation tells me PostgreSQL stores data, in rows, in tables.
So you can read MULTIPLE rows, from MULTIPLE tables, faster than I can read ONE record from one file? Truly?
Cheers,
Posted Apr 8, 2022 12:38 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (1 responses)
Yes, because I can find the right place to read from the tables faster than you can find the right place to read in your file. We had a worked example in another thread, where you indicated a total of 10 reads to output the right data, while in the relational example you were trying to beat, 5 reads was enough for PostgreSQL, and 6 reads were needed by MySQL because MySQL doesn't have useful search indexes for one of the cases.
And, of course, I can store multiple tables in a single file under the relational world, and thus just have to read one record from one file, too.
Recall your basic database history: we start with record-oriented workflows, where each entry is a single record, one value per column. We then add indexes that allow you to go from value to matching records for a column quickly. There are two directions that theory takes from here:
For complex reasons (please find a decent textbook in a library, like Christopher Date's one that I've already recommended), if you store your data in tables that are in 4NF or higher, then a mechanical translation of a multivalue insert or read to a sequence of relational inserts or a single join in relational will never involve more disk access than the corresponding change in a multivalue database. The extra tables and indexes that relational has to handle impose a cost, but so does multivalue's handling of multiple values for a single column, and once you're in 4NF or above, relational's cost is equal to or less than multivalue's cost.
Posted Apr 8, 2022 22:16 UTC (Fri)
by Wol (subscriber, #4433)
[Link]
That's a new claim ... My files are hashed - if I know the primary key I know exactly where to look with almost no effort whatsoever. And the whole point of Pick is that it's all primary-key based.
I think I remember the example about 10 reads - is that because the example contained 10 table rows? I don't remember you saying PostgreSQL only needed 5 reads. One for the order details and four for the 10 rows? How does PostgreSQL retrieve ten distinct rows in only four disk blocks? The answer is obvious: they happen to cluster. But is that natural clustering because the table is small, or because PostgreSQL is intelligent enough to recognise they are related and cluster them together? Do you know? Because Pick will benefit just as much from natural clustering. But if I stored those ten rows as a sub-table in the order details I have intelligent clustering and the number of reads specifically to retrieve those ten rows is ZERO - they've come along with the order details.
(I've carefully avoided claiming - or taking into account - anywhere I think that outside influences benefit or hinder either database equally - like I talk about calls into the OS, but I assume the OS treats us both alike ...)
> Multivalue, where a single column can now have multiple values in a single record. This makes the records bigger, and the indexes more complex (thus more expensive to read)
????????????????
Single value index: value->primary key
Multi-value index: value->primary key
Either you don't have a clue what multi-value is, or your definition of multi-value is completely different from mine.
Likewise your definition of hashed file seems to be completely different from mine - the whole point of hashed files is that if you know the key, you can compute the location of the data you are looking for in no time flat. How does PostgreSQL find the location of the rows FASTER than a hash-compute? Do you know, or are you just blindly believing EXPLAIN without understanding how it does it? Because I would LOVE to know how it does it.
I'll give you that Pick records are far bigger than relational rows. But the statistical probability that you WANT that extra data is high. Which means the information/noise ratio PER BLOCK READ is also high. What's the probability that you want the extra rows dragged in with each relational read? Low? So the information/noise ratio decays much faster with Relational as database size grows, precisely because each "unit of data" - your row to my record - is much smaller. Bigger records means less noise in the retrieved blocks.
Cheers,
Posted Mar 19, 2022 4:09 UTC (Sat)
by ssmith32 (subscriber, #72404)
[Link] (2 responses)
For the record, I believe a few modern relational DBs have some sort of key/value store backing them (e.g. CockroachDB, IIRC).
And as far as index joins being fast, once you get to non-trivial data sizes, many dbs are column oriented with no indexing available. I'm guessing because storing indexes for that much data becomes unmanageable. Unless you count bitmaps as an "index".
Of course, this is all true.. until hardware makes your non-trivial data trivial again, many features are rediscovered, and the circle is complete.
Posted Mar 19, 2022 12:02 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (1 responses)
> For the record, I believe a few modern relational DB have some sort key/value store backing them (e.g. cockroachdb, IIRC).
And, we use Toad at work to access Oracle. Which gives me the IMPRESSION that Oracle is one of them.
> And as far as index joins being fast, once you get to non-trivial data sizes, many dbs are column oriented with no indexing available. I'm guessing because storing indexes for that much data becomes unmanageable. Unless you count bitmaps as an "index".
Interesting. I would have thought that's a solved problem. Certainly in Pick it is. We have the concept of a data segment, in which ALL data is stored; an index is just another data segment. It's managed by the DB, but you can create a data-pointer to it and it looks for all the world just like any other data file. It just contains your index field and your record id inverted. Oh, and because we have the concept of calculated columns, you can create ONE index on MULTIPLE columns dead easy - just index a virtual field! Just don't create your virtual index using other virtual fields, as Pick won't spot that the other table has changed, and will leave you with a corrupt index.
But in other words, if your index is too big, then surely your data is even more so ...
> Of course, this is all true.. until hardware makes your non-trivial data trivial again, many features are rediscovered, and the circle is complete.
Exactly the "throw hardware at the problem" attitude I hate, rather than using smarts to avoid wasting energy and throwing brute force at it.
Cheers,
Posted Apr 6, 2022 23:06 UTC (Wed)
by nix (subscriber, #2304)
[Link]
> And, we use Toad at work to access Oracle. Which gives me the IMPRESSION that Oracle is one of them.
Only as an add-on extension that is, uh, rarely used: the database proper is relational like everyone else. (Disclaimer: I work for Oracle, but not on anything remotely related to this stuff -- and my knowledge is ten years out of date now. But back then, use of this stuff was rare, and I've seen nothing to suggest it is much more common now.)
Posted Mar 20, 2022 13:39 UTC (Sun)
by lbt (subscriber, #29672)
[Link] (1 responses)
You tease!
Posted Mar 20, 2022 13:59 UTC (Sun)
by Wol (subscriber, #4433)
[Link]
Search for ScarletDME on github ... and be aware it has "features"!!! You know - the sort of thing you would call a bug if only you could find the time to fix it!
Reach out to me off-line. Join the mailing list. Just be aware you'll be jumping into a hotbed of people who've drunk Getafix's magic potion, and we'll be trying to convert you :-) If you're serious, you will be inundated with help ... just be aware, it is quite a mind-jolt to a very different way of looking at things ...
Cheers,
Posted Mar 16, 2022 9:10 UTC (Wed)
by joib (subscriber, #8541)
[Link]
Maybe the mainframes have more PCIe lanes etc. than you'd find even in the highest end x86 server due to the market they're targeting, but that's a difference in degree not in kind. I'd guess there's lots of mainframe workloads that'd run just fine on a slightly lower end platform.
Posted Mar 16, 2022 18:03 UTC (Wed)
by mmaug (subscriber, #61003)
[Link]
But a lot of the I/O load on the mainframe was memory management. Emulate the section swapping, making it a no-op, and you're left with an I/O load that any JS code written by a script kiddie could make look pedestrian. We had constraints that we had to design and program for; those are not the constraints we have in this world...
Posted Mar 15, 2022 22:13 UTC (Tue)
by bartoc (guest, #124262)
[Link]
It will be nice to see what kind of nifty code they can generate for the kinds of programs Cobol is suited to. Maybe it will make filing my taxes easier some day (afaict the US IRS electronic filing system uses COBOL quite extensively).
Posted Mar 16, 2022 9:29 UTC (Wed)
by sdalley (subscriber, #18550)
[Link] (2 responses)
> Eventually, if the gcc maintainers are interested, we would like to pursue full integration with gcc. For the moment, we have questions we're hoping can be answered here by those who ran the gauntlet before us. Given the state of the internals documentation, that seems like our best option. We've been rummaging around in the odd sock drawer for too long.
It would be interesting to hear whether they considered using LLVM as a foundation rather than gcc, as the Rust developers did, since it is reputedly more modular with better documented pieces. Was it because gcc has a lot more available back-ends? Do they have more of an inside track with the gcc developers and/or familiarity with gcc internals and can thus get more quickly up to speed?
James did say:
> I am happy to debate the lunacy of this project...
so I just thought I'd ask. At the end of the day it's absolutely their project and they can of course do what they wish.
Posted Mar 17, 2022 17:20 UTC (Thu)
by jklowden (guest, #107637)
[Link] (1 responses)
We didn't know anyone working on GCC or have any prior experience extending GCC. My "read" on the project is that it would be amenable, culturally and technically, to adding new languages, as evidenced by the several new additions in recent years.
I do have some prior negative experience working with LLVM. Upon a time, I wanted to use the clang "toolkit" to produce a code-analysis database for C++ projects. Experience in large C++ shops proved that existing tools to analyze and navigate C++ projects are either based on C (or not even that), and thus can't distinguish A::foo from B::foo, or fall over when the corpus approaches 1 million SLOC.
For example: show all functions calling A::f(int) and their antecedents, back to main(). Show all calls to A::f(int) that provide an rvalue for the argument and ignore the returned value. There used to be tools to do that kind of thing in C (Digital's Source Code Analyzer, for one) but never a free one, and I've never heard of any for C++ at any price.
Clang at the time was thinly documented and already very complicated. I was unable to get off the ground with it. Because of that experience -- and, yes, because of the wide adoption and deployment of GCC -- we opted to base our project on GCC.
FTR: my experience is just my experience. Time has gone by, and the same clang river is not even there to step into again. You asked how we decided, and that's the answer.
Posted Mar 17, 2022 18:24 UTC (Thu)
by rahulsundaram (subscriber, #21946)
[Link]
I would be interested in a blog post or two discussing this aspect of the journey: how long did it take to be productive, what you found easy or difficult, what you consider the positives of the architecture, etc.
Posted Mar 16, 2022 23:48 UTC (Wed)
by professor (subscriber, #63325)
[Link] (1 responses)
IBM, please make the platform available for people who don't have $999999999 in their pocket!
I think this thing is good since it can get new people to be interested in COBOL and when they finally are in their finest conversion cloud they realize that "oh, this crap actually runs best on the IBM Z/Mainframe anyway so why are we not doing it? Why GOTO Java, crap, etc."
Posted Mar 17, 2022 8:41 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Universities don't have mainframes ...
(Didn't some ISP somewhere buy a bunch of mainframes as the cheapest way to provision/sell loads of managed servers for geeks like us running linux?)
Cheers,
Posted Mar 18, 2022 0:00 UTC (Fri)
by jccleaver (guest, #127418)
[Link] (3 responses)
Posted Mar 18, 2022 19:56 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
(When the z800 was released, some - was it Finnish? - ISP bought one on the basis that it only had to sell 15000 virtual servers to break even, and it was capable of running a lot more and making a very nice profit thank you.)
Cheers,
Posted Mar 18, 2022 21:22 UTC (Fri)
by mpr22 (subscriber, #60784)
[Link]
"Betting"? We can go and look at their product offerings.
IBM Cloud offers IBM Z, x86-64, and Power hosting.
Microsoft Azure offers x86-64 Windows or Linux hosting (either Intel and AMD depending on exactly which service you order) with explicitly stated CPU model numbers and clock speeds.
AWS offers x86-64 (again, either Intel or AMD depending on exactly which service you order) and ARM hosting, again with explicitly stated CPU model numbers and clock speeds.
Now, the business administration and billing systems? Those might be running on IBM mainframes, I guess (though perhaps not at Microsoft, given history ;)
Posted Mar 18, 2022 21:34 UTC (Fri)
by excors (subscriber, #95769)
[Link]
Like https://aws.amazon.com/mainframe-modernization/ (with some more concrete details in https://aws.amazon.com/blogs/apn/transitioning-mainframe-... etc)?
They say:
> At launch, Mainframe Modernization supports two main migration patterns: 1) automated refactoring to transform COBOL mainframe code into Java, and 2) replatforming with middleware emulation to a mainframe-compatible runtime environment. The goal of replatforming is to reduce code changes as much as possible to decrease risk and accelerate migration timelines. Both patterns facilitate service decomposition toward creating macroservices or microservices, and you can use both patterns on a single project based on business objectives.
i.e. you either translate your COBOL to Java, or run it in an emulator, using EC2 servers and a standard cloud database service (like Amazon Aurora), and it sounds like there are various tools to help automate that process. Then you can incrementally split out components and redesign them as loosely-coupled microservices running on separate servers with their own smaller databases (to improve performance and scalability and reduce cost etc, because running the original mainframe-like application design on cloud servers is probably very inefficient and is just a temporary stage during migration), until you end up with a modern cloud-based system architecture.