Consider the web in 1994

Posted Jul 24, 2024 6:36 UTC (Wed) by b7j0c (guest, #27559)
Parent article: Imitation, not artificial, intelligence

People seem very hung up on this "it's just a parrot" thing.

If it produces utility, who cares? The reductionist jabs definitely feel like a midwit reflex at this point.

People also seem hung up on damning AI because of hallucinations.

Think of this as the web in 1994. Totally not ready for primetime but absolutely fertile ground for development.

AI isn't essential technology for anyone today, but it is already yielding tangible benefits that you should not underestimate. For example, my son is a college student working at an AI company as an intern. He is relatively sharp but is unskilled. He gets a general idea of what he wants to do (build something based on Docker, for example) and uses Anthropic Claude to guide him from knowing nothing to having tangible bits he can use towards building solutions within a couple of days. Claude basically erased the demand for deep experience in this case.

The next decade is going to be interesting.



Consider the web in 1994

Posted Jul 24, 2024 7:15 UTC (Wed) by LtWorf (subscriber, #124958) [Link] (50 responses)

Just going to point out that tutorials were available before.

Making information more accessible

Posted Jul 24, 2024 7:43 UTC (Wed) by karath (subscriber, #19025) [Link] (21 responses)

Many great tutorials are surprisingly hard to find, buried in a miasma of outdated material and misleading and plain wrong answers given on random forums. Something that helps a beginner more quickly find the good stuff means that not only do they solve the immediate issue, they can also develop the critical mindset to filter out the junk.

Making information more accessible

Posted Jul 24, 2024 8:48 UTC (Wed) by Wol (subscriber, #4433) [Link] (5 responses)

> Many great tutorials are surprisingly hard to find, buried in a miasma of outdated material and misleading and plain wrong answers given on random forums.

Most great tutorials are never written!!!

In FLOSS especially, we're great at writing manuals, which, unless you already know what they're trying to tell you, are written in incomprehensible jargon (nothing wrong with jargon, it's just specialist language, but if you're not a specialist you can't understand it).

We're pretty poor at writing cookbooks - my favourite method of learning. Look at the raid wiki for the article on building a home server - I would hope anybody who follows it will understand WHAT I did, WHY I did it, and by following it will easily end up with pretty much the same system I did. That might not be what they want, but it will build their understanding and enable them to change bits to get closer to what they want.

But no way is that a proper - or even good - tutorial. Still, it wouldn't surprise me if, as a *tutorial*, it's the best most people will be able to find for building a raid server.

Cheers,
Wol

Making information more accessible

Posted Jul 24, 2024 12:44 UTC (Wed) by LtWorf (subscriber, #124958) [Link]

Well AI learnt what it knows from the documentation that did get written. It won't know more than that.

Making information more accessible

Posted Jul 25, 2024 9:09 UTC (Thu) by khim (subscriber, #9252) [Link] (3 responses)

> Which unless you already know what it's trying to tell you are written in incomprehensible jargon (nothing wrong with jargon, it's just specialist language.

Try to read any tutorial written in the 19th century and you would have the same issue. Except maybe actual cooking tutorials, since cooking hasn't changed that radically in 200 years.

Tutorials are never a replacement for expertise, and the only way to learn something is either by experimenting or from someone who already knows what they are doing. Good old apprenticeship.

What we have these days just creates ignoramuses that have a simple, straightforward “program” in their head without any “error handling”.

These people know what to do in the “happy case”, when everything works fine, but are ever less capable of handling any exceptional cases.

AI is just another step down that degradation road: instead of a worker who has a way to handle the “happy case” but no knowledge of error conditions, it gives us a worker who can only handle some “happy cases” and doesn't even know that errors may exist.

Making information more accessible

Posted Jul 29, 2024 0:17 UTC (Mon) by Paf (subscriber, #91811) [Link] (2 responses)

It’s really obvious from your comment you haven’t actually tried these tools. Frontier LLMs are in fact quite good about including error handling in code, especially if you ask them.

Making information more accessible

Posted Jul 29, 2024 0:46 UTC (Mon) by khim (subscriber, #9252) [Link] (1 responses)

> It’s really obvious from your comment you haven’t actually tried these tools.

On the contrary: at my $DAYJOB they have enabled that crap, and now I'm looking at the idiotic attempts of these LLMs to create something every time I write a comment during code review.

Sometimes, when the comment is about something trivial, like “we should use string_view here, not string, see totw #1”, it even generates sensible code. But that's a rarity. Most of the time it generates patent nonsense, because it doesn't understand what it does. It couldn't: there is no brain behind what it does.

> Frontier LLMs are in fact quite good about including error handling in code, especially if you ask them.

They are pretty good at generating crap that looks sensible but doesn't work!

To the point that when I see that a proposed change is crap, I just know I have to contact the submitter privately to ask them to stop using that nonsense and write things by hand.

Unfortunately not every company has the rule that unreviewed code can't be accepted; I shudder to think what this craziness would lead to in companies that accept code without reviewing it.

Making information more accessible

Posted Jul 29, 2024 10:15 UTC (Mon) by paulj (subscriber, #341) [Link]

I for one encourage my competitors to make full use of LLM code reviewers!

(Though, there will undoubtedly be regular collateral damage to society soon enough).

Making information more accessible

Posted Jul 24, 2024 13:45 UTC (Wed) by paulj (subscriber, #341) [Link] (14 responses)

But the AI depends on the tutorials and example code to be able to provide the assistance you describe, does it not? If as you say, it's a miasma of outdated material and misleading and plain wrong answers... guess what the AI is going to spit out?

(And hell, even if fed on a body of high quality tutorials and example code it will still bullshit! See the "ChatGPT is bullshit" paper, which another commenter pointed to in the Meta LLM open source article).

Truly understanding programming

Posted Jul 24, 2024 15:40 UTC (Wed) by epa (subscriber, #39769) [Link] (13 responses)

I guess the true test of intelligence would be if you can tell the AI about a new programming language it doesn’t know (or has only heard of) and it can understand enough to write programs in that language. Like if you fed it the spec for Haskell or Scheme and asked it to write a program to print a circle in ASCII art.
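
For concreteness, the kind of target program meant here might look something like this rough sketch (in Python, standing in for the hypothetical unfamiliar language; the point is only to make the task concrete, not to define the test):

def ascii_circle(radius: int = 10) -> None:
    """Print an approximate circle of the given radius using '*' characters.

    Terminal characters are roughly twice as tall as they are wide, so the
    x coordinate is scaled by 0.5 to keep the circle visually round.
    """
    for y in range(-radius, radius + 1):
        row = []
        for x in range(-2 * radius, 2 * radius + 1):
            dist = ((x * 0.5) ** 2 + y ** 2) ** 0.5
            row.append("*" if abs(dist - radius) < 0.5 else " ")
        print("".join(row))

if __name__ == "__main__":
    ascii_circle(10)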

Truly understanding programming

Posted Jul 24, 2024 16:26 UTC (Wed) by somlo (subscriber, #92421) [Link] (11 responses)

> I guess the true test of intelligence would be ...

Careful with the "No True Scotsman" iterations on the Turing test there... :)

Unlike Turing's mathematical work, the imitation game was merely a thought experiment we all sort of rolled with over the years. The idea is to build a Turing machine (i/o + finite state machine) that would be able to *fake* it when judged by some "average person" interacting with it, and we can say LLMs have mostly succeeded in that regard.

It is conceivable that in the future, a more complex (but still finite) state machine might be able to fake understanding how to program based on some rules (if you think about it, Searle's Chinese Room is just another Turing machine: i/o + state machine that is, in this case, able to fake understanding Chinese to a Chinese speaker standing outside the room).

However, true intelligence isn't when the Turing machine can make *you* believe it's intelligent. Rather, it happens when the machine thinks to *itself* that it is intelligent -- the whole Descartes "cogito ergo sum" thing. As to how *we* get to test for that, I don't think science has that figured out yet... :)

Truly understanding programming

Posted Jul 24, 2024 18:27 UTC (Wed) by k3ninho (subscriber, #50375) [Link]

>However, true intelligence isn't when the Turing machine can make *you* believe it's intelligent. Rather, it happens when the machine thinks to *itself* that it is intelligent -- the whole Descartes "cogito ergo sum" thing.

I'm glad you're happy with that state of play; I'm not. A web page has a reference to itself, and almost any more complicated program will too, so almost any more complicated machine intelligence will parrot "cogito ergo sum"; I figure this is another case of "GOTO considered harmful." The more I'm asked to credit machinery and animals with smarts, the more I find: social and tool-using corvids who also train their young to recognise helpful and harmful humans, orcas in pods training and sustaining their young while attacking, ants collectively finding optimal paths, primates (and dogs and cats) learning sign language... there's more intelligence in the world than our culture gives credit to. Which brings me to the next point: if we're not giving credit to animal intelligence, how are we going to spot machine intelligence emerging from our computer systems?

>As to how *we* get to test for that, I don't think science has that figured out yet... :)

I think this is backwards. The question is one that the applicant for personhood asks us and our scientists: "What do I have to do so you will treat me like a person?"

K3n.

Truly understanding programming

Posted Jul 24, 2024 21:46 UTC (Wed) by Wol (subscriber, #4433) [Link]

> However, true intelligence isn't when the Turing machine can make *you* believe it's intelligent. Rather, it happens when the machine thinks to *itself* that it is intelligent -- the whole Descartes "cogito ergo sum" thing. As to how *we* get to test for that, I don't think science has that figured out yet... :)

And something nature has figured out, but we haven't, is that in order to be intelligent, we need special-purpose hardware. For example, most people have special-purpose face recognition hardware. That is also hard-wired to name-association.

There's a whole bunch of other hardwired special-purpose systems. All of which are in their own small way doing their best to make sure that the General Purpose Brain is kept firmly in step with the "reality" outside. The thing about these LLMs is they are mathematical models that don't care about that reality. So intelligence is impossible as they retreat inside their model and disappear deeper and deeper into their alternative reality.

Who was it wrote that story about traumatised soldiers who kept on disappearing and reappearing? And when the Generals discovered they were (apparently) disappearing into the past and things like that they thought it would be brilliant - as a weapon of war - to go back in history and rewrite it. Until they discovered that the Roman Army wore wristwatches ...

AI will very soon (if it isn't already) be living in that world, not ours, until we find some way of telling the AI watches didn't exist back then...

Cheers,
Wol

Truly understanding programming

Posted Jul 24, 2024 22:16 UTC (Wed) by epa (subscriber, #39769) [Link] (5 responses)

Sorry, I wasn’t referring to the Turing test or to general intelligence, but only to “programming intelligence”, whatever that may be. Can it really program by learning the rules of the language (another example might be an assembly language for a made-up chip, augmented with an instruction to print a character) and work out the code implementing a simple algorithm? I should have avoided the word “test”.

Truly understanding programming

Posted Jul 25, 2024 9:18 UTC (Thu) by khim (subscriber, #9252) [Link] (4 responses)

Right now these models couldn't even correctly implement an algorithm they describe in a language they already know.

The only way for them to do that correctly is if they have seen the full solution somewhere, which makes them perfect for cheating and fraud, but not all that suitable for real work. Except maybe for some repetitive work that we have been trying to eliminate for decades.

Which is cool, but more of an evolutionary step along the road that IDEs started down decades ago than something entirely new.

Truly understanding programming

Posted Jul 25, 2024 10:14 UTC (Thu) by paulj (subscriber, #341) [Link]

The interesting thing is at least some NN architectures are equivalent to Turing machines (AIUI). In principle a model on such an architecture could be 'trained' to implement any extant algorithm and even /discover/ new algorithms (provided you know what output you need, so you can train it).

Given that (if that claim about Turing equivalence is true, as I've read), then there is a way to take that algorithm and describe it - both in terms of a more "normal" computer language and human language.

Truly understanding programming

Posted Jul 26, 2024 4:09 UTC (Fri) by raven667 (subscriber, #5198) [Link] (2 responses)

> Which is cool, but more of evolutionary progress on the road that IDEs started on decades ago than something entirely new

Which is a funny point, because they could extend IDE auto-complete functionality and be a useful addition, but the insistence that _all_ the work needs to be done by the LLM, and the lack of feedback, makes it worse: its output is not being cross-referenced with all that the LSP knows about the code in your editor, letting it create sample code which references methods/arguments/etc. which don't exist. Similar to how LLMs can't actually perform math calculations: the developers don't build in an escape hatch to just run a calculator when a math expression exists in the input, because they are so fixated on and proud of what the LLM appears to do that they choose not to build in the proper feedback mechanisms to use a more appropriate tool when appropriate.
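
As a rough illustration of that "escape hatch" idea, here is a minimal sketch that routes anything that looks like plain arithmetic to a real evaluator and only falls back to the model otherwise. The call_llm callable is a hypothetical stand-in for whatever model API is in use:

import ast
import operator
import re

# Operators we are willing to evaluate ourselves.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def _eval_node(node):
    """Safely evaluate a parsed expression made only of numbers and + - * / **."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_node(node.left), _eval_node(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_node(node.operand))
    raise ValueError("not a plain arithmetic expression")

def answer(user_input: str, call_llm) -> str:
    """Use a calculator when the input looks like arithmetic, the model otherwise.

    call_llm is any callable taking a prompt string and returning a reply
    (a placeholder for a real model API).
    """
    if re.fullmatch(r"[\d\s.+\-*/()^]+", user_input.strip()):
        try:
            tree = ast.parse(user_input.replace("^", "**"), mode="eval")
            return str(_eval_node(tree.body))
        except (SyntaxError, ValueError, ZeroDivisionError):
            pass  # not something we can compute; fall through to the model
    return call_llm(user_input)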

Truly understanding programming

Posted Jul 26, 2024 7:35 UTC (Fri) by khim (subscriber, #9252) [Link]

Actually that's what they tried to do, and are still trying to do, in most companies, but then OpenAI released ChatGPT and the hype train started.

After that they just had no time to properly integrate anything; everyone else just had to show that they could do cool things, too. Even if they couldn't, just yet.

Truly understanding programming

Posted Jul 26, 2024 14:34 UTC (Fri) by atnot (subscriber, #124910) [Link]

> Which is a funny point because they could extend IDE auto-complete functionality and be a useful addition but the insistence that _all_ the work needs to be done by the LLM and the lack of feedback makes it worse

Yes, I've been thinking this ever since I've seen people using github copilot. Our editors are already pretty darn good at finding out what the next valid thing can be. And they're basically guaranteed to be correct too, no "hallucination" nonsense. They're just not particularly good at sorting those options by relevance right now. There's no need to train these huge models to output code from nothing to get these advantages? Surely you could make some sort of model to score autocomplete suggestions by surrounding context that would be at least as good if not better than what microsoft is offering. And it surely wouldn't require the energy consumption of a small nation either. I'd use that in a heartbeat. But nobody seems to be interested in that sort of thing because it's not going to "replace programmers" or whatever.
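
As a toy sketch of that reranking idea (assuming the language server has already produced the valid candidates): the scoring function below is a trivial token-overlap heuristic standing in for the small learned model being imagined, and all names are illustrative:

import re
from collections import Counter

def rerank(candidates: list[str], surrounding_code: str) -> list[str]:
    """Order LSP completion candidates by overlap with nearby identifiers.

    The candidates are assumed to come from the language server, so they are
    already valid; this only reorders them by how well they fit the context.
    """
    context_tokens = Counter(re.findall(r"[A-Za-z_]+", surrounding_code.lower()))

    def score(candidate: str) -> float:
        # Reward candidates whose sub-words already appear near the cursor,
        # e.g. prefer read_config when "config" shows up in recent lines.
        parts = re.findall(r"[A-Za-z]+", candidate.lower())
        return sum(context_tokens[p] for p in parts) / max(len(parts), 1)

    return sorted(candidates, key=score, reverse=True)

# Example with candidates as a language server might report them:
print(rerank(["write_bytes", "read_config", "close"],
             "config = load_config(path)\nvalue = "))
# ['read_config', 'write_bytes', 'close']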

Truly understanding programming

Posted Jul 25, 2024 4:20 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link] (2 responses)

> The idea is to build a Turing machine (i/o + finite state machine) that would be able to *fake* it when judged by some "average person" interacting with it, and we can say LLMs have mostly succeeded in that regard.

That's not actually what Turing said, though. The original "imitation game" was intended to be played with a skeptical judge doing their best to tell which interviewee was a human and which was a machine. LLMs are good now, but they won't fool anyone who's making a serious effort to determine if their interlocutor is human or machine. It's not just that their language doesn't sound quite right; they just can't hold up a conversation for long without drifting into nonsense. Their creators have tried to mask this by limiting the length of their output so they can get new human input to respond to, but it still shows up.

Not that passing a Turing test was ever intended to be the one true test of intelligence. It was intended to be an extremely stringent test that should satisfy even people who were most skeptical of the idea of machine intelligence. After all, if we really can't distinguish a machine from a human, it's hard to argue that the machine hasn't achieved human-level intelligence.

Truly understanding programming

Posted Jul 25, 2024 9:56 UTC (Thu) by khim (subscriber, #9252) [Link] (1 responses)

> LLMs are good now, but they won't fool anyone who's making a serious effort to determine if their interlocutor is human or machine.

Yes, yes, they would. I doubt Turing ever anticipated that his test would be passed in such a crazy way, but that's what happened: the “new generation” acts and talks more and more like LLMs do!

It's not clear whether that's just lack of experience and they will outgrow the threshold that the likes of ChatGPT 4 set, but the truth is out there: it's more-or-less impossible to distinguish the gibberish that ChatGPT produces from the gibberish that many humans are producing!

And if you start raising the bar to exclude “idiots” from the test then it's no longer a Turing test, but something different.

> It's not just that their language doesn't sound quite right; they just can't hold up a conversation for long without drifting into nonsense.

True for some humans as well when you are asking them to talk about something they are not experts in. Especially older ones.

LLMs, today, are in a very strange place: they can't do anything well, but they can do many things badly. But most human workers are also performing badly; the expert ones are very, very rare nowadays!

Truly understanding programming

Posted Jul 26, 2024 14:43 UTC (Fri) by atnot (subscriber, #124910) [Link]

> it's more-or-less impossible to distinguish gibberish that ChatGPT produces from gibberish that many humans are producing!

This seems like a silly criterion. If you can't reliably distinguish the gibberish created by a brick falling onto a keyboard from the gibberish created by a cat sitting on one, that doesn't make the brick as intelligent as a cat.

Truly understanding programming

Posted Jul 24, 2024 21:49 UTC (Wed) by Wol (subscriber, #4433) [Link]

> I guess the true test of intelligence would be if you can tell the AI about a new programming language it doesn’t know (or has only heard of) and it can understand enough to write programs in that language.

Try asking it to write a program in DataBASIC :-) The only reports I've heard of people trying that say it's been a complete disaster, competing for the "how many pages of printout can you generate from one syntax error" prize (something my fellow students did at school - they were programming in FORTRAN).

Cheers,
Wol

Consider the web in 1994

Posted Jul 24, 2024 7:47 UTC (Wed) by b7j0c (guest, #27559) [Link] (17 responses)

True, but AI can be utilized more like a tutor, accelerating the learning process.

AI will eventually become deeply contextual, so progress and learning over time will be reflected.

Having an expert at your side at all times will change industries.

Consider the web in 1994

Posted Jul 24, 2024 7:58 UTC (Wed) by rgb (subscriber, #57129) [Link] (10 responses)

LLMs are not experts! At best they are a Bayesian database that can be queried using human language.

Consider the web in 1994

Posted Jul 24, 2024 8:00 UTC (Wed) by b7j0c (guest, #27559) [Link] (9 responses)

What's the difference? This is a reductionist take that has no value

My car is an explosion box. My house is a sand pile. etc etc

Putting a label on something does not remove its utility.

Consider the web in 1994

Posted Jul 24, 2024 8:18 UTC (Wed) by rgb (subscriber, #57129) [Link] (4 responses)

The distinction might seem petty to you, but it is actually crucial. A CEO or middle management hears that this LLM is actually an "expert". Why should they still employ human experts then? This is a pity for the human experts who are losing their jobs, of course. But it is an even bigger pity for the rest of us, who are now putting our fate into the hands of Bayesian parrots.

Consider the web in 1994

Posted Jul 24, 2024 8:29 UTC (Wed) by b7j0c (guest, #27559) [Link] (3 responses)

CEOs will play the risk/reward game like they always have

if a CEO prematurely replaces humans with AI tech that isn't ready, the company could lose in the market

if a CEO clings to humans when automation is ready, the company could lose in the market

in a competitive landscape, society gets closer to a state it prefers through its choices

no different than any tech since the industrial revolution

Consider the web in 1994

Posted Jul 24, 2024 12:06 UTC (Wed) by pizza (subscriber, #46) [Link]

> CEOs will play the risk/reward game like they always have

More accurately: They will go with the cheapest option they think they can get away with.

> if a CEO prematurely replaces humans with AI tech that isn't ready, the company could lose in the market

More accurately: All that matters is that the financials look good *now* and any negative consequences come "later"

The economy is screwed

Posted Jul 26, 2024 15:42 UTC (Fri) by immibis (subscriber, #105511) [Link] (1 responses)

We don't have a competitive landscape any more. We have a hierarchical landscape where certain people are in charge and have locked themselves into the position of being in charge. It doesn't matter if the company will eventually fold 5 years after making a wrong decision (and it's questionable whether wrong decisions make companies more likely to fold than right decisions), because we still have to deal with the fallout of the wrong decision immediately and for the next 5 years, and then for another year until a sufficient replacement is available.

Look at all the world's information being locked on Reddit, which has just closed its doors (to every consumer of that information except for Google, who pays a lot of money to not be excluded). Reddit will surely go bankrupt due to this... in some years. It's already been over a year since Reddit started down this track. Reddit has almost never been profitable, so you could argue it's been making wrong decisions for ~20 years and still not failed. Meanwhile Facebook is occupied almost exclusively by Russian bots, and still has a high stock valuation. The markets are 99.9% disconnected from reality.

The economy is screwed

Posted Jul 26, 2024 17:11 UTC (Fri) by LtWorf (subscriber, #124958) [Link]

AFAIK reddit is there to allow the USA government to run their bots. It appears that it isn't just russia doing that.

Remember the deleted post that stated that a USA airbase was the most reddit addicted city?

Consider the web in 1994

Posted Jul 25, 2024 4:01 UTC (Thu) by raven667 (subscriber, #5198) [Link] (3 responses)

> This is a reductionist take that has no value

I think a reductionist view has value, because an LLM is not an intelligence; it is a thoroughly lobotomized parody of speech and writing that only works because people are _very_ willing to extend a theory of mind to anything making vaguely human-sounding noises. Even ELIZA was taken to be intelligent by some people. LLMs are only a little more sophisticated, but with just great gobs of training data the LLM can spit out something plausible-sounding given a wide variety of inputs.

An LLM can't train someone in programming because it doesn't know how to program and it doesn't know how to teach and it has no way to model the state of the learner and adjust a pedagogy based on a feedback loop consulting hundreds or thousands of different neural networks in your brain. An LLM hardly has any feedback loops of any kind and barely one network.

An LLM can spit out text real good, but it doesn't have enough varied systems to have the intelligence of a fruit fly. All it has are the strings of symbols that people have used to communicate with each other and with the computer, and all it can do is combine those symbols in abstract and pleasing ways. People will invent a theory of mind when talking to it, but people are capable of anthropomorphizing far less animate objects than computers.

It may well be that some day an artificial mind is created; it's not like brains are made out of magic unobtainium, they obey and rely on the physics of the same universe that allows computers to function. But there is just nowhere near the complexity and the interlocking learning and feedback systems present in modern "AI" that are present in _any_ vertebrate, let alone human intelligence.

Consider the web in 1994

Posted Jul 25, 2024 16:33 UTC (Thu) by rgb (subscriber, #57129) [Link]

Well said. There really isn't enough pushback against this delirious narcissistic AGI drivel.
BTW here is a related piece I couldn't agree more with:
https://www.theatlantic.com/technology/archive/2024/07/op...

Consider the web in 1994

Posted Jul 25, 2024 19:47 UTC (Thu) by Wol (subscriber, #4433) [Link]

> but there is just nowhere near the complexity and interlocking learning and feedback systems present in modern "AI" that are present in _any_ vertebrate, let alone human intelligence.

The scary thing is there isn't the feedback in modern AI that is present in very basic lifeforms - even the humble fruitfly. And even the human brain probably has less raw processing power than a 6502! (Dunno how much we do have, but I know we achieve much more, with much less resource, than a computer).

Cheers,
Wol

Consider the web in 1994

Posted Jul 26, 2024 15:43 UTC (Fri) by immibis (subscriber, #105511) [Link]

IIRC, ELIZA was taken to be intelligent by *most* people, including people who understood how it worked. Its creator found this quite shocking.

Consider the web in 1994

Posted Jul 24, 2024 13:47 UTC (Wed) by paulj (subscriber, #341) [Link] (5 responses)

LLMs are not experts, they're bullshitting machines. They use statistical inference to create plausible looking stuff. E.g.:

https://web.archive.org/web/20240101002023/https://www.ny...

(Example given in the "ChatGPT is bullshit" paper, which someone else had linked to in the Zuck / Meta LLaMa article).

Consider the web in 1994

Posted Jul 24, 2024 14:32 UTC (Wed) by Paf (subscriber, #91811) [Link] (4 responses)

They’re incredibly, absurdly useful for this stuff. They do not need to be 100% accurate to be incredibly useful.

Most tutorials online are also a bit wrong or out of date. The LLM synthesizes from many and is generally, in my experience, stronger and more accurate than most individual articles.

It’s easy to say things like Bayesian parrot, but whatever label you attach they are in practice *really good* at this stuff. That’s from substantial personal experience.

Consider the web in 1994

Posted Jul 24, 2024 17:10 UTC (Wed) by legoktm (subscriber, #111994) [Link] (2 responses)

I'll echo this. LLMs are, to my surprise, quite good at generating code, and crucially, code is something we have extensive tooling and practices to verify the correctness of.

I feel people are falling into the AI effect trap: "The AI effect occurs when onlookers discount the behavior of an artificial intelligence program as not 'real' intelligence." No, LLMs are not anywhere close to human intelligence, that doesn't stop them from being quite useful regardless.
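
As one hedged example of what that verification tooling can look like in practice: treat the generated code as untrusted and pin its behaviour down with ordinary unit tests before relying on it. slugify below is a made-up stand-in for whatever function an LLM produced:

import re
import unittest

def slugify(title: str) -> str:
    """Hypothetical LLM-generated helper: lowercase, hyphen-separated slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

class TestSlugify(unittest.TestCase):
    """Human-written tests that the generated helper must pass before use."""

    def test_basic(self):
        self.assertEqual(slugify("Consider the Web in 1994"),
                         "consider-the-web-in-1994")

    def test_punctuation_collapses(self):
        self.assertEqual(slugify("LLMs: hype, or tool?"), "llms-hype-or-tool")

    def test_no_stray_hyphens(self):
        self.assertEqual(slugify("  ...hello...  "), "hello")

if __name__ == "__main__":
    unittest.main()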

Consider the web in 1994

Posted Jul 25, 2024 4:08 UTC (Thu) by raven667 (subscriber, #5198) [Link] (1 responses)

There may be some kinds of repetitive boilerplate code which are well represented in a training data set and which can then be reproduced by an LLM more or less correctly, but for anything else the LLM isn't going to be able to understand the requirements or the goals, and may struggle to make a response that is syntactically valid, let alone one that solves your problem. Even when the response superficially seems correct, you will end up maintaining and refactoring code you didn't write and may not really understand, hunting for bugs when the output turns out to be incorrect. This may _feel_ very productive in the beginning and generate great gobs of code, but once the tech-debt bill comes due it may not be that productive after all.

Consider the web in 1994

Posted Jul 29, 2024 0:27 UTC (Mon) by Paf (subscriber, #91811) [Link]

I strongly encourage you to actually *try* these tools. It is very apparent from your comments about boilerplate code that you have not. Seriously - I’m a programmer with 12 years of pretty successful work experience, largely on an out of tree distributed file system. I wondered if they might be limited to boilerplate. While they are *extremely* good at boilerplate, they absolutely are not limited to it. Sure, they can’t really do much kernel work of length, but they are wildly good at even moderately complex scripting and helping you use unfamiliar APIs. Yes, the API part is sort of boilerplate, but it doesn’t have to be common stuff - it can be obscure ones, including kernel APIs.

Consider the web in 1994

Posted Jul 25, 2024 10:02 UTC (Thu) by paulj (subscriber, #341) [Link]

Yes, I'm sure they're useful for stuff like code, and other fields where it's often required to produce volumes of material that fit to already established patterns in the field, and tailor parts of them to the current task. IF you get the AI to produce the material for an expert to review, correct, and tweak; then you can save the time of that expert having to do the initial trawl and production. Sure, that's a time save.

It's a big IF though. Cause some people who are not experts will use it to produce reams of material that /look plausible/. Worse, people who are experts but are a bit lazy (or are doing their "homework" a bit too late) may /fail/ to do the review, correct, and tweak step and instead present the AI-generated bullshit as the work of their own expertise! (The article I linked to being a case in point - how much money did the defendant have to spend on legal fees to prove it was bullshit! Had they not had those resources, they might have had to fold and settle, or have lost the case - and then been on the hook for the costs!).

So yes, useful. With caveats. And the problem is some will be ignorant of or just ignore the caveats - like in the article!

Consider the web in 1994

Posted Jul 27, 2024 16:15 UTC (Sat) by DanilaBerezin (guest, #168271) [Link] (9 responses)

As someone who used to rely quite a bit on tutorials when I first got into programming, you'd be surprised how large a portion of the tutorials out there is outdated or misleading "slop", as the speaker says.

Consider the web in 1994

Posted Jul 27, 2024 16:51 UTC (Sat) by LtWorf (subscriber, #124958) [Link] (8 responses)

On what do you think LLMs are trained?

Consider the web in 1994

Posted Jul 27, 2024 17:48 UTC (Sat) by DanilaBerezin (guest, #168271) [Link] (5 responses)

For programming-specific questions? Hopefully on documentation, which is generally up-to-date and not as misleading as random YouTube videos. ChatGPT can absolutely be misleading, and most LLMs today are, but I think the main point is that an LLM done right for this particular use case would be trained on documentation and be able to transform the information on the fly into an easily digestible format a la tutorial style. This would enable beginners to gain all the benefits of tutorials without any of the costs.

Consider the web in 1994

Posted Jul 27, 2024 19:14 UTC (Sat) by atnot (subscriber, #124910) [Link] (4 responses)

> an LLM done right for this particular use case would be trained on documentation and be able to transform the information on the fly into an easily digestible format a la tutorial style

That seems like wishful thinking to me for a few reasons:

The whole idea of LLMs is that bigger models and more data are supposed to get you some kind of intelligence, but the current crop of LLMs has already been trained on pretty much the sum total of human text output available today and it's still crap. In fact, one reason why they're not getting better is that improvements require exponentially more data and we're already out of data to train on. Getting superior results with a fraction of that data, with the same technology, seems unlikely.

Second, effective teaching requires both understanding the subject matter and good knowledge of, and experience with, how humans tend to think and learn. LLMs can't remotely do either of those things at all, let alone for a complex subject like programming. Summarization, sure. That just requires pattern-matching the phrases humans tend to use to indicate asides, and writing conventions like the three-part essay structure. But that's very different.

Consider the web in 1994

Posted Jul 27, 2024 20:29 UTC (Sat) by Wol (subscriber, #4433) [Link] (1 responses)

Third - successful learning requires feedback from outside that you've actually got it right.

LLMs spew rubbish because they have no feedback to tell them it IS rubbish.

Cheers,
Wol

Consider the web in 1994

Posted Jul 29, 2024 14:28 UTC (Mon) by anselm (subscriber, #2796) [Link]

> because they have no feedback to tell them it IS rubbish

That's what the underpaid peons in places like Kenya are for. (One problem is that they rely on LLMs for their fact-checking because otherwise they couldn't keep up with the pressure.)

https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/

Consider the web in 1994

Posted Jul 27, 2024 21:33 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

> The whole idea of LLMs is that bigger models and more data is supposed to get you some kind of intelligence

That's not quite right. The current crop of LLMs is inherently limited because they have a relatively small "working memory" and even less ability to learn on the fly. The current work is aimed at fixing this.

Consider the web in 1994

Posted Jul 27, 2024 21:48 UTC (Sat) by DanilaBerezin (guest, #168271) [Link]

As mentioned in the article, even though LLMs are trained on generic data, they can be fed specific data and instructed to base their responses on that specific data, effectively allowing you to create a specialist LLM. So if you want an LLM that specifically explains things about Python, you can take your favorite generic model, feed it the Python documentation, and then prompt it to answer all your Python-related questions. And this generally works. I've already seen it employed at workplaces where models are locally trained on relevant codebases/documentation and employees can use them to answer questions that would otherwise require talking to another employee or reading through everything yourself.
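
A bare-bones sketch of that pattern (often called retrieval augmentation): pick the documentation passages most relevant to the question, prepend them to the prompt, and instruct the model to answer only from them. The keyword-overlap retriever and the ask_llm callable are placeholders for a real embedding index and a real model API:

import re

def retrieve(question: str, doc_chunks: list[str], top_k: int = 3) -> list[str]:
    """Return the documentation chunks sharing the most words with the question."""
    q_words = set(re.findall(r"\w+", question.lower()))
    return sorted(
        doc_chunks,
        key=lambda chunk: len(q_words & set(re.findall(r"\w+", chunk.lower()))),
        reverse=True,
    )[:top_k]

def answer_from_docs(question: str, doc_chunks: list[str], ask_llm) -> str:
    """Build a documentation-grounded prompt and hand it to the model."""
    context = "\n\n".join(retrieve(question, doc_chunks))
    prompt = (
        "Answer the question using ONLY the documentation below. "
        "If the documentation does not cover it, say so.\n\n"
        f"Documentation:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return ask_llm(prompt)  # ask_llm: placeholder for the actual model call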

Consider the web in 1994

Posted Jul 29, 2024 0:30 UTC (Mon) by Paf (subscriber, #91811) [Link] (1 responses)

I encourage you to try them - they are generally extremely good at synthesizing from sets of partly outdated tutorials - which is largely what's found online - and providing something that works. Not unlike what a person might do.

Consider the web in 1994

Posted Jul 29, 2024 1:15 UTC (Mon) by songmaster (subscriber, #1748) [Link]

It sounds like it might be worth asking an LLM to update those outdated tutorials based on the changes that have been made to the code since the version that was documented. We all hate having to update our documentation; if it works, maybe that's something we can accept automated help with?
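
A rough sketch of how such a request could be assembled: pair the old tutorial text with a git diff between the documented release and the current one, and ask for an updated draft that a human still reviews. The tag names, paths and the ask_llm callable are illustrative placeholders:

import subprocess

def draft_tutorial_update(tutorial_path: str, old_tag: str, new_tag: str, ask_llm) -> str:
    """Return a proposed rewrite of an outdated tutorial, for human review."""
    with open(tutorial_path, encoding="utf-8") as f:
        old_tutorial = f.read()

    # What changed in the code since the version the tutorial describes.
    diff = subprocess.run(
        ["git", "diff", f"{old_tag}..{new_tag}", "--", "src/"],
        capture_output=True, text=True, check=True,
    ).stdout

    prompt = (
        "This tutorial was written for an older version of the code. "
        "Update it to match the changes in the diff, keep its structure, "
        "and mark anything you are unsure about with TODO.\n\n"
        f"--- Tutorial ---\n{old_tutorial}\n\n--- Diff ---\n{diff}"
    )
    return ask_llm(prompt)  # ask_llm: placeholder for the actual model call

# e.g. draft_tutorial_update("docs/getting-started.md", "v1.2", "v2.0", ask_llm)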

Consider the web in 1994

Posted Jul 24, 2024 7:54 UTC (Wed) by rgb (subscriber, #57129) [Link]

There are at least as many people who think that AI can already do anything, or soon will be able to. They obviously don't know what they are talking about, but are sadly frequently in positions of power. Naive applications of LLMs are a real danger to society because of their raw incompetence. In contrast, these "parrot" arguments are, first of all, true, and they also need to be heard.

Consider the web in 1994

Posted Jul 24, 2024 8:19 UTC (Wed) by atnot (subscriber, #124910) [Link] (21 responses)

> Think of this as the web in 1994.

The tech industry has been saying this about every useless new thing they've hyped over the last decade and it's never been true for anything. Not even the *actual* web in 1994, as opposed to their self serving mythology of it.

It increasingly just feels like a desperate attempt to conjure up another dotcom bubble by people who didn't think they got rich enough in the first one.

The only thing that rhetoric makes me think of now is blockchains and NFTs and the metaverse.

Consider the web in 1994

Posted Jul 24, 2024 8:41 UTC (Wed) by b7j0c (guest, #27559) [Link] (20 responses)

> The tech industry has been saying this about every useless new thing

it isn't useless, people are using AI right now

in addition to the example of my son, my wife works in travel and uses AI to prep text and imagery for promotional campaigns. She is making money from it. Without AI, she would have to hire a photographer, graphics designer, writer, etc...there isn't enough money in what she does to pay all those people so the campaigns just wouldn't happen.

she isn't getting obsessed with some internal dialog about the innate "intelligence" or lack thereof in these tools, she is just turning them into revenue

Consider the web in 1994

Posted Jul 24, 2024 8:53 UTC (Wed) by Wol (subscriber, #4433) [Link] (19 responses)

> she isn't getting obsessed with some internal dialog about the innate "intelligence" or lack thereof in these tools, she is just turning them into revenue

And then she'll make a Virgin Trains-style blunder (Virgin is a very big brand name in the UK); they used a castle in Kent to advertise a city in Yorkshire (imagine some PR agency using a famous New York attraction to advertise Florida - it was that big a cock-up).

Cheers,
Wol

Consider the web in 1994

Posted Jul 25, 2024 10:36 UTC (Thu) by khim (subscriber, #9252) [Link] (18 responses)

Does it matter, in the end, if people are buying it?

Currently AI and LLMs are very bad at doing things where quality of the result matters, but it's surprising how many things where it doesn't matter all that much are happening in today's economy.

One may ask whether all that activity is even needed at all, but as long as it's profitable… AI has a niche.

Consider the web in 1994

Posted Jul 25, 2024 11:21 UTC (Thu) by kpfleming (subscriber, #23250) [Link] (17 responses)

So producing low-quality content is fine as long as it's profitable? I sincerely hope this isn't the world we've created for ourselves.

Consider the web in 1994

Posted Jul 25, 2024 11:41 UTC (Thu) by khim (subscriber, #9252) [Link] (16 responses)

Why should content be any different from many other things?

If you compare a modern knife or table to a knife or table made 200 or 300 years ago, the modern one loses on most fronts. But it can be about 10 or 100 times cheaper, which justifies all these defects.

Why should industries that are creating content be treated any differently?

P.S. And, similarly to knives and tables, we have “foundation layers” that are, unquestionably, better than what we had before. Steel and raw wood, today, are much better than what masters had 200 or 300 years ago. That's how even low-quality knives or tables are still usable today. And ad networks are much better at delivering ads than the newspapers of 200 or 300 years ago, and that's how even low-quality ads work.

Consider the web in 1994

Posted Jul 25, 2024 16:03 UTC (Thu) by JGR (subscriber, #93631) [Link] (15 responses)

> Why should content be any different from many other things?

> If you compare modern knife or modern table to knife or table made 200 or 300 years ago then modern one loses on most fronts. But it could be about 10 or 100 times cheaper, which justifies all these defects.

This seems like survivorship bias? The poor-quality knives, tables and so on of 200-300 years ago are unlikely to have survived or been worth preserving long enough to be compared to today's poor-quality offerings.

Consider the web in 1994

Posted Jul 25, 2024 16:50 UTC (Thu) by khim (subscriber, #9252) [Link] (14 responses)

> The poor quality knives, tables and so on of 200 - 300 years are unlikely to have survived or been worth preserving long enough to be compared to today's poor quality offerings.

Sure, that bias also exists, but the fact is: if you applied modern technologies to the materials available 200-300 years ago, such a knife or table would just crumble before you could even use it!

We actually know the techniques that were used to make them usable with poor materials; we just don't use them, because they are expensive.

Consider the web in 1994

Posted Jul 26, 2024 12:18 UTC (Fri) by paulj (subscriber, #341) [Link] (13 responses)

I'm not sure it's entirely survivorship bias. We have furniture made by my great-grandfather. He was a furniture/cabinet maker. The stuff he made was built to last, in a labour-intensive way. Carefully crafted joinery that is strong and robust. Tables and cabinets today are made with large flat surfaces against each other, with a few holes and bolts to hold them together - just fundamentally less robust, even if made with strong wood. But it's cheaper to make. And most cabinets and wardrobes aren't made with decent wood, but fibreboard.

Yes, if you look carefully, you can still find furniture makers who make things the older ways. And you'll pay a _tonne_ of money today for that. But 100+ years ago, that was just the /normal/ way stuff got made. That was just _ordinary_ furniture. In another 100 years, the modern bolted-together stuff will be gone - the high tension the bolts need with the large flat mating surfaces will have caused degradation that made them wobbly or flimsy, even with strong wood, and they'll have been thrown out. Fibreboard stuff definitely will be gone.

My great-granddad's stuff will still be here, barring woodworm.

Consider the web in 1994

Posted Jul 26, 2024 12:22 UTC (Fri) by paulj (subscriber, #341) [Link] (1 responses)

And it's fundamentally the technology. The old furniture had mating surfaces with many more angles, distributing loads over more surface area, in more directions.

Today, it's a big flat contact area onto another, and just relying on a strong metal bolt to tension them together. Combined with a softer wood - pine.

And why? Cause the latter is easy for a machine to cut, and takes a human a minute to bolt together, whereas the former requires skill and time - and we don't want to pay for skilled labour, cause that doesn't make money for large corporations.

Consider the web in 1994

Posted Jul 26, 2024 12:45 UTC (Fri) by khim (subscriber, #9252) [Link]

> The old furniture had mating surfaces with many more angles, distributing loads over more surface area, in more directions.

More importantly: it made it possible to use less-precisely cut pieces of wood. Human-made.

Machines may cut wood much more precisely, but it's easier to make them cut large flat surfaces, rather than more complicated pieces used before.

> Combined with a softer wood - pine.

That also helps and, again, it's much harder to create something very precisely uniform from soft wood by hand.

Thus to make the finishing part less expensive and produce a less robust result, we need a much more stable and robust “foundational technology”.

It's the same thing everywhere: ENIAC did 5,000 operations per second and its longest operational time between failures was 116 hours. That's about 2 billion operations. Not enough for any modern app to even reach the point where it would be ready to accept input from the user.

Before we could start using the modern “why write ten lines of code when we could plug in a framework with a million lines of code and then only add one line on top of that mess” approach, hardware designers had to design extremely robust hardware.

Consider the web in 1994

Posted Jul 26, 2024 12:47 UTC (Fri) by malmedal (subscriber, #56172) [Link] (6 responses)

I wasn't alive 100 years ago, but I have traveled a fair bit in developing countries. There exists a large amount of furniture today that is of far worse quality than the cheapest stuff you can buy in the West. I assume that the same was the case in the past.

Consider the web in 1994

Posted Jul 26, 2024 13:14 UTC (Fri) by khim (subscriber, #9252) [Link] (3 responses)

> There exists a large amount of furniture today that are of far worse quality than the cheapest stuff you can buy in the west.

Indigenous? Made in these developing countries locally? Hard to believe.

All the indigenous furniture that I ever saw in developing countries was much more structurally sound and sophisticated, even if it was made from tufts of straw or gnarly pieces of wood. It has to be made that way: if you tried to build something from these flimsy pieces using a steel bolt and tension, it would fall apart at the first attempt to use it!

> I assume that the same was the case in the past.

I assume the same, only I know that these awful piles of Chinese-made plastic pieces didn't exist back then. They are very much also products of modern technology, just even cheaper and worse ones than what Western people can afford.

But these carefully crafted things made from straw and gnarly wood? They must have existed for centuries, for the people making them are usually not advanced enough to invent something like this from scratch (and do that uniformly in different villages, to boot)!

Consider the web in 1994

Posted Jul 26, 2024 13:41 UTC (Fri) by Wol (subscriber, #4433) [Link] (1 responses)

> > There exists a large amount of furniture today that are of far worse quality than the cheapest stuff you can buy in the west.

> Indigenous? Made in these developing countries locally? Hard to believe.

Locally made furniture, for local people - the economics are against crappy stuff. Get a bad reputation, you've lost your job, you go hungry. The incentive is to make things as GOOD as you can, for the cheapest materials and least time. But the sweet spot is not the crap spot. And if the customer can source it cheaply he'll make sure you have the best available materials.

Crap is only possible when market economics have destroyed the local craft industry, and all the brand names are competing to get to the bottom as fast as possible.

Cheers,
Wol

Meanwhile, back in Linuxland

Posted Jul 26, 2024 13:52 UTC (Fri) by corbet (editor, #1) [Link]

I think we're getting pretty far afield here ... again. Can we try to keep the focus on Linux and free software, please?

Consider the web in 1994

Posted Jul 26, 2024 14:57 UTC (Fri) by malmedal (subscriber, #56172) [Link]

Locally produced furniture spans a very wide spectrum, from the most beautifully woven rattan armchair to a bench that consists of three bamboo trunks only loosely tied down, so that if your neighbour moves, your bottom gets VERY PAINFULLY pinched. The former is what the locals show off to tourists; the latter is what the ubiquitous plastic chairs are replacing.

Consider the web in 1994

Posted Jul 26, 2024 13:56 UTC (Fri) by paulj (subscriber, #341) [Link] (1 responses)

You're comparing the developing world today to the developed world 100+ years ago. Not quite sure it's a fair comparison.

I don't doubt there was crap back then. For sure, people were also doing things like using old tea boxes for furniture. But the middle-class market for furniture, the quality between then and now, the difference is huge. My mother still has a lot of furniture from her mother (other side of the family), and I fully expect my children will have some of it. We bought a book case ourselves ten years ago. The best quality one - and best value *by far* - we found was from a second-hand shop. The book case looks to be circa 1930s / 1940s (by comparison to my gran's furniture of that era).

There's a cottage industry of second hand furniture from pre-WWII (60s latest) cause the quality and value of that furniture is far above what is made now.

And it stands to reason: How many people in the last 40 years became expert carpenters and furniture makers, compared to 100 years ago?

Consider the web in 1994

Posted Jul 26, 2024 20:59 UTC (Fri) by malmedal (subscriber, #56172) [Link]

I suppose it depends on what you mean by furniture. My grandparents had several nice-looking old cabinets, which they kept. However, they threw out old sofas and chairs (or demoted them to the cabin); they wanted the stuff that actually got used to be nice and comfortable.

Surviving old furniture and inflation

Posted Jul 26, 2024 12:54 UTC (Fri) by farnz (subscriber, #17727) [Link] (3 responses)

There's also the matter of pricing and inflation. Inflation-adjusting prices suggests (assuming your great-grandfather is comparable to mine in working era) that £1 in your great-grandfather's day is equivalent to £100 today. If he charged £20 for a cabinet, then the equivalent item today should be expected to cost £2,000 - and yet you're probably comparing to mass-market items from places like Ikea that cost 1/10th of that.

And if you look into history, what you find is that the market for furniture back then was limited to the people who today are happy to pay a tonne of money for furniture - it was sufficiently expensive to buy anything that most people got by with far less than we have today.

Surviving old furniture and inflation

Posted Jul 26, 2024 13:16 UTC (Fri) by pizza (subscriber, #46) [Link]

> And if you look into history, what you find is that the market for furniture back then was limited to the people who today are happy to pay a tonne of money for furniture - it was sufficiently expensive to buy anything that most people got by with far less than we have today.

Look no further than the word "cupboard". Today it refers to an enclosed cabinet with a door, but its origin is literally a "cup board", ie a flat piece of wood you store your cups on.

Surviving old furniture and inflation

Posted Jul 26, 2024 14:01 UTC (Fri) by paulj (subscriber, #341) [Link] (1 responses)

Good points, though I would make 1 counter point: The labour available to make quality furniture today is tiny, compared to 100 years ago.

The supply of expert labour has diminished to near zero, along with the demand for their well-made furniture, which has been destroyed by cheap, hastily bolted-together stuff (mostly fibreboard, a minority in pine, a smaller amount again in better wood).

Surviving old furniture and inflation

Posted Jul 26, 2024 20:07 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

> Good points, though I would make 1 counter point: The labour available to make quality furniture today is tiny, compared to 100 years ago.

But is it? Carpenters can be much more productive today with modern power tools, and even computer-controlled tools. Look at CNC routers, they are downright magic.

I commissioned several custom wooden products (fireplace holder and custom cabinets), I fully expect them to outlast the house. And the amount of money I paid for them is probably still less than 100 years ago.

On the other hand, there's another dimension: practicality. I grew up in a house where we had an actual solid oak wooden table and chairs. They got left behind when this house got demolished because they were completely impractical. It took several people to move the table, and the chairs were uncomfortable and also heavy. I _can_ buy solid oak wood chairs, but I much prefer IKEA chairs made of lightweight pine and birch tree. They won't last, but then they are so cheap, I can replace them without even thinking about it.

It's always about trade-offs.

Consider the web in 1994

Posted Jul 24, 2024 9:27 UTC (Wed) by frankie (subscriber, #13593) [Link] (1 responses)

I agree, absolutely. People in the 90s had exactly the same considerations about the Internet, even some tech-minded ones.

Consider the web in 1994

Posted Jul 24, 2024 12:49 UTC (Wed) by LtWorf (subscriber, #124958) [Link]

Tech people in the 90s were used to connecting to BBSes with modems… If they did it, surely they saw some purpose in doing it?

Consider the web in 1994

Posted Jul 24, 2024 13:18 UTC (Wed) by geofft (subscriber, #59789) [Link] (1 responses)

Not that I disagree with you, but, I don't think this particularly responds to the article. The talk was overall quite positive on AI and specifically encouraged the audience to build good things with it, and the speaker made the point multiple times that he (who actually is an expert in many things!) was able to build software quickly that would have been impractical for him previously because it would take too long to learn. See the parts about Tkinter and speech recognition.

It was just realistic about its limitations and downsides. (It was, if anything, rather soft on downsides, in that it didn't address the energy consumption of AI training and didn't talk about generating images/video at all, and it didn't talk very much about how training a high-quality model from scratch is only within the reach of a few rich corporations.)

Do not consider the web in 1994

Posted Jul 24, 2024 14:45 UTC (Wed) by atnot (subscriber, #124910) [Link]

Yes, agreed. Even with those glaring omissions, I found it the most compelling argument for LLMs I've read yet. Exactly *because* it was missing all of the usual "it's early" apologism, was mostly honest about the problems and didn't engage in any of the "imagine what it will be worth in ten years *wink wink*" book-talking boosterism.

Especially the idea of being able to just have a 4-20GB file on my computer that has a large portion of the written internet in a somewhat queryable form offline seems pretty compelling. Although I do have reservations about how many useful queries you could get out of one battery charge. I also suspect a compressed text-only crawl of code documentation sites may end up more useful per byte and watt, if someone made one of those. But you never know what you're going to need, and it seems like a good idea to at least have it laying around if you have the space and travel regularly.

The code generation stuff seems much less compelling to me. Yeah, I had my fun playing around with it when it launched too, and I used it to write stuff in a language I was unfamiliar with at the time, which was pretty cool. But then the novelty wore off and I really haven't found it that useful since, when you can usually just search "[thing] examples" or so to get a more reliable answer with just a few more clicks[1]. But I can see the utility, especially offline.

The portrayal of so-called "hallucination", prompt injection and safety as a temporary wrinkle that will soon be ironed out is indeed pretty soft though, since they are an inherent part of the technology that cannot be solved.

Which would be fine if it was just a thing you used to generate example code sometimes. I think this article convinced me of its utility as such a thing better than any other. But it can't just be a minor productivity tool anymore, because it has cost too many millions to make. It has to be the future and upend everything.

[1] Especially after a project I maintain started getting a steady stream of people with very confidently wrong understandings of how things worked wasting our time asking us why methods that did not exist were throwing errors.

Consider the web in 1994

Posted Jul 25, 2024 9:01 UTC (Thu) by khim (subscriber, #9252) [Link] (13 responses)

> The next decade is going to be interesting.

Sure, but not because of AI; rather because the whole Western civilization will collapse under its own weight.

> Claude basically erased the demand for deep experience in this case.

No. It created an illusion that deep expertise is no longer needed.

But that's an illusion: without people with expertise, incidents like the very fresh CrowdStrike one would first become a rarity, then routine, then the norm… and at the final stage, with the complex web built by people with expertise who are slowly leaving us (some retire, some get an obituary), there would be no one left who can keep stuff going.

AI doesn't change the trajectory of that development, it just accelerates it.

I'm actually very glad that this AI craze is happening: if we had started a real, materially different sixth technological paradigm, then the collapse of Western civilization could have caused the complete collapse of all technological civilizations in the world (because there would have been no one to keep the sixth paradigm going, and once in place it would have become the staple of civilization). But since gimmicks like smartphones and AI were used to “stretch” the fifth paradigm further and to give an ersatz of work to people who couldn't do real work… we would just lose some things that are not critical, and only the most advanced (and most unsupportable) parts of the world would experience a deep crash.

Consider the web in 1994

Posted Jul 25, 2024 13:29 UTC (Thu) by Wol (subscriber, #4433) [Link]

> > Claude basically erased the demand for deep experience in this case.

> No. It created an illusion that deep expertise is no longer needed.

Which is (quite literally) a disaster ...

> But that's an illusion: without people with expertise, incidents like the very recent CrowdStrike one would first become a rarity, then a routine occurrence, then the norm… and at the final stage, with the complex web built by people with expertise who are slowly leaving us (some retire, some get an obituary), there would be no one left who could keep things going.

Health & Safety 101 - seeking to eliminate accidents merely means you get fewer but far more serious accidents.

By emphasising how to deal with accidents, any potential incident gets smothered in no time flat. Someone might suffer a minor injury, but that's it. Prevent those minor incidents and when (not if) something goes wrong, people act like headless chickens and what should have been nothing turns into life-changing (if not life-snuffing) incidents.

Cheers,
Wol

Consider the web in 1994

Posted Jul 25, 2024 15:51 UTC (Thu) by intelfx (subscriber, #130118) [Link]

> Sure, but not because of AI, but because the whole western civilization will collapse under its weight.

Still going with your "western civilisation collapse" fantasies?

I'd venture a guess that I probably represent the majority of the audience here when I say that it would be great if you could keep those off LWN.

Consider the web in 1994

Posted Jul 25, 2024 18:18 UTC (Thu) by malmedal (subscriber, #56172) [Link] (10 responses)

> Sure, but not because of AI, but because the whole western civilization will collapse under its weight.

No, collapse implies that there is something unsustainable about western civilisation. There's not; it has never been in better shape.

A bunch of ketamine-addled billionaires are however trying to destroy it. I don't think they'll succeed but if they do it will be a murder not a collapse.

If the billionaires do succeed they will come to regret it. I have this image in my head of the billionaires as tropical orchids, in a greenhouse, midwinter, trying to smash the glass because they think it's holding them back.

Consider the web in 1994

Posted Jul 25, 2024 19:43 UTC (Thu) by Wol (subscriber, #4433) [Link] (9 responses)

> No, collapse implies that there is something unsustainable about western civilisation. There's not; it has never been in better shape.

I'm really not so sure ...

All these climate events are driving up the cost of insurance. How much does fire and storm insurance cost in America nowadays? I bet in Storm Alley it's gone up significantly, also in those places with wildfires.

Here in Europe we've had all these fires in Spain and Greece where we've never had fires before. In the UK certainly, and I bet in Germany too, flood insurance is going through the roof ...

If insurance becomes unaffordable for the man in the street we're in trouble. Not that I like the insurance industry, but if it shrinks dramatically, or worse collapses under some extreme event, we're in BIG trouble.

And I know people probably think I'm mad, but I seriously think - God's promise otherwise notwithstanding - we're in for an event similar to Noah's flood, and it's going to happen a lot quicker than people think. The Antarctic had a heat wave last summer - 40 degrees above normal. Dunno whether it was F or C, don't think it matters, but if the ice sheet defrosts we're in for a world of hurt. Goodbye London, New York, the Florida Keys, Hawaii, the Netherlands; the list goes on. And I seriously think it will be on a Noah's flood timescale too - a couple of months.

Noah's flood was "real", inasmuch as a storification of a real historical event can be real, but The Land of Eden is probably now the bed of the Black Sea. We've plenty of evidence of a thriving Neolithic Doggerland, now the bed of the North Sea. And loads of evidence of Atlantis - probably the bed of the Mediterranean ... I hope I'm not around to see it. But I probably will be ...

Cheers,
Wol

Consider the web in 1994

Posted Jul 26, 2024 12:31 UTC (Fri) by paulj (subscriber, #341) [Link] (8 responses)

The Netherlands would probably survive, simply because they already have hundreds of years of experience of living on land that is at or even below mean sea level. It's the ones without that experience who will have more problems.

Consider the web in 1994

Posted Jul 26, 2024 13:48 UTC (Fri) by Wol (subscriber, #4433) [Link] (7 responses)

Are you sure? Are their defences capable of coping with a rise in mean sea level of one metre or more, in the space of a few months?

That's why the tsunami caused the nuclear disaster in Japan - their defences were designed to stop a 10m wave. But the defences were on the plate that dropped, which is why a 9m wave went straight over the top of them ...

I've seen the Victoria Embankment with the Thames almost to the top. That was a long time ago, before the Thames Barrier. The Barrier's estimated lifetime has been about halved. If we have a rise of a metre, I think the Thames will simply flow *round* the barrier, and if the Embankment goes - well - the Strand is so called because it was the strand, the beach on the banks of the Thames. The water will probably go a LOT further than that ...

Cheers,
Wol

The Netherlands will be fine

Posted Jul 26, 2024 21:12 UTC (Fri) by kleptog (subscriber, #1183) [Link] (6 responses)

> Are you sure? Are their defences capable of coping with a rise in mean sea level of one metre or more, in the space of a few months?

A metre is nothing. Tidal variation is ~2.5m and the North Sea gets way bigger waves than that. The sea defences are not the problem. Sand dunes rise together with the sea. The problem lies elsewhere.

Here the main canal that drains rainwater is at NAP-0.4m. Almost every day the low tide mark is about NAP-1m, allowing collected rainwater to drain out for a few hours per day. Add (not even) a metre to the sea level and you have to start pumping. You already get situations where a high tide combined with consistent NW winds conspires to prevent draining for weeks, requiring alternative storage. Higher sea levels mean all the rivers are higher too, and this problem replicates across the country. It's all solvable, but it will be very, very expensive. There are ideas to pump the entire volume of the Rhine up to a higher sea level, but the energy requirements are enormous.

But even the worst case projections don't go that fast. The thermal mass of the ice sheet and the ocean are huge, even compared to the sun's output. Put the Antarctic ice sheet in full 24 hour sun and it would still take decades to melt appreciably.

We work with acceptable failure rates of "one flood every 10,000 years" so the current safety margins are more than sufficient for the time being. It's probably true that nowhere else in the world are there such large safety margins. Certainly the Americans who came to learn after hurricane Katrina thought we were nuts. Then again, we don't get hurricanes.

This is going rather off-topic though...

The Netherlands will be fine

Posted Jul 26, 2024 21:58 UTC (Fri) by Wol (subscriber, #4433) [Link] (5 responses)

> But even the worst case projections don't go that fast. The thermal mass of the ice sheet and the ocean are huge, even compared to the sun's output. Put the Antarctic ice sheet in full 24 hour sun and it would still take decades to melt appreciably.

WRONG PHYSICS.

Sorry, yes, we are getting a bit off topic (a bit?). But how far back do you have to go to find worst-case predictions saying we will still have an Arctic ice sheet? Just ten years? We don't like change. We consistently underestimate it.

Rising temperatures increase the plasticity of ice. Melting isn't the problem, it's flow. And if the sea starts getting under the Weddell Ice Shelf and starts lifting and dropping it, then flow will be a *major* problem. It's happened before - it's almost certainly going to happen again - and I can see the shelf disappearing in a summer. If it does, glacier flow will be SCARY ...

At the moment, most of the Antarctic ice is above sea level. It doesn't have to melt to raise sea levels - it just has to fall in.

Cheers,
Wol

The Netherlands will be fine

Posted Jul 26, 2024 22:04 UTC (Fri) by paulj (subscriber, #341) [Link] (4 responses)

Potential sea level rise from ice melt is fairly limited, less than a metre I thought. There's a lot of ice, but there's way, _way_ more sea.

The predictions of many metres of sea level rise are based on thermal expansion of the oceans. Which won't happen suddenly, but slowly (OTOH, trying to reverse such warming would be equally slow).

The Netherlands will be fine

Posted Jul 26, 2024 22:30 UTC (Fri) by Wol (subscriber, #4433) [Link]

> Potential sea level rise from ice melt is fairly limited, less than a metre I thought.

Which is why I've been saying a metre ... the danger lies in the fact that that metre rise is expected to take a century. We were shocked at how fast the Arctic melted ... and Antarctica doesn't even need to melt - all it needs to do is slide into the sea, and the rate of flow will only increase. Let's hope it doesn't accelerate faster than expected ... Greenland is a poster child here - the glaciers are flowing much faster now the ice shelf has gone ...

Cheers,
Wol

The Netherlands will be fine

Posted Jul 26, 2024 23:12 UTC (Fri) by malmedal (subscriber, #56172) [Link] (1 responses)

My understanding is that sea-level rise if all ice melts is about 70 meters, and that the models say we have already emitted enough CO2 to get 3 to 10 meters. That will take hundreds of years, however, so there is time to mitigate.

The scariest realistic scenario is Thwaites glacier, which, when it collapses, can give us 65 cm all by itself in the span of maybe ten years. Those ten years could start tomorrow or in a hundred years.

The conservative IPCC estimate is only about 60 cm by 2100, but even that will be problematic.

Consider: the daily tidal forces on the ocean only amount to about 50cm due to the moon and 25cm from the sun, so in theory a max of 75cm when they are in sync.

However, the actually observed tides can be more than ten meters in the Bay of Fundy and almost nothing in the Caribbean, due to variations in geography.

Similarly, even if we only get 60cm it will be unevenly spread: some locations will get a lot, some nothing, some may even see a decrease.

I don't think we have models good enough to confidently predict who will win and lose. If I were to guess, I'd say the biggest victim would be Bangladesh.

The Netherlands will be fine

Posted Jul 27, 2024 20:26 UTC (Sat) by Wol (subscriber, #4433) [Link]

> The conservative IPCC estimate is only about 60 cm by 2100, but even that will be problematic.

And I seriously expect (a) that estimate is wrong, and (b) it'll be wrong as in too low.

As I said, we consistently underestimate change. Go back to 1980, with rampant population growth: the EXTREMELY OPTIMISTIC forecasts for the Y2K population said 8Bn. We undershot that by over 2Bn, I think. (The "we think it'll actually be this" estimate was 12Bn!)

Thwaites glacier, 10 years? Let me guess it'll actually be five. Quite likely less.

Cheers,
Wol

The Netherlands will be fine

Posted Jul 28, 2024 10:05 UTC (Sun) by Wol (subscriber, #4433) [Link]

> The predictions of many metres of sea level rise are based on thermal expansion of the oceans. Which won't happen suddenly, but slowly (OTOH, trying to reverse such warming would be equally slow).

Sorry, wrong physics again. Warming the oceans will use (very slow) conduction. However, both the troposphere and the oceans have very efficient cooling mechanisms. For every century it takes to rise, it'll take maybe a decade to cool?

As soon as we stop filling the stratosphere with greenhouse gases and turning it into a blanket, these mechanisms will bring temperatures down quickly.

The maximum surface temperature of the ocean is about 38C. At this point the cooling mechanism called a tropical storm kicks in. That's why, as temperatures rise, storms have been getting more frequent, more severe, and moving further away from the tropics.

And the cooling mechanism for the oceans themselves is called an ocean gyre. The one I know is the North Atlantic gyre, composed of the Humboldt current taking cold Arctic water down to the equator, and the Gulf Stream bringing warm equatorial water to the Arctic.

Once the stratosphere is dumping heat, not radiating it back to the surface, we should start getting polar blizzards recreating the ice caps, and the gyres reasserting themselves (the North Atlantic gyre is in serious trouble thanks to the loss of the Arctic ice cap). At which point we could find ourselves heading rapidly into another ice age. Or not, as the case may be.

Cheers,
Wol

