Consider the web in 1994
Posted Jul 24, 2024 6:36 UTC (Wed) by b7j0c (guest, #27559)
Parent article: Imitation, not artificial, intelligence
If it produces utility, who cares? The reductionist jabs definitely feel like a midwit reflex at this point.
People also seem hung up on damning AI because of hallucinations.
Think of this as the web in 1994. Totally not ready for primetime but absolutely fertile ground for development.
AI isn't essential technology for anyone today, but it is already yielding tangible benefits that you should not underestimate. For example, my son is a college student working at an AI company as an intern. He is relatively sharp but is unskilled. He gets a general idea of what he wants to do (build something based on Docker, for example) and uses Anthropic Claude to guide him from knowing nothing to having tangible bits he can use towards building solutions within a couple of days. Claude basically erased the demand for deep experience in this case.
The next decade is going to be interesting.
Posted Jul 24, 2024 7:15 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link] (50 responses)
Posted Jul 24, 2024 7:43 UTC (Wed)
by karath (subscriber, #19025)
[Link] (21 responses)
Posted Jul 24, 2024 8:48 UTC (Wed)
by Wol (subscriber, #4433)
[Link] (5 responses)
Most great tutorials are never written!!!
In FLOSS especially, we're great at writing manuals. Which, unless you already know what they're trying to tell you, are written in incomprehensible jargon (nothing wrong with jargon, it's just specialist language. But if you're not a specialist you can't understand it).
We're pretty poor at writing cookbooks - my favourite method of learning. Look at the raid wiki for the article on building a home server - I would hope anybody who follows it will understand WHAT I did, WHY I did it, and by following it will easily end up with pretty much the same system I did. That might not be what they want, but it will build their understanding and enable them to change bits to get closer to what they want.
But no way is that a proper - or even good - tutorial. And it wouldn't surprise me if, as a *tutorial*, it's the best most people will be able to find for building a RAID server.
Cheers,
Posted Jul 24, 2024 12:44 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link]
Posted Jul 25, 2024 9:09 UTC (Thu)
by khim (subscriber, #9252)
[Link] (3 responses)
Try to read any tutorial written in the 19th century and you will have the same issue. Except maybe actual cooking tutorials, since cooking hasn't changed that radically in 200 years. Tutorials are never a replacement for expertise, and the only way to learn something is either by experimenting or from someone who already knows what they are doing. Good old apprenticeship. What we have these days just creates ignoramuses who have a simple, straightforward “program” in their head without any “error handling”. These people know what to do in the “happy case”, when everything works fine, but are ever less capable of handling any exceptional cases. AI is just another step down that road of degradation: instead of a worker who can handle the “happy case” but knows nothing of error conditions, it gives us a worker who can only handle some “happy cases” and doesn't even know that errors may exist.
Posted Jul 29, 2024 0:17 UTC (Mon)
by Paf (subscriber, #91811)
[Link] (2 responses)
Posted Jul 29, 2024 0:46 UTC (Mon)
by khim (subscriber, #9252)
[Link] (1 responses)
> It's really obvious from your comment you haven't actually tried these tools.

On the contrary: at my $DAYJOB they have enabled that crap, and now I'm looking at the idiotic attempts of these LLMs to create something every time I write a comment during code review. Sometimes, when the comment is about something trivial, like “we should use string_view here, not string, see totw #1”, it even generates sensible code. But that's a rarity. Most of the time it generates patent nonsense, because it doesn't understand what it does. It couldn't; there is no brain behind what it does.

They are pretty good at generating crap that looks sensible but doesn't work! To the point that when I see that a proposed change is crap, I just know I have to contact the submitter privately to ask them to stop using that nonsense and write things by hand. Unfortunately not every company has a rule that unreviewed code can't be accepted; I shudder to think what this craziness would lead to in companies that accept code without reviewing it.
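For readers who don't know the totw #1 reference (Abseil's "Tip of the Week #1" on string_view), the kind of trivial, mechanical change meant here looks roughly like the sketch below. The function names are invented for illustration; this is not the code under review.

    #include <iostream>
    #include <string>
    #include <string_view>

    // Taking std::string by const reference forces callers passing a string
    // literal or other character buffer to construct a temporary std::string.
    void log_string(const std::string& msg) { std::cout << msg << '\n'; }

    // std::string_view (C++17) accepts std::string, string literals and other
    // character ranges without copying; that is the change being suggested.
    void log_view(std::string_view msg) { std::cout << msg << '\n'; }

    int main() {
        std::string s = "from a std::string";
        log_string("from a literal");  // constructs a temporary std::string
        log_view("from a literal");    // no temporary, just pointer + length
        log_view(s);                   // std::string converts implicitly
    }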
Posted Jul 29, 2024 10:15 UTC (Mon)
by paulj (subscriber, #341)
[Link]
(Though, there will undoubtedly be regular collateral damage to society soon enough).
Posted Jul 24, 2024 13:45 UTC (Wed)
by paulj (subscriber, #341)
[Link] (14 responses)
(And hell, even if fed on a body of high quality tutorials and example code it will still bullshit! See the "ChatGPT is bullshit" paper, which another commenter pointed to in the Meta LLM open source article).
Posted Jul 24, 2024 15:40 UTC (Wed)
by epa (subscriber, #39769)
[Link] (13 responses)
Posted Jul 24, 2024 16:26 UTC (Wed)
by somlo (subscriber, #92421)
[Link] (11 responses)
Careful with the "No True Scotsman" iterations on the Turing test there... :)
Unlike Turing's mathematical work, the imitation game was merely a thought experiment we all sort of rolled with over the years. The idea is to build a Turing machine (i/o + finite state machine) that would be able to *fake* it when judged by some "average person" interacting with it, and we can say LLMs have mostly succeeded in that regard.
It is conceivable that in the future, a more complex (but still finite) state machine might be able to fake understanding how to program based on some rules (if you think about it, Searle's Chinese Room is just another Turing machine: i/o + state machine that is, in this case, able to fake understanding Chinese to a Chinese speaker standing outside the room).
However, true intelligence isn't when the Turing machine can make *you* believe it's intelligent. Rather, it happens when the machine thinks to *itself* that it is intelligent -- the whole Descartes "cogito ergo sum" thing. As to how *we* get to test for that, I don't think science has that figured out yet... :)
Posted Jul 24, 2024 18:27 UTC (Wed)
by k3ninho (subscriber, #50375)
[Link]
I'm glad you're happy with that state of play; I'm not, a web page has a reference to itself and almost any program more complicated will do too, so almost any more-complicated machine intelligence will parrot "cogito ergo sum" so I figure this is another case of "GOTO considered harmful." The more I'm asked to credit machinery and animals with smarts, the more I find: social and tool-using corvids who also train their young to recognise helpful and harmful humans, orcas in pods training and sustaining their young while attacking, ants collectively finding optimal paths, primates (and dogs and cats) learning sign language... there's more intelligence in the world than our culture can give credit to. Which brings to the next point, if we're not giving credit to animal intelligence, how are we going to spot machine intelligence emerging from our computer systems?
>As to how *we* get to test for that, I don't think science has that figured out yet... :)
I think this is backwards. The question is one that the applicant for personhood asks us and our scientists: "What do I have to do so you will treat me like a person?"
K3n.
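For what it's worth, the "parroting" point is easy to make concrete: the trivial sketch below is all it takes for a program to emit the sentence, which is why emitting it carries no evidential weight about an inner life.

    #include <iostream>

    int main() {
        // Any program can print the claim; the output alone says nothing
        // about whether the thing printing it actually thinks.
        std::cout << "cogito ergo sum\n";
    }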
Posted Jul 24, 2024 21:46 UTC (Wed)
by Wol (subscriber, #4433)
[Link]
And something nature has figured out, but we haven't, is that in order to be intelligent, we need special-purpose hardware. For example, most people have special-purpose face recognition hardware. That is also hard-wired to name-association.
There's a whole bunch of other hardwired special-purpose systems. All of which are in their own small way doing their best to make sure that the General Purpose Brain is kept firmly in step with the "reality" outside. The thing about these LLMs is they are mathematical models that don't care about that reality. So intelligence is impossible as they retreat inside their model and disappear deeper and deeper into their alternative reality.
Who was it wrote that story about traumatised soldiers who kept on disappearing and reappearing? And when the Generals discovered they were (apparently) disappearing into the past and things like that they thought it would be brilliant - as a weapon of war - to go back in history and rewrite it. Until they discovered that the Roman Army wore wristwatches ...
AI will very soon (if it isn't already) be living in that world, not ours, until we find some way of telling the AI watches didn't exist back then...
Cheers,
Posted Jul 24, 2024 22:16 UTC (Wed)
by epa (subscriber, #39769)
[Link] (5 responses)
Posted Jul 25, 2024 9:18 UTC (Thu)
by khim (subscriber, #9252)
[Link] (4 responses)
Right now these models can't even correctly implement an algorithm they themselves describe, in any language they already know. The only way for them to get it right is if they have seen the full solution somewhere, which makes them perfect for cheating and fraud, but not all that suitable for real work. Except maybe for some repeatable work that we have been trying to eliminate for decades. Which is cool, but more of an evolutionary step along the road that IDEs started down decades ago than something entirely new.
Posted Jul 25, 2024 10:14 UTC (Thu)
by paulj (subscriber, #341)
[Link]
Given that (if the claim about Turing equivalence is true, as I've read), there is a way to take that algorithm and describe it - both in terms of a more "normal" computer language and in human language.
Posted Jul 26, 2024 4:09 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Which is a funny point, because they could extend IDE auto-complete functionality and be a useful addition. But the insistence that _all_ the work needs to be done by the LLM, and the lack of feedback, makes it worse: its output is not being cross-referenced with everything the LSP knows about the code in your editor, letting it create sample code which references methods/arguments/etc. which don't exist. Similar to how LLMs can't actually perform math calculations: because the developers don't build in an escape hatch to just run a calculator when a math expression appears in the input. They are so fixated on, and proud of, what the LLM appears to do that they choose not to build in the feedback mechanisms that would hand off to a more appropriate tool when appropriate.
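A rough sketch of the cross-referencing idea, assuming the editor can hand over the set of symbols its language server already knows about. All names here are invented for illustration; a real integration would query the LSP rather than use a hard-coded set.

    #include <iostream>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // Keep only the completions whose identifier the editor's language server
    // actually knows about; anything else is a likely hallucination.
    std::vector<std::string> filter_suggestions(
            const std::vector<std::string>& proposed,
            const std::unordered_set<std::string>& known_symbols) {
        std::vector<std::string> kept;
        for (const auto& name : proposed) {
            if (known_symbols.count(name) != 0) {
                kept.push_back(name);
            }
        }
        return kept;
    }

    int main() {
        // Pretend the language server reported these symbols for the open file.
        std::unordered_set<std::string> known = {"open", "read", "close"};
        // Pretend the model proposed these; "read_async" does not exist here.
        for (const auto& name : filter_suggestions({"open", "read_async", "close"}, known)) {
            std::cout << name << '\n';  // prints "open" and "close"
        }
    }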
Posted Jul 26, 2024 7:35 UTC (Fri)
by khim (subscriber, #9252)
[Link]
Actually, that's what they have tried to do, and are still trying to do, in most companies. But then OpenAI released ChatGPT and the hype train started. After that there was no time to properly integrate anything; everyone just had to show that they could do cool things, too. Even if they couldn't, just yet.
Posted Jul 26, 2024 14:34 UTC (Fri)
by atnot (subscriber, #124910)
[Link]
Yes, I've been thinking this ever since I've seen people using GitHub Copilot. Our editors are already pretty darn good at finding out what the next valid thing can be. And they're basically guaranteed to be correct too, no "hallucination" nonsense. They're just not particularly good at sorting those options by relevance right now. There's no need to train these huge models to output code from nothing to get these advantages. Surely you could make some sort of model to score autocomplete suggestions by surrounding context that would be at least as good as, if not better than, what Microsoft is offering. And it surely wouldn't require the energy consumption of a small nation either. I'd use that in a heartbeat. But nobody seems to be interested in that sort of thing because it's not going to "replace programmers" or whatever.
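As a deliberately crude stand-in for the kind of small ranking model being suggested (names invented for illustration): take the completions the editor already knows are valid and order them by how often they appear in the nearby code. A real ranker would be a learned model; this only shows the shape of the idea.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Naive count of how often `word` appears in `context`.
    int occurrences(const std::string& context, const std::string& word) {
        int n = 0;
        for (std::size_t pos = context.find(word); pos != std::string::npos;
             pos = context.find(word, pos + 1)) {
            ++n;
        }
        return n;
    }

    // Sort already-valid completion candidates so that identifiers seen more
    // often in the surrounding code come first.
    void rank_by_context(std::vector<std::string>& candidates,
                         const std::string& surrounding_code) {
        std::stable_sort(candidates.begin(), candidates.end(),
                         [&](const std::string& a, const std::string& b) {
                             return occurrences(surrounding_code, a) >
                                    occurrences(surrounding_code, b);
                         });
    }

    int main() {
        std::vector<std::string> candidates = {"fsync", "fopen", "fclose"};
        rank_by_context(candidates, "f = fopen(path); use(f); fclose(f); fopen(other);");
        std::cout << candidates.front() << '\n';  // "fopen": mentioned most nearby
    }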
Posted Jul 25, 2024 4:20 UTC (Thu)
by rgmoore (✭ supporter ✭, #75)
[Link] (2 responses)
That's not actually what Turing said, though. The original "imitation game" was intended to be played with a skeptical judge doing their best to tell which interviewee was a human and which was a machine. LLMs are good now, but they won't fool anyone who's making a serious effort to determine if their interlocutor is human or machine. It's not just that their language doesn't sound quite right; they just can't hold up a conversation for long without drifting into nonsense. Their creators have tried to mask this by limiting the length of their output so they can get new human input to respond to, but it still shows up.
Not that passing a Turing test was ever intended to be the one true test of intelligence. It was intended to be an extremely stringent test that should satisfy even people who were most skeptical of the idea of machine intelligence. After all, if we really can't distinguish a machine from a human, it's hard to argue that the machine hasn't achieved human-level intelligence.
Posted Jul 25, 2024 9:56 UTC (Thu)
by khim (subscriber, #9252)
[Link] (1 responses)
> LLMs are good now, but they won't fool anyone who's making a serious effort to determine if their interlocutor is human or machine.

Yes, yes, they would. I doubt Turing ever anticipated that his test would be passed in such a crazy way, but that's what happened: the “new generation” acts and talks more and more like LLMs do! It's not clear whether that's just lack of experience and they will outgrow the threshold that the likes of ChatGPT 4 set, but the truth is out there: it's more-or-less impossible to distinguish the gibberish that ChatGPT produces from the gibberish that many humans produce! And if you start raising the bar to exclude “idiots” from the test, then it's no longer a Turing test, but something different.

> they just can't hold up a conversation for long without drifting into nonsense

True for some humans as well, when you ask them to talk about something they are not experts in. Especially older ones. LLMs, today, are in a very strange place: they can't do anything well, but they can do many things badly. But most human workers also perform badly; the expert ones are very, very rare nowadays!
Posted Jul 26, 2024 14:43 UTC (Fri)
by atnot (subscriber, #124910)
[Link]
This seems like a silly criterion. If you can't reliably distinguish the gibberish created by a brick falling onto a keyboard from the gibberish created by a cat sitting on one, that doesn't make the brick as intelligent as a cat.
Posted Jul 24, 2024 21:49 UTC (Wed)
by Wol (subscriber, #4433)
[Link]
Try asking it to write a program in DataBASIC :-) The only reports I've heard about people trying that are that it's been a complete disaster, competing for the "how many pages of printout can you generate from one syntax error" prize (something my fellow students did at school - they were programming in FORTRAN).
Cheers,
Posted Jul 24, 2024 7:47 UTC (Wed)
by b7j0c (guest, #27559)
[Link] (17 responses)
AI will eventually become deeply contextual, so progress and learning over time will be reflected.
Having an expert at your side at all times will change industries.
Posted Jul 24, 2024 7:58 UTC (Wed)
by rgb (subscriber, #57129)
[Link] (10 responses)
Posted Jul 24, 2024 8:00 UTC (Wed)
by b7j0c (guest, #27559)
[Link] (9 responses)
My car is an explosion box. My house is a sand pile. etc etc
Putting a label on something does not remove its utility.
Posted Jul 24, 2024 8:18 UTC (Wed)
by rgb (subscriber, #57129)
[Link] (4 responses)
Posted Jul 24, 2024 8:29 UTC (Wed)
by b7j0c (guest, #27559)
[Link] (3 responses)
if a CEO prematurely replaces humans with AI tech that isn't ready, the company could lose in the market
if a CEO clings to humans when automation is ready, the company could lose in the market
in a competitive landscape, society gets closer to a state it prefers through its choices
no different than any tech since the industrial revolution
Posted Jul 24, 2024 12:06 UTC (Wed)
by pizza (subscriber, #46)
[Link]
More accurately: They will go with the cheapest option they think they can get away with.
> if a CEO prematurely replaces humans with AI tech that isn't ready, the company could lose in the market
More accurately: All that matters is that the financials look good *now* and any negative consequences come "later"
Posted Jul 26, 2024 15:42 UTC (Fri)
by immibis (subscriber, #105511)
[Link] (1 responses)
Look at all the world's information being locked on Reddit, which has just closed its doors (to every consumer of that information except for Google, who pays a lot of money to not be excluded). Reddit will surely go bankrupt due to this... in some years. It's already been over a year since Reddit started down this track. Reddit has almost never been profitable, so you could argue it's been making wrong decisions for ~20 years and still not failed. Meanwhile Facebook is occupied by almost exclusively Russian bots, and still has a high stock valuation. The markets are 99.9% disconnected from reality.
Posted Jul 26, 2024 17:11 UTC (Fri)
by LtWorf (subscriber, #124958)
[Link]
Remember the deleted post that stated that a USA airbase was the most reddit addicted city?
Posted Jul 25, 2024 4:01 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (3 responses)
I think a reductionist view has value, because an LLM is not an intelligence. It is a thoroughly lobotomized parody of speech and writing that only works because people are _very_ willing to extend a theory of mind to anything making vaguely human-sounding noises; even ELIZA was taken to be intelligent by some people. LLMs are only a little more sophisticated, but with just great gobs of training data the LLM can spit out something plausible-sounding given a wide variety of inputs.
An LLM can't train someone in programming, because it doesn't know how to program and it doesn't know how to teach, and it has no way to model the state of the learner and adjust a pedagogy based on a feedback loop consulting hundreds or thousands of different neural networks in your brain. An LLM hardly has any feedback loops of any kind, and barely one network.
An LLM can spit out text real good, but it doesn't have enough varied systems to have the intelligence of a fruit fly. All it has are the strings of symbols that people have used to communicate with each other and with the computer, and all it can do is combine those symbols in abstract and pleasing ways. People will invent a theory of mind when talking to it, but people are capable of anthropomorphizing far less animate objects than computers.
It may well be that some day an artificial mind is created, it's not like brains are made out of magic unobtainium, they obey and rely on the physics of the same universe that allows computers to function, but there is just nowhere near the complexity and interlocking learning and feedback systems present in modern "AI" that are present in _any_ vertebrate, let alone human intelligence.
Posted Jul 25, 2024 16:33 UTC (Thu)
by rgb (subscriber, #57129)
[Link]
BTW here is a related piece I couldn't agree more with:
https://www.theatlantic.com/technology/archive/2024/07/op...
Posted Jul 25, 2024 19:47 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
The scary thing is there isn't the feedback in modern AI that is present in very basic lifeforms - even the humble fruitfly. And even the human brain probably has less raw processing power than a 6502! (Dunno how much we do have, but I know we achieve much more, with much less resource, than a computer).
Cheers,
Posted Jul 26, 2024 15:43 UTC (Fri)
by immibis (subscriber, #105511)
[Link]
Posted Jul 24, 2024 13:47 UTC (Wed)
by paulj (subscriber, #341)
[Link] (5 responses)
https://web.archive.org/web/20240101002023/https://www.ny...
(Example given in the "ChatGPT is bullshit" paper, which someone else had linked to in the Zuck / Meta LLaMa article).
Posted Jul 24, 2024 14:32 UTC (Wed)
by Paf (subscriber, #91811)
[Link] (4 responses)
Most tutorials online are also a bit wrong or out of date. The LLM synthesizes from many and is generally, in my experience, stronger and more accurate than most individual articles.
It’s easy to say things like Bayesian parrot, but whatever label you attach they are in practice *really good* at this stuff. That’s from substantial personal experience.
Posted Jul 24, 2024 17:10 UTC (Wed)
by legoktm (subscriber, #111994)
[Link] (2 responses)
I'll echo this. LLMs are, to my surprise, quite good at generating code, and crucially, code is something we have extensive tooling and practices to verify the correctness of.
I feel people are falling into the AI effect trap: "The AI effect occurs when onlookers discount the behavior of an artificial intelligence program as not 'real' intelligence." No, LLMs are not anywhere close to human intelligence, but that doesn't stop them from being quite useful regardless.
Posted Jul 25, 2024 4:08 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (1 responses)
Posted Jul 29, 2024 0:27 UTC (Mon)
by Paf (subscriber, #91811)
[Link]
Posted Jul 25, 2024 10:02 UTC (Thu)
by paulj (subscriber, #341)
[Link]
It's a big IF though. Cause some people who are not experts will use it to produce reams of material that /looks plausible/. Worse, people who are experts but are a bit lazy (or are doing their "homework" a bit too late) may /fail/ to do the review, correct and tweak step and instead present the AI-generated bullshit as the work of their own expertise! (The article I linked to being a case in point - how much money did the defendant have to expend on legal fees to prove it was bullshit! Had they not had those resources, they may have had to fold and settle, or lose the case - and then been on the hook for the costs!)
So yes, useful. With caveats. And the problem is some will be ignorant of or just ignore the caveats - like in the article!
Posted Jul 27, 2024 16:15 UTC (Sat)
by DanilaBerezin (guest, #168271)
[Link] (9 responses)
Posted Jul 27, 2024 16:51 UTC (Sat)
by LtWorf (subscriber, #124958)
[Link] (8 responses)
Posted Jul 27, 2024 17:48 UTC (Sat)
by DanilaBerezin (guest, #168271)
[Link] (5 responses)
Posted Jul 27, 2024 19:14 UTC (Sat)
by atnot (subscriber, #124910)
[Link] (4 responses)
That seems like wishful thinking to me for a few reasons:
The whole idea of LLMs is that bigger models and more data are supposed to get you some kind of intelligence, but the current crop of LLMs has already been trained on pretty much the sum total of human text output available today and it's still crap. In fact, one reason they're not getting better is that improvements require exponentially more data, and we're already out of data to train on. Getting superior results with a fraction of that data and the same technology seems unlikely.
Second, effective teaching requires both understanding the subject matter and good knowledge of, and experience with, how humans tend to think and learn. LLMs can't remotely do either of those things at all, let alone for a complex subject like programming. Summarization, sure. That just requires pattern-matching the phrases humans tend to use to indicate asides, and writing conventions like the three-part essay structure. But that's very different.
Posted Jul 27, 2024 20:29 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (1 responses)
LLMs spew rubbish because they have no feedback to tell them it IS rubbish.
Cheers,
Posted Jul 29, 2024 14:28 UTC (Mon)
by anselm (subscriber, #2796)
[Link]
That's what the underpaid peons in places like Kenya are for. (One problem is that they rely on LLMs for their fact-checking because otherwise they couldn't keep up with the pressure.)
https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/
Posted Jul 27, 2024 21:33 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link]
That's not quite right. The current crop of LLMs is inherently limited because they have a relatively small "working memory" and even less ability to learn on the fly. The current work is aimed at fixing this.
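For readers wondering what the limited "working memory" refers to: current LLMs only see a bounded context window of tokens, so older parts of a long conversation have to be dropped (or summarized) before each new request. A minimal sketch of the dropping part, with invented names:

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Keep only the most recent tokens that fit the model's context window.
    // Anything older is simply invisible to the model on the next request,
    // which is what the small "working memory" amounts to in practice.
    std::vector<std::string> fit_to_context(std::vector<std::string> tokens,
                                            std::size_t window) {
        if (tokens.size() > window) {
            tokens.erase(tokens.begin(),
                         tokens.end() - static_cast<std::ptrdiff_t>(window));
        }
        return tokens;
    }

    int main() {
        std::vector<std::string> conversation = {"a", "b", "c", "d", "e"};
        for (const auto& t : fit_to_context(conversation, 3)) {
            std::cout << t << ' ';  // prints "c d e": the oldest tokens are gone
        }
        std::cout << '\n';
    }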
Posted Jul 27, 2024 21:48 UTC (Sat)
by DanilaBerezin (guest, #168271)
[Link]
Posted Jul 29, 2024 0:30 UTC (Mon)
by Paf (subscriber, #91811)
[Link] (1 responses)
Posted Jul 29, 2024 1:15 UTC (Mon)
by songmaster (subscriber, #1748)
[Link]
Posted Jul 24, 2024 7:54 UTC (Wed)
by rgb (subscriber, #57129)
[Link]
Posted Jul 24, 2024 8:19 UTC (Wed)
by atnot (subscriber, #124910)
[Link] (21 responses)
The tech industry has been saying this about every useless new thing they've hyped over the last decade and it's never been true for anything. Not even the *actual* web in 1994, as opposed to their self-serving mythology of it.
It increasingly just feels like a desperate attempt to conjure up another dotcom bubble by people who didn't think they got rich enough in the first one.
The only thing that rhetoric makes me think of now is blockchains and NFTs and the metaverse.
Posted Jul 24, 2024 8:41 UTC (Wed)
by b7j0c (guest, #27559)
[Link] (20 responses)
it isn't useless, people are using AI right now
in addition to the example of my son, my wife works in travel and uses AI to prep text and imagery for promotional campaigns. She is making money from it. Without AI, she would have to hire a photographer, graphics designer, writer, etc...there isn't enough money in what she does to pay all those people so the campaigns just wouldn't happen.
she isn't getting obsessed with some internal dialog about the innate "intelligence" or lack thereof in these tools, she is just turning them into revenue
Posted Jul 24, 2024 8:53 UTC (Wed)
by Wol (subscriber, #4433)
[Link] (19 responses)
And then she'll make a Virgin Trains-style blunder (Virgin is a very big brand name in the UK): they used a castle in Kent to advertise a city in Yorkshire (imagine some PR agency using a famous New York attraction to advertise Florida - it was that big a cock-up).
Cheers,
Posted Jul 25, 2024 10:36 UTC (Thu)
by khim (subscriber, #9252)
[Link] (18 responses)
Does it matter, in the end, if people are buying it? Currently AI and LLMs are very bad at doing things where the quality of the result matters, but it's surprising how many things where it doesn't matter all that much are happening in today's economy. One may ask whether all that activity is even needed at all, but as long as it's profitable… AI has a niche.
Posted Jul 25, 2024 11:21 UTC (Thu)
by kpfleming (subscriber, #23250)
[Link] (17 responses)
Posted Jul 25, 2024 11:41 UTC (Thu)
by khim (subscriber, #9252)
[Link] (16 responses)
Why should content be any different from many other things? If you compare a modern knife or a modern table to a knife or table made 200 or 300 years ago, the modern one loses on most fronts. But it can be about 10 or 100 times cheaper, which justifies all these defects. Why should the industries that create content be treated any differently? P.S. And, similarly to knives and tables, we have “foundation layers” that are, unquestionably, better than what we had before. Steel and raw wood, today, are much better than what the masters had 200 or 300 years ago. That's why even low-quality knives or tables are still usable today. And ad networks are much better at delivering ads than the newspapers of 200 or 300 years ago, and that's why even low-quality ads work.
Posted Jul 25, 2024 16:03 UTC (Thu)
by JGR (subscriber, #93631)
[Link] (15 responses)
> If you compare a modern knife or a modern table to a knife or table made 200 or 300 years ago, the modern one loses on most fronts. But it can be about 10 or 100 times cheaper, which justifies all these defects.
This seems like survivorship bias? The poor quality knives, tables and so on of 200-300 years ago are unlikely to have survived or been worth preserving long enough to be compared to today's poor quality offerings.
Posted Jul 25, 2024 16:50 UTC (Thu)
by khim (subscriber, #9252)
[Link] (14 responses)
Sure, that bias also exists, but the fact is: if you applied modern technologies to the materials available 200-300 years ago, such a knife or table would just crumble before you could even use it! We actually know the techniques that were used to make things usable with poor materials; we just don't use them, because they are expensive.
Posted Jul 26, 2024 12:18 UTC (Fri)
by paulj (subscriber, #341)
[Link] (13 responses)
Yes, if you look carefully, you can still find furniture makers who make things the older ways. And you'll pay a _tonne_ of money today for that. But 100+ years ago, that was just the /normal/ way stuff got made. That was just _ordinary_ furniture. In another 100 years, the modern bolted-together stuff will be gone - the high tension the bolts need, combined with the large flat mating surfaces, will have caused degradation that makes them wobbly or flimsy, even with strong wood, and they'll have been thrown out. Fibreboard stuff definitely will be gone.
My great-granddad's stuff will still be here, barring woodworm.
Posted Jul 26, 2024 12:22 UTC (Fri)
by paulj (subscriber, #341)
[Link] (1 responses)
Today, it's a big flat contact area onto another, and just relying on a strong metal bolt to tension them together. Combined with a softer wood - pine.
And why, cause the latter is easy for a machine to cut, and takes a human a minute to bolt together. Whereas the former requires skill and time - and we don't want to pay for skilled labour. Cause that doesn't make money for large corporations.
Posted Jul 26, 2024 12:45 UTC (Fri)
by khim (subscriber, #9252)
[Link]
> The old furniture had mating surfaces with many more angles, distributing loads over more surface area, in more directions.

More importantly: it made it possible to use less-precisely cut pieces of wood. Human-made. Machines may cut wood much more precisely, but it's easier to make them cut large flat surfaces rather than the more complicated pieces used before. That also helps, and, again, it's much harder to create something very precisely uniform from soft wood by hand. Thus, to make the finishing part less expensive and produce a less robust result, we need a much more stable and robust “foundational technology”. It's the same thing everywhere: ENIAC did 5,000 operations per second and its longest operational time between failures was 116 hours. That's about 2 billion operations (5,000 × 116 × 3,600), not enough for any modern app to even reach the point where it would be ready to accept input from the user. Before we could adopt the modern “why write ten lines of code if we could pull in a framework with a million lines of code and then only add one line on top of that mess” approach, hardware designers had to design extremely robust hardware.
Posted Jul 26, 2024 12:47 UTC (Fri)
by malmedal (subscriber, #56172)
[Link] (6 responses)
Posted Jul 26, 2024 13:14 UTC (Fri)
by khim (subscriber, #9252)
[Link] (3 responses)
> There exists a large amount of furniture today that are of far worse quality than the cheapest stuff you can buy in the west.

Indigenous? Made in these developing countries locally? Hard to believe. All the indigenous furniture that I ever saw in developing countries was much more structurally sound and sophisticated, even when it was made from tufts of straw or gnarly pieces of wood. It has to be made that way: if you tried to build something from these flimsy pieces with a steel bolt and tension, it would fall apart at the first attempt to use it! I assume the same, only I know that these awful piles of Chinese-made plastic pieces didn't exist back then. They are very much also products of modern technology, just even cheaper and worse ones than what Western people can afford. But these carefully crafted things made from straw and gnarly wood? They had to have existed for centuries, for the people who make them are usually not advanced enough to invent something like this from scratch (and to do so uniformly in different villages, to boot)!
Posted Jul 26, 2024 13:41 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
> Indigenous? Made in these developing countries locally? Hard to believe.
Locally made furniture, for local people - the economics are against crappy stuff. Get a bad reputation, you've lost your job, you go hungry. The incentive is to make things as GOOD as you can, for the cheapest materials and least time. But the sweet spot is not the crap spot. And if the customer can source it cheaply he'll make sure you have the best available materials.
Crap is only possible when market economics have destroyed the local craft industry, and all the brand names are competing to get to the bottom as fast as possible.
Cheers,
Posted Jul 26, 2024 13:52 UTC (Fri)
by corbet (editor, #1)
[Link]
I think we're getting pretty far afield here ... again. Can we try to keep the focus on Linux and free software, please?
Posted Jul 26, 2024 14:57 UTC (Fri)
by malmedal (subscriber, #56172)
[Link]
Posted Jul 26, 2024 13:56 UTC (Fri)
by paulj (subscriber, #341)
[Link] (1 responses)
I don't doubt there was crap back then. For sure, people were also doing things like using old tea boxes for furniture. But the middle-class market for furniture, the quality between then and now, the difference is huge. My mother still has a lot of furniture from her mother (other side of the family), and I fully expect my children will have some of it. We bought a book case ourselves ten years ago. The best quality one - and best value *by far* - we found was from a second-hand shop. The book case looks to be circa 1930s / 1940s (by comparison to my gran's furniture of that era).
There's a cottage industry of second hand furniture from pre-WWII (60s latest) cause the quality and value of that furniture is far above what is made now.
And it stands to reason: How many people in the last 40 years became expert carpenters and furniture makers, compared to 100 years ago?
Posted Jul 26, 2024 20:59 UTC (Fri)
by malmedal (subscriber, #56172)
[Link]
Posted Jul 26, 2024 12:54 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (3 responses)
There's also the matter of pricing and inflation. Inflation-adjusting prices suggests (assuming your great-grandfather is comparable to mine in working era) that £1 in your great-grandfather's day is equivalent to £100 today. If he charged £20 for a cabinet, then the equivalent day item should be expected to cost £2,000 - and yet you're probably comparing to mass-market items from places like Ikea that cost 1/10th of that.
And if you look into history, what you find is that the market for furniture back then was limited to the people who today are happy to pay a tonne of money for furniture - it was sufficiently expensive to buy anything that most people got by with far less than we have today.
Posted Jul 26, 2024 13:16 UTC (Fri)
by pizza (subscriber, #46)
[Link]
Look no further than the word "cupboard". Today it refers to an enclosed cabinet with a door, but its origin is literally a "cup board", ie a flat piece of wood you store your cups on.
Posted Jul 26, 2024 14:01 UTC (Fri)
by paulj (subscriber, #341)
[Link] (1 responses)
The supply of expert labour has diminished to near zero, along with the demand for their well-made furniture, which has been destroyed by cheap, hastily bolted-together stuff (mostly fibreboard, a minority in pine, a smaller amount again in better wood).
Posted Jul 26, 2024 20:07 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
But is it? Carpenters can be much more productive today with modern power tools, and even computer-controlled tools. Look at CNC routers, they are downright magic.
I commissioned several custom wooden products (fireplace holder and custom cabinets), I fully expect them to outlast the house. And the amount of money I paid for them is probably still less than 100 years ago.
On the other hand, there's another dimension: practicality. I grew up in a house where we had an actual solid oak table and chairs. They got left behind when that house was demolished because they were completely impractical. It took several people to move the table, and the chairs were uncomfortable and also heavy. I _can_ buy solid oak chairs, but I much prefer IKEA chairs made of lightweight pine and birch. They won't last, but they are so cheap I can replace them without even thinking about it.
It's always about trade-offs.
Posted Jul 24, 2024 9:27 UTC (Wed)
by frankie (subscriber, #13593)
[Link] (1 responses)
Posted Jul 24, 2024 12:49 UTC (Wed)
by LtWorf (subscriber, #124958)
[Link]
Posted Jul 24, 2024 13:18 UTC (Wed)
by geofft (subscriber, #59789)
[Link] (1 responses)
It was just realistic about its limitations and downsides. (It was, if anything, rather soft on downsides, in that it didn't address the energy consumption of AI training and didn't talk about generating images/video at all, and it didn't talk very much about how training a high-quality model from scratch is only within the reach of a few rich corporations.)
Posted Jul 24, 2024 14:45 UTC (Wed)
by atnot (subscriber, #124910)
[Link]
Especially the idea of being able to just have a 4-20GB file on my computer that holds a large portion of the written internet in a somewhat queryable form, offline, seems pretty compelling. Although I do have reservations about how many useful queries you could get out of one battery charge. I also suspect a compressed text-only crawl of code documentation sites may end up more useful per byte and watt, if someone made one of those. But you never know what you're going to need, and it seems like a good idea to at least have it lying around if you have the space and travel regularly.
The code generation stuff seems much less compelling to me. Yeah, I had my fun playing around with it when it launched too and I used it to write stuff in a language I was unfamiliar with at the time which was pretty cool. But then the novelty wore off and I really haven't found it that useful since. When you can usually just search "[thing] examples" or so to get a more reliable answer with just a few more clicks[1]. But I can see the utility, especially offline.
The portrayal of so-called "hallucination", prompt injection and safety as a temporary wrinkle that will soon be ironed out is indeed pretty soft though. Since they are an inherent part of the technology that can not be solved.
Which would be fine if it was just a thing you used to generate example code sometimes. I think this article convinced me of the utility as such a thing better than any other. But it can't just be a minor productivity tool anymore, because it's cost too many millions to make. It has to be the future and upend everything.
[1] Especially after a project I maintain started getting a steady stream of people with very confidently wrong understandings of how things worked wasting our time asking us why methods that did not exist were throwing errors.
Posted Jul 25, 2024 9:01 UTC (Thu)
by khim (subscriber, #9252)
[Link] (13 responses)
> The next decade is going to be interesting.

Sure, but not because of AI: because the whole western civilization will collapse under its weight.

> Claude basically erased the demand for deep experience in this case.

No. It created an illusion that deep expertise is no longer needed. But that's an illusion: without people with expertise, incidents like the very fresh CrowdStrike one would first be a rarity, then become routine, then the norm… and at the final stage of that complex web built by people with expertise who are slowly leaving us (some retire, some get an obituary), there would be no one left to keep things going. AI doesn't change the trajectory of that development, it just accelerates it.

I'm actually very glad that this AI craze is happening: if we had started a real, materially different sixth technological paradigm, then the collapse of Western civilization could have caused the complete collapse of all technological civilizations in the world (because we would have had no one to keep the sixth paradigm going, and, it being ready by then, it would have been the staple of civilization). But since gimmicks like smartphones and AI were used to “stretch” the fifth paradigm further and give an ersatz of work to people who couldn't do real work… we would just lose some things that are not critical, and only the most advanced (and most unsupportable) parts of the world would experience a deep crash.
Posted Jul 25, 2024 13:29 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
> No. It created an illusion that deep expertise is no longer needed.
Which is (quite literally) a disaster ...
> But that's an illusion: without people with expertise, incidents like the very fresh CrowdStrike one would first be a rarity, then become routine, then the norm… and at the final stage of that complex web built by people with expertise who are slowly leaving us (some retire, some get an obituary), there would be no one left to keep things going.
Health & Safety 101 - seeking to eliminate accidents merely means you get fewer but far more serious accidents.
By emphasising how to deal with accidents, any potential incident gets smothered in no time flat. Someone might suffer a minor injury, but that's it. Prevent those minor incidents and when (not if) something goes wrong, people act like headless chickens and what should have been nothing turns into life-changing (if not life-snuffing) incidents.
Cheers,
Posted Jul 25, 2024 15:51 UTC (Thu)
by intelfx (subscriber, #130118)
[Link]
Still going with your "western civilisation collapse" fantasies?
I'd venture a guess that I probably represent the majority of the audience here when I say that it would be great if you could keep those off LWN.
Posted Jul 25, 2024 18:18 UTC (Thu)
by malmedal (subscriber, #56172)
[Link] (10 responses)
No, collapse implies that there is something unsustainable about western civilisation, there's not, it has never been in better shape.
A bunch of ketamine-addled billionaires are however trying to destroy it. I don't think they'll succeed but if they do it will be a murder not a collapse.
If the billionaires do succeed they will come to regret it. I have this image in my head of the billionaires as tropical orchids, in a greenhouse, midwinter, trying to smash the glass because they think it's holding them back.
Posted Jul 25, 2024 19:43 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (9 responses)
I'm really not so sure ...
All these climate events are driving up the cost of insurance. How much does fire and storm insurance cost in America nowadays? I bet in Storm Alley it's gone up significantly, also in those places with wildfires.
Here in Europe we've had all these fires in Spain and Greece where we've never had fires before. UK certainly and I bet also in Germany flood insurance is going through the roof ...
If insurance becomes unaffordable for the man in the street we're in trouble. Not that I like the insurance industry, but if it shrinks dramatically, or worse collapses under some extreme event, we're in BIG trouble.
And I know people probably think I'm mad, but I seriously think - God's promise otherwise notwithstanding - we're in for an event similar to Noah's flood, and it's going to happen a lot quicker than people think. The Antarctic had a heat wave last summer - 40 degrees above normal. Dunno whether it was F or C, don't think it matters, but if the ice sheet defrosts we're in for a world of hurt. Goodbye London, New York, the Florida Keys, Hawaii, The Netherlands; the list goes on. And I seriously think it will be on a Noah's-flood timescale too - a couple of months.
Noah's flood was "real", inasmuch as a storification of a real historical event can be real, but The Land of Eden is probably now the bed of the Black Sea. We've plenty of evidence of a thriving Neolithic Doggerland, now the bed of the North Sea. And loads of evidence of Atlantis - probably the bed of the Mediterranean ... I hope I'm not around to see it. But I probably will be ...
Cheers,
Posted Jul 26, 2024 12:31 UTC (Fri)
by paulj (subscriber, #341)
[Link] (8 responses)
Posted Jul 26, 2024 13:48 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (7 responses)
That's why the tsunami caused the nuclear disaster in Japan - their defences were designed to stop a 10m wave. But the defences were on the plate that dropped, which is why a 9m wave went straight over the top of them ...
I've seen the Victoria Embankment with the Thames almost to the top. That was a long time ago, before the Thames Barrier. The Barrier's estimated lifetime has been about halved. If we have a rise of a meter, I think the Thames will simply flow *round* the barrier, and if the Embankment goes - well - the Strand is so called because it was the strand - the beach on the banks of The Thames. The water will probably go a LOT further than that ...
Cheers,
Posted Jul 26, 2024 21:12 UTC (Fri)
by kleptog (subscriber, #1183)
[Link] (6 responses)
A metre is nothing. Tidal variation is ~2.5m and the North Sea gets way bigger waves than that. The sea defences are not the problem. Sand dunes rise together with the sea. The problem lies elsewhere.
Here the main canal that drains rainwater is at NAP-0.4m. Almost every day the low tide mark is about NAP-1m, allowing collected rainwater to drain out for a few hours per day. Add (not even) a metre to the sea level and you have to start pumping. You already get situations where a high tide combined with consistent NW winds conspires to prevent draining for weeks, requiring alternative storage. Higher sea levels mean all the rivers are higher too, and this problem replicates across the country. It's all solvable, but will be very, very expensive. There are ideas to pump the entire volume of the Rhine up to a higher sea level, but the energy requirements are enormous.
But even the worst case projections don't go that fast. The thermal mass of the ice sheet and the ocean are huge, even compared to the sun's output. Put the Antarctic ice sheet in full 24 hour sun and it would still take decades to melt appreciably.
We work with acceptable failure rates of "one flood every 10,000 years" so the current safety margins are more than sufficient for the time being. It's probably true that nowhere else in the world are there such large safety margins. Certainly the Americans who came to learn after hurricane Katrina thought we were nuts. Then again, we don't get hurricanes.
This is going rather off-topic though...
Posted Jul 26, 2024 21:58 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (5 responses)
WRONG PHYSICS.
Sorry yes we are getting a bit off topic (a bit?). But how far back do you have to go to find worst-case predictions saying we will still have an Arctic ice sheet? Just ten years? We don't like change. We consistently under-estimate it.
Rising temperatures increase the plasticity of ice. Melting isn't the problem, it's flow. And if the sea starts getting under the Weddell ice shelf and starts lifting and dropping it, then flow will be a *major* problem. It's happened before - it's almost certainly going to happen again - and I can see the shelf disappearing in a summer. If it does, glacier flow will be SCARY ...
At the moment, most of the Antartic ice is above sea level. It doesn't have to melt to raise sea levels - it just has to fall in.
Cheers,
Posted Jul 26, 2024 22:04 UTC (Fri)
by paulj (subscriber, #341)
[Link] (4 responses)
The predictions of many metres of sea level rise are based on thermal expansion of the oceans. Which won't happen suddenly, but slowly (OTOH, trying to reverse such warming would be equally slow).
Posted Jul 26, 2024 22:30 UTC (Fri)
by Wol (subscriber, #4433)
[Link]
Which is why I've been saying a metre ... the danger lies in the fact that that metre of rise is expected to take a century. We were shocked at how fast the Arctic melted ... and Antarctica doesn't even need to melt - all it needs to do is slide into the sea, and the rate of flow will only increase. Let's hope it doesn't accelerate faster than expected ... Greenland is a poster child here - the glaciers are flowing much faster now the ice shelf has gone ...
Cheers,
Posted Jul 26, 2024 23:12 UTC (Fri)
by malmedal (subscriber, #56172)
[Link] (1 responses)
The scariest realistic scenario is Thwaites Glacier, which, when it collapses, could give us 65 cm all by itself in the span of maybe ten years. Those ten years could start tomorrow or in a hundred years.
The conservative IPCC estimate is only about 60 cm by 2100, but even that will be problematic.
Consider: the daily tidal forcing on the ocean only amounts to about 50 cm due to the moon and 25 cm from the sun, so in theory a max of 75 cm when they are in sync.
However, the actually observed tides can be more than ten metres in the Bay of Fundy and almost nothing in the Caribbean, due to variations in geography.
Similarly, even if we only get 60 cm it will be unevenly spread: some locations will get a lot, some nothing, some may even see a decrease.
I don't think we have models good enough to confidently predict who will win and lose. If I were to guess, I'd say the biggest victim would be Bangladesh.
Posted Jul 27, 2024 20:26 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
And I seriously expect (a) that estimate is wrong, and (b) it'll be wrong as in too low.
As I said, we consistently underestimate change. Go back to 1980, rampant population growth, the EXTREMELY OPTIMISTIC forecasts of Y2K population said 8Bn. We undershot by over 2Bn I think. (The "we think it'll actually be this" estimate was 12Bn!)
Thwaites glacier, 10 years? Let me guess it'll actually be five. Quite likely less.
Cheers,
Posted Jul 28, 2024 10:05 UTC (Sun)
by Wol (subscriber, #4433)
[Link]
Sorry, wrong physics again. Warming the oceans will use (very slow) conduction. However, both the troposphere and the oceans have very efficient cooling mechanisms. For every century it takes to rise, it'll take maybe a decade to cool?
As soon as we stop filling the stratosphere with greenhouse gases and turning it into a blanket, these mechanisms will bring temperatures down quickly.
The maximum surface temperature of the ocean is about 38C. At this point the cooling mechanism called a tropical storm kicks in. That's why, as temperatures rise, storms have been getting more frequent, more severe, and moving further away from the tropics.
And the cooling mechanism for the oceans themselves is called an ocean gyre. The one I know is the North Atlantic gyre, composed of the Humboldt current taking cold Arctic water down to the equator, and the Gulf Stream bringing warm equatorial water to the Arctic.
Once the stratosphere is dumping heat, not radiating it back to the surface, we should start getting polar blizzards recreating the ice caps, and the gyres reasserting themselves (the North Altlantic gyre is in serious trouble thanks to the loss of the Arctic ice cap). At which point we could find ourselves heading rapidly into another ice age. Or not as the case may be.
Cheers,