
Security quotes of the week

Fuzzing is fantastic for finding bugs, but for security to improve, those bugs also need to be patched. It's long been an industry-wide struggle to find the engineering hours needed to patch open bugs at the pace that they are uncovered, and triaging and fixing bugs is a significant manual toll on project maintainers. With continued improvements in using LLMs to find more bugs, we need to keep pace in creating similarly automated solutions to help fix those bugs. We recently announced an experiment doing exactly that: building an automated pipeline that intakes vulnerabilities (such as those caught by fuzzing), and prompts LLMs to generate fixes and test them before selecting the best for human review.

This AI-powered patching approach resolved 15% of the targeted bugs, leading to significant time savings for engineers. The potential of this technology should apply to most or all categories throughout the software development process. We're optimistic that this research marks a promising step towards harnessing AI to help ensure more secure and reliable software.

Dongge Liu, Oliver Chang, Jan Nowakowski, and Jan Keller on the Google security blog

Today's chatbots perform best when instructed with a level of precision that would be appallingly rude in human conversation, stripped of any conversational pleasantries that the model could misinterpret: "Draft a 250-word paragraph in my typical writing style, detailing three examples to support the following point and cite your sources." Not even the most detached corporate CEO would likely talk this way to their assistant, but it's common with chatbots.

If chatbots truly become the dominant daily conversation partner for some people, there is an acute risk that these users will adopt a lexicon of AI commands even when talking to other humans. Rather than speaking with empathy, subtlety, and nuance, we'll be trained to speak with the cold precision of a programmer talking to a computer. The colorful aphorisms and anecdotes that give conversations their inherently human quality, but that often confound large language models, could begin to vanish from the human discourse.

[...] Of course, history is replete with people claiming that the digital sky is falling, bemoaning each new invention as the end of civilization as we know it. In the end, LLMs may be little more than the word processor of tomorrow, a handy innovation that makes things a little easier while leaving most of our lives untouched. Which path we take depends on how we train the chatbots of tomorrow, but it also depends on whether we invest in strengthening the bonds of civil society today.

Bruce Schneier and Albert Fox Cahn

The odds that there's a human being beta-testing [Elon] Musk's neural interface with the only brain they will ever have aren't zero. But I give it the same odds as the Raelians' claim to have cloned a human being.
Cory Doctorow


Security quotes of the week

Posted Feb 1, 2024 1:17 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

> The odds that there's a human being beta-testing [Elon] Musk's neural interface with the only brain they will ever have aren't zero.

This is fucking insensitive. Plenty of paralyzed humans would love to beta-test a device that could allow them to speak or to perform at least some activities on their own.

Security quotes of the week

Posted Feb 1, 2024 7:43 UTC (Thu) by smurf (subscriber, #17840) [Link] (1 responses)

I have to agree. Doctorow is throwing out the baby with the bath water here.

There's plenty to criticize all around, of course, and Musk is a relentless optimist (as well as being a newly-minted right-wing *censored*, but that minting has a reason which is, well, willfully ignored by everybody … different topic), but what does Mr. Doctorow expect here? Robots and brain interfaces and spaceships and whatnot don't spontaneously come into being without people trying things and, yes, sometimes failing.

Those Cruise remote drivers are transitional; their number (or at least the ratio of people to cars) will go down as the tech gets better. That's one of quite a few counterpoints I could make to that article.

Doctorow is about as firmly in the criticize-anything-AI-related-no-matter-what bubble as Musk is in his techno-optimism right-wing-free-speech antisemitism bubbles.

Security quotes of the week

Posted Feb 1, 2024 8:15 UTC (Thu) by Wol (subscriber, #4433) [Link]

The problem is we have far too many Optimistic Elons and Pessimistic Corys. Can you name any Skeptics? (As in "I'm quite happy to believe IFF you show me the evidence".)

I'm in the Skeptic category, but as far as I can tell, everybody's so busy ignoring the lessons of history that falling flat on our face is pretty much inevitable.

Cheers,
Wol

Security quotes of the week

Posted Feb 1, 2024 10:41 UTC (Thu) by excors (subscriber, #95769) [Link]

In particular, this trial was for people with "quadriplegia (limited function in all 4 limbs) due to spinal cord injury or amyotrophic lateral sclerosis (ALS)" (https://neuralink.com/pdfs/PRIME-Study-Brochure.pdf). Reportedly a very high proportion of spinal cord injury survivors are "glad to be alive" (https://www.theatlantic.com/health/archive/2013/12/reconc...), but there are occasional news reports of people seeking assisted dying because of their disability, so it doesn't seem hard to imagine a few volunteers who are willing to take substantial risks in the hopes of improving their quality of life and helping to improve the technology for others.

I've seen very little information about this trial that wasn't sourced from Neuralink or Musk, but it was independently reported last May that "The FDA acknowledged in a statement that the agency cleared Neuralink to use its brain implant and surgical robot for trials on patients" after they had addressed its previous objections (https://www.reuters.com/science/elon-musks-neuralink-gets...), so I think it's silly to claim Neuralink is fabricating the whole thing.

And other companies have already spent many years running clinical trials on brain-computer interfaces in people with paralysis and are now aiming to commercialise it (e.g. https://blackrockneurotech.com/insights/blackrock-neurote...) - what Neuralink is doing is far from unbelievable or unprecedented, except in its level of hype.

Security quotes of the week

Posted Feb 1, 2024 18:01 UTC (Thu) by Phantom_Hoover (subscriber, #167627) [Link]

‘Insensitive’ to Elon’s notoriously fragile ego, perhaps. Doctorow is completely right to cast aspersions on any company with Elon’s reprehensibly irresponsible and reckless approach to risk being allowed to test anything on human brains.

Security quotes of the week

Posted Feb 3, 2024 2:42 UTC (Sat) by roc (subscriber, #30627) [Link]

Doctorow is also insensitive about globalization. Apparently he thinks poor people in foreign countries are being forced to work in call centers when they'd rather be doing back-breaking subsistence agriculture.

Security quotes of the week

Posted Feb 1, 2024 3:18 UTC (Thu) by ejr (subscriber, #51652) [Link] (2 responses)

I have heard a few CEOs speak quite like that quote.

Security quotes of the week

Posted Feb 1, 2024 9:27 UTC (Thu) by anton (subscriber, #25547) [Link] (1 responses)

To me it sounds like a homework assignment (apart from the "my typical writing style").

I don't think that is accidental. Homework assignments require precision, because students usually do not know what is required (whereas a CEO's long-term assistant can usually fill in the missing parts, or ask for clarification), and they can be confused when the assignment is cushioned with additional verbiage (which part is the requirement, and which is verbiage?). Students also tend to be roughly as incompetent as LLMs currently are; that's why we train them (the students) with homework assignments (the LLMs don't learn that well when we tell them what we found wrong with their output).

Security quotes of the week

Posted Feb 1, 2024 9:37 UTC (Thu) by Funcan (guest, #44209) [Link]

There are strong arguments to be made that we train children like that so they learn to follow instructions without question and on a schedule, the better to fit Victorian-era factory workforce norms, not because doing so had any particular educational benefit.

Security quotes of the week

Posted Feb 3, 2024 19:23 UTC (Sat) by BeetleBug (guest, #142379) [Link]

Word processors actually ended up causing some harm. People started putting too much faith in spellcheckers knowing everything, and student vocabularies shrank because students figured that if a word got red squiggles under it, it must be wrong, so they should stop using it.

Security quotes of the week

Posted Feb 8, 2024 17:56 UTC (Thu) by calumapplepie (guest, #143655) [Link] (1 responses)

I'm... pretty cautious about Google's AI bug fixer. AIs are already notorious for false confidence, claiming to fix things that they haven't. I'm sure the system design runs test cases against each bug, and that the 15% figure is based on those cases; but if no human ever actually understands the bug, how can we be sure the fix isn't just papering over a deeper problem? A bug involving a null pointer dereference might be "fixed" by a null check, but if the pointer was never supposed to be null in the first place, the underlying defect remains and will bite again elsewhere.
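To make the worry concrete, here is a minimal hypothetical sketch (all names invented, not from any real Google pipeline) of how a mechanically added null check can silence a crash while leaving the actual defect in place:

```c
#include <stddef.h>

struct config { int retries; };

/* Hypothetical helper: returning NULL here always indicates an
 * upstream bug -- callers are never supposed to receive NULL. */
static struct config *lookup_config(int found, struct config *c) {
    return found ? c : NULL;
}

/* An automated "fix" that makes the crash disappear without
 * explaining why the pointer was NULL: the added guard papers
 * over the real defect and silently returns a wrong default. */
int get_retries(int found, struct config *c) {
    struct config *cfg = lookup_config(found, c);
    if (cfg == NULL)   /* crash gone, bug remains */
        return 0;
    return cfg->retries;
}
```

The program no longer segfaults, so a regression test "passes", but the caller now proceeds with a bogus default instead of surfacing the upstream bug.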

Security quotes of the week

Posted Feb 8, 2024 23:49 UTC (Thu) by Wol (subscriber, #4433) [Link]

And given all the kerfuffle about optimising compilers, what's the betting the AI screws up the null check, invokes undefined behaviour, and gets optimised into thin air :-)

Cheers,
Wol


Copyright © 2024, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds