Calibre adds AI "discussion" feature
Version 8.16.0 of the calibre ebook-management software, released on December 4, includes a "Discuss with AI" feature that can be used to query various AI/LLM services or local models about books, and ask for recommendations on what to read next. The feature has sparked discussion among human users of calibre as well, and more than a few are upset about the intrusion of AI into the software. After much pushback, it looks as though users will get the ability to hide the feature from calibre's user interface, but LLM-driven features are here to stay and more will likely be added over time.
Amir Tehrani proposed adding an LLM query feature directly to calibre in August 2025:
I have developed and tested a new feature that integrates Google's Gemini API (which can be abstracted to any compatible LLM) directly into the Calibre E-book Viewer. My aim is to empower users with in-context AI tools, removing the need to leave the reading environment. The results: capability of instant text summarization, clarification of complex topics, grammar correction, translation, and more, enhancing the reading and research experience.
Kovid Goyal, creator and maintainer of calibre, quickly voiced approval. He dismissed the idea that it might bother some calibre users and suggested that Tehrani submit a pull request for the feature. On August 10, Tehrani submitted the patches, and Goyal later merged them into mainline after refactoring the code. He provided a description of the additional LLM features he had in mind as well:
There are likely going to be new APIs added to all backends to support things like generating covers, finding what to read next, TTS [text-to-speech], grammar and style fixing in the editor and possibly metadata download.
Goyal did promise that calibre would "never ever use any third party service without explicit opt-in".
Discuss removing the feature
It did not take long after the Discuss feature was released for users to start asking for its removal. User "msr" on the Mobileread forum started a thread to ask if there was a way to block or hide all AI features:
I generally find the AI-push to be morally repugnant (among other things, I am an author whose work has been stolen for training) and I hate to see these features creep into software I use. I have zero interest in ever using so-called AI for anything.
Goyal replied that the features do nothing unless they are enabled. "The worst you get is a few menu entries. Simply ignore them."
Other users echoed the anti-AI sentiment. "Quoth" said they would not update calibre until the feature was scrapped. "It's a thin end of a wedge and encouraging people to use these over-hyped LLMs, even though off by default." Goyal replied that it is in calibre to stay:
It's not going to be scrapped, so good bye, I guess. You are more than welcome to not use AI if you don't want to. calibre very nicely makes that easy for you by having it off by default to the extent that the AI code is not even loaded unless you enable it. What you DO NOT get to do is try to make that choice for other people.
What's added so far
The feature is displayed in the calibre user interface by default; it shows up in the View menu as "Discuss selected books with AI". The naming is unfortunate on its own. Calling the process of sending queries to an LLM provider a discussion encourages people to anthropomorphize the tools and furthers the misconception that these tools "think" in the way that people do. Whatever value the responses may have, they do not reflect actual thought.
As Goyal pointed out, though, the Discuss feature does not work until an LLM provider is configured. If a user attempts to use it without doing so, calibre displays a dialog that directs the user to configure a provider first. Each provider is supplied as a separate plugin. Currently, calibre users have a choice of commercial providers, or running models locally using LM Studio or Ollama.
The Discuss feature shows up as a plugin as well. It is located in the calibre preferences in the "User interface action" category. However, it is a plugin that cannot be disabled or removed; nor can any of the other alleged plugins in that category. It seems fair to question whether something is actually a "plugin" if it cannot be unplugged. The separate provider plugins, in the "AI provider" category, can be disabled or removed, though. The provider plugins are enabled by default, but they do nothing until a user supplies credentials of some kind.
Users do not need to worry about accidentally enabling a feature that sends data off to a provider, because it is impossible to accidentally configure the plugins. For example, the GitHub AI provider requires an access token before it will work, and Google's AI provider needs an API key to function. Using a local provider requires the user to actually have LM Studio or Ollama set up, and then jump through a couple of hoops to enable them.
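For readers curious what the local route involves, a rough sketch follows. Note that the model name here is only an example, and the exact settings expected by calibre's Ollama provider plugin may differ between versions:

```shell
# Hypothetical local-provider setup with Ollama:
ollama pull llama3.2   # download a small model locally
ollama serve           # expose the local API (default: http://localhost:11434)
# calibre's Ollama provider plugin can then be pointed at that endpoint.
```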
Even if a user wants to query an LLM about a book, they may encounter problems. I tried setting calibre up to use GitHub AI, but even after appearing to have successfully configured it as a provider with the token, I had no luck: I could send queries, but received no reply. I was able to get calibre working with Ollama, though the experience was not particularly compelling.
Responses from GitHub AI or Ollama about books are of little interest to me; a model may have ingested a million or more books as it was trained, but it hasn't read a single one, nor had any life experience that could spark an insight or reaction. Thoughtful discussions of books with well-read people with real perspectives, on the other hand, would be delightful—but beyond calibre's capabilities to provide.
Hide AI
Despite dismissing complaints about the addition of AI, Goyal has grudgingly accepted a pull request to hide AI features. He said that anyone offended by a few menu entries is not worth worrying about, but "I don't particularly mind having a tweak just to hide the menu entries, but that is all it should do". He added that someone would need to supply patches to hide additional AI functionality in the future. "That someone isn't going to be me as I don't have the patience to waste my time catering to insanity."
A "remove slop" pull request from "Ember-ruby" that would have stripped out AI features from calibre was rejected without comment. The forked repository with those patches may be of interest, however, to anyone considering a fork of their own.
At least two forks have been announced so far; one seems to have gotten only as far as the name, clbre, "because the AI is stripped out". To date, the only work that has shown up in that repository is an update to the README. Xandra Granade announced rereading on December 9; that project is currently working on a fork called arcalibre, but its goals are limited to a snapshot of calibre "with all AI antifeatures removed" that can be used for future forks of calibre. No new features are planned for arcalibre.
The rereading draft charter suggests that the project will develop additional applications based on arcalibre. It is, of course, far too early to say whether the project will produce anything interesting in the long term. Any future forkers should note that the name "Excalibre" is right there for the taking.
Resistance seems futile
No doubt part of calibre's audience is pleased to see the feature, but it has proven an unwelcome addition for others. It is not surprising that those users have asked for it to be removed, or at least changed so that it can be hidden.
It has been a disappointing year overall for Linux and open-source enthusiasts who object to the seemingly relentless AI-ification of everything. It is fairly commonplace at this point to see companies shoving AI features into proprietary software whether the features actually make sense or not. However, an open-source project like calibre has no shareholders to please by ticking off the "AI inside" box, so few people would have had "adds AI" on their calibre bingo card for 2025.
An AI feature landing in calibre seems a fitting coda to the recurrent theme of AI and open source in 2025; whether users want to engage with AI or not, it is seemingly inescapable. One might wonder: if AI has come to calibre, a project with no commercial incentive to add it, is there any refuge to be had from it at all?
Bitwarden, which makes an open-source password manager and server, is now accepting AI-generated contributions, as is the KeePassXC password-manager project. Even projects like Fedora and the Linux kernel are accepting or leaning toward accepting LLM-assisted contributions; Mozilla is all-in on AI and pushing it into Firefox as well. This is not an exhaustive list of AI-friendly projects, of course; it would be exhausting to try to compile one at this point.
In most cases, though, users still have options without LLM features. When it comes to calibre, there is no alternative to turn to. Then again, there was no real alternative to calibre before it adopted "Discuss with AI", either. There are many open-source programs that handle reading ebooks; that is well-covered territory. Some, like Foliate, are arguably better than calibre at that task.
But there is no other ebook-management software (open source or otherwise) that has all of calibre's conversion features and support for exporting to such a wide variety of ebook readers. Evan Buss attempted a calibre alternative, called 22, in 2019. Buss threw in the towel after learning "ebook managers are much more difficult to get right than I had previously imagined", and maintaining compatibility with calibre "proved near impossible". Phil Denhoff started the Citadel project in late 2023. It looked like a promising calibre-compatible ebook-library manager, but its last release was in October 2024. Denhoff continues to make commits to the repository, though, so one might still hold out hope for the project.
While the lack of alternatives is frustrating for some, it is not Goyal's fault. The fact that the open-source community, to date, has not produced anything else that can fill in for calibre is not his problem. It is not his responsibility to take the program in any particular direction, nor is he obliged to entertain user complaints. Whether users love or loathe seeing calibre adding LLM features, it's up to its maintainer to decide what gets in and what doesn't.
For now, AI-objectors on Linux have a few options: live with the lurking LLM features, or stick with calibre versions prior to 8.16.0. Goyal has made it easy to revert to an older version; the download.calibre-ebook.com site seems to have all prior releases of calibre going back to its pre-1.0 days. The Download for Linux page has instructions on reverting to previous versions, too. Those who get calibre from their Linux distribution may be LLM-free for some time without taking any action. Debian 13 ("trixie") users, for example, should be on the 8.5.0 branch for the remainder of the release's lifetime. Fedora 42 is still on the 8.0 branch, and Fedora 43 is on 8.14. Rawhide has 8.16.2, though, so users are likely to get the Discuss feature in Fedora 44.
The strong reaction against calibre's Discuss feature may seem more emotional than logical. It is also understandable. Books are a human endeavor, even those in electronic format. AI models have often been trained by plundering a corpus of books, without respect for authors' wishes or copyright. Suggesting that readers now turn to technologies that seek to replace humans in order to supplement their reading experience is, for some at least, deeply offensive. It is a little puzzling that Goyal, who has catered to a large audience of book lovers for nearly 20 years, seems not to understand that.
Posted Dec 15, 2025 19:57 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (7 responses)
As an AI hater myself, I agree with that sentiment, and I'm delighted to see that they've limited it to a couple of menu entries that are easily ignored.
What I HATE is tools that push AI at you, the pop-ups are there every time you open the tool, etc etc. I'm an ANALYST for heavens sake, my job is to DIG INTO THE DETAIL, why on earth do I need Google Sheets taking every opportunity to say "do you want a summary of what you're looking at?".
I know people have said I'm using a crap AI, but why do "they" think pushing it at me (with offers I find a *nuisance*!) is going to change my opinion?
Cheers,
Posted Dec 16, 2025 1:37 UTC (Tue)
by gerdesj (subscriber, #5446)
[Link] (6 responses)
I doubt you actually are an AI hater per se. I suspect we could agree that AI is the wrong term for an LLM, for starters. I might also insinuate that you are a fan of appropriate use of technology. So, once we dispense with the hyperbole and get down to brass tacks, what have we got?
You mention Google Sheets. I would dump that if you don't want other things thrown at you. Either go self hosted: Libre Office, Excel or whatever. Excel will wander off for a fag behind the bike sheds with CoPilot, I'm afraid.
We can whine and whinge about change or embrace it or work around it or whatever. I had you pegged as an engineer so please behave like one!
I think (let's dump the AI moniker for starters) that LLMs are a tool.
A panel saw will rip your thumb off if you leave it across the pencil line you carefully scribed. You allow for the thickness of the saw and your perception and even the thickness of your pencil - craftsmanship is hard!
Remember the first time you learned how to spell BOOBS on a calculator?
Remember stirring your tea with the slide on a slide rule? Oh and thinking how cool your mental maths is, which it was.
An LLM is not intelligent but I think you do need to get to grips with them. You can spin up a small model on a 16GB Nvidia and scare yourself silly with what comes across as an encyclopedia with fairly decent thinking powers.
Tools mate, for the use of.
Posted Dec 16, 2025 10:23 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (5 responses)
Work is dumping Excel for Sheets. Good decision, bad decision, I don't know. Using a spreadsheet when you need a database is certainly a dumb decision ...
From choice (aka my home laptop) I run linux. Work, family support I don't get that choice :-( And I'm afraid I'm a great believer in the power of a good vent. I'm turning into a grumpy old man with a disabled wife, and it keeps me sane :-) I hate people who don't listen, and AI/LLM ime fall firmly into that category. At the moment, I can't see any need for me to use that technology, which is why I find the constant pushing extremely annoying (and why I'm so pleased to see Calibre being sensible, it's a great breath of fresh air!)
I'm not QUITE old enough to have used a slide rule in anger, but I remember playing with my Dad's :-)
And no, actually, Engineer is never a word I would have used to describe myself with :-) Natural Philosopher, Polymath, Chemist, sort-of-medic maybe. At work it's always been some sort of Analyst Programmer, except now it's Business Analyst (where my programming skills are heavily used).
Cheers,
Posted Dec 21, 2025 17:44 UTC (Sun)
by nelzas (subscriber, #4427)
[Link]
You wrote "people who don't listen, and AI/LLM ime fall firmly into that category."
:-)
Posted Dec 25, 2025 10:40 UTC (Thu)
by davidgerard (guest, #100304)
[Link] (3 responses)
What they both do superlatively is office sharing and collaboration.
Also, you know that the number one use of Sheets is as a one-table flat file database.
At BMJ we had Google Apps everywhere and MS Office for the specific people who had a need for a proper spreadsheet.
Posted Dec 26, 2025 16:21 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (2 responses)
I think you'll find they didn't want Excel. What they wanted was a proper non-flat-file non-2-D database. :-) :-)
Which is why I'm desperate to ditch Excel, and just chuck json or similar into Sheets or Looker or some other stuff for reporting.
My big problem with the tools I have available, is how on earth you run a production system using reporting tools, and what are the best tools for live updates on the fly, when 99% of my colleagues don't even understand the problem (because they just pull data for analysis, they don't understand "hey the warehouse just broke what do we need to do to fix it ... ???"). Last year we had an absolute classic - we needed to fix Christmas and there was absolutely *nothing* we could do - headlines all over the press (we weren't the only ones :-) "Supermarket cancelled my Christmas Dinner" ...
Cheers,
Posted Dec 26, 2025 17:31 UTC (Fri)
by amacater (subscriber, #790)
[Link] (1 responses)
Posted Dec 28, 2025 12:50 UTC (Sun)
by Wol (subscriber, #4433)
[Link]
The epitome of modern databases - too much data, not enough information :-)
Cheers,
Posted Dec 15, 2025 20:29 UTC (Mon)
by dvdeug (subscriber, #10998)
[Link] (16 responses)
People have been "conversing" and even "discussing" with Eliza for 50 years now. Besides which, what do you call a process where you engage in a series of exchanges, sending a message and receiving a reply on that topic in context? The tools don't "think" the way that we do, but trying to draw this line around "discussion" seems to be making an arbitrary division.
Posted Dec 15, 2025 20:55 UTC (Mon)
by dskoll (subscriber, #1630)
[Link] (2 responses)
> People have been "conversing" and even "discussing" with Eliza for 50 years now
Really? My experiences with Eliza were that after about 60 seconds, the novelty wore off and it was transparently obvious that you were not really conversing with anyone or anything.
ChatGPT, last time I used it, was certainly more realistic, though also obviously non-human and in fact quite irritating to interact with. It came across as a slippery politician, unwilling to give a straight answer to difficult questions.
Posted Dec 15, 2025 21:23 UTC (Mon)
by dvdeug (subscriber, #10998)
[Link] (1 responses)
Posted Dec 16, 2025 2:12 UTC (Tue)
by interalia (subscriber, #26615)
[Link]
I don't think LLMs can do that (feel free to correct me), so it seems much closer to a query-response model with natural language queries. It seems a better description with more apt connotations about the experience.
Posted Dec 15, 2025 21:43 UTC (Mon)
by jzb (editor, #7867)
[Link] (12 responses)
I'm pretty sure you could find complaints about use of the term "conversing" with ELIZA as well, going back almost as far as ELIZA. I'm not the first person to be pedantic about language, after all. :-) But I also suspect it seemed less important to draw a distinction there—ELIZA was never being adopted or pushed the way that LLM-driven tools are. There was little danger that people would take it that seriously, or that it could go deep enough into a topic that users would mistake it for real intelligence on any scale.

I'd describe it as querying, just as one would query a database. Granted, the syntax is much more approachable than (say) SQL, and what is returned is not data from a table but the result of a model, but I'd suggest that "querying" is more accurate than "discussing" with something that has no consciousness.

Perhaps it seems arbitrary, but the words we use for things shape the way we think about them. So I'd suggest that we not use terms that anthropomorphize the technology or over-estimate its ability. But that's all my viewpoint: it's not like I can stop anybody from calling it "discussion"; I can only make the appeal/argument and hope it gets some traction.
Posted Dec 16, 2025 3:51 UTC (Tue)
by dvdeug (subscriber, #10998)
[Link] (11 responses)
If I run

SELECT title FROM movies WHERE director = "Rob Reiner";

and notice that a certain movie isn't there, I can't nudge SQL. I can nudge ChatGPT, and it might say "Yes, I did miss Alex & Emma as a movie he directed." or "Rob Reiner was an actor in Throw Momma from the Train, but it was directed by Danny DeVito." or "While Some Kind of Wonderful does have similarities to some movies directed by Rob Reiner, like The Sure Thing, it was directed by Howard Deutch."
> So I'd suggest that we not use terms that anthropomorphize the technology or over-estimate its ability.
Why is query more acceptable than discuss? A query and response was, pre-computer, exclusively a human activity, and even now when we have a complex question, we ask a person or an LLM. The problem as I see is that what LLMs do was exclusively the province of humans before, so we're going to be using human terms for it.
I get the feeling that sometimes the desire to not anthropomorphize them is instead a desire to downplay their power or significance. It's like when Deep Blue beat Kasparov at chess; LLMs are a movement into an area that humans thought was theirs. Are they human? No. Are they able to write a competent essay on basically any subject under the sun? Yes. Can they answer general questions better than just about any one human? Yes.
Their problems are not really discussion; their big problem is confabulation. If you think in terms of a query like SQL or Prolog, you're going to get misled, and you won't make best use of it. If you think in terms of a discussion, where you have to respond and push back against certain things to get the best results, LLMs work better.
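The fixed nature of a database query, as opposed to a nudgeable exchange, can be sketched with a toy table (Python's sqlite3; the rows here are invented for illustration and are not a complete filmography):

```python
import sqlite3

# Toy movie table (illustrative rows only).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE movies (title TEXT, director TEXT)")
con.executemany(
    "INSERT INTO movies VALUES (?, ?)",
    [
        ("The Princess Bride", "Rob Reiner"),
        ("This Is Spinal Tap", "Rob Reiner"),
        ("Throw Momma from the Train", "Danny DeVito"),
    ],
)

# The query returns exactly what the table holds; a missing title is
# simply absent, and no follow-up prompt can coax SQL into revising it.
rows = [title for (title,) in con.execute(
    "SELECT title FROM movies WHERE director = ?", ("Rob Reiner",))]
print(rows)  # ['The Princess Bride', 'This Is Spinal Tap']
```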
Posted Dec 16, 2025 9:03 UTC (Tue)
by taladar (subscriber, #68407)
[Link]
Posted Dec 16, 2025 12:38 UTC (Tue)
by ptime (subscriber, #168171)
[Link] (9 responses)
Posted Dec 17, 2025 4:27 UTC (Wed)
by dvdeug (subscriber, #10998)
[Link] (8 responses)
Posted Dec 17, 2025 8:37 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (1 responses)
Posted Dec 17, 2025 11:32 UTC (Wed)
by dvdeug (subscriber, #10998)
[Link]
For one, LLMs have been advancing quickly; a lot of my experience is with ChatGPT 5.x over the past few weeks.
For another, a lot of the coding claims made for them are simple nonsense, but using ChatGPT 5.x, I've had success writing small functions and programs to transform data formats. Other people have commended them for quickly writing test code at the function and module level. For non-coding applications, a Debian upgrade told me vdpau was disabled in MPlayer, so I had it explain what vdpau was and how it might affect my hardware, and I ended up figuring out how to turn on hardware-accelerated video on my system. No, it wasn't perfect on all the details of the output of mpv, but it stepped me through enough of it.
I've had problems with it; I spent some time trying to find a German Christmas poem that it apparently made up, for example. But while I've had to pick out a few errors, it's been useful enough that I'd go back to it.
Posted Dec 17, 2025 13:37 UTC (Wed)
by ptime (subscriber, #168171)
[Link] (5 responses)
Posted Dec 17, 2025 14:43 UTC (Wed)
by dvdeug (subscriber, #10998)
[Link] (4 responses)
In this case, why ask the question of what Rob Reiner directed if you're just going to ignore the result? You can force it to tell you that Rob Reiner directed Titanic, but that won't make it true.
Posted Dec 18, 2025 1:15 UTC (Thu)
by ptime (subscriber, #168171)
[Link] (2 responses)
Posted Dec 18, 2025 9:53 UTC (Thu)
by dvdeug (subscriber, #10998)
[Link] (1 responses)
That's unusual. Much of the early development on computers was to produce answers that humans would have difficulty producing, sometimes even being impossible to produce, and that continues to be a driving force today, things like huge physics simulations or summing every sale a store did last year and producing the correct total. It seems like a statement that advances an argument more than it reflects reality.
> I wouldn’t ask the question if I already know the answer, which is presupposed by your “but you can argue with an LLM until it maybe agrees with you” scenario.
I said
It's literally about not knowing an answer. If you have a good database, then yes, look it up. It's not the greatest of examples.
I use questions like this all the time with humans; I have a different understanding or impression, and I'd like to know if I'm wrong or if you can enlighten me about aspects I might not have thought about. An LLM can provide answers as to why you're right or wrong, about why you may have thought Some Kind of Wonderful was directed by Rob Reiner.
Posted Dec 18, 2025 10:27 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
That's the difference between "trivial" and "easy", as evidenced when they announced the proof of Fermat's Last Theorem, and called it "trivial". As far as I can tell (not having a clue how to understand the maths) it was.
Easy is adding up a column of ten numbers. Trivial is adding up a column of ten thousand numbers. "Trivial" is easy to specify and easy to do. The fact that "to err is human" and trivial problems are usually so large as to invite that, isn't the computer's problem :-)
Cheers,
Posted Dec 18, 2025 10:10 UTC (Thu)
by anselm (subscriber, #2796)
[Link]
Which is just too bad. It might have been a much more entertaining movie …
Posted Dec 15, 2025 22:47 UTC (Mon)
by hailfinger (subscriber, #76962)
[Link] (5 responses)
> I don't have the patience to waste my time catering to insanity.
Wow. This is offensive both to people suffering from mental illness and to people wanting to remove the AI features.
Posted Dec 15, 2025 23:11 UTC (Mon)
by dsommers (subscriber, #55274)
[Link]
I wish there were better alternatives to calibre; I use it to handle format conversions, DeDRM, and managing my ebook readers. Calibre does that okay, but the user interface and user experience in the application are mediocre by today's standards.
The attitude Goyal has shown here makes the user experience even worse.
Posted Dec 16, 2025 9:22 UTC (Tue)
by epilys (subscriber, #153643)
[Link]
- Kovid Goyal
Posted Dec 18, 2025 3:04 UTC (Thu)
by milesrout (subscriber, #126894)
[Link]
Posted Dec 23, 2025 15:38 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (1 responses)
> Wow. This is offensive both to people suffering from mental illness and to people wanting to remove the AI features.
He didn't say healthcare is a waste of time; this was not offensive to "both". He said that healthcare _performed by the Calibre maintainer in a bug tracker_ is a waste of time. I think we can easily agree that a bug tracker is not a good place for healthcare. So this was (very) offensive only to the person he was answering. That's plenty enough.
Sorry for picking up on this particular example but I'm tired of this sort of failed attempts to over-analyze every minute detail of what "bad" people do or say. This totally backfires and is really one of the ways the left keeps losing elections all across the world. In just 5 minutes searching I found plenty enough, actually and obviously offensive things written by that maintainer. Keep it short, quote only the worst stuff and stop there. Don't share your personal interpretation and don't drown it all in irrelevant details. Don't divine what you want to find in every word of his and don't help make him a victim/hero harassed by people obsessed by everything he says. You may also want to be more selective with social and news media because sadly most of it has become exactly that - on _all_ sides.
Less is more.
> Now I know what Kovid Goyal thinks of other people.
Without knowing him and after just over-interpreting a few sentences he wrote without thinking when he was upset? Impressive psychology skills :-(
Posted Dec 23, 2025 16:53 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
The plague of professional offendees :-(
There's enough to be seriously offended by, without making stuff up. And, given the clash of languages, as you know it really doesn't help when speakers of one language get seriously offended by foreigners who are only using their own language in a totally non-offensive way ...
Hint - American and English are two separate languages, and NEITHER are the British National Language.
Cheers,
Posted Dec 15, 2025 23:22 UTC (Mon)
by geoffhill (subscriber, #92577)
[Link] (30 responses)
On a curiosity note: I wonder how many of us there are on the other side, who would love to plug a Gemini API key into the GNOME desktop (or run a local LLM) and have all of our GTK apps get superpowers. It can't just be a tiny minority of us? Open source and local models seem like an especially great match.
Posted Dec 15, 2025 23:53 UTC (Mon)
by dskoll (subscriber, #1630)
[Link] (22 responses)
> ... strong ethical quandaries and convictions on both sides ...

What ethical quandary does choosing not to use AI put one in?
Posted Dec 16, 2025 0:00 UTC (Tue)
by geoffhill (subscriber, #92577)
[Link]
The ethical issues are perpetuated by one side, or zero sides, depending on who you ask.
Posted Dec 16, 2025 12:58 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (20 responses)
This one has parallels to a pre-existing ethical quandary in medicine, where there's a number of things known about human physiology as a result of war crime experiments on humans; do you use that knowledge for good, or forsake that knowledge because of how it was discovered?
And there's branches off this one - if you refuse to use AI yourself, would you still use something created with AI? What if that thing is a cure for a terminal illness, rather than an entertainment product - does that change your position?
But, it's a relatively weak quandary - it doesn't rise to the level of "is the way training data is gathered today OK?", let alone any of the stronger quandaries on the "choosing to use AI" side.
Posted Dec 16, 2025 15:20 UTC (Tue)
by dskoll (subscriber, #1630)
[Link] (15 responses)
> what if, by refusing to use AI, I prevent someone from discovering something of great value that would have been discovered with AI assistance?
Sure, but much more likely IMO is: "What if, by relying on AI, I stunt my learning process so that some great discovery I might have made on my own never happens?"
> What if that thing is a cure for a terminal illness, rather than an entertainment product - does that change your position?
Yes, absolutely. I am not dead-set against AI. Machine-vision tools to suss out manufacturing defects are great. So is anything that can come up with novel drugs or medical advances. What I am against are GPTs that are being touted as being able to replace humans or act as agents. Mostly, I am appalled by the business model and level of hype surrounding GPTs and the fact that if they ever become financially viable (very doubtful at this point) only a few oligarchs will benefit to the detriment of everyone else.
Posted Dec 16, 2025 15:35 UTC (Tue)
by jzb (editor, #7867)
[Link] (12 responses)
> I am not dead-set against AI.

Nor am I; there are definitely applications for some of the technologies lumped in under the AI umbrella. Machine learning tools have their place. My complaints are largely with generative AI tools and some of the practices that surround them (e.g., rampant scraping, pushing genAI at people whether they want to engage with it or not).

I don't, for example, care for AI-generated images or writing. I dabbled with AI tools early on for image generation, but have decided that I prefer art by humans. (Half the fun, for me anyway, was when the genAI tools were producing truly deranged output that was clearly computer-generated, and it was entertaining to see how they got things wrong. Ask for a unicorn and get a five-legged, three-eyed monstrosity that could cause nightmares...)

It's unfortunate that the distinction often gets lost. If machine-learning tools can, for example, do better than people at detecting early stages of cancer in medical scans/tests, I'm all for that. (More probable, I think, is that talented medical professionals plus machine learning may do better.)
Posted Dec 16, 2025 15:52 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
There was the GP who wrote his own system - in Prolog on something like an Apple II iirc - and he said patients loved interacting with it - they found it easy to be honest and didn't feel threatened. Then when they got to the actual appointment, he had a computer generated diagnosis that would come up with (maybe several) possibilities, and the likelihood of each.
The most important thing from the doctor's PoV, was it made it much harder for him to miss a vital question that could have led the diagnosis down a very different path to a far better result.
Imho this is a major loss in the modern computer world - we have turned programming into a completely different job, considered too hard for your typical worker (of all sorts, including supposedly clever people like doctors), with the result that the people who understand the problem don't understand programming and the programmers don't understand the problem. So we end up with computer systems that cost oodles of money and don't work. The current AI madness is just rushing even further down that path!
Cheers,
Posted Dec 16, 2025 19:31 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
There was nothing even remotely similar to that before. Before AI, the best translation tools were only good enough to serve as a laughingstock.
Posted Dec 16, 2025 21:29 UTC (Tue)
by malmedal (subscriber, #56172)
[Link] (9 responses)
If you want machine learning, you want the most learned machine, yes? Today, that is the LLMs.
My use for these things is stuff like classifying photos by content, grouping similar ones together, and renaming PDFs with names like d.pdf, d (1).pdf, d (2).pdf into something meaningful and moving them into the correct place in the filesystem. This is quite easy to do with a locally running multimodal LLM.
It is not perfect; for instance, my internet bills get randomly placed in directories with names like bills, utility bills, telecom bill, etc. However, the result is useful: I can easily find the thing I want.
My pre-LLM attempts at this with tesseract, SIFT, and trusting PDF metadata never worked well and were much harder to develop.
Special purpose neural nets for e.g. image classification do exist, but in my experience the general llms perform better and I can use the same llm for all tasks.
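The workflow described above can be sketched in a few lines. Everything model-specific is hidden behind a hypothetical `suggest_title` callable (in practice it would send the document's text or a rendered page to a local multimodal model such as llava or gemma3n); the rest is plain filename hygiene:

```python
import re
from pathlib import Path

def slugify(title: str) -> str:
    """Turn a model-suggested title into a safe, readable filename stem."""
    title = title.lower().strip()
    title = re.sub(r"[^a-z0-9]+", "-", title)   # collapse punctuation/spaces
    return title.strip("-")[:60] or "untitled"

def rename_pdf(path: Path, suggest_title) -> Path:
    """Ask the injected `suggest_title` callable for a descriptive title,
    then rename `path` to match, avoiding collisions with existing files."""
    title = suggest_title(path)
    target = path.with_name(slugify(title) + ".pdf")
    counter = 1
    while target.exists() and target != path:
        target = path.with_name(f"{slugify(title)}-{counter}.pdf")
        counter += 1
    return path.rename(target)
```

Keeping the model behind a callable means the renaming logic is testable without any model running, and the same code works whether the suggestion comes from llava, gemma3n, or anything else.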
Posted Dec 17, 2025 8:46 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (2 responses)
Not necessarily. For many use cases 99.9999% of what the LLM has learned might be detrimental or useless.
Many tasks might also benefit from learning during the phase that is done purely by inference in LLMs (e.g. spam classification certainly won't be done well by something that starts every session with the knowledge of 1 year ago).
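The point about learning during deployment can be illustrated with a toy incremental classifier: a minimal sketch, not a production spam filter, of a model that updates on every labelled message instead of freezing its knowledge at training time:

```python
import math
from collections import defaultdict

class OnlineSpamFilter:
    """Tiny incremental Naive Bayes: each labelled message updates the
    model immediately, so new spam patterns are picked up right away."""
    def __init__(self):
        self.counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.totals = {"spam": 0, "ham": 0}

    def learn(self, text: str, label: str) -> None:
        for word in text.lower().split():
            self.counts[label][word] += 1
        self.totals[label] += 1

    def score(self, text: str) -> float:
        """Log-odds of spam; positive means 'probably spam'."""
        logodds = math.log((self.totals["spam"] + 1) / (self.totals["ham"] + 1))
        for word in text.lower().split():
            spam = self.counts["spam"][word] + 1   # Laplace smoothing
            ham = self.counts["ham"][word] + 1
            logodds += math.log(spam / ham)
        return logodds
```

A single `learn()` call is enough for a brand-new spam phrase to start scoring positive, which is exactly what a model with frozen, year-old weights cannot do on its own.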
Posted Dec 17, 2025 9:53 UTC (Wed)
by malmedal (subscriber, #56172)
[Link] (1 responses)
That's what I expected myself...
Posted Dec 17, 2025 13:04 UTC (Wed)
by Wol (subscriber, #4433)
[Link]
> That's what I expected myself...
And (anecdata) that's what I've regularly heard. People programming in DataBASIC usually get complete garbage back unless they've specifically trained the AI on their own code corpus.
While DataBASIC may be closely related to BASIC grammar-wise, the vocabulary is worlds apart. Probably similar to the Finno-Ugric languages, where it's fairly easy to learn one if you know another, but the vocabularies bear no resemblance whatsoever to each other.
Cheers,
Posted Dec 17, 2025 9:21 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (5 responses)
LLMs are the current best option for natural language tasks; if the thing you are doing is not language based, then other models are likely better (even if that model is itself based on a generative pre-trained transformer architecture).
And it sounds like your task is heavy on natural language requirements - "generate meaningful names", "identify which name this document is connected to" - which will be why LLMs work well for that task.
Posted Dec 17, 2025 10:26 UTC (Wed)
by malmedal (subscriber, #56172)
[Link] (4 responses)
I easily have enough resources to run a small language model. Originally I used llava, later gemma3n; these can be run on modest hardware, including even my phone.
I do not have enough resources to develop multiple custom task-specific models, especially not the development time, since these are hobby projects. And I found that the general LLMs are better than using a ready-made image-classification model, and better than an OCR model.
Posted Dec 17, 2025 10:39 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (3 responses)
That second one is a market distortion, caused by the assertion that LLMs "just" need a bit more training and a bit more compute to become human-level intelligences, and thus by the huge amounts of VC money focused on LLMs. In time, that will change, because other models are going to be cheaper to run than LLMs for the purposes they're better at.
Posted Dec 17, 2025 11:19 UTC (Wed)
by malmedal (subscriber, #56172)
[Link] (2 responses)
That's exactly my point: LLMs are where all the effort has gone, so what I'm saying is that if you want to do almost anything with AI today, the easiest way is with an LLM.
Posted Dec 17, 2025 11:36 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (1 responses)
Posted Dec 17, 2025 12:35 UTC (Wed)
by malmedal (subscriber, #56172)
[Link]
Posted Dec 16, 2025 15:48 UTC (Tue)
by pizza (subscriber, #46)
[Link]
I have been observing this "stunted learning" phenomenon on at least a weekly basis for a while now.
These tools are great if you already know what you're doing; if you don't, using them ensures you never will.
Posted Dec 16, 2025 16:03 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
But I've already seen people get upset at the idea that a dead-end job (a human supervising chemical-simulation software, and answering it when its heuristics don't tell it what to do next) is being put at risk by a GPT model that has ingested a pile of academic papers about interesting molecules and is answering what to do next. The objection is that using a GPT to replace a human is inherently bad, even though it's replacing a human in a dead-end job (one that is given to people as a hint that they should find a new career path).
Posted Dec 17, 2025 8:40 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (3 responses)
Posted Dec 17, 2025 9:53 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (2 responses)
The significant gap between the two is that Roko's Basilisk and Pascal's Wager are both talking about future penalties for failure to comply; this class of quandary is asserting that the penalties have already been paid, and asking whether or not those penalties are such that this knowledge is tainted and must not be used.
Posted Dec 17, 2025 13:00 UTC (Wed)
by pizza (subscriber, #46)
[Link] (1 responses)
I wouldn't be so sure of that; the FOMO within corporate leadership that's led to force-feeding "AI" into everything has resulted in metrics phrased as the latter, but they're really the former.
Posted Dec 17, 2025 14:01 UTC (Wed)
by farnz (subscriber, #17727)
[Link]
People pushing AI into places it doesn't fit in your job is also a problem, but is a different category of problem.
Posted Dec 16, 2025 0:21 UTC (Tue)
by karath (subscriber, #19025)
[Link]
On the main topic of the article, I'd support adding plugins for AI while wholly agreeing that the software should give users the ability to disable or even remove the functionality.
My view of the current 'marketplace of ideas' is that the current push for AI (in the form of LLMs) has widespread backing among 'knowledge workers', even where they know that there will be some fallout. LLMs are showing emergent behaviour that is more than mere database querying, however far less than becoming sentient. I suspect that in several years' time, the topic of LLMs will be mined out and the new cry will be that AI is a dead end (again).
Posted Dec 16, 2025 6:46 UTC (Tue)
by dankamongmen (subscriber, #35141)
[Link] (3 responses)
i dislike everything about the move to statistical models, yet agree with this sentiment. this article evidenced a degree of bias i'm surprised to find on LWN.
Posted Dec 16, 2025 10:30 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
> i dislike everything about the move to statistical models, yet agree with this sentiment. this article evidenced a degree of bias i'm surprised to find on LWN.
If that bias was Open. Unabashed. And Obvious that makes it PERFECT for LWN.
NEUTRALITY IS IMPOSSIBLE and those people who claim to be capable of it are liars. That also includes organisations!
If you wear your biases and opinions on your sleeve - and you are careful to separate them from facts !!! - that makes your opinions MUCH more valuable than someone claiming to be neutral. Your readers / hearers then have the opportunity to make up their own minds, rather than having you pretend to be omniscient.
Cheers,
Posted Dec 17, 2025 3:08 UTC (Wed)
by dvdeug (subscriber, #10998)
[Link] (1 responses)
When I took a Discrete Optimization class, I found myself disliking the randomized tools, like simulated annealing, where it's hard to say how good the result is and impossible to say if it's the optimal result. I also found that if you wanted good answers fast, you used the randomized tools; purely predictable tools could get an okay answer fast or the optimal answer but sometimes literally after the sun burned out.
There are some cases where pure logic isn't the way to progress, even in the world of algorithms, and statistics and randomness are the right tools.
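The trade-off described above can be shown with a toy annealer. This is an illustrative sketch, not a serious solver, with a made-up bumpy objective: it accepts occasional uphill moves while hot, cools down, and returns a good (but not provably optimal) answer quickly:

```python
import math
import random

def energy(x: int) -> float:
    """A bumpy objective: a slope toward x ~ 731, with sinusoidal
    ripples creating many local minima along the way."""
    return abs(x - 731) + 10 * math.sin(x / 7.0) ** 2

def anneal(lo: int, hi: int, steps: int = 20000, seed: int = 0) -> int:
    """Simulated annealing over the integers [lo, hi]: accept worse moves
    with probability exp(-delta/T), cooling T geometrically. There is no
    optimality guarantee; the payoff is a good answer fast."""
    rng = random.Random(seed)
    x = rng.randint(lo, hi)
    best = x
    temp = 100.0
    for _ in range(steps):
        nxt = min(hi, max(lo, x + rng.randint(-50, 50)))
        delta = energy(nxt) - energy(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = nxt                       # sometimes a worse point, by design
        if energy(x) < energy(best):
            best = x                      # but always remember the best seen
        temp = max(0.01, temp * 0.999)
    return best
```

An exhaustive scan of this toy range is trivially fast, of course; the point is that the annealer's cost stays modest as the search space grows, while the exhaustive option is the one that can outlast the sun.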
Posted Dec 17, 2025 8:54 UTC (Wed)
by taladar (subscriber, #68407)
[Link]
Posted Dec 16, 2025 18:48 UTC (Tue)
by q3cpma (subscriber, #120859)
[Link]
And I say that as someone who really hates LLMs and can't wait for the bubble to pop or significantly deflate (though I'd like something based on them to replace Grammarly/languagetool).
Posted Dec 18, 2025 23:47 UTC (Thu)
by jschrod (subscriber, #1646)
[Link]
That said, I didn't find the article biased. Both viewpoints were well represented.
Since you want that "optimistic users shall be considered", c.f. your comment title - might it be that the bias is more on your side than on the author's side?
Posted Dec 16, 2025 0:10 UTC (Tue)
by pmallory (subscriber, #122252)
[Link] (5 responses)
I was curious why this would appeal to anyone, so I took a closer look at the discussion thread to try and find out. Some people want help parsing dense writing (Joyce, Kant) written in languages the reader isn't confident with. I'd recommend these readers get annotated texts instead, but it's not my place to tell them not to use an LLM for this. Someone else wants to use an LLM to populate tags based on a book's/article's table of contents. I'm not sure if an LLM is the best solution, but again it's not my place to tell someone they shouldn't want to do that.
I guess ethical objections still remain. I'm not swayed there, either. People can already use Calibre to manage books that they didn't pay for. Is it Calibre's job to prevent people from also using LLMs trained on books that weren't paid for? Should Calibre also remove its RSS feed support, and every other feature, in case people use those unethically?
The other complaints on Calibre's forum seem to actually be complaints about Microsoft Windows and Excel, and various histrionics that also don't have anything to do with the actual feature.
So the opposition to this feature, as far as I can tell, is "I don't want it and I don't think anyone else should either". The first part is fine by me and I agree, but the second part I can't get behind.
Posted Dec 16, 2025 5:20 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted Dec 16, 2025 5:37 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Dec 20, 2025 14:03 UTC (Sat)
by ras (subscriber, #33059)
[Link]
It takes longer to type your words in, but since something is going to try to understand those words, you are forced to connect your thoughts into a logical narrative. Merely being forced to do that is often enough to untangle them. But if not, you can press the send button and see if you made sense to something with language skills far better than most humans'.
If what you wrote does make some sense to the LLM, you get the bonus round: it may unreliably parrot back to you the words of someone who has solved the problem before. After all, the LLMs don't just have incredible language skills; they have also read far more widely than any of us. I think their ability to find and regurgitate what people far more knowledgeable than me on a topic had to say is an awesome addition to search engines.
It's a shame about the unreliability. But you can't complain about the price: LLMs that have been fed the entire internet, and have had man-centuries of training, are free. For now. I suspect for a short time only.
Posted Dec 25, 2025 10:44 UTC (Thu)
by davidgerard (guest, #100304)
[Link] (1 responses)
Posted Dec 26, 2025 14:48 UTC (Fri)
by Phantom_Hoover (subscriber, #167627)
[Link]
Posted Dec 18, 2025 3:09 UTC (Thu)
by milesrout (subscriber, #126894)
[Link]
I don't see myself using this feature. I also don't see myself yelling in all caps about it or refusing to download the next release.
These people need to bloody grow up.
Posted Dec 20, 2025 14:25 UTC (Sat)
by Phantom_Hoover (subscriber, #167627)
[Link] (11 responses)
Have you considered the possibility that there are significant numbers of people out there who genuinely find LLMs a useful tool and want the option to use them in Calibre? I’m very much an AI sceptic myself and I don’t use LLMs at all, but so much of the current backlash has ceased to be sceptical, it’s dogmatic insistence that the entire thing is bullshit hype. There’s a core of novel technological capabilities there that isn’t going anywhere and has *some* genuine applications, and it’s foolish and petty to try to bully ebook applications out of quietly exploring them.
I was frankly disappointed in this article as a piece of LWN coverage. LWN usually does a good job of reporting evenhandedly on controversies; I’m not sure I’ve seen a piece here where the author tips his hand so obviously in taking a side before.
Posted Dec 20, 2025 16:05 UTC (Sat)
by dskoll (subscriber, #1630)
[Link] (7 responses)
Even if the LWN article was slanted against AI, I'm fine with that as a counter-balance, considering what we're up against.
Posted Dec 20, 2025 18:29 UTC (Sat)
by intelfx (subscriber, #130118)
[Link] (6 responses)
Two wrongs don't make a right; in discourse more than anywhere.
Posted Dec 20, 2025 18:33 UTC (Sat)
by dskoll (subscriber, #1630)
[Link] (5 responses)
An LWN piece written by someone with a point of view is not a "wrong" in the same way that oligarchs spending $100M to target politicians whose opinions on AI they don't like is a wrong. Come on, be real.
Posted Dec 20, 2025 19:11 UTC (Sat)
by intelfx (subscriber, #130118)
[Link] (4 responses)
This is sophistry and as such invalid. To rephrase my point with more clarity and less chance of sophistry: two biased sources do not cancel each other out to produce an unbiased discourse; they just turn into a shouting match.
> Come on, be real.
Don't bring my personality into the discussion, thanks.
Posted Dec 20, 2025 19:19 UTC (Sat)
by dskoll (subscriber, #1630)
[Link] (2 responses)
It is perfectly acceptable for an article author to have an opinion and write an article that reflects that opinion.
Or are you saying that nothing should ever be published that is somehow "biased"? How would one measure such "bias"? Who gets to decide if something is "biased"?
To take an extreme example, would it be wrong to publish an article saying that it's wrong to covertly inject malware into the Linux kernel, because that's a "biased" point of view and that one should take no position about injecting malware into the kernel?
Posted Dec 22, 2025 11:37 UTC (Mon)
by Phantom_Hoover (subscriber, #167627)
[Link] (1 responses)
Posted Dec 22, 2025 13:27 UTC (Mon)
by Wol (subscriber, #4433)
[Link]
And I was left with the strong impression that the author of this article DID know the difference - it seemed perfectly clear to me ...
Cheers,
Posted Dec 20, 2025 20:22 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
And you're claiming that you're not biased? COME ON!
I'd much rather someone wears their opinion on their sleeve, than lies by claiming to be unbiased.
Cheers,
Posted Dec 20, 2025 16:42 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (2 responses)
I just wish MORE authors would do that. Just keep a clear dividing line between facts and opinions. And notice he did say that there was a simple on/off switch! Precisely so the AI-haters could turn it off. That's much more truly neutral than the software that's shoving it down your throat.
> LWN usually does a good job of reporting evenhandedly on controversies
And why isn't this article a good job, either? True, editors are discouraged from taking sides, but it's very hard to truly hide your biases. All too often attempting to do so results in pretending some garbage theory on the other side actually has weight it doesn't deserve.
Too much authoring is encouraged to be in the 3rd person, which lends a completely false air of authority to what is being said. Take personal responsibility for your opinions, write in the first person, and as I said (do your best to) keep facts and opinions separate. The result is MUCH more honest than pretending to be neutral and dispassionate.
Cheers,
Posted Dec 20, 2025 19:14 UTC (Sat)
by Phantom_Hoover (subscriber, #167627)
[Link] (1 responses)
Posted Dec 20, 2025 20:30 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
And I'm not going to read the article again to see whether I agree with you.
But that's your opinion of his opinion, and we can choose reasonably to differ.
It certainly came over to me as a biased piece. But the author *knew* it was biased, and tried to cater to people who disagreed with him. Compared to all these people who like to lay down their infallible truth, it was a breath of fresh air :-)
Cheers,
Posted Jan 4, 2026 22:19 UTC (Sun)
by aphedges (subscriber, #171718)
[Link]
A common annoyance I find in ebook tooling is that you need to install all of Calibre, including Qt, just to use the ebook format conversion tool via the CLI. This change will hopefully allow tools to be much more lightweight!
What you DO NOT get to do is try to make that choice for other people.
Wol.
Are you sure AI/LLM are in the category of people?
>>it might say "Yes, I did miss Alex & Emmy as a movie he directed." or "Rob Reiner was an actor in Throw Momma from the Train, but it was directed by Danny DeVito." or "While Some Kind of Wonderful does have similarities to some movies directed by Rob Reiner, like The Sure Thing, it was directed by Howard Deutch."
You can force it to tell you that Rob Reiner directed Titanic, but that won't make it true.
Using mental illness as insult...
Now I know what Kovid Goyal thinks of other people.
I see one, and it's quite a weak quandary: "what if, by refusing to use AI, I prevent someone from discovering something of great value that would have been discovered with AI assistance?".
For many tasks it might be more important to have the correct training data than huge amounts of randomly assembled training data.
You do not want the "most learned" machine; you want the one with most appropriate training to the task at hand.
And mine is that an LLM isn't always what you want - it's what you can make work easily and cheaply, which is not the same thing at all.
I did say it's quite a weak quandary :-)
It's a different class of quandary - those are "if I reject the thing, I will be punished", whereas this is "if I reject the thing, I may miss out on rewards". The closest strong quandary is the "Death Camp Science" quandary - if we have knowledge as a result of inhumane war crimes committed at death camps, to what extent am I either implicitly endorsing those crimes or encouraging the commission of new crimes by benefiting from that knowledge?
That is a different quandary to the one I was talking about, though. Mine applies in the absence of people trying to force AI into places it doesn't fit.
