Against LLMs and Generative AI

At the beginning of 2025 I started writing an article about the reasons why I refuse to use generative AI and LLMs. I never finished or published it here; I deleted it at the end of the year, as it carried too much rage, pain and sadness to handle. Instead, I've decided to make this article, where I gather links to other articles I've read. For each one I give the title, the link, and a very short excerpt as a TL;DR and as a placeholder in case the source disappears. This list may or may not be updated in the future. It is loosely grouped into categories, though most of the linked articles relate to several at once.

About everything

Generative AI: What You Need To Know
(link)
"Generative AI: What You Need To Know is a free resource that will help you develop an AI-bullshit detector."

I Will Fucking Piledrive You If You Mention AI Again
(link)
So it is with great regret that I announce that the next person to talk about rolling out AI is going to receive a complimentary chiropractic adjustment in the style of Dr. Bourne, i.e, I am going to fucking break your neck. I am truly, deeply, sorry.

Pourquoi je n’utilise pas ChatGPT
(link)
The more time passes, the less tempted I am to use ChatGPT or other generative AI tools. The frantic pace of announcements and the worldview of these tools' promoters have definitively vaccinated me against whatever flicker of interest might have remained. And I haven't even touched here on the questions of bias, security, privacy protection, …

About cognition, knowledge and skills

The West Forgot How to Make Things. Now It’s Forgetting How to Code
(link)
When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI. It just disappears.

AI Search Has a Citation Problem. We compared eight AI search engines. They’re all bad at citing news.
(link)
The findings of this study align closely with those outlined in our previous ChatGPT study, published in November 2024, which revealed consistent patterns across chatbots: confident presentations of incorrect information, misleading attributions to syndicated content, and inconsistent information retrieval practices. Critics of generative search like Chirag Shah and Emily M. Bender have raised substantive concerns about using large language models for search, noting that they “take away transparency and user agency, further amplify the problems associated with bias in [information access] systems, and often provide ungrounded and/or toxic answers that may go unchecked by a typical user.”

Contributor Poker and Zig's AI Ban
(link)
For us the ability to provide contributors with an engaging ecosystem where they can improve their systems thinking and interact with other competent, trusted and prolific engineers is a critical aspect of our business model.

The machines are fine. I'm worried about us.
(link)
The problem isn't that we'll decide to stop thinking. The problem is that we'll barely notice when we do.

Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender
(link)
A key prediction of the theory is "cognitive surrender": adopting AI outputs with minimal scrutiny, overriding intuition (System 1) and deliberation (System 2).

Adults Lose Skills to AI. Children Never Build Them.
(link)
Adults who offload thinking to AI lose capacity they built. Children may never build it at all. When students process information through the same model, the result may be similar minds. Auditing AI output requires expertise the child is still supposed to be developing. In a study, developers who delegated coding to AI produced working code but lacked conceptual understanding.

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
(link)
Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers
(link)
Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation
(link)
Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing. The resulting enhanced potential for malicious manipulation of society’s evidence base, particularly in politically divisive domains, is a growing concern.

About productivity

Let’s talk about LLMs
(link)
Not only is there no silver bullet, there especially is no quick or magical gain to be had from rushing to adopt LLM coding without first working on those fundamentals. In fact, the evidence we have says you’re more likely to hurt than help your productivity by doing so.

I finally turned off GitHub Copilot yesterday.
(link)
So, after giving it a fair try, I have concluded that it is both a net decrease in productivity and probably an increase in legal liability.

About social and environmental impact

We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
(link)
By 2028, the researchers estimate, the power going to AI-specific purposes will rise to between 165 and 326 terawatt-hours per year. That’s more than all electricity currently used by US data centers for all purposes; it’s enough to power 22% of US households each year. That could generate the same emissions as driving over 300 billion miles—over 1,600 round trips to the sun from Earth.

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
(link)
We have identified a wide variety of costs and risks associated with the rush for ever larger LMs, including: environmental costs (borne typically by those not benefiting from the resulting technology); financial costs, which in turn erect barriers to entry, limiting who can contribute to this research area and which languages can benefit from the most advanced techniques; opportunity cost, as researchers pour effort away from directions requiring less resources; and the risk of substantial harms, including stereotyping, denigration, increases in extremist ideology, and wrongful arrest, should humans encounter seemingly coherent LM output and take it for the words of some person or organization who has accountability for what is said.

The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’
(link)
A typical workday for African tech contractors, the letter says, involves “watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day.” Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.

Elon Musk’s xAI powering its facility in Memphis with ‘illegal’ generators
(link)
The 35 generators xAI is using are “illegal” and a “major source of air pollution”, the law center wrote in a letter to the Shelby county health department on Wednesday. It says these high emission rates violate the Clean Air Act, including specified limits on toxic and carcinogenic pollution.

About economy

The AI Layoff Trap
(link)
If AI displaces human workers faster than the economy can reabsorb them, it risks eroding the very consumer demand firms depend on. We show that knowing this is not enough for firms to stop it. In a competitive task-based model, demand externalities trap rational firms in an automation arms race, displacing workers well beyond what is collectively optimal. The resulting loss harms both workers and firm owners.

About disturbance

AI Slop Is Polluting Bug Bounty Platforms with Fake Vulnerability Reports
(link)
"This could easily kill the whole concept of bug bounties," he said. "Why? Genuine researchers quit in frustration as they don't get proper reward for their hard work, and see AI slop scoop the money. Orgs/projects abandon bug bounty programs since they get mostly AI Slop reports. Financial backing (as donations or investment) for bug bounty programs disappears as the money is paid to scammers."

AI bots are destroying Open Access
(link)
We are headed for a world in which all good information is locked up behind secure registration barriers and paywalls, and it won't be to make money, it will be for survival.

Google Gemini tried to kill me.
(link)
Turns out I had just grew a botulism culture and garlic in olive oil specifically is a fairly common way to grow this bio-toxins. Had I not checked on it 3-4 days in I'd have been none the wiser and would have Darwinned my entire family. Prompt with care and never trust AI dear people...

Word frequency tool ‘wordfreq’ stops updates, overwhelmed by AI spam
(link)
“The world where I had a reasonable way to collect reliable word frequencies is not the world we live in now,” says author Robyn Speer.

Amazon restricts authors from self-publishing more than three books a day after AI concerns
(link)
The new sets of rules come after Amazon removed suspected AI-generated books that were falsely listed as being written by the author Jane Friedman. Earlier this month, books about mushroom foraging listed on Amazon were reported as likely to have been AI-generated and therefore containing potentially dangerous advice. AI-generated travel books have also flooded the site.

About security

LLMs can't stop making up software dependencies and sabotaging everything
(link)
The problem is, these code suggestions often include hallucinated package names that sound real but don’t exist. I’ve seen this firsthand. You paste it into your terminal and the install fails – or worse, it doesn’t fail, because someone has slop-squatted that exact package name.




2026-04-30
in AI/ML, All, Pub talk
A comment, question, correction? A project we could work on together? Email me!
Learn more about me in my profile.

ScienceIsPoetry
Copyright 2021-2026 Baillehache Pascal