Anti-AI slop cult

Who really benefits from all the infighting in free/open source software regarding any use of LLMs in development?

Published 2026-03-06


This post has been written rather quickly to summarize my opinions when they are still fresh, before I forget what I wanted to originally write. Bad grammar and spelling mistakes likely ahead. You've been warned. ;)

For the past few months, I've been seeing an increase in people, especially on the Fediverse, getting unreasonably angry about developers of both big and small projects using any kind of LLM, or what is usually referred to as AI, in any part of their development workflow. This behavior has gone through the roof in the last few days, as of 2026-03-06, with people mad at the HarfBuzz developer for using one, as he himself admitted on the Fediverse a few days ago. If you don't know what HarfBuzz is: if you are rendering fonts in a GUI on Linux or *BSD, you are likely using this project. Both Qt and Gtk depend on it for font rendering. There is also the case of vim, the ubiquitous text editor, where one of the maintainers has also been using Claude in PR reviews. In both cases people are ringing the AI slop warning bell and calling for forks and boycotts of these projects.

To which I can only say: good luck.

Months ago there was also the case of KeePassXC, a popular offline password manager, using GitHub Copilot in their PRs for fixing some issues and for code review. This, of course, also caused a swath of people to complain, trying to boycott the project and calling it insecure and untrustworthy. The backlash was so large that KeePassXC even wrote a blog post about it.
Conveniently, this was seemingly completely forgotten a few days after the blog post was published. But such is the case with most Internet drama.

So what are the consequences of this line of thinking and who does it benefit?

Squidward throwing his brain into a trash can

I'm not sorry for this meme

Boycotts and hypocrisy

With the increasing usage of LLMs in the most prevalent corners of software development, and in an age of ever more extreme opinions on almost anything, to nobody's surprise come attempts at boycotting software that uses LLMs in any way. The arguments for boycotts range from borderline insane to almost reasonable. As usual, most fall somewhere in the middle. I won't get into these arguments here though.

So if you want to boycott a piece of software based on their usage of AI, what is your future?

I don't see any outcome where that would benefit anybody involved, including you. Simply put, with the increasing usage of LLMs, the number of projects that have used AI in some way will logically increase with it. At some point a project that you depend on and cannot replace will use LLMs in some capacity, and you'll be left with no choice other than to accept it, while maybe complaining about it on <insert your favorite social media here>. You will be left with a decision: be a hypocrite and ignore what you've very likely been preaching on the Internet, change your mind on the whole issue and admit that you've been wrong, or force yourself to go elsewhere.

As for the last option, I don't think that will be possible for much longer, even now. Imagine if the absolute base of the operating system you are currently very likely using, the Linux kernel, started accepting patches that were written even partially with an LLM. What will you do, where will you go?

  • Windows obviously isn't an option since you very likely hate using it and it's been using AI generated code for probably at least 2 years now. It also isn't free/open source in any way.
  • macOS is usually the refuge of ex-Linux users, but Apple has also been pushing their Apple Intelligence so that isn't an option either.
  • What about the BSDs though? Well, your new laptop will probably have a hard time fully working on anything that isn't FreeBSD. So you are again stuck with a single option that can change its mind about accepting patches with LLM-generated code at any time.

Of course, this has already happened in Linux. There's now agent documentation for contributing to the kernel; commits made by the same author, Sasha Levin (a stable kernel maintainer), which have very likely been written by an LLM, have already been accepted and shipped in stable releases; Linus Torvalds vibe-coded a personal SFX project that people got mad about for some reason; and LLMs have also been used on the Linux Kernel Mailing List to create summaries of patch sets. So are you switching, and to what?

True computer users of course know that the only good option is TempleOS.

Lain from serial experiments lain with a TempleOS shirt and a screenshot of TempleOS in the background


Infighting

So who does it benefit in the end when people are almost non-stop complaining about pieces of software that use LLMs?

The answer is simple. Everybody except the movement they are (perhaps subconsciously) fighting for: free/open source software adoption.

When the community in the movement is almost evenly divided about the issues of copyright/intellectual property (a dumb concept) and/or the usage of any AI, it benefits those who want the movement to die. So in a way, by being mad online about some free/open source project using LLMs, those who are angry are helping the side that is trying to destroy the very movement they are fighting for. How?

When everybody is preoccupied with endless and usually absolutely meaningless arguments ad infinitum, nothing productive ever gets done. The community is divided, contributors and maintainers from both sides no longer get along, forks that didn't need to exist happen, and actual progress on writing good software grinds to a halt.

The movement dies.

Rational solutions to the LLM problem

There is no denying that LLMs can generate code that is absolute garbage, even with their rapid improvements. On the other hand, the usual denial of reality from those who hate AI is something I might get into in another post, but not here. The question is how to approach LLM usage and contributions in a way where nobody ends up in a psych ward over it, because LLMs are here to stay no matter what anyone says. Just like the Internet technologies that were being created when the .com bubble burst, funded by millions of dollars that suddenly vanished.

I think the answer is yet again very simple. Treat LLM-assisted or fully LLM-written contributions the same way as contributions from unknown parties (i.e., untrusted contributors). As harsh as that sounds, it is the reality. A human can write the same absolute garbage code full of (subtle) bugs just like an LLM can. An LLM shouldn't be trusted any more than an unknown contributor, since the dangers are the same.

Reviving struggling projects the new way. State sponsor your open source. Slowing sshd like a boss. - Jia Tan

What happens when a contributor gets maintainer access and turns out to likely be a state-sponsored bad actor.

There's also the case of AI-generated issue/PR spam. Sadly this is a reality that we are only now getting into, and projects like cURL are among the first to be hit by it. I don't think there is much that can be done here besides banning users who frequently spam maintainers with walls of nonsensical text. The idea of using AI to figure out whether something was written by an AI is a little insane, and who knows how that would work out.

You can comment on this post on the Fediverse: https://fluffytail.org/objects/80924962-fcd7-4167-b3d4-1737cedb52d2

— phnt