LLM Policy

Last Updated: September 2, 2025

This policy explains my use of large language models (LLMs), commonly (but incorrectly) known as "artificial intelligence" or "AI", in my work. In this policy, "LLM" and "LLMs" refer both to large language models proper and to the broader field of "AI", including generative "AI".

This is a first draft, subject to change at any time without notice.

Overview

TL;DR — I do not currently use LLMs in my personal or professional work.

I will not knowingly use LLMs to produce any of my work, including, but not limited to, code, design, and communication. I intend to make all reasonable efforts to avoid LLMs in my work.

Given how LLMs are being imposed on everyone and how their deployment is accelerating with no regard for humanity or the planet, it is impossible to remain unexposed. While I abstain from using LLMs, my clients and colleagues may not. At times, it may be necessary to integrate LLM output into my work, on the condition that I am not held accountable or responsible for the combined work.

Moral and Ethical Issues

LLMs are trained on copyrighted work. "AI" companies have accumulated massive datasets without respecting copyright licenses. Meta (Facebook) trained its models on pirated books. The New York Times and several other newspapers are suing OpenAI and Microsoft for copyright infringement, and Disney and Universal are suing Midjourney for the same reason. Other media organizations, including the Associated Press, the Financial Times, Vox Media, and The Atlantic, have signed licensing agreements with OpenAI.

If all these "AI" companies think LLM training is "fair use", how come they are signing all of these licensing agreements? Is there any value in copyright now if any LLM model can be trained on anything?

Misuse and Harm

New LLMs are created every day, and they are a significant cause of the rise of misinformation. Elon Musk, who has been known to share fascist and far-right ideologies, owns the "AI" company xAI and its Grok LLM, which once called itself "MechaHitler".

LLM crawlers have no respect for the web's rules. They blatantly ignore the long-standing robots.txt standard (formally known as the Robots Exclusion Protocol, RFC 9309), use residential proxies to circumvent anti-crawling measures, and are responsible for attacks on open-source infrastructure and maintainers.
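To make the robots.txt point concrete, here is a minimal sketch of the kind of rules a site can publish to ask crawlers to stay away. The user-agent names are ones the operators themselves document (OpenAI's GPTBot, and Common Crawl's CCBot, whose archives are widely used for LLM training); treat this as an illustration, not an exhaustive blocklist:

    # Ask OpenAI's crawler to stay off the entire site
    User-agent: GPTBot
    Disallow: /

    # Ask Common Crawl's crawler to do the same
    User-agent: CCBot
    Disallow: /

Compliance is entirely voluntary: the standard only works when a crawler chooses to honour it, which is exactly what LLM crawlers refuse to do.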

LLMs also do considerable harm to the environment. "AI" data centres consume tremendous amounts of water (see also: arXiv) and energy, and they are major sources of water, noise, and air pollution, ruining the lives of nearby residents.

Quality Issues

The quality of LLM-generated code leaves much to be desired and does not meet the standard I expect and require. LLMs frequently hallucinate APIs that do not exist, invent syntax, and often produce code that is simply invalid.

There is no reason for me to use an LLM for communication. I am confident in my ability to communicate without the unnecessary noise an LLM would introduce.

Using an LLM for design makes no sense, because its output is essentially random, and good design is anything but random.