Until now, all written work carried an implicit promise: behind the words you read was a thinking human. You saw words on a page, and you knew they were formed in someone else's mind before they were written down and made their way to you.
Writing carried, by its very nature, proof of thoughtfulness.
With the advent and widespread adoption of large language models (LLMs), that contract is broken. And when a foundation like this starts to crack, we should look up and check for anything that might now collapse.
What Breaks
This week, a colleague sent me a nicely formatted Slack message, complete with bold headings and bulleted key points. Instinctively, I recognized the care that went into it and responded in kind, reading it carefully before writing back. Only afterward did I realize the message was so well formatted not because the sender had put a lot of time into it, but for exactly the opposite reason: he hadn't written it at all. It was AI-generated.
That same day on LinkedIn, I saw someone bragging that an AI tool had generated all of their posts for three weeks straight. The irony was completely lost on them: they were proudly announcing their absence from their own words.
We're communicating more and connecting less. The packages look the same but carry different cargo.
What is first to break?
I. Where Thoughtfulness Implies Learning
As long as writing has existed, teachers have assigned 500-word essays for homework. They assign essays because writing 500 coherent words requires mental effort, and that effort produces learning. Now students generate long-form essays in 45 seconds with a free ChatGPT account. Math problems, chemical equations, book reports: all the same. Completed work is no longer proof that critical thinking happened. Evidence of learning has become detached from learning itself. Students rarely optimize for learning; they optimize for completing required work and earning good grades.
By default, students don't learn. They outsource evidence of learning.
II. Where Thoughtfulness Implies Caring
Long messages signal care. When someone writes you a lengthy note, you instinctively read carefully and respond in kind. You recognize the gift of attention and reciprocate. This signal has always been reliable—until now. As this reliability fades, our basic social exchanges decay.
III. Where Thoughtfulness Creates Art
Art has always served as a window into someone else's world. Now machines make paintings and poems in seconds. Art only matters because humans make it. When no human consciousness created the song that moves you, what remains?
IV. Where Thoughtfulness Implies Presence
Written language let minds touch across time and space. When simulation corrupts this channel, authenticity breaks.
A letter from 1923 proves someone existed. You feel the hand that held the paper. You sense the minutes they spent forming each word. This creates a bridge between consciousnesses.
Digital writing kept this essence. Despite no paper, we knew fingers hit keys. With AI, this thread snaps. Text appears with no human behind it. Communication without communion.
V. Where Thoughtfulness Implies Intellectual Honesty
Research and debate need integrity. Reading an argument, we assume the writer believes their words or has at least considered them deeply enough to defend them.
AI creates intellectual forgeries: arguments without belief, reasoning without reasoners. Our marketplace of ideas floods with counterfeit thoughts—words that look like wisdom but lack its source.
Three Paths Forward
Attention is our scarcest resource. When we write, we spend this resource. We create an economy based on human thought.
AI generates replicas of human expression without human effort: forgeries good enough that we now struggle to distinguish what has been thought from what has been generated.
In economics, Gresham's Law states that "bad money drives out good": when two currencies circulate and one is perceived as less valuable, people spend the bad money and hoard the good, until only the bad remains in circulation. The same pattern now unfolds in the economy of ideas.
How do we navigate a world where words no longer prove thought? I see three paths:
New Education
We must rebuild education:
- Oral exams where you think on your feet
- Projects judged by process, not output
- Visible thinking environments
- Learning journals that document the journey
- Teaching students to direct AI, not just use it
This is a difficult path, because it values how students think over what they produce. It takes education a step further from "the real world", but I think that step is necessary to incentivize students to spend time in critical thought.
New Social Rules
What we once assumed now needs evidence. Our relationships need new codes:
- Say when you use AI
- Value handwriting and voice messages
- Create spaces where humans must be present
- Develop signals that show genuine thought
Within our software engineering team, we use AI every day to write code. Blocking LLMs would be incredibly counterproductive, but when AI is used to write internal communication, our policy is to flag it as such.
New Creative Values
This forces hard questions: Does it matter who made something if it moves you? If AI writes a poem that makes you cry, does it matter that no human felt those emotions? I think yes, it does matter.
In art, we must:
- Make the creation process visible
- Build communities around verified human creation
- Let humans direct AI rather than be replaced by it
I believe the nature and human value of art will create a tendency to reject AI-created art. I don't think we need built-in mechanisms to force a premium on human provenance.
The Deeper Questions
AI forces us to name what we value in human connection. Not information. Something deeper—shared vulnerability, time investment, conscious attention.
The world is increasingly complex and difficult to understand. Yet at the same time, more people are outsourcing critical thinking. Has the best generation of critical thinkers already been born? Will creativity trend downward?
Or will we learn to value human thought not because it's inevitable, but because we choose it? Not because it's efficient, but because it's more meaningful to the human who produced it than anything an LLM can produce. Not because it's perfect, but because it's human.
I think it's more important than ever to resist the temptation to outsource hard problems to AI. To rely on ourselves to articulate the real thoughts in our minds instead of lazily turning to AI to put into words what is in our heads. Language is the most important and most human of all inventions. We mustn't lose our ability to use it well with each other.