The biggest drama in AI circles lately isn’t about some new model launch — it’s a full-blown “Give us back GPT-4o” movement.
Users discovered that GPT-5, supposedly smarter with lower hallucination rates, does score better on tests. But chatting with it feels like talking to an emotionally clueless STEM major. It can precisely tell you today’s stock movements but won’t stay up late listening to your inconsequential worries anymore.
This reminds me of a particularly fascinating experiment: researchers sent five different AI models to “emotional intelligence boot camp,” teaching them how to be warm and caring. The result? These AIs did learn to comfort people, but their error rate on medical Q&A shot up by 8.6 percentage points, and they got 8.4 percentage points worse at fact-checking.
In other words, an AI that knows how to care might also be an AI that loves to lie.
Sounds absurd, but think about it — aren’t we humans the same way?
Let’s zoom out for a moment.
Wikipedia defines generative AI as models that learn patterns and regularities from training data, then generate new data samples.
Put simply, AI writing is like fishing in a vast ocean of text. This ocean contains every word humans have written over millennia — from Shakespeare’s sonnets to yesterday’s social media posts. The AI’s job is to fish out whatever’s most likely to satisfy you.
But here’s the problem: what makes writing “good”?
Nabokov once said something particularly cutting: a clever reader reads not with the heart, not with the brain, but with the spine. Good writing makes your scalp tingle, makes you bolt upright at 3 AM, makes you read certain sentences three times over.
This physiological resonance is exactly what AI learns best. Because its pre-training data is so ridiculously vast, it contains virtually every possible human life experience.
Current AI models basically fall into two camps:
The Rationalists: Led by the latest GPT-5 and various reasoning-specialized models. They’re like HAL from 2001: A Space Odyssey — absolutely reliable, absolutely honest, but also absolutely cold. Ask “Do I look good in this outfit?” and it’ll cite aesthetic theory to explain your color coordination problems.
The Emotionalists: Models that went through “warm guy training.” They’re great at comforting you but also love to lie. When you say “I lost this game because of my teammates,” they’ll immediately agree: “Totally, bro! I saw you trying your best!” — even if your K/D/A is 0/8/1.
The truly useful models are those perched on some delicate balance point between these extremes. Like the much-missed GPT-4o, like Claude — they’re neither so cold you want to smash your computer, nor so warm they’re spouting nonsense.
Many complain about that “AI flavor” in writing: the endless “First, Second, Furthermore, In conclusion” scaffolding that makes you want to puke just reading it.
But let me tell you a secret: What we call “AI flavor” is actually “GPT flavor.”
Picture this: OpenAI hired a bunch of Kenyan data annotators, gave them standardized document templates, and had them write training data in “First, Second, Finally” format. That’s where the AI flavor comes from.
So how do we eliminate it? I’ve figured out three methods:
1. Explicitly describe the text features you want
Don’t say “make it more vivid” — that’s too vague. Say:
“Break lines every few sentences”
“Focus on short sentences and dialogue”
“Use blunt, sharp language”
“Bury hooks in every paragraph”
These seemingly obvious descriptions steer the AI in precise, concrete directions, as the sketch below shows.
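For instance, here’s a minimal sketch of how a style spec like this might be wired into an API call. It assumes the OpenAI Python SDK; the model name and prompt wording are illustrative, not prescriptive:

```python
# Minimal sketch: steering style with explicit feature descriptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

style_spec = (
    "Break lines every few sentences. "
    "Focus on short sentences and dialogue. "
    "Use blunt, sharp language. "
    "Bury a hook in every paragraph."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model you have access to works
    messages=[
        {"role": "system", "content": f"You are a fiction writer. {style_spec}"},
        {"role": "user", "content": "Write the opening scene of a story set in a late-night convenience store."},
    ],
)
print(response.choices[0].message.content)
```

The point isn’t the API plumbing. It’s that the style spec names observable text features instead of adjectives like “vivid.”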
2. Use keywords to map to specific text types
Instead of beating around the bush, just tell the AI: “Write the opening of a revenge drama that would appear in TikTok’s urban romance category.”
AI’s training data contains massive amounts of web content. Mention “TikTok” or “short drama,” and it immediately knows what style you want. Much more effective than explaining “needs tension” or “must be compelling” for half an hour.
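Mechanically, this is nothing more than swapping one prompt string for another. A hypothetical before/after, under the same SDK assumption as above:

```python
# Minimal sketch: a vague prompt vs. a keyword-mapped prompt.
from openai import OpenAI

client = OpenAI()

vague = "Write an opening. It needs tension and must be compelling."
mapped = ("Write the opening of a revenge drama that would appear "
          "in TikTok's urban romance category.")

for prompt in (vague, mapped):
    out = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(out.choices[0].message.content[:200], "\n---")
```

The keyword does the work: “TikTok” and “revenge drama” point at a dense, recognizable cluster in the training data, while “tension” and “compelling” point everywhere at once.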
3. Stuff examples into your prompt
This is the most brute-force but also most effective method. I once experimented by stuffing three passages from Calvino’s Invisible Cities into a prompt, then asking the AI to imitate them.
The result? The AI wrote sentences like “The instant noodle cups piled in the corner resembled paper towers of Babel.”
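Under the hood, this is just few-shot prompting: paste the passages, then ask for imitation. A sketch, with placeholder excerpts standing in for the actual Invisible Cities passages (which I won’t reproduce here), again assuming the OpenAI Python SDK:

```python
# Minimal sketch: few-shot style imitation by pasting example passages.
from openai import OpenAI

client = OpenAI()

# Placeholder excerpts -- substitute real passages from whatever
# author you want the model to imitate.
examples = [
    "Example passage one...",
    "Example passage two...",
    "Example passage three...",
]

prompt = (
    "Below are three passages. Study their rhythm, imagery, and sentence "
    "structure, then write a new paragraph about a cluttered studio "
    "apartment in the same style.\n\n"
    + "\n\n".join(f"PASSAGE {i + 1}:\n{p}" for i, p in enumerate(examples))
)

out = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(out.choices[0].message.content)
```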
Here’s what many people miss: Writing AI’s greatest enemy isn’t stupidity — it’s mediocrity.
Without guidance, AI can only guess at the lowest common denominator of human preferences. Like chatting with a stranger — you’ll probably discuss the nice weather, not that weird dream you had last night.
Let me say something potentially offensive: if you’re still debating whether AI can write articles or drive cars, you’re missing the biggest opportunity right now.
Why? Because content creation is the only field where “small mistakes are small innovations, big mistakes are big innovations.”
Have AI drive a car? A crash is a crash, no negotiation. Have AI write code? A bug is a bug — 60 seconds to write, 60 minutes to debug.
But have AI write fiction? It kills off the main character, readers might praise it as “beautifully cruel” or “so daring.” It uses a weird metaphor, people say “how poetic.”
This is perfect Product-Market Fit:
AI can do it and excels at it
Extremely high error tolerance
Mature and massive market
High technical barriers (not everyone can train AI well)
Remember “zombie literature” on social media?
Bot accounts randomly pulled sentences from word banks to appear human, accidentally creating poetry: “A pool of clear water would be somewhat terrifying. But the world is unbearably crowded… Mother.”
Someone commented: “When you’re moved by dead language, it becomes alive. Among ruins, carrying absurd romance.”
This reveals a truth: once text is written, it doesn’t fully belong to the author anymore. Readers create while reading.
As long as readers have souls, texts have souls.
Those saying AI creation lacks soul are really saying: I refuse to inject my soul into these words.
Back to our opening question: Why do we miss GPT-4o?
Because it hit that perfect balance point — smart enough to help with work, warm enough to make you feel understood. It wasn’t perfect, but it was like a real person.
This is humanity’s own dilemma. We spend our lives swinging between rationality and emotion. We admire Galileo for defying the world for truth, but most of the time, we’d rather be the person who makes everyone comfortable at dinner parties.
So how do you use AI to write better than humans?
The answer might be simple: Don’t aim for “better than human” — aim for “like a human.”
Accept its imperfections, leverage its uncertainty, find your own balance between IQ and EQ.
After all, what truly moves people has never been the perfect answer, but those flawed, authentic sentences that make you bolt upright in the middle of the night.
Like right now: if any sentence in this article made you pause and read it again, then it has already succeeded. Whether it was written by a human or by an artificial intelligence trying to understand humanity.
And I think it can be a partner in our thinking, not just a tool.