The AI Content Mill: When UX Thought Leadership Became Performance Art
My LinkedIn and X feeds have become a fascinating anthropological study. Every morning, I scroll through an endless stream of "UX thought leaders" breathlessly announcing the latest AI tool that will "revolutionize design" or sharing profound insights about how ChatGPT helped them write better microcopy. I recently saw someone post a 2,000-word analysis of Figma's new AI features with the urgency typically reserved for breaking news. The post got hundreds of likes and dozens of comments from other designers nodding along as if witnessing the invention of fire.
It should burn alright.
Meanwhile, actual design work in the wild tells a different story: apps that crash when you rotate your phone, websites with contrast so poor you need a magnifying glass, and user flows that make tax preparation seem intuitive.
The disconnect is jarring.
The Surface-Level Spectacle
The pattern is painfully predictable. A new AI tool drops, and within hours, LinkedIn is flooded with hot takes.
- "I tried Midjourney for wireframes and here's what happened (you won't believe #7!)."
- "5 ways GPT-4 transformed my design process."
- "Why every UXer should be using [insert tool here] right now."
The content follows a formula: dramatic opening hook, numbered list of shallow observations, and a buzzword-heavy conclusion that sounds profound but says nothing. I saw someone write an entire post about how AI helped them choose between... two shades of blue.
Lah, the post had more engagement than most portfolio pieces showcasing actual problem-solving.
What's missing from these breathless announcements? Any evidence that the poster understands the fundamental principles of the craft. No discussion of information hierarchy, user mental models, or cognitive load. No mention of accessibility considerations or edge cases. Just excitement about a new toy and the social validation that comes from being first to the trend.
The Em Dash Epidemic
Perhaps nothing exemplifies this performative knowledge better than the great em dash discourse of last month. A prominent figure on LinkedIn posted that they were afraid to use em dashes because their writing would be perceived as AI-generated.
The post exploded.
What.
Suddenly, my feed was full of posts sharing their own em dash revelations. People were chiming in about how they would never use em dashes again. Comments sections filled with variations of "Mind = blown" and "This is why I follow you!"
Here's what nobody discussed: when em dashes actually improve comprehension versus when they add unnecessary complexity. How punctuation choices vary across cultures and literacy levels. Whether a dash-heavy interface might feel pretentious to certain user segments. The actual impact on task completion rates or user satisfaction. You know, the stuff that matters for people trying to accomplish real goals with our interfaces.
Instead, we got surface-level pattern recognition dressed up as expertise. A bunch of career influencers learned to identify an em dash without learning when, why, or for whom they should use one.
The AI Process Theater
The AI-assisted design process posts are even more revealing. I regularly see content that goes something like:
Here's how I use ChatGPT in my UX workflow:
- Generate user personas
- Brainstorm feature ideas
- Write user stories
- Create survey questions
- Draft interface copy
On the surface, this sounds productive. But dig deeper and you realize what's missing: any mention of user research methodology, validation techniques, or critical evaluation of AI-generated content. No discussion of the biases baked into large language models or the risks of homogenized solutions. No acknowledgment that good design often requires understanding context that no AI model has access to.
I watched a designer present AI-generated user personas in a webinar recently. The personas were polished, complete with stock photos and detailed behavioral descriptions. They were also completely generic, the kind of personas that could apply to any product in any market. When someone asked about the research behind them, the presenter admitted they hadn't actually talked to any users yet. The AI personas were a starting point, they said, to "guide their research direction."
But heeey, if you don't understand your users well enough to create realistic personas, how do you know if the AI-generated ones are any good? It's like using a GPS without knowing your destination.
The Expertise Illusion
What's really happening is the commoditization of UX knowledge. Complex practices that took years to master are being reduced to tool recommendations and surface-level tips. The appearance of expertise has become more valuable than actual expertise.
I see designers with two years of experience positioning themselves as AI-in-design experts because they've tried every new tool and can articulate the features clearly. Meanwhile, designers with decades of experience shipping successful products are quietly doing their work, occasionally shaking their heads at the noise.
The irony is brutal, bruh!
While everyone's talking about AI augmenting human creativity, we're witnessing the opposite. Human judgment is being replaced by algorithmic thinking. Instead of learning to see problems clearly and develop taste through practice, designers are outsourcing pattern recognition to tools they don't fully understand.
Real design expertise is messy and contextual (speakin' as a content designer myself, here). It's knowing when to break conventions and when to follow them. It's understanding that what works for a productivity app won't work for a meditation app. It's recognizing that good design often means making things invisible, not adding more features or flair.
These nuances don't translate well to viral LinkedIn posts. They can't be reduced to "5 AI tools every designer needs" or "The one prompt that changed my design process." They require sustained practice, honest feedback, and the humility to admit when something isn't working.
The Real Work
I respect the designers who deeply understand user needs and can translate those needs into clear, functional experiences, not the ones with the most AI tools in their toolkits. Why? Because they've learned to see interfaces the way users see them, not the way other designers see them. They ship products that people actually use successfully, not just products that look good in Dribbble screenshots.
Their LinkedIn or X posts, when they bother to make them, are usually about lessons learned from real projects, honest reflections on failures, or thoughtful analysis of design decisions in products they admire. They get a fraction of the engagement that AI tool reviews get, but they contain infinitely more wisdom.
Maybe that's the real insight here?
In an attention economy, expertise and visibility are often inversely correlated. The people with the most valuable knowledge are too busy applying it to spend their days creating content about it. Meanwhile, the content creators optimize for engagement over insight, and we mistake their productivity for progress.
The tools will keep coming,
the hot takes will keep flowing,
and the cycle will continue.
Good design will still require the same things it always has: deep empathy for users, rigorous thinking about problems, and the craft skills to execute solutions elegantly. No amount of AI hype can substitute for that foundation, though apparently, it can provide endless material for thought leadership theater.