
Your company, now powered by word salad

Product Management · Leadership · Large Language Models · 4 min read

Sloppy paste

Let's be honest: not every employee performs at the level an organisation would ideally like to see. Most people are, by definition, average. Some are below average, just as a well-behaved Gaussian distribution would predict. And some are simply there to get paid and do exactly enough to avoid awkward conversations.

This is not a new problem. It is a permanent feature of hiring at scale.

In fact, it becomes almost inevitable as organisations grow, especially when they grow quickly. Software startups are a perfect example. In the beginning, there is usually a small group of highly motivated engineers building something they genuinely care about. Then venture capital arrives. Time-to-market becomes the all-defining metric. Code quality slips. Tech debt accumulates. After every funding round, new goals appear, deadlines tighten, and headcount grows.

New people are onboarded at speed. New services pop up daily. Acquisitions happen. Restructurings follow. Systems are abandoned. People with deep domain knowledge leave. Hiring accelerates, not because the organisation is ready, but because there is work to do, a headcount to fill, a budget to consume, CapEx to spend.

A few years later, many of these companies wake up to an uncomfortable truth: they are no longer very different from the incumbents they once set out to disrupt. Processes multiply. Reporting chains grow. Progress slows. This is the perfect environment for a certain type of individual to blend in and remain safely unremarkable.

Enter AI.

Machines against humans

Suddenly, there is a way to work less while appearing to do more.

That annoying weekly report that no one really reads? No problem, ChatGPT has you covered. A long Slack thread asking for input? Just dump it into your favourite LLM and paste the response back. A vaguely strategic document due by end of day? A few prompts later, it practically writes itself.

Time for a fresh cup of coffee. Maybe plan the weekend.

This is, of course, an exaggeration. But not by much.

For at least some people in large organisations, this is now daily reality. They blend in, do just enough to avoid triggering alarm bells, and appear productive by using machines against humans. LLMs become a force multiplier for mediocrity.

The rise of polished nonsense

The damage this behaviour causes is substantial, and LLMs scale that damage to impressive heights.

Even without AI, it is already hard to understand what colleagues actually mean in documents, tickets, slide decks, and chat messages. Now add text that is fluent, confident, and professionally structured, yet largely empty.

What happens when people spend significant portions of their productive time reading and interpreting LLM-generated slop? They wade through smart-sounding but generic word clouds that gently suggest that if you do not understand the point, the problem must be you.

You are no longer analysing ideas. You are analysing noise.

When leadership joins in

The real trouble begins when this behaviour moves up the hierarchy.

Imagine a manager whose primary output is an endless stream of polished slides and documents, all perfectly formatted, all confidently written, and all largely generated by an LLM. On the surface, everything looks legitimate. The words are there. The structure is there. The diagrams even line up.

The thinking is not.

At that point, the organisation is no longer just inefficient. It is directionless.

A lack of leadership is damaging even without AI. When LLMs are used to mask weak thinking, avoid accountability, or simulate clarity where none exists, they make the problem harder to detect and faster to spread. Decisions drift away from reality. Feedback loops break. Teams lose trust, often without being able to articulate exactly why.

Conclusion: death by sloppy paste

LLMs are powerful tools. Used well, they amplify strong thinkers, reduce busywork, and accelerate learning. Used poorly, they enable people to hide behind volume instead of substance.

The real risk is not that AI will replace competent employees. The risk is that it allows incompetence to scale quietly, convincingly, and cheaply. Organisations rarely collapse overnight because of this. Instead, they slowly suffocate under layers of polished text, empty alignment, and decisions made by people who stopped thinking because the machine made it easy not to.

If you want LLMs to help your business rather than harm it, you need a culture that values clarity over verbosity, reasoning over presentation, and accountability over output. Otherwise, sloppy paste will not just slow your organisation down.

It might be what eventually kills it.

© 2026 Mat Hansen. All rights reserved.