You Can't Unsee It: A Quick Guide to Spotting LLM-Generated Text on Habr

A veteran Habr author with 15 years of experience and ~200 published articles breaks down ten telltale patterns that expose AI-generated "slop" — and explains why letting it proliferate will drive real experts off the platform for good.

Introduction

Hello. Allow me to introduce myself: lieuten... uhh, vvzvlad. And I hate AI slop.

I've had some discussions on this topic with authors and blog editors, at the level of "who do you think you are, kid, we vouch for our authors," so to head off questions on the matter (even though I'm a bit uncomfortable listing credentials this bluntly, and I rarely do): I have roughly 15 years of experience in copywriting; around 200 articles on Habr alone (plus about 50 in drafts, ancient reviews with dead images that I'm embarrassed about); and roughly another 50 in other companies' blogs, without my name on them. I've published plenty on other platforms too; for a while I lived entirely off earnings not from copywriting but specifically from writing articles with substance. I curated several training courses.

Over those 15 years, I've seen good writers, bad writers, terrible writers, and copywriters from freelance exchanges (they occupy their own floor in this pyramid, like in that joke about Little Johnny). I helped tons of companies run blogs on Habr, trained their authors, made advertising articles look like they weren't ads, and made advertising articles that looked like ads but were enthusiastically received by readers. I championed "marketing of meaning" before Fadeev so aptly named it, argued with Marina Rozhkova back when AMR was still alive, exposed several of her projects on Habr, told her how to do reviews, got banned on Habr for my own reviews, ran my own project, and trained writers there on how to write interestingly and usefully... Well, a lot has happened.

I have two awards from Habr that nobody cares about except me, and that I completely forgot about until I started writing this paragraph. I'm friends with the heads of several major corporate blogs, and we sometimes roast each other's terrible content. I think that's enough credentials.

So why all this preamble? Because over the past year, the quality of texts on Habr has sharply declined. And I don't mean fewer good authors — that's always been an issue. I mean the appearance of a new type of text that has no soul whatsoever. Texts that look like they were written by a diligent student who learned all the rules but never understood why they exist.

These are LLM-generated texts. And I'm going to teach you to spot them.

Why This Matters

LLM text is text that was either fully generated by a language model or so heavily "enhanced" by one that any trace of the original author has been completely erased. Both cases are equally bad.

Why is this a problem?

  • These texts have no personal experience, no real opinions, no genuine insights
  • They dilute the quality of the platform with empty calories
  • They devalue the work of people who actually write
  • They train readers to accept mediocrity as the norm

Let me be clear: I'm not against using AI as a tool. I use it myself — for brainstorming, checking facts, polishing wording. The problem starts when AI becomes the author, and the human becomes the person who clicks "Generate."

The Classic Signs

After reading hundreds of LLM articles (sometimes against my will), I've compiled a guide. These are the markers that scream "a robot wrote this." Some are strong signals, others are weak. But when several of them turn up in one text, the diagnosis is all but guaranteed.

1. The "Cosmic Scale" Opening

LLMs adore starting articles with cosmic-scale declarations. "In today's rapidly evolving world of technology..." or "In an era when artificial intelligence is transforming every aspect of our lives..." These openings say absolutely nothing. A real author usually starts with a specific problem, personal experience, or at least a joke.

If the first paragraph could be placed at the beginning of literally any article on any topic — that's an LLM sign.

2. Bullet-Point Mania

LLMs love lists. They love them pathologically. Every argument turns into a bulleted list. Every comparison becomes a table. Every concept gets broken into 5 key aspects, each starting with a bold keyword followed by a dash and an explanation.

Real human text has rhythm. Sometimes it's a long paragraph, sometimes a short remark, sometimes a list where it actually makes sense. LLM text is an endless parade of identical structures.

3. The "On the One Hand... On the Other Hand" Template

LLMs are terrified of having an opinion. Every statement is immediately counterbalanced. "While X has many advantages, it's important to consider the potential drawbacks." "Despite the benefits of Y, there are significant challenges to be addressed."

A real author picks a side. Even if they present multiple viewpoints, they make it clear which one they find more compelling. An LLM creates perfectly balanced text where nothing is ever good or bad — everything is "nuanced."

4. Corporate-Speak Overload

Words that real people almost never use in natural writing but LLMs sprinkle everywhere: "leverage," "utilize," "facilitate," "implement," "optimize," "streamline," "robust," "comprehensive," "cutting-edge," "revolutionary," "paradigm."

In Russian LLM text, watch for: "в рамках" (within the framework of), "является" (is/constitutes) used excessively, "данный" (this/given) instead of "этот" (this), "осуществлять" (to carry out) instead of simpler verbs, "ландшафт" (landscape) in non-geographic contexts, "давайте разберемся" (let's figure it out), "погрузимся" (let's dive in).
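As a toy illustration (my own sketch, not something from this article), the word-list signal above can be made mechanical: a crude frequency pass over a text for known marker phrases. The phrase list here is illustrative and far from exhaustive; treat any hits as weak signals to weigh together, never as proof on their own.

```python
import re

# Illustrative subset of the English markers listed above; a real list
# would be longer and tuned to the language of the text.
MARKER_PHRASES = [
    "leverage", "utilize", "facilitate", "streamline",
    "robust", "comprehensive", "cutting-edge", "paradigm",
]

def marker_hits(text: str) -> dict[str, int]:
    """Count case-insensitive whole-word occurrences of each marker phrase."""
    lowered = text.lower()
    counts = {}
    for phrase in MARKER_PHRASES:
        counts[phrase] = len(re.findall(r"\b" + re.escape(phrase) + r"\b", lowered))
    # Keep only the markers that actually appear.
    return {p: n for p, n in counts.items() if n > 0}

sample = ("We leverage a robust, comprehensive framework to "
          "streamline and facilitate your paradigm shift.")
print(marker_hits(sample))
```

A human author will trip one or two of these occasionally; a text that lights up half the list in a single paragraph is the kind of thing this sign is about.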

5. Suspiciously Perfect Structure

Every section has an introduction, body, and conclusion. Every argument has exactly three supporting points. The article follows a textbook structure so perfectly that it feels like it was generated from an outline rather than written by a thinking person.

Real articles are messy. They have tangents, asides, sections that are longer than they should be because the author got excited. LLM text is surgically precise in its structure.

6. The Absent Author

No personal anecdotes. No "I once tried this and it blew up in my face." No opinions that could be controversial. No humor that could fall flat. The text is written from the perspective of an omniscient, dispassionate observer who has never actually done anything.

When the text does include "personal" touches, they feel fabricated: "As a developer with many years of experience, I can say that..." — this is a classic LLM attempt at faking personality.

7. The Conclusion That Summarizes Everything

An LLM conclusion always restates every point made in the article, adds something about "the future" being "promising but uncertain," and ends with a call to action that nobody asked for. "What do you think about this topic? Share your thoughts in the comments!"

Real authors sometimes don't even have conclusions. Or their conclusion is one sentence. Or it's a joke. Or it introduces a completely new thought. LLM conclusions are photocopies of the introduction with different words.

8. Hallucinated Expertise

The text confidently states things that are wrong or misleading, but does so with such authority that a reader unfamiliar with the topic might not notice. This is especially dangerous in technical articles where an LLM might mix up library versions, confuse similar concepts, or describe a workflow that doesn't actually work.

9. Perfectly Parallel Construction

Every list item follows the same grammatical pattern. Every paragraph in a section has roughly the same length. The text has an almost metronomic rhythm that real writing never achieves because real writing is produced by a brain, not an algorithm.

10. Emoji and Markdown Abuse

LLMs in blog-post mode love decorating text with emoji headers, excessive bold text, and markdown formatting that goes beyond what's useful. Real authors use formatting as a tool; LLMs use it as decoration.

What About the Gray Zone?

Not every text with some of these features is LLM-generated. Bad writers also produce formulaic text. And some LLM-assisted text might preserve enough of the author's voice to be valuable.

The key question isn't "was AI involved?" It's "is there a real human perspective here?" If someone uses ChatGPT to clean up their grammar but the ideas, stories, and opinions are genuinely theirs — that's fine. If someone generates an entire article and their only contribution is choosing the topic — that's slop.

The Real Problem

The real danger isn't any single LLM article. It's the normalization of soulless content. When readers get used to LLM text, they start expecting it. When editors accept it, they lower the bar. When platforms reward it with views and engagement, they incentivize more of it.

We're heading toward an internet where finding genuine human insight is like panning for gold in a river of AI-generated sludge. Habr used to be a place where you could find that gold reliably. I'd like it to stay that way.

How to Fight It

As a reader: develop your taste. Read enough LLM text to recognize the patterns, then actively seek out and upvote content that feels human.

As a writer: be yourself. Your typos, your weird tangents, your controversial opinions — those are features, not bugs. The one thing AI can't replicate is your actual experience and genuine personality.

As an editor: reject the slop. It's easier to publish an LLM article than to send it back and ask for a rewrite, but the long-term cost to your platform is enormous.

As a platform: consider making AI disclosure mandatory. Not as a punishment, but as transparency. Readers deserve to know what they're reading.

Conclusion

You now have the tools to recognize LLM text. Use them. Once you start seeing the patterns, you truly can't unsee them — it's like learning to spot Photoshop artifacts or recognizing stock music in commercials.

The internet is drowning in AI-generated content. But as long as there are people who care about genuine writing, there's hope. Be one of those people.