The era of automated corporate doublespeak has not just arrived; it is now visible to the naked eye. An analysis by Barron's, conducted through the AlphaSense document library, identified a curious pathology: use of the telltale chatbot construction "It's not just X, it's Y" in official documents of US companies has quadrupled since the beginning of 2024. This linguistic uniformity peaked at the end of 2025, as ChatGPT's output leaked en masse into press releases, SEC filings, and transcripts of analyst calls.

The standardization of the corporate voice is a direct consequence of managerial laziness. According to a Muck Rack survey, 75% of PR professionals have already delegated day-to-day writing to artificial intelligence. In the pursuit of efficiency, companies have sacrificed brand distinctiveness: when three-quarters of the market uses the same "black box" to generate meaning, strategic documents dissolve into bland digital noise. The problem is not the technology itself but the absence of filters: publishing whatever a chatbot produces, unedited, turns important reports into a stream of generic phrases more likely to annoy sophisticated investors than to convince them.

In our view, this trend carries serious reputational risk. If your letters to shareholders start to resemble predictable bot patterns, you are signaling to the market a lack of internal discipline and intellectual substance. For the 75% of communicators already "hooked" on AI, the moment of truth has arrived: either move from blind copying to deep editing, or watch strategic communications degrade from a competitive advantage into a commodity. Investors value specifics, not the ability to press the Generate button.

Generative AI, AI in Business, Digital Transformation, OpenAI