The effectiveness of AI in corporate training has turned out to be a large-scale statistical hallucination born of self-selection bias. A study by Christoph Riedl, published on arXiv, calls into question the wisdom of rolling out AI in educational processes without controls. Analysis of data from 52,000 users of a chess platform over five years showed that the apparent learning effect of AI disappears once individual motivation is subtracted from the equation.
According to Riedl and his colleagues, neural networks help only those who are already driven by results and already highly skilled. In practice, this means that high performers use AI as yet another lever to extend their lead, while weaker employees derive no benefit from the technology. Instead of the promised leveling of skills, business gets a widening gap between the "stars" and everyone else. What top management takes for the success of digital transformation turns out to be the old-fashioned diligence of a few individuals.
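The confounding mechanism behind this effect is easy to demonstrate. Below is a minimal toy simulation (not the study's actual data or model) in which an AI tool contributes nothing at all, yet a naive comparison of users and non-users shows a large "effect" — which vanishes once we compare people within narrow bands of equal motivation. The parameters and the stratification scheme are illustrative assumptions:

```python
import random
from statistics import mean

random.seed(42)
N = 50_000

# Hypothetical setup: motivation drives both AI adoption and improvement,
# while the AI tool itself adds exactly zero.
people = []
for _ in range(N):
    motivation = random.gauss(0, 1)
    uses_ai = motivation + random.gauss(0, 1) > 0   # self-selection into AI use
    improvement = 2.0 * motivation + random.gauss(0, 1)
    people.append((motivation, uses_ai, improvement))

# Naive comparison: AI users look far more improved.
naive = (mean(p[2] for p in people if p[1])
         - mean(p[2] for p in people if not p[1]))

# Stratify by motivation: within each narrow band, the "AI effect" collapses.
diffs = []
for lo in [x / 2 for x in range(-4, 4)]:            # bands of width 0.5
    band = [p for p in people if lo <= p[0] < lo + 0.5]
    ai = [p[2] for p in band if p[1]]
    no = [p[2] for p in band if not p[1]]
    if len(ai) > 30 and len(no) > 30:
        diffs.append(mean(ai) - mean(no))
adjusted = mean(diffs)

print(f"naive effect:    {naive:.2f}")      # large
print(f"adjusted effect: {adjusted:.2f}")   # close to zero
```

The gap between the two numbers is exactly what an executive sees as "AI works": the entire measured benefit comes from who chooses to use the tool, not from the tool itself.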
Even more dangerous is the risk of intellectual stagnation. Centralized AI feedback leads to "intellectual convergence": a sharp decline in the diversity of approaches. As an analysis of 42 platform experiments showed, people drawing on the same AI sources begin to produce identical solutions. As a result, the organization loses unique expertise: employees may look more efficient, but the company's collective intelligence turns into a homogenized mass, incapable of unconventional maneuvers.
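Convergence of this kind can be quantified. A common way is to measure the Shannon entropy of the distribution of chosen approaches: independent choosers spread across many options, while a population nudged toward one shared AI recommendation collapses toward a single answer. The sketch below is a toy model with made-up numbers (20 possible approaches, an 80% chance of following the AI), not a reconstruction of the 42 experiments:

```python
import random
from collections import Counter
from math import log2

random.seed(7)
SOLUTIONS = list(range(20))   # 20 possible approaches to a task (illustrative)
AI_PICK = 3                   # the one approach the shared AI recommends

def entropy(choices):
    """Shannon entropy (bits) of the distribution of chosen approaches."""
    counts = Counter(choices)
    total = len(choices)
    return -sum(c / total * log2(c / total) for c in counts.values())

def choose(ai_weight):
    """With probability ai_weight follow the shared AI, else pick independently."""
    if random.random() < ai_weight:
        return AI_PICK
    return random.choice(SOLUTIONS)

independent = [choose(0.0) for _ in range(10_000)]
ai_guided = [choose(0.8) for _ in range(10_000)]

print(f"entropy without shared AI: {entropy(independent):.2f} bits")
print(f"entropy with shared AI:    {entropy(ai_guided):.2f} bits")
```

The drop in entropy is the "homogenized mass" in numbers: individually each AI-guided choice still looks reasonable, but collectively the organization holds far less information than before.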
Our verdict: deploying AI assistants "for everyone" without accounting for basic personnel motivation is not merely useless but harmful. It creates a dangerous appearance of progress while skills actually degrade. To avoid turning the office into an incubator of identical mediocrities, a transformation strategy must center on developing human capital, not on the blind belief that algorithmic feedback will automatically "pull up" the laggards.