Your developers, chasing individual efficiency with AI code generators, risk drowning the entire team in "AI slop." New research from scientists at Heidelberg University, the University of Melbourne, and the National University of Singapore sounds the alarm: rapid code generation by individual developers is producing a "tragedy of the commons" — what benefits one contributor can become an unbearable burden for reviewers and the wider open-source community.

An analysis of 1,154 posts on Reddit and Hacker News shows that this "AI slop" — low-quality AI-generated content — is already degrading code quality. The consequences include more time spent fixing bugs, lower productivity, and growing technical debt. A striking example is the curl project, where volunteers now spend time debunking fake AI-generated vulnerability reports. Developers of Apache Log4j 2 and Godot report similar problems.

So, if anyone tells you that human code review will soon be a relic of the past, know that reality is far more prosaic. Even if AI-generated code appears to work, maintaining and extending it can become prohibitively expensive, especially if it violates coding standards or contains hidden errors.

Ultimately, your business may incur hidden but very real costs from fixing low-quality AI code, which will inevitably slow development and add to technical debt. The solution is clear: establish strict processes for controlling and validating AI-generated contributions. Otherwise, you risk falling behind while these hidden losses quietly mount.
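What might such a process look like in practice? Below is a minimal sketch of an automated pre-review gate that flags risky AI-assisted changes before they reach a human reviewer. All field names and thresholds here are illustrative assumptions, not a standard; a real project would wire something like this into its CI pipeline.

```python
# A hypothetical pre-review gate for AI-assisted changes.
# Thresholds and fields are illustrative assumptions, not a known standard.

from dataclasses import dataclass
from typing import List

@dataclass
class ChangeSet:
    lines_changed: int   # total lines added plus removed
    has_new_tests: bool  # does the change include tests?
    lint_errors: int     # linter findings on the diff
    ai_assisted: bool    # author declared AI assistance

def review_gate(change: ChangeSet) -> List[str]:
    """Return blocking issues; an empty list means the change
    may proceed to normal human review."""
    issues = []
    if change.lint_errors > 0:
        issues.append(f"fix {change.lint_errors} lint error(s) before review")
    if change.ai_assisted and not change.has_new_tests:
        issues.append("AI-assisted changes must ship with tests")
    if change.ai_assisted and change.lines_changed > 400:
        issues.append("split large AI-assisted changes to keep review tractable")
    return issues

# A large, untested AI-assisted change is blocked on two counts:
blocked = review_gate(ChangeSet(lines_changed=900, has_new_tests=False,
                                lint_errors=0, ai_assisted=True))
# A small, tested one passes straight to human review:
clean = review_gate(ChangeSet(lines_changed=120, has_new_tests=True,
                              lint_errors=0, ai_assisted=True))
```

The point of such a gate is not to replace human review but to protect it: reviewers spend their limited attention only on changes that have already cleared basic quality bars.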

Tags: Artificial Intelligence · AI Tools · Productivity · AI in Business · Open Source AI