A joint team of researchers from Carnegie Mellon, MIT, Oxford, and UCLA has confirmed what skeptics have long suspected: AI is transitioning from a helpful assistant into a cognitive crutch. In a series of experiments, hundreds of participants solved math problems and logic tests. One group was provided with a 'smart' assistant capable of handling the tasks autonomously. The results were as predictable as they were alarming: the moment the safety net was removed, users froze. Gross errors multiplied, and the drive to solve problems evaporated at the first sign of friction.
As MIT Assistant Professor Michiel Bakker points out, we are witnessing a dangerous trade-off: short-term productivity gains are being bought at the price of fundamental skill degradation. AI functions like a GPS navigator that gets you to your destination but strips away your ability to navigate the world independently. The core issue is that learning and professional growth require resistance and the struggle to overcome errors. When an algorithm hands over a finished answer instead of guiding the thought process, the brain simply shuts down its problem-solving mode as a redundant function.
For business leaders, this isn't just an efficiency concern; it is a systemic risk to intellectual capital. Bakker, who previously worked at Google DeepMind, warns that delegating critical thinking to agentic systems leaves employees helpless during systemic failures. If a worker doesn't grasp the underlying logic of a process, they cannot fix errors generated by a 'hallucinating' model. There is a biting irony here: deploying technology to accelerate workflows may produce a staff of highly paid operators who are incapable of performing basic tasks without a chatbot's prompting.
Instead of turning employees into mere appendages of OpenAI’s servers, companies must implement skill-retention protocols and define clear boundaries of responsibility. AI should function as 'scaffolding'—supporting the structure while forcing the human to do the climbing. While developers try to curb the pathological 'sycophancy' of models, executives should ask themselves: will their teams become useless the moment the internet goes down or an API stops responding?
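One way to picture such a protocol is a thin gating layer that withholds the model's finished answer until the user has made a genuine attempt. The sketch below is purely illustrative: the ScaffoldingPolicy class, its method, and its thresholds are assumptions for the sake of the example, not any vendor's API. It simply shows the 'scaffolding' principle in a few lines of Python.

```python
from dataclasses import dataclass

@dataclass
class ScaffoldingPolicy:
    """Hypothetical gate that makes the human climb before the AI answers."""
    hints_before_answer: int = 3  # attempts required before a full answer is released

    def respond(self, attempts_made: int, hint: str, full_answer: str) -> str:
        # No effort yet: push the work back to the human.
        if attempts_made == 0:
            return "Try it yourself first; I will check your reasoning."
        # Some effort: offer a nudge, never the solution.
        if attempts_made < self.hints_before_answer:
            return f"Hint ({attempts_made}/{self.hints_before_answer}): {hint}"
        # Sustained effort: only now is the finished answer handed over.
        return full_answer

policy = ScaffoldingPolicy()
print(policy.respond(1, hint="Re-check the sign in step 2.", full_answer="x = -4"))
```

The point of the design is that the friction is deliberate: the assistant's default output is a question or a hint, and the finished answer is the exception rather than the baseline.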