The AI Tax: What Amazon’s Outage Means for ERPs

In recent months, I have been discussing the underlying friction between generic generative AI and rigid enterprise architectures. This week, a major industry event perfectly illustrated the dynamic I see playing out on the factory floor and within corporate IT departments.
According to a recent CNBC report, Amazon’s retail technology leadership recently convened an internal meeting to address a series of site outages affecting the checkout process and pricing displays. Internal memos pointed to a specific contributing factor: AI-assisted coding changes.
Developers had utilized generative AI tools to accelerate code deployment, inadvertently bypassing standard human oversight and guardrails. While the tools increased deployment speed, they also introduced critical errors. In response, Amazon updated its protocols to mandate that Senior Engineers rigorously review AI-assisted code generated by less experienced staff.
When I read this, the implication was clear: if a company with Amazon’s extensive cloud infrastructure experience encounters these operational challenges with unverified AI outputs, we must seriously question the way we are integrating generic AI assistants into complex manufacturing ERPs.
The AI Tax
This situation reflects a broader structural challenge that we encounter in virtually every digital transformation project today. A recent global study by Workday, titled “Beyond Productivity: Measuring the Real Value of AI,” finally puts a hard number on this issue, highlighting a metric they refer to as the AI Tax: 37%.
According to the report, 37% of the time employees save by using AI is subsequently spent on rework: correcting, auditing, and rewriting inaccurate AI outputs. For every 10 hours of efficiency gained, nearly 4 hours are reallocated to verifying the generated content.
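To make the report’s arithmetic concrete, here is a minimal sketch of the net-gain calculation implied by the 37% figure. The function name and structure are my own illustration, not anything taken from the Workday study:

```python
def net_ai_gain(hours_saved: float, ai_tax_rate: float = 0.37) -> float:
    """Net hours gained after subtracting rework time.

    ai_tax_rate is the share of "saved" time that goes back into
    correcting, auditing, and rewriting AI output (37% per the study).
    """
    rework_hours = hours_saved * ai_tax_rate
    return round(hours_saved - rework_hours, 2)

# For every 10 hours of raw efficiency, roughly 3.7 go back into rework:
print(net_ai_gain(10))  # 6.3
```

In other words, the headline “hours saved” metric overstates the real gain by more than half again its true value.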
The study notes that only 14% of employees consistently achieve net-positive outcomes from AI. From what I observe, the perceived productivity gains we are chasing often require a much closer, more critical scrutiny.
The Auditing Burden on Junior Profiles
The data provides fascinating insights into who actually absorbs this additional workload. Workday identified a demographic they term the “Low-Return Optimists”: predominantly younger professionals (aged 25 to 34) who are frequent and enthusiastic AI users.
This highlights a paradox that is increasingly common across the industry. Organizations often provide powerful generative tools to junior profiles, hoping for a rapid increase in senior-level output. However, without the operational experience and deep judgment developed through years of troubleshooting real supply chains, less experienced professionals naturally struggle to quickly identify when an AI misinterprets a complex business rule.
While they can generate initial drafts rapidly, significant time and energy must then be spent rigorously auditing the output to prevent systemic mistakes. Essentially, the manual verification shifts the workload rather than eliminating it. As demonstrated by the Amazon case, lacking a deep understanding of the underlying architecture can lead to deploying logic that disrupts core processes.
Precision is Non-Negotiable
A fundamental rule must govern our approach to these tools: we should leverage Generative AI after completing our foundational work, to refine or enhance it, never before, to generate it from scratch. Reversing that order triggers a well-documented cognitive trap known as automation bias. As recent 2024 and 2025 empirical studies on human-AI collaboration demonstrate, when a machine presents a completed output, human reviewers instinctively default to a hasty, superficial verification, uncritically accepting errors they would normally catch themselves.
Consider the classic example of a physical warehouse inventory count. If you provide users with a counting sheet that already displays the current quantities expected by the ERP, they will often skip the actual counting, or do it superficially, subconsciously trying to make their numbers match the system’s pre-determined value. We naturally default to the machine’s assumption.
In fields like content creation, an AI-generated error often means a quick fix. But in enterprise logistics and robust ERP systems like Infor LN, good enough is simply not a viable strategy. We manage Bills of Materials (BOM), cross-docking schedules, and strict Quality Management parameters. If a user relies on generative AI to update a production routing or adjust a lead time based on an incorrect pattern, it triggers a systemic chain reaction: you might end up ordering the wrong raw materials or completely halting a factory floor. In supply chain operations, precision is an absolute requirement. The Workday data confirms my field observations: rework is heavily concentrated in roles where accuracy is non-negotiable. You cannot bluff your way through a physical warehouse receipt.
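To illustrate how a single bad parameter propagates, here is a deliberately simplified back-scheduling sketch. The function and the 30-day versus 10-day lead times are hypothetical illustrations of the mechanism, not a reflection of any actual Infor LN API or planning logic:

```python
from datetime import date, timedelta

def planned_order_date(need_date: date, lead_time_days: int) -> date:
    """Back-schedule a purchase order release from the material need date."""
    return need_date - timedelta(days=lead_time_days)

need = date(2026, 5, 1)

# Vetted master data says the supplier needs 30 days:
correct = planned_order_date(need, lead_time_days=30)      # 2026-04-01
# An AI "optimization" quietly cut the lead time to 10 days:
ai_adjusted = planned_order_date(need, lead_time_days=10)  # 2026-04-21

print((ai_adjusted - correct).days)  # 20: material arrives 20 days late
```

One wrong field, silently accepted, and the material arrives three weeks after the production order needed it.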
Mitigating the AI Tax
Based on my experience mapping business flows, the objective is not to restrict AI usage. It is to transition our workforce toward what Workday terms the Augmented Strategist: experienced professionals who use AI to identify data patterns and assist decision-making, rather than delegating the core work entirely. I called this role the Algorithm Auditor in a previous article.
For business leaders looking to integrate AI safely into their operations, I strongly recommend these strategic adjustments:
- Re-evaluate hours saved: generating incorrect data faster is not a true productivity gain. Organizations must shift their metrics toward measuring actual outcomes and “first-pass yield” accuracy.
- Implement “Agentic Engineering” protocols: never allow an AI to write directly to core SaaS databases without deterministic guardrails. As I highlighted in my recent analysis on the dangers of autonomous agents in ERP systems, validation from subject matter experts who understand the physical realities of your business remains absolutely essential.
- Reinvest in Human Expertise: rather than allocating all AI-driven cost savings into further technology infrastructure, companies must reinvest in workforce training. The real operational bottleneck is not the AI’s generation speed, but the capacity of your human workforce to accurately review and validate its output.
- Maintain Data Governance: high-quality output requires high-quality input. If your legacy ERP data is unstructured or outdated, AI tools will simply process and scale your existing chaos. Read my previous article if you want to know more.
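As a sketch of what a deterministic guardrail from the second point might look like, the following fragment gates AI-originated writes to core planning tables behind mandatory human review. All names here (the tables, the threshold, the ProposedChange structure) are hypothetical illustrations of the pattern, not part of any real ERP API:

```python
from dataclasses import dataclass

@dataclass
class ProposedChange:
    table: str
    field: str
    old_value: float
    new_value: float
    source: str  # "human" or "ai_agent"

# Illustrative names for tables too critical for unreviewed automation:
PROTECTED_TABLES = {"routing", "bom", "item_lead_time"}

def requires_human_review(change: ProposedChange,
                          max_relative_delta: float = 0.10) -> bool:
    """Deterministic guardrail: AI-originated writes to core planning
    tables, or unusually large relative changes, go to an expert first."""
    if change.source == "ai_agent" and change.table in PROTECTED_TABLES:
        return True
    if change.old_value and (
        abs(change.new_value - change.old_value) / abs(change.old_value)
        > max_relative_delta
    ):
        return True
    return False

change = ProposedChange("item_lead_time", "days", 30, 10, source="ai_agent")
print(requires_human_review(change))  # True
```

The important property is that the rule is deterministic and auditable: no probabilistic model decides whether a change bypasses review.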
The lesson I personally take from Amazon’s recent protocol change, which reinstated human review procedures, is quite obvious: sustainable enterprise AI is not about generating volume. It is about building safe, governed, and highly skilled structures around the technology.
These boundaries are worth pushing, and you have to get your hands on the tools, while understanding that human mastery remains a critical component of true enterprise value.
Written by Andrea Guaccio
April 3, 2026