AI in Testing: The Scalability Trap We Can't Afford to Fall Into
- Vivek Upreti
- Sep 4
- 2 min read

AI is revolutionising many aspects of our work, and in the world of software development, it offers tempting promises. We hear about the potential for single-person billion-dollar companies emerging from AI's capabilities. But there's a flip side, especially in testing, where AI could inadvertently create single-person disasters.
It might seem like progress when one person can now spin out 100,000+ tests for a single application. Add to this the persistent push for "frontend-only" testing and the industry's obsession with "coverage vanity metrics," and it all feels like we're moving forward. But this isn't progress; it's a scalability trap.
The danger isn't hypothetical. We once experienced this firsthand, writing over 4,000 End-to-End (E2E) UI tests for a chat application without AI. The outcome? The maintenance cost almost killed the project. The return on investment (ROI) simply didn't justify the immense effort, and the sheer quantity of tests blinded us to the actual quality. Now, imagine this problem amplified 10x or more with AI.
The harsh truth is that AI doesn't magically fix broken testing strategies; it amplifies them. If your current approach already struggles to manage a large test suite, introducing AI to generate tests at an unprecedented scale will only worsen the problem. Before you celebrate that "AI wrote me 100k tests," pause and ask a fundamental question: "Is this useful, or just noise at scale?"
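One way to make that question concrete is classic test-suite minimisation: keep a generated test only if it exercises behaviour that no already-kept test covers. Here is a minimal sketch, not anything from the original project; the coverage fingerprints, names, and greedy strategy are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeneratedTest:
    name: str
    covered_branches: frozenset  # IDs of the code branches this test exercises

def filter_noise(tests: list) -> tuple:
    """Greedy minimisation: keep a test only if it covers at least one
    branch that no kept test already covers; everything else is noise."""
    kept, noise = [], []
    seen = set()
    # Consider broad tests first so narrow redundant variants fall through.
    for test in sorted(tests, key=lambda t: len(t.covered_branches), reverse=True):
        new_branches = test.covered_branches - seen
        if new_branches:
            kept.append(test)
            seen |= new_branches
        else:
            noise.append(test)
    return kept, noise

if __name__ == "__main__":
    suite = [
        GeneratedTest("login_happy_path", frozenset({"auth:1", "auth:2"})),
        GeneratedTest("login_happy_path_v2", frozenset({"auth:1"})),  # adds nothing new
        GeneratedTest("login_bad_password", frozenset({"auth:1", "auth:3"})),
    ]
    kept, noise = filter_noise(suite)
    print([t.name for t in kept])   # ['login_happy_path', 'login_bad_password']
    print([t.name for t in noise])  # ['login_happy_path_v2']
```

Whatever the greedy pass rejects added no new coverage; with 100k generated tests, that rejected pile tends to be most of the suite.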
The real challenge we face is this:
If AI makes test bloat effortless, how do we prevent our Quality Assurance (QA) efforts from drowning in their own overwhelming output?
The answer lies not in generating more tests, but in rethinking our testing strategy entirely, focusing on smart, targeted, and maintainable evaluations rather than sheer volume.
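What might "smart, targeted, and maintainable" look like in code? One plausible approach is a risk-based budget: rank tests by the churn of the code they guard and the real defects they have caught, discount flaky ones, and run only a fixed number. The sketch below is purely illustrative; the fields, weights, and `select_suite` helper are assumptions, not a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    churn: int          # recent commits touching the code under test
    past_failures: int  # real defects this test has caught historically
    flake_rate: float   # fraction of runs failing spuriously (0.0-1.0)

def risk_score(t: TestRecord) -> float:
    # Reward tests guarding churning code and tests that caught real bugs;
    # penalise flaky ones, which cost maintenance without adding signal.
    # The 5x weight on past failures is an arbitrary illustrative choice.
    return (t.churn + 5 * t.past_failures) * (1.0 - t.flake_rate)

def select_suite(tests: list, budget: int) -> list:
    """Keep only the `budget` highest-value tests; everything else waits."""
    return sorted(tests, key=risk_score, reverse=True)[:budget]

if __name__ == "__main__":
    candidates = [
        TestRecord("checkout_flow", churn=12, past_failures=3, flake_rate=0.02),
        TestRecord("footer_copyright_text", churn=0, past_failures=0, flake_rate=0.0),
        TestRecord("legacy_modal_animation", churn=1, past_failures=0, flake_rate=0.4),
    ]
    for t in select_suite(candidates, budget=2):
        print(t.name)  # checkout_flow, then legacy_modal_animation
```

The exact scoring matters less than the budget itself: it forces the "is this useful?" conversation before a test enters the suite, not after the maintenance cost explodes.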

