Where Have All the QA Gone?
The Systematic Elimination of QA Teams and Why AI Can’t Fill the Gap
The Quiet Exodus That’s Breaking Your Software
Something strange is happening in the software industry. Applications are shipping faster than ever, but they’re also breaking more often. Login screens freeze. Payment systems glitch. Updates that once felt polished now arrive with a rough, unfinished quality that users increasingly accept as normal. Behind this shift is a workforce change that few outside the industry have noticed: the systematic elimination of Quality Assurance teams.
In March 2023, Indeed — the job listings platform — laid off 15% of its workforce and eliminated its QA function entirely. According to reports from engineers who remained, the overall quality of tests immediately nosedived. Just over a year later, Figma’s CEO had to publicly apologize after an AI feature launched without proper QA vetting, walking into a PR disaster that could have been avoided with more thorough testing.
These aren’t isolated incidents. They’re symptoms of an industry-wide philosophy that has quietly taken hold: the belief that QA is an expendable cost center rather than an essential safeguard.
The Economic Squeeze
The math seems simple on a spreadsheet. A senior QA engineer might cost a company $100,000 or more annually. Multiply that by a team of ten or twenty, and you’re looking at significant overhead. When executives hunt for budget cuts, QA departments present a tempting target — their value is preventative, which makes it invisible until something goes wrong.
The tech industry’s boom years masked this problem. When growth was the only metric that mattered and venture capital flowed freely, companies could afford redundancy. But as interest rates rose and the market corrected, the belt-tightening began. According to TechCrunch’s comprehensive tracking, layoffs have swept through tech companies of all sizes throughout 2024 and 2025, and QA teams have been disproportionately affected.
The reasoning often sounds reasonable in a boardroom: developers can write their own tests. Automation can handle regression testing. AI can catch bugs faster than humans. Agile methodologies mean everyone owns quality. The discipline that took decades to develop was dismantled in months.
As one industry commentator bluntly put it: companies just said “we’re no longer going to have that function, figure it out.” It was disrespectful and demoralizing to folks who had spent their careers in QA.
The Promise of AI (And Its Limits)
Into this void stepped artificial intelligence, promising to revolutionize software testing. The pitch is seductive: AI-driven tools can generate test cases automatically, identify patterns humans miss, and run thousands of scenarios in the time it takes a manual tester to verify a single feature. Industry projections suggest AI adoption in QA will continue climbing, with some estimates placing AI-related QA spending at 40% of central IT budgets by 2025.
The technology genuinely works for certain tasks. AI excels at processing large data sets, recognizing patterns, completing repetitive assessments, and applying predictive analysis. Tools powered by machine learning can analyze code in real time, identifying potential bugs as developers write. Self-healing test scripts can automatically adapt when application interfaces change, reducing maintenance overhead.
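To make the “self-healing” idea concrete, here is a minimal sketch of the fallback-locator pattern these tools build on, assuming Selenium WebDriver and a hypothetical login page. Commercial products layer learned models on top of this, but the core move is the same: when the preferred locator breaks, try the next known-good one instead of failing the run.

```python
# Illustrative sketch only: a simplified "self-healing" locator strategy,
# assuming Selenium WebDriver. Real tools rank candidate locators with
# learned models; this version simply falls back through alternatives.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_fallbacks(driver, locators):
    """Try each (by, value) locator in order; return the first element found."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # UI changed? try the next known-good locator
    raise NoSuchElementException(f"No locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page
submit = find_with_fallbacks(driver, [
    (By.ID, "submit-btn"),                          # preferred, but IDs change often
    (By.CSS_SELECTOR, "form button[type=submit]"),  # structural fallback
    (By.XPATH, "//button[contains(., 'Log in')]"),  # last resort: visible text
])
submit.click()
```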
But here’s what the marketing materials don’t emphasize: AI frequently fails where it matters most.
The fundamental limitation is that AI testing tools operate based on predefined rules and historical data. They can verify whether a feature works as expected under conditions that have been anticipated, but they struggle with the unexpected. Machines focus on structural components, not how humans actually perceive and use applications. An AI cannot judge whether an interface is user-friendly or assess genuine engagement with a product.
Consider accessibility testing. AI can find missing buttons or links, but it might overlook poor color contrast that makes text hard to read — something a human tester would immediately recognize as a problem. Or take ethical concerns: a recruiting application might pass all automated tests while containing biases in its training data that lead to discrimination against certain user groups. The bug isn’t a glitch; it’s baked into the system’s logic in ways that pattern-matching algorithms cannot detect.
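The contrast point is easy to make concrete. The check itself is just arithmetic, as the sketch below shows, but a generated suite focused on structural assertions (the button exists, the link resolves) will not run it unless a person decides it matters; a human auditor flags unreadable grey-on-white text on sight. The formula follows WCAG 2.x; the colors are an illustrative example.

```python
# Hedged sketch: the WCAG 2.x contrast-ratio calculation. The math is simple;
# the gap is that structure-focused generated tests rarely think to apply it.
def relative_luminance(rgb):
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Light grey text on a white background: the element renders fine,
# but the ratio (~2.3:1) falls well below the WCAG AA minimum of 4.5:1.
print(f"{contrast_ratio((170, 170, 170), (255, 255, 255)):.2f}:1")
```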
The Bugs AI Can’t Catch
The hidden failures of AI testing reveal themselves in predictable categories.
First, there’s contextual understanding. Many software defects arise from scenarios that fall outside historical data or are deeply tied to business logic and human context. An AI system might successfully identify a repeatable defect in code but miss subtle integration issues that only surface when business rules interact with edge cases that weren’t anticipated during training.
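A hedged example of what that looks like in practice: the billing-date test below encodes a rule (clamp to the last day of short months) that a human who knows the business asks about immediately, but that rarely surfaces in the historical data a test generator learns from. The function, dates, and rules are hypothetical, written in pytest style.

```python
# Hypothetical billing rule: bill on the same day next month, clamped to
# that month's last day. The edge cases below come from domain knowledge,
# not from patterns in past test runs.
from datetime import date
import calendar

def next_billing_date(start: date) -> date:
    year = start.year + (start.month // 12)
    month = start.month % 12 + 1
    day = min(start.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def test_billing_date_clamps_to_short_month():
    # A tester who knows the billing rules asks: "what about January 31?"
    assert next_billing_date(date(2025, 1, 31)) == date(2025, 2, 28)

def test_billing_date_handles_leap_year():
    assert next_billing_date(date(2024, 1, 31)) == date(2024, 2, 29)
```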
Then there’s exploratory testing — the intuitive, curiosity-driven investigation that skilled QA professionals perform. Testers follow instincts, vary their actions, and uncover unexpected bugs by thinking like users who don’t read manuals. Automation cannot replicate this behavior because it follows fixed, repeatable scripts. The best bugs are often found by people asking “what if?” in ways that never occurred to the developers.
User experience presents another blind spot. Automation tools cannot evaluate emotional reactions, accessibility challenges, or visual appeal. These aspects require real users or manual testers who bring domain knowledge and can assess whether an issue affects business value — not just whether code executes without throwing an error.
Perhaps most critically, AI testing can create a false sense of security. Teams relying solely on AI-powered tools may miss important defects that could have significant impacts on functionality and user experience. Human testers catch inconsistencies by drawing on accumulated experience with similar products; AI relies only on limited training data and mathematical models. The more advanced the technology gets, the harder it becomes to verify its results, and the riskier overreliance becomes.
The Real-World Consequences
The CrowdStrike incident of July 2024 stands as a stark reminder of what happens when testing fails. A routine update caused failures across 8.5 million Windows devices, leaving critical sectors — banking, healthcare, transportation — in chaos. Downtime extended up to 72 hours for major organizations, with financial losses estimated at $3 billion. The root cause? A misconfigured update that clashed with existing Windows configurations — exactly the kind of complex, environment-specific issue that comprehensive QA processes are designed to catch.
Beyond spectacular failures, there’s a quieter erosion of quality that users experience daily. Software failures cost US companies $2.41 trillion in 2022, according to one industry report — a figure that has only grown. These failures result in financial losses, higher expenses for defect fixing, and the risk of customers migrating to competitors.
The pattern is predictable: a company eliminates or reduces its QA team, product quality starts to slip almost immediately, bugs creep in and take longer to fix, and eventually the technical debt becomes impossible to ignore.
Why Human Testers Still Matter
The attributes that make great QA professionals cannot be replicated by algorithms. Curiosity drives testers to ask questions that machines never consider. Domain expertise helps them understand which bugs matter to users and which are merely technical anomalies. Communication skills allow them to advocate for quality across teams and explain risks to stakeholders who don’t speak code.
Human testers bring adaptability. They can respond in real time to unexpected behavior without needing scripts rewritten. They bring judgment about whether an issue affects business value. They perform exploratory testing driven by creativity and intuition — exploring unexpected paths, catching edge-case bugs, and evaluating overall usability in ways automation simply cannot.
The most effective QA strategies combine automation with human oversight. AI handles repetitive tasks — regression testing, pattern recognition, continuous monitoring — while skilled testers focus on strategic work: security testing, accessibility audits, usability evaluation, and the creative exploration that catches bugs before users do.
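One illustrative way to encode that split in a codebase, with hypothetical markers and names: deterministic regression checks run on every commit, while judgment-heavy work is tracked explicitly as exploratory charters for humans rather than quietly dropped.

```python
# Illustrative only: one possible division of labor inside a test suite.
# The marker and charter names are made up for this sketch.
import pytest

@pytest.mark.regression  # cheap, deterministic; run by CI on every commit
def test_cart_total_includes_tax():
    subtotal, tax_rate = 100.00, 0.08
    assert round(subtotal * (1 + tax_rate), 2) == 108.00

# Not meaningfully automatable; surfaced to human testers each release.
EXPLORATORY_CHARTERS = [
    "Checkout with an expired saved card on a flaky connection",
    "Screen-reader pass over the redesigned onboarding flow",
    "Is the failed-payment error message clear to a non-technical user?",
]
```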
A Path Forward
The discipline of QA isn’t dying — it’s evolving. But that evolution requires investment, not elimination.
Companies should resist the temptation to treat AI testing as a replacement for human expertise. Instead, the technology works best as an enhancement — handling the tedious work so that QA professionals can focus on what humans do best. Continuous validation of AI tools remains essential, with feedback loops that improve models over time while human oversight catches the gaps.
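What such a feedback loop can look like, as a minimal sketch with hypothetical field names: track which production defects the AI-generated suite failed to cover, so the team can see where the tooling is blind and where human attention should go.

```python
# Minimal sketch of a defect "escape rate" metric for AI-generated tests.
# Field names and data are hypothetical.
from dataclasses import dataclass

@dataclass
class Defect:
    id: str
    found_in_production: bool
    covered_by_ai_tests: bool

def escape_rate(defects: list[Defect]) -> float:
    """Share of production defects the AI-generated tests did not cover."""
    prod = [d for d in defects if d.found_in_production]
    if not prod:
        return 0.0
    escaped = [d for d in prod if not d.covered_by_ai_tests]
    return len(escaped) / len(prod)

defects = [
    Defect("BUG-101", True, False),   # missed by the AI suite: an escape
    Defect("BUG-102", True, True),
    Defect("BUG-103", False, True),   # caught before release
]
print(f"Escape rate: {escape_rate(defects):.0%}")  # 50%
```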
For QA professionals navigating this shifting landscape, the path forward involves adaptation. Specializations in security testing, AI model validation, performance testing, and ethical testing are increasingly in demand. The role of QA is transforming from script execution to strategic oversight — guiding AI on what to test, interpreting results, and bringing the contextual judgment that machines lack.
For organizations, the lesson is straightforward: the “one weird trick” to faster software delivery was never firing your testers. Quality assurance is not a cost center to be minimized — it’s a competitive advantage that protects reputation, revenue, and user trust.
The next time you encounter a buggy app, a broken update, or a feature that clearly wasn’t tested with real users in mind, remember: somewhere, a QA team was probably cut to make the numbers work. And somewhere else, an AI is confidently reporting that all tests passed.



Really solid case for why the layoffs created a quality crisis rather than efficiency gains. The CrowdStrike example crystallizes something crucial: AI testing tools basically optimize for passing tests rather than catching the multi-system integration bugs that actually break production. It's a bit like teaching to the test instead of understanding the material. The false-confidence layer is probably more dangerous than having no automation at all.