AI Coding Tools Increase Bugs

Despite their growing popularity among developers, AI coding tools introduce considerably more bugs and vulnerabilities into software than human developers do. Recent research finds that AI-generated pull requests contain an average of 10.83 issues, compared with 6.45 in human-written pull requests: roughly 1.7 times as many overall, with critical issues appearing 1.4 times more frequently. At the 90th percentile the gap is even more striking, with 26 issues detected in AI pull requests versus 12.3 in human ones.

AI coding tools may save time, but they ship with nearly twice as many bugs as code written by human developers.

The problems aren’t limited to general bugs. Logic and correctness errors appear 1.75 times more often in AI code. Code quality and maintainability issues occur at 1.64 times the rate of human-written code. Security vulnerabilities show up 1.57 times more frequently, while performance errors are 1.42 times more common. These increased issue rates directly translate to longer code review times for development teams. Organizations must recognize these predictable weaknesses and develop active mitigation strategies to address them.

Security vulnerabilities in AI-generated code present particular concerns. Common problems include improper password handling, insecure object references, cross-site scripting (XSS) vulnerabilities, and insecure deserialization. Research indicates that over 40% of AI-generated code contains security flaws, with specific weaknesses such as missing authentication (CWE-306), broken access control (CWE-284), and hard-coded credentials (CWE-798) appearing regularly in code from tools like GitHub Copilot and Cursor.
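The hard-coded credentials pattern (CWE-798) is easy to illustrate. A minimal sketch of the fix, assuming a hypothetical `DB_PASSWORD` environment variable name, is to fail loudly when the secret is absent rather than embedding it in source:

```python
import os

# Flawed pattern often seen in generated code (CWE-798):
#   conn = connect("db.example.com", user="admin", password="s3cret")
# The credential lands in version control and every build artifact.

def get_db_password() -> str:
    """Read the credential from the environment instead of hard-coding it."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Fail fast so a missing secret is caught at startup, not at runtime.
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```

A secrets manager or vault is the stronger production answer; the point here is only that generated code tends to skip even this baseline step.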

Dependency management creates additional risks when using AI coding tools. Simple prompts can lead to dependency explosion, with even basic applications like to-do list apps incorporating 2-5 backend dependencies. AI tools frequently recommend stale libraries with known CVEs that weren’t addressed before the model’s training cutoff date. This markedly expands the attack surface for potential exploits.
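One mitigation is to screen pinned dependencies against an advisory feed before installing anything an AI tool suggests. The sketch below uses a hypothetical in-memory advisory table (`KNOWN_VULNERABLE` is illustrative data, not a real feed); in practice a tool such as `pip-audit` would supply the advisories:

```python
# Illustrative advisory data only; a real check would query a CVE feed.
KNOWN_VULNERABLE = {
    "requests": {"2.5.0"},
    "pyyaml": {"5.3.1"},
}

def flag_vulnerable(requirements: str) -> list[str]:
    """Return the pinned requirement lines that match a known-bad version."""
    flagged = []
    for line in requirements.splitlines():
        line = line.strip()
        # Skip blanks, comments, and unpinned specifiers.
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        if version in KNOWN_VULNERABLE.get(name.lower(), set()):
            flagged.append(line)
    return flagged
```

Running this over an AI-generated `requirements.txt` before the first install is a cheap gate against the stale-library problem described above.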

Perhaps most concerning is the phenomenon of hallucinated dependencies, where AI suggests packages that don’t actually exist. This creates opportunities for slopsquatting attacks, where malicious actors register these non-existent package names and insert malware. Developers who blindly install these recommendations risk giving attackers access to their systems or build pipelines, potentially compromising entire software supply chains.
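A simple defense against slopsquatting is to confirm a suggested package actually exists in the registry before installing it. This sketch checks PyPI's public JSON API (which returns HTTP 404 for unknown packages); the `fetch` parameter is injected so the check can be tested without network access:

```python
from urllib.request import urlopen
from urllib.error import HTTPError

def package_exists(name: str, fetch=urlopen) -> bool:
    """Check PyPI's JSON API before installing an AI-suggested package."""
    try:
        fetch(f"https://pypi.org/pypi/{name}/json")
        return True
    except HTTPError as err:
        if err.code == 404:
            # Unknown package: a prime slopsquatting target if you install blind.
            return False
        raise
```

Existence alone does not prove a package is safe (an attacker may already have registered the hallucinated name), so this check belongs alongside, not instead of, reviewing the package's maintainers and download history.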
