AI Coding Tools Can Increase Bugs and Vulnerabilities, Research Shows


Despite their growing popularity among developers, AI coding tools are introducing considerably more bugs and vulnerabilities into software than human developers do. Recent research reveals that AI-generated pull requests contain an average of 10.83 issues, compared to 6.45 in human-authored code: 1.7 times more issues overall, with critical issues appearing 1.4 times more frequently. At the 90th percentile the gap is even more striking, with 26 issues detected in AI pull requests versus 12.3 in human ones.

AI coding tools may save time, but they’re shipping with nearly twice the bugs of their human counterparts.

The problems aren’t limited to general bugs. Logic and correctness errors appear 1.75 times more often in AI code. Code quality and maintainability issues occur at 1.64 times the rate of human-written code. Security vulnerabilities show up 1.57 times more frequently, while performance errors are 1.42 times more common. These increased issue rates directly translate to longer code review times for development teams. Organizations must recognize these predictable weaknesses and develop active mitigation strategies to address them.

Security vulnerabilities in AI-generated code are a particular concern. Common problems include improper password handling, insecure direct object references, cross-site scripting (XSS), and insecure deserialization. Research indicates that over 40% of AI-generated code contains security flaws, with issues such as missing authentication (CWE-306), broken access control (CWE-284), and hard-coded credentials (CWE-798) appearing regularly in code from tools like GitHub Copilot and Cursor. This mirrors the challenges seen in ITSM integration, where standardized frameworks help maintain compliance and reduce technical vulnerabilities.
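To make the hard-coded-credential pattern (CWE-798) concrete, here is a minimal Python sketch contrasting the anti-pattern with an environment-variable lookup. The variable name `DB_PASSWORD` is a hypothetical example, not something taken from the cited research.

```python
import os

# Anti-pattern often flagged in AI-generated code (CWE-798):
# a secret embedded directly in source control.
# BAD: DB_PASSWORD = "s3cret-hunter2"

def get_db_password() -> str:
    """Read the credential from the environment instead of source code."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Failing loudly beats silently falling back to a default secret.
        raise RuntimeError("DB_PASSWORD is not set; refusing to continue")
    return password
```

Keeping secrets out of source entirely (environment variables, or a secrets manager) also keeps them out of the training data of future models.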

Dependency management creates additional risk when using AI coding tools. Simple prompts can lead to dependency explosion, with even basic applications like to-do list apps pulling in 2-5 backend dependencies. AI tools also frequently recommend stale libraries carrying known CVEs, since fixes published after the model's training cutoff date are invisible to it. This markedly expands the attack surface for potential exploits.

Perhaps most concerning is the phenomenon of hallucinated dependencies, where AI suggests packages that don’t actually exist. This creates opportunities for slopsquatting attacks, where malicious actors register these non-existent package names and insert malware. Developers who blindly install these recommendations risk giving attackers access to their systems or build pipelines, potentially compromising entire software supply chains.
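A simple defense against hallucinated or typosquatted package names is to gate installs on an allowlist of dependencies your team has actually vetted. The package names below are hypothetical examples; a real setup would keep the list in version control or enforce it through a private package index.

```python
# Assumed example allowlist; maintain yours alongside the codebase.
VETTED_PACKAGES = {"requests", "flask", "sqlalchemy"}

def is_safe_to_install(package_name: str) -> bool:
    """Reject anything not explicitly vetted, including hallucinated names."""
    return package_name.strip().lower() in VETTED_PACKAGES

# An AI suggestion list; the second package is invented to mimic
# a plausible-sounding hallucination.
suggestions = ["requests", "flask-easy-auth-pro"]
approved = [pkg for pkg in suggestions if is_safe_to_install(pkg)]
```

The point is not the allowlist itself but the habit: never pipe an AI-suggested package name straight into an install command without verifying it exists and is the package you think it is.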
