
CodeRabbit’s “State of AI vs Human Code Generation” Report Finds That AI-Written Code Produces ~1.7x More Issues Than Human Code

Review of AI-coauthored PRs and human-only PRs finds AI-generated PRs have more bugs, more security risks, and heavier review tails

SAN FRANCISCO--(BUSINESS WIRE)--CodeRabbit, the leading AI-powered code review platform, today released the “State of AI vs Human Code Generation”, a comprehensive new report analyzing the quality of AI-generated code in real-world software development. The study, which analyzed 470 real-world open source pull requests, found that AI-generated code introduces significantly more defects across every major category of software quality – including logic, maintainability, security, and performance – compared to human-authored code. The report can be downloaded here.


Despite several high-profile 2025 postmortems identifying AI-authored or AI-assisted changes as contributing factors, before this report, there was little hard data on which issues AI introduces most often or how those patterns differ from human-written code. This study fills that gap and provides clear insight into the specific risks and failure modes present in AI-generated pull requests.

Key Findings:

  • AI-generated PRs contain ~1.7x more issues on average than human-written PRs.
  • Critical and major defects are up to 1.7x higher in AI-authored changes.
  • Logic and correctness issues rise 75%, including business logic errors, misconfigurations, and unsafe control flow.
  • Security vulnerabilities rise 1.5–2x, especially improper password handling and insecure object references.
  • Code readability problems increase more than 3x, with elevated naming and formatting inconsistencies.
  • Performance inefficiencies, such as excessive I/O, appear nearly 8x more often in AI-generated code.

“These findings reinforce what many engineering teams have sensed throughout 2025,” said David Loker, Director of AI, CodeRabbit. “AI coding tools dramatically increase output, but they also introduce predictable, measurable weaknesses that organizations must actively mitigate.”

The use of AI code generation is rapidly increasing, with over 90% of developers now reporting that they use these tools to boost productivity and handle routine tasks. Companies can see significant gains, such as 10% faster engineering speed and major reductions in time spent on repetitive work, and that value continues to grow. To help organizations mitigate risks, the report also outlines practical steps for teams adopting AI-assisted development, including:

  • Project-context prompts - AI makes more mistakes when it lacks business rules, configuration patterns, or architectural constraints. Provide prompt snippets, repo-specific instruction capsules, and configuration schemas;
  • Policy-as-code for style - Readability and formatting were some of the biggest gaps. CI-enforced formatters, linters, and style guides eliminate entire categories of AI-driven issues before review;
  • Stricter CI enforcement - Given the rise in logic and error-handling issues: require tests for non-trivial control flow, mandate nullability/type assertions, standardize exception-handling rules, and explicitly prompt for guardrails, where needed;
  • Enhanced security scanning - Mitigate elevated vulnerability rates by centralizing credential handling, blocking ad-hoc password usage, and running SAST and security linters automatically; and,
  • AI-aware PR checklists - Reviewers should explicitly ask whether error paths are covered, concurrency primitives are correct, configuration values are validated, and passwords are handled via the approved helper. These questions target the areas where AI is most error-prone.
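The policy-as-code and CI-enforcement steps above can be sketched as a small merge gate that runs each configured check and blocks the build if any fail. This is a minimal illustration, not the report's tooling; the tool names (black, ruff) are assumptions, and the runner is injectable so the gate can be exercised without those tools installed.

```python
import subprocess

# Illustrative policy checks (tool names are assumptions; substitute
# whichever formatter and linter your project standardizes on).
CHECKS = [
    ["black", "--check", "."],  # fail if formatting has not been applied
    ["ruff", "check", "."],     # fail on lint-rule violations
]

def run_checks(checks, runner=subprocess.call):
    """Run each check command and return the list of commands that failed.

    `runner` must return the command's exit status (0 = pass); it is a
    parameter so the gate can be tested with a stub instead of real tools.
    """
    return [cmd for cmd in checks if runner(cmd) != 0]
```

In CI, a wrapper script would exit nonzero whenever `run_checks(CHECKS)` is non-empty, so the pipeline blocks the merge before a human reviewer ever sees style-level issues.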

About the Report

The analysis draws exclusively from 470 open-source GitHub PRs, using CodeRabbit’s structured review taxonomy to classify issues across logic, maintainability, security, and performance categories. The PRs include 320 that were labelled as AI-coauthored and 150 as human-only. Statistical comparisons were made using normalized issue rates and Poisson rate ratios with 95% confidence intervals.
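The Poisson rate-ratio comparison described above can be sketched as follows. The counts used here are hypothetical, chosen only to produce a ratio near the report's headline ~1.7x; the report's actual per-category counts are not published in this release. The confidence interval is a standard Wald interval on the log rate ratio.

```python
import math

def poisson_rate_ratio(x1, n1, x2, n2, z=1.96):
    """Rate ratio of two Poisson counts with a Wald 95% CI.

    x1, x2: issue counts in each group; n1, n2: exposure (e.g. PRs reviewed).
    The CI is computed on the log scale with SE = sqrt(1/x1 + 1/x2),
    then exponentiated back.
    """
    rr = (x1 / n1) / (x2 / n2)
    se = math.sqrt(1.0 / x1 + 1.0 / x2)
    lower = rr * math.exp(-z * se)
    upper = rr * math.exp(z * se)
    return rr, lower, upper

# Hypothetical counts (not from the report): 544 issues across 320 AI PRs
# vs. 150 issues across 150 human-only PRs yields a rate ratio of 1.7.
rr, lower, upper = poisson_rate_ratio(544, 320, 150, 150)
```

A rate ratio whose confidence interval excludes 1.0 indicates the difference in issue rates is unlikely to be chance alone.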

Supporting Resources

To learn more about AI-powered code review and stay up to date on the latest resources and features, check out:

About CodeRabbit

CodeRabbit is the category-defining platform for AI code reviews, built for modern engineering teams navigating the rise of AI-generated development. By delivering context-aware reviews that pull in dozens of points of context, CodeRabbit provides the most comprehensive reviews coupled with customization features to tailor your review to your codebase and reduce the noise. CodeRabbit helps organizations catch bugs, strengthen security, and ship reliable code at speed. Trusted by thousands of companies and open-source projects worldwide, CodeRabbit is backed by Scale Venture Partners, NVentures: NVIDIA's venture capital arm, CRV, Harmony Partners, Flex Capital, Engineering Capital and Pelion Venture Partners. Learn more at www.coderabbit.ai.

Contacts

Media Contact
Heather Fitzsimmons
heather@mindsharepr.com
650-279-4360
