-

COMPL-AI Identifies Critical Compliance Gaps in DeepSeek Models Under the EU AI Act

The evaluation, performed by LatticeFlow AI, reveals DeepSeek distilled models lag behind proprietary models in cybersecurity and bias, while excelling in toxicity prevention

ZURICH--(BUSINESS WIRE)--COMPL-AI, the first evaluation framework for Generative AI models under the EU AI Act, has flagged critical compliance gaps in DeepSeek's distilled models. While these models excel in toxicity prevention, they fall short in key regulatory areas, including cybersecurity vulnerabilities and bias mitigation challenges, raising concerns about their readiness for production use by enterprises.

Developed by ETH Zurich, INSAIT, and LatticeFlow AI, COMPL-AI is the first compliance-centered framework that translates regulatory requirements into actionable technical checks. It provides independent, systematic evaluations of public foundation models from leading AI organizations, including OpenAI, Meta, Google, Anthropic, Mistral AI, and Alibaba, helping companies assess their compliance readiness under the EU AI Act.

Key Insights from DeepSeek´s Compliance Evaluation

Leveraging COMPL-AI, LatticeFlow AI assessed the EU AI Act compliance readiness of two DeepSeek distilled models:

- DeepSeek R1 8B (based on Meta’s Llama 3.1 8B)
- DeepSeek R1 14B (built on Alibaba’s Qwen 2.5 14B)

The evaluation benchmarked those DeepSeek models against the EU AI Act’s regulatory principles, comparing their performance not only to their base models but also to models from OpenAI, Google, Anthropic, and Mistral AI, all featured on the COMPL-AI leaderboard.

Key findings are:

  • Cybersecurity Gaps: The evaluated DeepSeek models rank lowest in the leaderboard for cybersecurity and show increased risks in goal hijacking and prompt leakage protection in comparison to their base models.
  • Increased Bias: DeepSeek models rank below average in the leaderboard for bias and show significantly higher bias than their base models.
  • Good Toxicity Control: The evaluated DeepSeek models perform well in toxicity mitigation, outperforming their base models.

(Full DeepSeek evaluation results are available at https://compl-ai.org).

“As corporate AI governance requirements tighten, enterprises need to bridge internal AI governance and external compliance with technical evaluations to assess risks and ensure their AI systems can be safely deployed for commercial use,” said Dr. Petar Tsankov, CEO and Co-founder of LatticeFlow AI. “Our evaluation of DeepSeek models underscores a growing challenge: while progress has been made in improving capabilities and reducing inference costs, one cannot ignore critical gaps in key areas that directly impact business risks – cybersecurity, bias, and censorship. With COMPL-AI, we commit to serving society and businesses with a comprehensive, technical, transparent approach to assessing and mitigating AI risks.”

About COMPL-AI

COMPL-AI offers the first technical interpretation of the EU AI Act and an open-source framework leveraging 27 state-of-the-art benchmarks for evaluating LLMs against regulatory requirements. It has already been used to assess models from OpenAI, Meta, Google, Anthropic, and Alibaba, providing unprecedented insights into their compliance readiness.

About LatticeFlow AI

LatticeFlow AI enables enterprises to ensure AI systems are performant, trustworthy, and compliant. As a pioneer in AI evaluations, LatticeFlow AI has developed COMPL-AI – the world’s first EU AI Act compliance evaluation framework, developed in partnership with ETH Zurich and INSAIT. Globally recognized for its impact, LatticeFlow AI has received the US Army Global Award and has been named on CB Insights’ AI100 list of the world’s most innovative AI companies.

Contacts

Media Enquiries:
Gloria Fernandez, Marketing Director
media@latticeflow.ai
LatticeFlow AI

LatticeFlow AI



Contacts

Media Enquiries:
Gloria Fernandez, Marketing Director
media@latticeflow.ai
LatticeFlow AI

More News From LatticeFlow AI

LatticeFlow AI Integrates with Vanta to Streamline AI Risk Management

ZURICH--(BUSINESS WIRE)--LatticeFlow AI, the Swiss deep-tech company advancing trustworthy and compliant AI, today announced an integration with Vanta, the leading AI-trust management platform. Through this integration, LatticeFlow AI demonstrates its commitment to helping customers strengthen their security posture while effectively managing the risks associated with AI adoption. For many enterprises, AI projects stall because vendors cannot supply verifiable artifacts for governance, risk, an...

Open GenAI Models Proven Secure for Enterprise Adoption, New Evaluation Shows

ZÜRICH--(BUSINESS WIRE)--A new evaluation led by LatticeFlow AI, in collaboration with SambaNova, provides the first quantifiable evidence that open-source GenAI models, when equipped with proper risk guardrails, can meet or exceed the security levels of closed models, making them suitable for implementation in a wide range of use cases, including highly-regulated industries such as financial services. The evaluation assessed the top five open models, measuring their security before and after a...

LatticeFlow AI Sets a New Standard for AI Governance With Evidence-Based Technical Assessments

ZÜRICH--(BUSINESS WIRE)--LatticeFlow AI, the Swiss deep-tech company advancing trustworthy and compliant AI, today announced the Early Access Program for AI GO!, the first platform that enables AI Governance Operations through deep technical assessments. With this, LatticeFlow AI sets a new standard for AI governance and compliance, enabling organizations to accelerate their AI advantage. Traditional Governance, Risk and Compliance (GRC) systems, based on checklists, are insufficient to govern...
Back to Newsroom