
Speechmatics Achieves a World First in Bilingual Voice AI with New Arabic–English Medical Model

The new industry-leading bilingual model achieves 6.3% WER on mixed-speech benchmarks - 35% fewer errors than the nearest competitor - and the launch includes the world's first Arabic–English bilingual medical model.

CAMBRIDGE, England--(BUSINESS WIRE)--Speechmatics today launched its new Arabic–English bilingual model, a single production-ready model that handles Arabic dialects and English simultaneously. It can be deployed on-premises and on-device, supports speaker diarization and speaker focus, and runs across real-time and batch workflows.


As part of the rollout, Speechmatics introduces the world's first Arabic–English bilingual medical model: a specialized clinical variant trained on twice the vocabulary of its English Medical Model, built to ensure that patient records are always accurate and up to date.

Code-switching: going beyond monolingual AI

A doctor names a drug in English then switches back to Arabic. A Gulf contact center agent shifts registers without thinking. A finance officer moves across both languages in a single sentence. Across MENA, this is Monday morning.

Monolingual models weren't built for this. When a speaker shifts between Arabic and English mid-sentence, the model loses the thread - misattributing words, dropping terminology, or simply getting it wrong. In a contact center or a clinical setting, that's not an edge case. It's the norm.

The new model, tailored to support both languages, resolves this, with speaker diarization and speaker focus ensuring every word is attributed to the right speaker throughout.

In new benchmarking, Speechmatics achieves a 35% lower Word Error Rate than Google on Arabic–English code-switching tasks (6.3% vs 9.7%), making it the most accurate code-switching model available.

Dialect coverage that clears the field

Arabic carries distinct vocabulary, phonology, and rhythm across the region, including Gulf, Egyptian and Levantine dialects. Models trained on broadcast Modern Standard Arabic struggle the moment a real conversation starts.

Speechmatics leads major providers on Arabic-only transcription, delivering 24% lower Word Error Rate than Google (4.5% vs 5.9%) and outperforming OpenAI Whisper, AssemblyAI, Deepgram, Amazon, and Microsoft.
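For readers checking the arithmetic, the percentage improvements above follow directly from the quoted WER figures. A minimal sketch (the function name is illustrative, not from Speechmatics):

```python
def relative_wer_reduction(ours: float, theirs: float) -> float:
    """Relative reduction in Word Error Rate, as a percentage."""
    return (theirs - ours) / theirs * 100

# WER figures quoted in this release (in %)
code_switching = relative_wer_reduction(6.3, 9.7)  # Arabic–English code-switching vs Google
arabic_only = relative_wer_reduction(4.5, 5.9)     # Arabic-only transcription vs Google

print(f"Code-switching: {code_switching:.0f}% fewer errors")  # 35%
print(f"Arabic-only:    {arabic_only:.0f}% fewer errors")     # 24%
```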

Built for enterprise deployment

Data sovereignty is a hard requirement across MENA, with regional data protection legislation in Saudi Arabia, the UAE, and beyond placing strict obligations on where voice data is processed and stored. Speechmatics meets this directly.

The model deploys across cloud SaaS, on-premises, and on-device, powered by NVIDIA AI infrastructure and optimized through NVIDIA Dynamo-Triton for high-throughput, low-latency processing at scale. Sub-second latency is maintained across all deployment modes.

Real-time streaming and batch transcription run on the same model, removing the accuracy trade-off that typically comes with switching between the two. Speaker diarization, speaker focus, punctuated transcripts, and timestamped outputs are included as standard.

The world's first bilingual medical model

In clinical environments across MENA, English drug names, procedures, and dosages appear constantly inside Arabic speech. Generic models mishandle them, and those errors can land in the patient record.

Trained on twice the vocabulary of Speechmatics' English Medical Model, the world's first Arabic–English bilingual medical model incorporates English and Arabic clinical terminology, real dialect variation, and speech from actual clinical settings. It accurately transcribes ICD-10-CM codes, drug names, dosages, and clinical shorthand regardless of which language carries them. On-premises and on-device deployment make it viable for the regulated environments where clinical AI is increasingly being built across the region.

“This was critical to achieving meaningful outcomes for customers across the region who kept describing the same challenge. In a Cairo hospital or a Riyadh contact center, Arabic and English flow concurrently - the drug name arrives in English, the rest of the sentence is Arabic. Delivering significant impact meant removing that friction from voice interactions. We trained on real voices, real dialects and real clinical vocabulary - because that’s the only way to build something that truly works where it’s used.” - Katy Wigdahl, CEO, Speechmatics

"We ran extensive evaluations on complex clinical audio, including code-switching and dialect-heavy consultations common across MENA. Speechmatics' bilingual medical model was the only one that met the performance thresholds we require to maintain high-quality clinical documentation as we scale regionally. That alignment made the partnership a strong fit for our expansion." - Patrick Nguyen, Head of Engineering, MENA, Sully.ai

Both models are available now. Visit speechmatics.com for access and deployment options.

Contacts

Media Contact: Mieke Smith, Communications & Content Lead, Speechmatics, mieke.smith@speechmatics.com
