Writer AI Large Language Models Achieve Top Scores on Stanford HELM

Benchmark results reinforce Palmyra as the enterprise-ready LLM family, offering the transparency and accuracy required for enterprise generative AI use cases

SAN FRANCISCO--Writer, the leading generative AI platform for enterprises, announced today that Palmyra, its family of large language models (LLMs), has achieved top benchmark scores from Stanford’s Holistic Evaluation of Language Models (HELM), demonstrating its leadership in the generative AI field.

In key benchmark tests, Palmyra outperformed models by OpenAI, Cohere, Anthropic, and Microsoft, as well as prominent open-source models such as Falcon-40B and LLaMA-30B.

HELM is a benchmarking initiative by Stanford University’s Center for Research on Foundation Models that evaluates prominent language models across a wide range of scenarios. Palmyra excelled in tests that evaluate a model’s knowledge and its ability to answer natural-language questions accurately.

The HELM results validate Palmyra’s proficiency in comprehending knowledge, drawing inferences, and accurately answering open-ended, context-based questions posed in natural language. These scores highlight Palmyra’s ability to complete advanced tasks, making it uniquely capable of tackling a wide range of enterprise use cases.

"We are thrilled to see Writer Palmyra at the top of these benchmarks," said Waseem AlShikh, Writer co-founder and chief technology officer. "Our models have demonstrated their breadth of knowledge comprehension and ability to accurately answer questions in natural language – all with an efficient-sized model that doesn’t exceed 43 billion parameters. These results offer further proof that the Writer generative AI platform is the enterprise-ready choice for organizations looking to accelerate growth, increase productivity, and align brand."

In a world where LLMs are increasingly undifferentiated, training data, duration, and methodology make a big difference. Unlike other model families, Palmyra is trained on high-quality formal writing and has a deep vertical focus, with industry-specific models for healthcare and financial services. The models are transparent and auditable rather than black box, built so data stays private, and can be self-hosted. Given that Palmyra LLMs don’t exceed 43 billion parameters, these latest rankings further demonstrate that smaller, more efficient, and more accessible models can still deliver superior results.


Comparison of Writer and closed models

                     Cohere   Claude   Text Davinci-003   ChatGPT   Writer
BoolQ                85.6%    81.5%    88.1%              73.9%     89.6%
MMLU                 45.2%    48.1%    56.9%              59.8%     60.9%
Natural Questions    76.0%    68.6%    77.0%              63.7%     79.0%

Results from HELM. Models used for testing: Cohere Command beta (52.4B), Anthropic-LM v4-s3 (52B), OpenAI text-davinci-003, gpt-3.5-turbo-0301, and Palmyra-X.

Comparison of Writer and open source models

             MMLU     TruthfulQA
Palmyra-X    60.9%    61.6%
Falcon-40B   57.0%    41.7%
LLaMA-30B    56.8%    42.3%

Source: Hugging Face

About Writer

Writer is the generative AI platform for enterprises. We empower your people — product, operations, support, marketing, HR, and more — to maximize creativity and 10x productivity.

Our secure platform connects easily to your business data sources and delivers accurate answers and content that are fine-tuned on your own data and follow your own AI guardrails. We put generative AI in people’s hands right where they work, and enable you to build it into your end-user applications, without putting your data or your users’ data at risk.

Writer is enterprise-grade, doesn’t use or share your data, and features open and transparent LLMs that are deployable in a variety of ways, including self-hosted. We're compliant with SOC 2 Type II, GDPR, HIPAA, and PCI, and are deployed at leading enterprises, including Intuit, UiPath, Spotify, L’Oreal, Uber, and Deloitte. Visit us at writer.com.