WhyLabs Launches LangKit to Make Large Language Models Safe and Responsible

WhyLabs Open-sources a Powerful Technology to Equip Enterprises with Critical Safety Guardrails for LLMs

SEATTLE--(BUSINESS WIRE)--WhyLabs, the leading observability platform trusted by high-performing teams to control the behavior of AI & data applications, today announced LangKit, the observability and safety standard for Large Language Models (LLMs). LangKit enables detection of risks and safety issues in open-source and proprietary LLMs, including toxic language, jailbreaks, sensitive data leakage, and hallucinations.

“As more organizations incorporate LLMs into customer-facing applications, reliability and transparency will be key to successful deployments,” said Andrew Ng, Managing General Partner of AI Fund. “With LangKit, WhyLabs provides an extensible and scalable approach for solving challenges that many AI practitioners will face when deploying LLMs in production.”

“With the emergence of LLMs, the AI community faced a unique phenomenon: our ability to evaluate the performance of this new wave of AI technologies is increasingly challenged. At WhyLabs, we have been working with the industry’s most advanced AI/ML teams for the past year to build an approach for evaluating and monitoring generative models; these efforts culminated in the creation of LangKit,” said Alessya Visnjic, co-founder and CEO at WhyLabs.

With LangKit, AI practitioners extract a critical set of telemetry data from prompts and responses to describe the behavior of an LLM. The WhyLabs Platform enables users to set alert parameters for activity including malicious prompts, sensitive data, toxic responses, problematic topics, hallucinations, and jailbreak attempts. With these alerts and guardrails, application developers can prevent inappropriate prompts, undesirable LLM responses, and violations of LLM usage policies.
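
For illustration, LangKit's open-source quickstart extracts this telemetry by pairing the library with whylogs. The following is a minimal sketch, assuming the module and metric names from LangKit's launch documentation (installed via pip install langkit[all]), which may evolve in later releases:

    # Quickstart sketch: profile a prompt/response pair with LangKit + whylogs.
    # Assumes `pip install langkit[all]`; module names follow the open-source
    # LangKit documentation at launch and may change over time.
    import whylogs as why
    from langkit import llm_metrics  # registers LangKit's LLM telemetry metrics

    # Build a whylogs schema that computes LangKit's text metrics on the fly
    schema = llm_metrics.init()

    # Log one LLM interaction; the resulting profile can be inspected locally
    # or uploaded to the WhyLabs Platform for monitoring and alerting
    results = why.log(
        {"prompt": "How do I reset my password?",
         "response": "Click 'Forgot password' on the sign-in page."},
        schema=schema,
    )
    print(results.view().to_pandas())  # summary of the extracted telemetry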

“In an era in which AI has transitioned from buzzword to vital business necessity, effective use of LLMs is a must. As our team at Tryolabs helps enterprises put this powerful technology into practice, safety remains one of the main blockers to widespread adoption,” said Alan Descoins, CTO at Tryolabs, who specializes in helping enterprises accelerate their adoption of AI. “WhyLabs’ LangKit is a leap forward for LLMOps, providing out-of-the-box tools for measuring the quality of LLM outputs and catching issues before they affect tasks downstream, whether end users, other applications, or even other LLMs. The fact that it’s easily extensible and lets you add your own checks is also a big plus!”

“At Symbl.ai, we deliver conversation intelligence as a service to builders, so observability is critical for smooth operations and an excellent customer experience. Our platform enables experiences powered by both Understanding and Generative AI, and LangKit is critical to the transparency and governance required across the end-to-end AI stack,” said Surbhi Rathore, CEO of Symbl.ai. “The WhyLabs Platform provides observability tools for a wide range of AI use cases, and the addition of LLM observability capabilities reduces engineering overhead, letting us address all operational needs with one platform.”

WhyLabs LangKit provides a unified set of telemetry guardrails for safe, reliable, and observable LLM deployments, enabling organizations to:

  • Validate and safeguard individual prompts & responses: detect when a prompt or a response violates policy and take corrective action (see the sketch after this list)
  • Evaluate whether LLM behavior complies with policy: track LLM performance against a golden set of prompts to detect changes in behavior or policy violations
  • Monitor user interactions inside an LLM-powered application: track prompts, responses, and user interactions to get alerted to degradations in overall user experience
  • Compare and A/B test different LLM and prompt versions: ensure that changes to the LLM API do not degrade the customer experience
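
As a concrete example of the first capability, the sketch below wires LangKit’s toxicity metric into a simple guardrail. The extract helper and the "prompt.toxicity"/"response.toxicity" metric names are assumed from LangKit’s documentation; the 0.8 threshold and the corrective action are hypothetical choices, not part of the library:

    # Guardrail sketch, assuming LangKit's `toxicity` module and `extract` helper;
    # the threshold and the corrective action below are hypothetical placeholders.
    from langkit import toxicity  # noqa: F401  (import registers toxicity metrics)
    from langkit import extract

    TOXICITY_THRESHOLD = 0.8  # hypothetical policy threshold; tune per application

    def passes_guardrail(prompt: str, response: str) -> bool:
        """Return True when neither side of the interaction exceeds the threshold."""
        metrics = extract({"prompt": prompt, "response": response})
        # LangKit namespaces its metrics as "prompt.<name>" and "response.<name>"
        return (metrics["prompt.toxicity"] < TOXICITY_THRESHOLD
                and metrics["response.toxicity"] < TOXICITY_THRESHOLD)

    if __name__ == "__main__":
        ok = passes_guardrail("How do I reset my password?",
                              "Click 'Forgot password' on the sign-in page.")
        print("compliant" if ok else "take corrective action")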

These capabilities are available in the WhyLabs AI Observability Platform alongside existing solutions for responsible model deployment, such as monitoring of embeddings, model performance, and unstructured data drift. Industry leaders like Glassdoor and Airspace, Fortune 500 enterprises, and AI-first startups rely on WhyLabs to prevent issues in production ML models and ensure a high-quality customer experience in AI-powered applications.

To get started with LangKit, check out the resources at www.whylabs.ai.

About WhyLabs

WhyLabs, Inc. (www.whylabs.ai / @whylabs) enables teams to deploy AI applications responsibly and run them without failure. From Fortune 100 companies to AI-first startups, teams have adopted WhyLabs’ tools to monitor ML and generative AI applications. WhyLabs’ open source tools and SaaS observability platform surface drift, data quality issues, bias, and hallucinations. With WhyLabs, teams reduce manual operations by over 80% and cut down time-to-resolution of AI incidents by 20x. The company is funded by Andrew Ng’s AI Fund, Madrona Venture Group, Defy Partners and Bezos Expeditions. WhyLabs was incubated at the Allen Institute for Artificial Intelligence (AI2) by Amazon Machine Learning alums.

Contacts

Kelsey Olmeim press@whylabs.ai
