Affectiva to Launch a Cloud-Based API to Distinguish Emotion in Speech

API Made Available to Beta Users Puts Affectiva One Step Closer to Developing First Ever Multi-Modal Emotion AI

BOSTON--Affectiva, the global leader in Artificial Emotional Intelligence (Emotion AI), today announced that its new cloud-based API for measuring emotion in recorded speech is available to beta users.

With its roots at MIT, Affectiva has more than 15 years of experience developing emotion recognition technology. The new API was developed using an existing deep learning-based framework together with expert data collection and labeling methodologies. This, coupled with its existing emotion recognition technology for analyzing facial expressions, makes Affectiva the first AI company to enable a person’s emotions to be measured across both face and speech.

Humans express emotions through facial expressions, voice and gestures, and only seven percent of what we communicate is conveyed through the actual words. Affectiva’s new API observes changes in speech paralinguistics, including tone, volume, speed and voice quality, to distinguish anger, laughter, arousal and the speaker’s gender in conversation.

“More often than not, humans’ interactions with technology are transactional and rigid,” said Dr. Rana el Kaliouby, co-founder and CEO, Affectiva. “Conversational interfaces like chatbots, social robots or virtual assistants could be so much more effective if they were able to sense a user’s frustration or confusion and then alter how they interact with that person. By learning to distinguish emotions in facial expressions, and now speech, technology will become more relatable, and eventually, more ‘human.’

“Action could be taken to more quickly appease a disgruntled customer after he or she expresses anger on the phone, or a vehicle’s navigation system could discover that the driver is experiencing a burst of road rage and react accordingly, just to name a few examples,” continued el Kaliouby. “Ultimately, the ways in which socially and emotionally aware technology will enrich our lives are endless. Affectiva’s new API puts us that much closer.”

Through Affectiva’s beta program, speech classifiers will be continuously developed and improved so that emotions in speech can be identified in real time and in conversation. The goal is to create a multi-modal Emotion AI platform that is able to distinguish emotions across multiple communication channels. Expanding Affectiva’s emotion recognition technology to include speech, in addition to facial expression, ensures Emotion AI will be applied to a variety of new use cases and markets.


  • To be invited into the beta program for Affectiva’s new API, please contact Affectiva.
  • Affectiva’s new API is being demoed today at Affectiva’s Emotion AI Summit at MIT Media Lab in Cambridge, MA. Please visit Affectiva’s blog for learnings and insights from the panelists and speaking sessions.

About Affectiva

Affectiva is the pioneer in Emotion AI, the next frontier of artificial intelligence. Affectiva's mission is to bring emotional intelligence to the digital world with its emotion recognition technology that senses and analyzes facial and vocal expressions and emotions. Affectiva's patented software is built on an emotion AI science platform that uses computer vision, deep learning and the world's largest emotion data repository of nearly 6 million faces analyzed from 75 countries, amounting to more than 2 billion facial frames.

For more information:


March Communications
Stephanie Jackman, +1 617 960 9875

Release Summary

Affectiva, the global leader in Artificial Emotional Intelligence, today announced its new cloud-based API for measuring emotion in recorded speech.

