Affectiva Launches Multi-Modal Automotive In-Cabin AI to Improve Road Safety and Accelerate Autonomy

Industry-First, Real-Time AI Solution Measures Face and Voice to Monitor Driver State and Occupant Experience, and Enhance the Development of Automated Mobility

Affectiva Automotive AI

BOSTON--()--Affectiva, the global leader in Artificial Emotional Intelligence (Emotion AI), today announced Affectiva Automotive AI, the first multi-modal in-cabin AI sensing solution. Affectiva Automotive AI identifies, in real time from face and voice, complex and nuanced emotional and cognitive states of a vehicle’s occupants, to deliver comprehensive people analytics. This allows original equipment manufacturers (OEMs) and Tier 1 suppliers to build advanced driver monitoring systems (DMS), as well as differentiated in-cabin experiences that span the autonomous vehicle continuum. Affectiva’s solution also enables developers of automated driving systems to improve their technology for use in robo-taxis and other highly automated vehicles (HAV) in the emerging Automated Mobility sector.

Affectiva Automotive AI for Driver and Occupant Experience Monitoring

To date, AI has allowed self-driving systems to understand what’s happening outside of the car. But with one out of every 10 crashes caused by drowsy driving and 1,000 injuries daily in the U.S. involving distracted drivers, OEMs recognize the need to deploy in-cabin AI that improves road safety by identifying dangerous driving behavior. As self-driving capabilities continue to advance, it will be important to monitor driver state in real time to assess whether the driver can assume control in semi-autonomous vehicles. And as autonomous vehicles become commercially available, Affectiva Automotive AI’s understanding of the in-cabin environment will shift toward adapting the travel experience to occupant moods and reactions.

“Affectiva is the only AI company that can deliver people analytics using algorithms that are built with custom-developed deep learning architectures,” said Dr. Rana el Kaliouby, CEO and co-founder, Affectiva. “We have built industry-leading Emotion AI, using our database of more than 6 million faces analyzed in 87 countries. We are now training on large amounts of naturalistic driver and passenger data that Affectiva has collected, to ensure our models perform accurately in real-world automotive environments. With a deep understanding of the emotional and cognitive states of people in a vehicle, our technology will not only help save lives and improve the overall transportation experience, but accelerate the commercial use of semi-autonomous and autonomous vehicles.”

Interfacing with a vehicle’s safety systems, Affectiva Automotive AI can identify complex driver impairment states caused by drowsiness, physical distraction or mental distraction from cognitive load or anger. This is significantly more advanced than current systems that rely on simplistic head pose and eye gaze measurements.
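
For illustration only, the sketch below shows one way a multi-metric driver-state signal of this kind could drive escalating safety interventions; the DriverState structure, field names and thresholds are assumptions made for this example, not Affectiva’s actual interface.

```python
# Hypothetical sketch: mapping a multi-metric driver-state signal to escalating
# safety interventions. All names and thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class DriverState:
    drowsiness: float            # 0.0 (alert) .. 1.0 (severely impaired)
    physical_distraction: float
    cognitive_load: float
    anger: float


def choose_intervention(state: DriverState) -> str:
    """Map the combined impairment picture to an escalating response."""
    impairment = max(state.drowsiness, state.physical_distraction,
                     state.cognitive_load, state.anger)
    if impairment > 0.9:
        return "limit_speed_and_prompt_handover"  # defer to assistance systems
    if impairment > 0.7:
        return "haptic_and_audio_alert"
    if impairment > 0.4:
        return "visual_alert"
    return "no_action"


print(choose_intervention(DriverState(0.75, 0.2, 0.3, 0.1)))  # haptic_and_audio_alert
```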

“Our Learning Intelligent Vehicle (LIV) was developed to shape consumer acceptance of autonomous vehicles by building two-way trust and confidence between human and machine,” said Ola Boström, Vice President of Research, Autoliv, Inc. “Supported by Affectiva’s AI, LIV is able to sense driver and passenger moods, and interact with human occupants accordingly. As the adoption and development of autonomous vehicles continues, the need for humans to trust that they’re safe in the hands of their vehicle will be critical. AI systems like Affectiva’s, which allow vehicles to really understand occupants, will have a huge role to play, not only in driver safety but in the future of autonomy.”

Affectiva Automotive AI for Automated Mobility

In addition to changing the way the automotive industry looks at safety and occupant experience monitoring, AI is accelerating the shift toward a mobility-as-a-service model, with ride-hailing services continuing to grow and automated mobility-as-a-service projected to become a multi-billion dollar industry by 2025.

Affectiva Automotive AI is also used in system design, giving developers of HAVs built for fleet use a feedback loop on passengers’ cognitive and emotional states. For example, once robo-taxis and other mobility-as-a-service applications of HAVs are deployed, these vehicles can adapt their driving and interactions based on passenger reactions. This allows Automated Mobility providers and ride-hailing companies to deliver tailored journeys that rapidly build customer trust, confidence and overall satisfaction.
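
As a rough illustration of that feedback loop, the following sketch shows how an automated-mobility stack might soften its driving profile when aggregate occupant discomfort rises; the function, inputs and thresholds are hypothetical and not part of Affectiva Automotive AI or any partner platform.

```python
# Hypothetical sketch of a passenger-feedback loop: periodically sample an
# aggregate occupant-discomfort score (0..1) and nudge the driving profile.
from statistics import mean


def adapt_driving_profile(occupant_discomfort: list[float], current_profile: str) -> str:
    """Soften or restore the driving profile based on average occupant discomfort."""
    discomfort = mean(occupant_discomfort) if occupant_discomfort else 0.0
    if discomfort > 0.6:
        return "comfort"    # gentler acceleration, earlier braking, wider gaps
    if discomfort < 0.2 and current_profile == "comfort":
        return "standard"
    return current_profile


print(adapt_driving_profile([0.7, 0.8, 0.5], "standard"))  # comfort
```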

“Imagine how much better your robo-taxi experience would be if the vehicle taking you from Point A to Point B understood your moods and emotions,” said Christopher Heiser, co-founder and CEO, Renovo. “Affectiva Automotive AI is integrated into AWare, our OS for automated mobility, and running on our fleet in California today. It is a powerful addition to the AWare ecosystem, providing a feedback loop between a highly automated vehicle and its occupants. Companies building automated driving systems on AWare can use this feedback to tune their algorithms and potentially customize them in real time. We believe this real-time passenger feedback capability, combined with aggregate analytics on occupants’ emotional and cognitive states, will be fundamental to technology developers and operators of automated mobility services.”

The first release of Affectiva Automotive AI enables the development of next-generation DMS and passenger experiences. It allows for in-cabin face and head tracking of all occupants simultaneously, and supports multiple camera positions as well as near-infrared and RGB cameras. The solution measures, in real time, facial expressions and emotions such as joy, anger and surprise, as well as vocal expressions of anger, arousal and laughter. In addition, it provides key indicators of drowsiness such as yawning, eye closure and blink rates. Affectiva Automotive AI runs on embedded and mobile devices, and Affectiva is collaborating with leading OEMs, Tier 1s and technology providers to bring the solution to production vehicles.
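
To make the drowsiness indicators above concrete, the sketch below fuses eye closure, blink rate and yawning into a single flag; the field names, units and thresholds are assumptions for illustration, not Affectiva’s actual output schema.

```python
# Hypothetical sketch: rule-based fusion of drowsiness indicators of the kind
# listed above. Field names, units and thresholds are assumptions.
from typing import TypedDict


class FrameMetrics(TypedDict):
    eye_closure: float   # fraction of the recent window with eyes closed (PERCLOS-style)
    blink_rate: float    # blinks per minute
    yawning: bool


def is_drowsy(m: FrameMetrics, baseline_blink_rate: float = 15.0) -> bool:
    """Combine eye closure, blink-rate deviation and yawning into one flag."""
    blink_deviation = abs(m["blink_rate"] - baseline_blink_rate) / baseline_blink_rate
    return (m["eye_closure"] > 0.3      # eyes closed for >30% of the window
            or blink_deviation > 0.5    # blink rate far from the driver's baseline
            or m["yawning"])


print(is_drowsy({"eye_closure": 0.35, "blink_rate": 14.0, "yawning": False}))  # True
```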

Affectiva will demo Affectiva Automotive AI at NVIDIA’s GPU Technology Conference in Silicon Valley, March 26-29, 2018, at booth 923. Affectiva will also speak at the conference on the panel “The Future of Mobility and In-Car People Analytics” on Thursday, March 29 at 2:00 p.m. local time.

About Affectiva
Affectiva, an MIT Media Lab spin-off, is the leading provider of AI software that detects complex and nuanced human emotions and cognitive states from face and voice. Its patented Emotion AI technology uses machine learning, deep learning, computer vision and speech science. Affectiva has built the world’s largest emotion data repository, with more than 6 million faces analyzed in 87 countries. Used by one third of the Fortune Global 100 to test consumer engagement with ads, videos and TV programming, Affectiva is now working with leading OEMs, Tier 1s and technology providers on next-generation multi-modal driver state monitoring and in-cabin mood sensing.

Contacts

March Communications
Stephanie Jackman, +1 617-960-9875
affectiva@marchcomms.com

Release Summary

Affectiva, the global leader in Artificial Emotional Intelligence (Emotion AI), today announced Affectiva Automotive AI.
