February 11, 2021

By Ashley McManus, Director of Global Marketing, Affectiva 

Speaking to a device and getting an answer is thrilling, but the basic question-and-answer process is not as cutting-edge as it once was. Today, human-machine interactions are going one step further, with in-car assistants now capable of real conversations: exchanges of information in which context, voice, and even eye movements play a role. Taking the experience further still, the ability to shape reactions and experiences around the mood and personal state of the people inside a vehicle will be critical.

I recently had the chance to sit down with Stefan Hamerich, Director of Product Management at Cerence, for Affectiva’s Human-Centric AI Podcast, where we spoke about designing for personal experiences within the vehicle. Stefan’s team is responsible for the strategy and roadmap of all embedded speech input and text products at Cerence, as well as their embedded platform. In this episode, he shares some of his thoughts and ideas around designing in-vehicle systems with the human in mind.

Here are a few tidbits from our conversation: 

You recently conducted a research project with a major European automaker. Can you discuss broadly what that project entailed?

While I can’t share too many details because this new generation of cars isn’t yet on the road, I can give the engineering view. As we look beyond speech recognition, there is more to explore. That’s why it’s so great to be connected with Affectiva and its Emotion AI technology. We envision this human-machine interaction becoming more “human,” and we want to build the machines in such a way that they also understand the humanity of the interaction.

These interactions involve emotion and mood, and there is real power in understanding them. It’s powerful for technology to be able to recognize happiness or frustration. Imagine being in your vehicle on the way to a meeting when you suddenly realize you might not make it on time. You become frustrated. If the car can sense that frustration and perceive your concern, it could support you by communicating in a calm manner, helping to alleviate some of that stress.

Another way many vehicle assistants demonstrate emotional intelligence is by telling jokes to their occupants. Humor, of course, differs from person to person. But if a system tells a joke and the vehicle can sense the reaction to it, it could leverage AI to tailor future jokes to the type of humor the passenger likes. This is one step towards creating a personalized experience.

Affectiva is also working in the automotive space, providing in-cabin sensing solutions to car companies and optimizing what the in-cabin environment looks and feels like. We’ve worked closely with Cerence for a while now. How are you building emotion recognition into your systems?

Emotion is indeed emerging as a new modality, in addition to speech, gaze, gesture, and more. It is really part of the communication between the driver and the car. We all know that human-to-human communication is not just speaking - there is much more to it: gestures, eye movements, and emotions, expressed through facial expressions or tone of voice, to name a few. When we want to humanize the interaction between the car and the driver, emotion becomes an important piece of this puzzle. Leveraging AI technology not only to understand that emotion but also to use it to support the driver going forward is exactly what we want to do in terms of emotion recognition.

An example of this in action took place back at CES 2019, where Cerence and Affectiva had a joint demo to showcase some initial work around occupant fatigue and frustration. The demo was a prototype at the time, but we have since learned a great deal about how customers respond to these types of in-cabin sensing capabilities, so we have continued working with Affectiva and really enjoy doing so.

What are some of the ways Cerence and Affectiva are working together, and how has the experience been?

There are several things we're doing together. We are working to integrate Affectiva’s emotion recognition technology into our Cerence Drive platform, and then leverage that data and knowledge to guide the experience. We think of communication as always having two sides: on one hand, there is the input, where we work with Affectiva to sense which mood the passenger or the driver is in. On the other hand is the output, where we combine this data to present a personalized experience based on what that occupant needs.

For example, if you are interacting with the in-vehicle assistant and you are frustrated, it would be helpful if the assistant could sense your frustration and react differently. That is the other thing we are working on: creating an experience that helps calm the passenger down. It’s a really complex problem to solve: you need lots of pieces to fit together, but that is what we do. It’s been a great experience working with Affectiva; we continue to have joint research projects, demos, customer work, and more. From my point of view, I hope it's just the start and that there's much more to come!
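To make that input/output idea a bit more concrete, here is a minimal, hypothetical sketch of how a detected occupant mood might steer an assistant’s response style. The mood labels, confidence threshold, and function names below are illustrative assumptions for this post, not Cerence’s or Affectiva’s actual APIs.

```python
from dataclasses import dataclass

# Hypothetical mood estimate, as it might arrive from an in-cabin sensing layer.
# The labels and confidence field are illustrative assumptions, not a real SDK.
@dataclass
class MoodEstimate:
    label: str         # e.g. "frustrated", "happy", "neutral"
    confidence: float  # 0.0 - 1.0

# Output side: pick a response style based on the sensed mood.
def choose_response_style(mood: MoodEstimate) -> dict:
    if mood.label == "frustrated" and mood.confidence > 0.7:
        # Calm, brief prompts; skip the jokes; keep the interaction low-stress.
        return {"tone": "calm", "verbosity": "brief", "humor": False}
    if mood.label == "happy":
        # A lighter tone, with room for humor tailored to the occupant.
        return {"tone": "upbeat", "verbosity": "normal", "humor": True}
    return {"tone": "neutral", "verbosity": "normal", "humor": False}

# Example: a frustrated driver running late gets a calm, to-the-point reply.
style = choose_response_style(MoodEstimate(label="frustrated", confidence=0.85))
print(style)  # {'tone': 'calm', 'verbosity': 'brief', 'humor': False}
```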

 

To hear the full Q+A, listen to the podcast here.

Discover More About the Future of Moving Experiences