September 23, 2020

By Rahul Hatkar 

The history of voice recognition can be traced back to the 1960s, when IBM introduced its first digital speech recognition tool. Fast forward a few decades, and speech recognition products such as Dragon Dictate reached the market, alongside early software assistants like Microsoft's Clippy. The automotive industry was quick to follow, with voice first finding its way into the car in the 1990s through simple commands and capabilities. At the same time, telecommunication technologies were developing at a rapid pace, and as data became faster, cheaper and better, a new era of connectivity and possibilities took shape. Pairing the two – voice in the car and enhanced connectivity – was a natural evolution and a match made in heaven. Taking it one step further, advancements in human-machine interaction gave way to smarter phones, smart speakers and a variety of IoT devices. The confluence of these three dimensions led to the advent of chatbots and voice assistants as we know them today: intelligent technologies that power the command, control and capabilities of devices, ushering in a new paradigm of human-machine conversations. As voice assistants became a popular mode of interaction, it was only natural for a growing number of cars to be equipped with their own voice assistants to power in-car connectivity.

Today, automotive assistants are changing the way we use our vehicles and the way we interact with our surroundings, going beyond simple car controls and fixed responses to a conversational interaction that brings advanced capabilities to our otherwise mundane commutes. For example, in-car voice assistants now serve a wide range of applications, from seeking stock and weather information to inquiring about a specific function of the car ("How can I turn on the fog lamp?" or "Does this car have a child lock?" for example). Extend its capabilities a little further, and the assistant can help manage your calendar, remind you of important events such as meetings and school pick-ups, read you the news, help with your shopping list, and even guide you to the nearest gas or EV charging station based on your route and preferences. Through the evolution of technology, conversations with these assistants have further evolved to understand more complex requests like, "Do I need an umbrella tomorrow?" or "Where can I pick up coffee while filling gas?" or even, in the case of contactless payments, "Can you pay for a tank full of gas?" or "Do I need to pay to park here?" The assistant can even handle multi-step queries and commands like, "Navigate home and send Susan a text with my ETA." We're also going beyond voice, enabling the driver to simply look at a building and ask, "How much does dinner for two cost here?" All of this – and more – is possible with Cerence solutions, all while the driver has their hands on the wheel and eyes on the road, providing a safer experience for you and those around you.

Cerence technologies have been improving in-car voice interactions for more than 20 years. From Mercedes-Benz's MBUX to Audi's next-gen voice assistant, you will find Cerence contributing to the success of industry-leading in-car voice assistants worldwide. The key to this wide adoption is our understanding of each OEM's specific needs, our ability to decode the complexities of working in an automotive environment, and our innate understanding of user behaviour. As we work with Indian OEMs to bring technologies such as our state-of-the-art speech recognition, natural language understanding (NLU), text-to-speech (TTS) and speech signal enhancement into the Indian market, we expect the boundaries of usage will be redefined, causing a shift in driver behaviour. We're already seeing a number of new products that offer voice-enabled capabilities such as opening the windows or sunroof, or phone calling and messaging. But soon, tasks such as finding and paying for a parking spot at Connaught Place may no longer be the challenge they are today. The ability to pre-book an EV charging station close to Marine Drive, explore a vast variety of music while navigating to Goa, or pay utility bills while navigating to Whitefield no longer needs to be a distraction from the drive. And, as India moves towards smart cities with intelligent mobility infrastructure, a new range of voice-enabled services will emerge, further simplifying and organizing our daily routine.

With all that's ahead of us, we continue to expand our offerings and look to new areas for growth. Our recent expansion in Pune, India, which houses an R&D and engineering center and an upcoming tuning lab, provides great opportunities for talent in AI, ML and core technologies to develop global products. We are also making major strides in addressing the challenges of the Indian consumer by developing significant capabilities for Indian regional languages on the heels of our successful testing of Hindi with full NLU capabilities. We look to further expand to more regional languages in India and provide highly customized solutions to support the needs of Indian OEMs. We look forward to delighting consumers in India with a truly conversational automotive assistant that enhances their driving experience each and every day.

Care to join us? Check out our open roles here.
 
