Tuesday, September 24, 2024

The Slow Roll Toward Talking Cars

If your car has voice recognition (VR), you’ve probably tried talking to it once or twice, but after finding it didn’t work, you likely never used it again.

Automakers are attempting to get you and your car on speaking terms again by employing a hybrid approach to VR—a mash-up of onboard and cloud-based technology—that will help your car understand a command and respond appropriately.

With the advent of cloud-based VR such as Apple’s Siri and Google Now, asking a machine for something is becoming more normal. VR still has a ways to go before it’s fully conversational and responds accurately to your every command, but it’s getting better. And now the same capability is coming to the car, albeit slowly.

Nuance, the company that supplies the majority of onboard VR systems for automakers, is rolling out its Dragon Drive platform, a hybrid of both onboard and cloud-based VR. Although Dragon Drive is available on some production vehicles, cloud-based systems on portables can still talk circles around most embedded automotive systems.

Blame that on the technology lag typical in cars, said Erik Clauson, senior manager of product design for Nuance. “Smartphones and other devices are always going to be faster,” he told me at the Nuance Automotive Forum in Detroit last week. But with Dragon Drive’s hybrid approach, automotive VR can get up to speed more quickly than with embedded systems alone.

“Cloud-based systems provide another style of interaction,” Clauson explained. Vehicle-specific commands like switching radio stations and other low-latency tasks are perfect for a VR engine built into the car. More situational and location-based commands—such as “find the nearest gas station”—can be better handled by a cloud-based system.
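The division of labor Clauson describes can be sketched as a simple router: low-latency, vehicle-specific commands stay on the embedded engine, while situational or location-based requests go to the cloud. The command lists and function names below are illustrative assumptions, not Nuance’s actual Dragon Drive API.

```python
# Commands the onboard engine can resolve without a network round trip.
EMBEDDED_COMMANDS = {"tune radio", "set temperature", "volume up", "volume down"}

# Keywords that suggest a situational, location-based request.
CLOUD_KEYWORDS = {"nearest", "find", "where", "directions"}

def route_command(utterance: str) -> str:
    """Decide which VR engine should handle a spoken command."""
    text = utterance.lower().strip()
    if text in EMBEDDED_COMMANDS:
        return "embedded"  # handled in-car, low latency
    if any(word in text.split() for word in CLOUD_KEYWORDS):
        return "cloud"     # needs fresh data or conversational handling
    return "embedded"      # default to onboard so the car still responds offline
```

Under this sketch, “tune radio” resolves onboard, while “find the nearest gas station” is handed to the cloud.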

The immediate challenge will be getting the two VR systems to work seamlessly for car owners. One of the hurdles in the rollout of Siri Eyes Free, Apple’s answer to in-car VR for its connected portable devices, has been marrying that feature with existing systems: drivers have to tell the car whether they want to talk to Siri or the embedded VR through separate buttons or commands.

In addition, many embedded systems have a specific speech protocol that dictates language interactions, whereas cloud-based VR is conversational. “If you say ‘navigation’ in a car, that’s different than if you say ‘coffee,'” said Clauson. The taxonomy of how to get into the domain of navigation is hierarchical with embedded systems, he added.

“With the cloud, you can use natural language and just say, ‘Find a coffee shop,'” Clauson said. “That’s an intent; the driver must be looking for a physical location, so he must be trying to do a destination search. We’re trying to apply that to hybrid systems, but we’re in this gray area,” he added. “From an engineering perspective it’s very black and white, but users don’t really care. Users want to interact how they naturally speak.”
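The intent mapping Clauson contrasts with hierarchical menus might look like this: rather than requiring the driver to say “navigation” first, the system infers an intent from a free-form phrase. The intent names and keyword sets here are assumptions for illustration, not how any production system is actually built.

```python
# Keyword sets per intent; a real system would use a trained language model,
# but keyword overlap is enough to show the idea.
INTENT_KEYWORDS = {
    "destination_search": {"find", "coffee", "restaurant", "gas", "shop"},
    "media": {"play", "song", "station", "radio"},
}

def infer_intent(utterance: str) -> str:
    """Guess the driver's intent from a natural-language phrase."""
    words = set(utterance.lower().replace(",", "").split())
    best_intent, best_overlap = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent
```

So “Find a coffee shop” maps to a destination search without the driver ever naming a navigation domain.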

In time, the two technologies will respond to the same commands and react in similar ways, although Clauson believes the communication breakthrough is still a new-car production cycle away. “The cloud continues to make great strides,” he said, “but to get state-of-the-art technology into production in a car takes three years. And that’s one of the key challenges that we’re facing.”

Isabella Turner