Creating a Sixth Sense for Driving

We're all waiting for the fully self-driving car, but how can we use technology to create a safer driving experience in the meantime?

We rely heavily on our sight when driving a car. We use it not only to keep track of the traffic around us, but also to get information about the car itself. The dashboard is filled with data that pulls our focus away from the road, and it’s even used to replicate information from traffic signs, like speed limits. But do we really need our sight to take in all of this information?

We’re already using our other senses to some degree. We use our sense of smell to tell that something might be wrong with the engine, our hearing to stay aware of surrounding traffic, and even our sense of touch to pick up feedback from the road through the steering wheel. But could we extend the use of these senses to relieve our sight of some of its responsibilities?

To experiment with this, we have built an app that communicates with you through those other senses, primarily through haptic feedback and speech synthesis. It can, for example, tell you that you’re driving too fast by giving you haptic feedback through your Apple Watch, or that you’re running low on fuel through speech synthesis.
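As a taste of what that looks like in code, here’s a minimal sketch of the watch-side speeding warning, assuming the phone feeds the watch the current speed and the speed limit (the function and parameter names are our own):

```swift
import WatchKit

// Tap the wearer's wrist instead of flashing a warning on a screen.
// `currentSpeed` and `speedLimit` are hypothetical values streamed
// over from the phone.
func warnIfSpeeding(currentSpeed: Double, speedLimit: Double) {
    if currentSpeed > speedLimit {
        WKInterfaceDevice.current().play(.notification)
    }
}
```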

How did we build it?

First off, we need to get information from the car. To do that we’re using an OBD2 (on-board diagnostics) scanner that connects to your phone over Wi-Fi. The scanner has an ELM327 microcontroller built in that translates the various OBD2 protocols into a byte stream that our app can read.
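To give an idea of the format: with the ELM327’s default settings, you send an ASCII command terminated by a carriage return and get space-separated hex bytes back. Here’s a rough sketch of requesting and parsing the vehicle speed; the command strings follow the OBD2/ELM327 conventions, while the parsing helper is our own illustration:

```swift
// Mode 01, PID 0D asks for the current vehicle speed.
let request = "010D\r"

// A typical response is "41 0D 32": 0x41 is the mode echo (0x01 + 0x40),
// 0x0D is the PID, and 0x32 (50) is the speed in km/h.
func parseSpeed(from response: String) -> Int? {
    let bytes = response.split(separator: " ")
                        .compactMap { UInt8($0, radix: 16) }
    guard bytes.count >= 3, bytes[0] == 0x41, bytes[1] == 0x0D else {
        return nil
    }
    return Int(bytes[2])
}

parseSpeed(from: "41 0D 32") // 50 km/h
```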

In the app we set up read and write streams with CFStreamCreatePairWithSocketToHost that are connected to the scanner. The write stream is mainly used to specify which OBD2 PIDs we want data from. We parse the responses on the read stream and call a delegate method. We found an old Objective-C lib to do this that we rewrote in Swift (and hope to release as open source soon).
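Here’s a condensed sketch of the socket setup, not the full lib. The scanner’s address is an assumption on our part (many Wi-Fi scanners default to 192.168.0.10 on port 35000), so check your scanner’s documentation:

```swift
import Foundation

var readStream: Unmanaged<CFReadStream>?
var writeStream: Unmanaged<CFWriteStream>?

// Open a TCP connection to the scanner's Wi-Fi hotspot.
CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                   "192.168.0.10" as CFString,
                                   35000,
                                   &readStream,
                                   &writeStream)

let inputStream: InputStream = readStream!.takeRetainedValue()
let outputStream: OutputStream = writeStream!.takeRetainedValue()

inputStream.open()
outputStream.open()

// Request the vehicle speed (mode 01, PID 0D). In the app, a stream
// delegate parses the response bytes as they arrive on the read stream.
let request = Array("010D\r".utf8)
outputStream.write(request, maxLength: request.count)
```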

To perform speech synthesis and recognition we’re using SpeechKit from Nuance. On top of the native AVSpeechSynthesizer, SpeechKit adds speech recognition and natural language processing, which comes in handy for an application like this.
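SpeechKit’s own API is beyond the scope of this post, but as an illustration, here’s a minimal sketch of the speaking part using the native AVSpeechSynthesizer it builds on. The warning text and fuel threshold are our own examples:

```swift
import AVFoundation

// Keep a reference to the synthesizer so it isn't deallocated mid-sentence.
let synthesizer = AVSpeechSynthesizer()

func announce(_ message: String) {
    let utterance = AVSpeechUtterance(string: message)
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

let fuelLevel = 0.08 // hypothetical fraction of a full tank, read over OBD2
if fuelLevel < 0.1 {
    announce("You are running low on fuel.")
}
```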

Try it out

We’ve published the code on GitHub for those of you who want to try it out. However, you’ll need to get hold of an OBD2 scanner for your car to make it work.