Autonomous driving – green light, red light, or yellow?
Earlier this year, Tesla CEO Elon Musk declared that the future is now. By the middle of 2020, he told an investor event, Tesla’s autonomous system will have improved to the point where drivers will no longer have to pay attention to the road. He also revealed that Tesla plans to roll out Level 5 autonomous taxis next year in some parts of the United States – Level 5 meaning vehicles capable of driving themselves anywhere, under all possible conditions, with no limitations.
That’s compelling, but is it really possible within such a short timeframe? In May, a month after Musk’s speech, Consumer Reports said that the new lane-changing feature on Tesla’s updated Navigate on Autopilot software lags far behind a human driver’s skills.
A July story in The New York Times quoted other automaker executives whose views on the current state of automated driving were considerably more subdued than Musk’s. Ford’s chief executive, Jim Hackett, the article reports, said that “the industry overestimated the arrival of autonomous vehicles.”
Ford hasn’t abandoned its plans and, in fact, is working with Volkswagen to use autonomous-vehicle technology from startup Argo AI in ride-sharing services in the next couple of years. Argo AI CEO Bryan Salesky chimed in that creating driverless cars that could go anywhere was “way in the future.”
Using AI to Build AI
Musk’s predictions may be optimistic, but Ford may also be misguided about just how long true autonomous driving will take.
In September, a research paper titled “Human-Machine Collaborative Design for Accelerated Design of Compact Deep Neural Networks for Autonomous Driving” was published. It was jointly written by researchers from the Waterloo AI Institute, the University of Waterloo, DarwinAI Corp., and Audi Electronics Venture.
The paper evaluated how well DarwinAI’s Generative Synthesis AI-assisted design platform accelerated the design of a compact deep convolutional neural network for object detection in autonomous driving, targeting a low-cost GPU.
The platform, GenSynth, uses “AI to build AI” in order to dramatically reduce the size of deep learning neural networks while maintaining functional accuracy and reducing inference time, the company says. It also facilitates “explainable” deep learning – the ability to understand why a network makes the decisions it does. GenSynth, which aims to enable and accelerate AI development for organizations and developers, is based on technology invented by Waterloo Engineering professors Alexander Wong and Mohammad Javad Shafiee.
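GenSynth’s internals are proprietary, so the following is only an illustrative sketch of one well-known compression idea in the same spirit – magnitude pruning, which shrinks a network by zeroing out its least important weights while largely preserving behavior. The function name and layer sizes here are hypothetical, not from the paper.

```python
# Illustrative sketch only: GenSynth's actual method is proprietary.
# Magnitude pruning keeps the largest-magnitude weights and zeros the rest,
# producing a sparser (and, stored sparsely, smaller) network.
import numpy as np

def prune_by_magnitude(weights, keep_fraction):
    """Zero out all but the largest-magnitude fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * keep_fraction)
    threshold = np.sort(flat)[-k]          # smallest magnitude we keep
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
layer = rng.normal(size=(256, 256))        # one dense layer's weight matrix
pruned, mask = prune_by_magnitude(layer, 0.5)

nonzero = int(mask.sum())
print(f"kept {nonzero} of {layer.size} weights ({nonzero / layer.size:.0%})")
```

In practice, pruning is usually followed by a short retraining pass to recover any lost accuracy; GenSynth’s generative approach is described as going further, synthesizing an entirely new family of compact networks rather than merely trimming the original.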
The design prototype behind the white paper was specified by Audi, a partner in the project, with a focus on detecting ten automotive-relevant object classes, including cars, buses, pedestrians, and traffic lights.
A New Approach to Deep Learning
GenSynth technology is premised on the idea of human-machine collaboration. The case study paired the expertise of Audi engineers with the speed of AI to create better, more compact networks tailored to specific tasks more quickly.
“It’s humans collaborating with AI,” says Sheldon Fernandez, CEO of DarwinAI. “It still requires the creativity of a human to understand context and take that laborious, repetitive work and outsource that to AI. Then they can spend time on creating the network.”
For instance, a human has to step in for a situation like this: a bike may be mounted on the back of an SUV, and an autonomous vehicle may keep stopping when faced with the setup. “The machine couldn’t differentiate a mounted yet still moving bicycle from one ridden by a pedestrian,” Fernandez says. Human intervention is needed to train the system to recognize the difference between the two. “GenSynth uses other forms of artificial intelligence to probe and understand neural networks in a fundamental manner,” he says. It then uses AI a second time to generate smaller and smarter networks:
“Deep learning excels at finding correlations in data when the amount of data would overwhelm a human. We build up a very sophisticated understanding of the network and use AI to generate a new family of neural networks more compact than the original but as good from a functional standpoint.”
It’s a really high level of optimization that also illuminates how networks reach their conclusions, Fernandez says.
It can also significantly accelerate design tasks. Typically, people pick the task they want to accomplish – such as translating a language – then go to GitHub, choose the public model they think will do a good enough job, and try to customize it for that task. “That can take months,” Fernandez says. Deploying such models to production can also be expensive: when the neural network is finally designed, it sometimes needs to be tested against terabytes of data.
“Running those tests takes a lot of GPU time, and in the cloud there’s expense associated with it as well. With a smaller model, tests run faster,” he says. “So, there are real economic savings around our optimization technology.”
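The article gives no cost figures, but the arithmetic behind “smaller model, faster tests” is straightforward: a dense layer’s inference cost scales with its parameter count, so halving a network’s size roughly halves the multiply-accumulate operations per input. The layer sizes below are hypothetical, chosen only to illustrate the effect.

```python
# Back-of-envelope illustration with hypothetical layer sizes (not from the
# paper): compute cost per input scales with parameter count, so a compact
# network needs proportionally fewer multiply-accumulates (MACs).

def dense_layer_macs(n_in, n_out):
    """Multiply-accumulate operations for one dense layer on one input."""
    return n_in * n_out

original = sum(dense_layer_macs(a, b)
               for a, b in [(1024, 512), (512, 256), (256, 10)])
compact = sum(dense_layer_macs(a, b)
              for a, b in [(1024, 256), (256, 128), (128, 10)])

print(f"original MACs per input: {original:,}")
print(f"compact  MACs per input: {compact:,}")
print(f"speedup factor ~ {original / compact:.1f}x")
```

Multiplied across terabytes of test data and cloud GPU billing by the hour, that per-input reduction is where the economic savings Fernandez describes come from.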
Success and Next Steps
The research found that the technology enabled the creation of a neural network nearly half the size of the original design without sacrificing any accuracy. The paper reports:
“The [Audi] engineers identified that the web-based platform was fast and easy to use for generating optimized neural networks based on an original input design prototype, as a data scientist or engineer only had to care about which dataset to use and which training parameters they had previously used to build the input design prototype….The engineers liked that the platform generates specialized neural network models which were significantly smaller and faster than the original design prototype fed into the GenSynth platform while maintaining modeling accuracy.”
DarwinAI has so far completed numerous paid engagements in aerospace, consumer electronics, automotive, and other industries. Ultimately, Fernandez says, the technology could find its way into a Level 5 vehicle.
Image used under license from Shutterstock.com