Can you brief us on the technology used? How soon can this be used in Indian cars?
The beauty of this technology lies in a black box (DriveSens) that can interface with any automotive-standard drive-by-wire platform and turn it into a driverless unit. It comprises a highly robust perception module responsible for sensing and estimating the environment around the vehicle from a vast vocabulary of visual cues (explicit and implicit), such as drivable road space, lane markings, traffic lights and traffic signs, as well as obstacles (both static and dynamic) such as vehicles and pedestrians, all from camera data. It uses a tactical-grade IMU fused with GPS and vehicle odometry to localize the vehicle with centimeter-scale accuracy.
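To give a sense of what this kind of fusion involves, here is a minimal sketch of dead reckoning from speed and yaw rate corrected by periodic GPS fixes, in the style of a simple Kalman filter. The model, noise values and class names are assumptions for illustration; the actual DriveSens pipeline is not public.

```python
# Illustrative localization: dead reckoning from odometry/IMU, corrected by GPS.
# A simplified sketch, not the actual DriveSens implementation.
import numpy as np

class SimpleLocalizer:
    def __init__(self, x0, y0, heading0):
        self.state = np.array([x0, y0, heading0], dtype=float)  # x, y, yaw
        self.P = np.eye(3) * 1.0               # state covariance
        self.Q = np.diag([0.05, 0.05, 0.01])   # process noise (tuning guess)
        self.R = np.diag([2.0, 2.0])           # GPS noise in metres (assumed)

    def predict(self, speed, yaw_rate, dt):
        """Dead reckoning: propagate pose using wheel-odometry speed and IMU yaw rate."""
        x, y, yaw = self.state
        self.state = np.array([
            x + speed * np.cos(yaw) * dt,
            y + speed * np.sin(yaw) * dt,
            yaw + yaw_rate * dt,
        ])
        self.P += self.Q  # uncertainty grows while we only dead-reckon

    def correct(self, gps_x, gps_y):
        """Fuse a GPS fix to bound the drift accumulated by dead reckoning."""
        H = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)
        z = np.array([gps_x, gps_y])
        residual = z - H @ self.state
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.state = self.state + K @ residual
        self.P = (np.eye(3) - K @ H) @ self.P

loc = SimpleLocalizer(0.0, 0.0, 0.0)
loc.predict(speed=10.0, yaw_rate=0.02, dt=0.1)  # 10 m/s, slight turn, 10 Hz update
loc.correct(gps_x=1.1, gps_y=0.05)
print(loc.state)
```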
Various technologies have been used, focusing primarily on the camera, GPS and IMU. The camera is used to integrate various functions such as an intelligent pedestrian control system, braking control, forward collision avoidance and ego-vehicle modeling, with the global positioning and inertial measurement units operating in a dead-reckoning fashion for centimeter-scale accuracy. The AI brain of the vehicle is the primary control unit that makes every decision in real time. A toy example of one such camera-driven function appears below.
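As one concrete illustration of a camera-driven function like forward collision avoidance, the sketch below turns a camera-estimated range and closing speed into a time-to-collision and a graded response. The function names and thresholds are hypothetical, not the product's actual calibration.

```python
# Illustrative forward-collision logic from estimated range and closing speed.
# Thresholds and action names are assumptions for the example.

def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if both vehicles hold their current speeds."""
    if closing_speed_mps <= 0:          # lead vehicle is pulling away or matching speed
        return float("inf")
    return range_m / closing_speed_mps

def forward_collision_action(range_m: float, ego_speed: float, lead_speed: float) -> str:
    ttc = time_to_collision(range_m, ego_speed - lead_speed)
    if ttc < 1.5:
        return "AUTO_BRAKE"     # intervene via braking control
    if ttc < 3.0:
        return "WARN_DRIVER"    # forward-collision warning
    return "MONITOR"

print(forward_collision_action(range_m=25.0, ego_speed=20.0, lead_speed=12.0))  # ~3.1 s -> MONITOR
```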
It's hard to predict how soon the driver will become redundant, but it will be sooner than you think. However, driving in the Indian road scenario is a fairly complicated task, so we're constantly involved in R&D and the platform is being updated rigorously. Our latest work includes using state-of-the-art machine learning techniques to model ideal driving behavior, so that the robotic vehicle doesn't necessarily drive like a robot but like an actual human driver.
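One common way to learn human-like behavior is imitation learning (behavioral cloning), where recorded human control commands are regressed onto perceived features. The snippet below is a toy illustration on synthetic data; it stands in for whatever learning method is actually used, which is not described here.

```python
# Toy behavioral-cloning illustration: fit a steering "policy" to synthetic
# human driving data using perception features (lane offset, heading error).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
lane_offset = rng.normal(0.0, 0.5, n)      # metres from lane centre
heading_err = rng.normal(0.0, 0.1, n)      # radians
# Pretend the human driver steers roughly proportionally to both errors.
human_steer = -0.8 * lane_offset - 2.0 * heading_err + rng.normal(0.0, 0.02, n)

X = np.column_stack([lane_offset, heading_err, np.ones(n)])
weights, *_ = np.linalg.lstsq(X, human_steer, rcond=None)  # fit a linear policy

def steer(lane_offset_m: float, heading_err_rad: float) -> float:
    """Steering command mimicking the recorded human behaviour."""
    return float(weights @ np.array([lane_offset_m, heading_err_rad, 1.0]))

print(steer(0.3, 0.05))   # small corrective steering, as the human data suggests
```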
Human drivers plan ahead by negotiating with other road users mainly through motion cues – the "desires" of giving way and taking way are communicated to other vehicles and pedestrians through steering, braking and acceleration. These "negotiations" take place all the time and are fairly complicated – which is one of the main reasons human drivers take many driving lessons and need an extended period of training before they master the art of driving.
The challenge in having a robotic system control a car is that, for the foreseeable future, the "other" road users are likely to be human-driven. Therefore, in order not to obstruct traffic, the robotic car should display human negotiation skills while at the same time guaranteeing functional safety and conforming to driving ethics. Knowing how to do this well is one of the most critical enablers for safe autonomous driving.
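Formal treatments of this "negotiate, but stay provably safe" requirement usually come down to conservative worst-case distance checks. The sketch below computes a textbook-style minimum following distance from an assumed reaction time and braking capabilities; it illustrates the idea only and is not the company's safety model.

```python
# Illustrative worst-case safe following distance: the ego car must be able to
# stop without collision even if the lead car brakes as hard as physically possible.
# Parameter values are assumptions for the example.

def safe_following_distance(v_ego, v_lead, reaction_time=0.5,
                            a_ego_accel_max=2.0, a_ego_brake_min=4.0,
                            a_lead_brake_max=8.0):
    """Minimum gap (m) so the ego vehicle never hits the lead vehicle, worst case."""
    # Ego distance: travel during the reaction time (possibly still accelerating),
    # then braking at its guaranteed minimum deceleration.
    v_after_reaction = v_ego + reaction_time * a_ego_accel_max
    d_ego = (v_ego * reaction_time
             + 0.5 * a_ego_accel_max * reaction_time ** 2
             + v_after_reaction ** 2 / (2 * a_ego_brake_min))
    # Lead distance under maximal braking.
    d_lead = v_lead ** 2 / (2 * a_lead_brake_max)
    return max(d_ego - d_lead, 0.0)

print(round(safe_following_distance(v_ego=15.0, v_lead=15.0), 1))  # metres at ~54 km/h
```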
In India it is difficult because of road conditions and traffic congestion, so we are trying to introduce Level 2 autonomy, also known as an Advanced Driver Assistance System (ADAS), for India. It assists the driver by continuously monitoring the behavior of surrounding vehicles, so that the driver can make better decisions and road accidents can be reduced drastically. So in India we are trying to introduce driver assistance functionality rather than a fully autonomous driving system for now.
Has the product been tested on Indian roads given that Indian traffic is chaotic?
It's not permissible to test on public roads in India, as laws for driverless cars are yet to be framed. However, it has been tested in various controlled environments where all the basic road scenarios have been recreated artificially to test the vehicles.
How do you benchmark your project against global well established projects led by organizations such as Google and Tesla?
Autonomous technology is set to become a reality that will completely disrupt not just the way we drive but the way we live. It has become a free-for-all race, where all the top automakers are competing not just with each other but also against non-automotive tech giants like Google to put driverless or autonomous cars on the road.
If successful, how do you think can AI be leveraged for improving driving standards in India?
AI will only be a mirror of humans. In order to make AI think more like humans, and ultimately faster than humans think, artificial neural networks now copy the structure of the human brain. Researchers are ultimately chasing the light-bulb moments created when neurons make connections. What kind of ethical filter do we need to 'control the action' AI will take once these light-bulb moments are created? How much AI is too much?
It will take some time in India; it cannot be deployed directly on the road. It will gradually become more advanced through learning and re-learning the behavior of the Indian transportation system, followed by upgradation of the existing systems. The transportation system and its protocols also need to change, with vehicle-to-everything (V2X) communication becoming the major feature for communication.
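For flavor, a V2X-style exchange boils down to vehicles periodically broadcasting their state so that nearby vehicles and infrastructure can react. The sketch below uses plain UDP and JSON with made-up field names purely for illustration; real deployments use standardized stacks such as DSRC or C-V2X with defined message sets.

```python
# Hypothetical V2X-style periodic broadcast of vehicle state.
# Message fields and transport (plain UDP/JSON) are illustrative only.
import json, socket, time

def broadcast_state(vehicle_id, lat, lon, speed_mps, heading_deg, port=52001):
    msg = json.dumps({
        "id": vehicle_id,
        "lat": lat,
        "lon": lon,
        "speed": speed_mps,
        "heading": heading_deg,
        "timestamp": time.time(),
    }).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(msg, ("255.255.255.255", port))   # share state with nearby listeners
    sock.close()

broadcast_state("KA-01-TEST", 12.9716, 77.5946, 8.3, 270.0)  # ~30 km/h, heading west
```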
Every new technology to date has come with imperfections at the start. If John Mauchly had given up working on the computer after reaching a dead end, we would not be sitting here commenting on social media so easily. Driverless cars will be here soon, but it will still take some time before the masses feel the true impact of the technology.
Driverless cars do offer a promising alternative to driving and would reduce the number of road fatalities due to human error. Automated vehicles have the potential to dramatically improve road safety and revolutionize our transport systems.
What are your plans for rolling out this product in Indian cars? Have any tie-ups been made with Indian automakers?
We've been in talks with various automotive companies, OEMs and third-party vendors about rolling out this product in the near future. We define the autonomous world as a future state in which intelligent technology systems, operating without human participation, enable new business models in a more efficient society. We're really looking forward to a time when the generations after us look back and say how ridiculous it was that humans were driving cars.