Tesla has developed its own processor specifically for neural networks and autonomous driving, called the Full Self-Driving (FSD) computer.
Autonomous cars must interpret and respond to situations just as a human driver would. Here is how it will work.
Can self-driving cars ever really work? That is probably the question on most people's minds. To work, fully autonomous cars will require a machine with human-like cognitive abilities.
The building block of the human nervous system is the neuron, and millions of them form the neural networks of the body's central nervous system. To make autonomous cars a reality, computer scientists need to create artificial neural networks (ANNs) that can do the same job as biological neural networks.
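The parallel can be made concrete with a minimal sketch of a single artificial neuron, the building block of an ANN. The inputs, weights and threshold here are purely illustrative:

```python
# A single artificial neuron: a weighted sum of inputs passed through
# a simple threshold, loosely mimicking a biological neuron.

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, like a biological neuron
    # summing the signals arriving on its dendrites.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # "Fire" only if the combined signal is strong enough.
    return 1 if activation > 0 else 0

# Illustrative example: two sensor readings with hand-picked weights.
print(neuron([0.9, 0.3], [0.6, -0.4], -0.2))  # 0.54 - 0.12 - 0.2 = 0.22 > 0, so it fires: 1
```

A real ANN wires millions of such units into layers, and the "intelligence" lies entirely in how the weights are set.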
Assuming that is achievable, the other thing an autonomous car needs is the ability to see, and that is where industry opinion splits. Until recently, the conventional wisdom was that, in addition to the cameras, radar and ultrasonic sensors cars already carry for cruise control and advanced driver-assistance systems, lidar (laser rangefinding) is essential. Lidar is like high-definition radar, using a laser beam instead of radio waves to scan a scene and create an accurate HD image of it.
One stumbling block is the high cost of lidar units, which cost over £60,000 just two years ago. Cheaper designs should bring the price down to £4,000, but that is still a lot for a single component. Not everyone believes lidar is necessary or even desirable, though, and Tesla and Cornell University scientists have independently come to that conclusion.
Cornell found that AI processing of images from a front-facing camera distorts perceived distances. But by transforming the data in software to give it more of a bird's-eye view, the scientists achieved positioning accuracy close to that of lidar using an inexpensive stereo camera pair placed on either side of the windshield.
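The idea of re-projecting camera data can be sketched roughly: given a per-pixel depth estimate from a stereo pair, each image column can be mapped to a top-down (bird's-eye) position rather than kept in the frontal image plane. The pinhole-camera model and all numbers below are illustrative assumptions, not the Cornell pipeline:

```python
def to_birds_eye(u, depth, focal_length, image_center_u):
    """Map an image column u (pixels) with an estimated depth (metres)
    to a top-down (x, z) position using a simple pinhole camera model:
    x is lateral offset from the car, z is distance ahead."""
    x = (u - image_center_u) * depth / focal_length
    return x, depth

# Illustrative: a pixel 100 columns right of centre, estimated 20 m away,
# with an assumed 700-pixel focal length.
x, z = to_birds_eye(800, 20.0, 700.0, 700.0)
print(round(x, 2), z)  # lateral offset ~2.86 m, 20 m ahead
```

Viewed from above like this, obstacle positions can be compared directly with a lidar point cloud, which is why the representation matters so much for accuracy.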
Tesla's reasoning is that humans manage without laser projectors in their eyes, and that the secret lies in a better understanding of neural networks and how to train them. While a human can identify an object from a single image at first glance, what the computer sees is a matrix of numbers recording the position and brightness of each pixel in the image.
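That "matrix of numbers" can be illustrated with a tiny greyscale image, where each entry is one pixel's brightness (the values here are made up for illustration):

```python
# A tiny 4x4 greyscale "image": each number is one pixel's brightness,
# 0 = black, 255 = white. This grid of numbers is all the computer sees.
image = [
    [  0,   0, 255, 255],
    [  0,  64, 255, 255],
    [ 64, 128, 128,  64],
    [255, 255,  64,   0],
]

# To a human the picture is instantly meaningful; to software it is
# just values to be processed, e.g. finding the brightest pixel:
brightest = max(value for row in image for value in row)
print(brightest)  # 255
```

A real camera frame is the same thing at a vastly larger scale: millions of numbers per image, with no labels attached.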
A neural network therefore needs thousands of labelled images of an object, each label helping it learn to identify that object in any situation. Tesla says no chip had yet been custom-made for neural networks and autonomous driving, which is why it has spent the last three years developing one. The new computer can be retrofitted, and has been fitted to new Teslas since March 2019. The Tesla fleet is already collecting the hundreds of thousands of images needed to train the neural network's 'brain' in 'shadow mode', with no self-driving features enabled at this stage. Tesla boss Elon Musk expects cars to get the full set of self-driving software this year, with robotaxi trials in 2020.
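The labelling-and-learning loop described above can be sketched minimally with a perceptron, which nudges its weights whenever it mislabels a training example; needing many such corrections is exactly why so many labelled images are collected. The data and update rule here are illustrative, not Tesla's system:

```python
# Made-up labelled data: each "image" is reduced to two features
# with a 0/1 label (e.g. 1 = "car", 0 = "not a car").
training_data = [
    ([0.9, 0.2], 1),
    ([0.1, 0.8], 0),
    ([0.8, 0.3], 1),
    ([0.2, 0.9], 0),
]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # repeated passes over the labelled examples
    for features, label in training_data:
        s = sum(x * w for x, w in zip(features, weights)) + bias
        prediction = 1 if s > 0 else 0
        error = label - prediction
        # Shift the weights towards the correct answer on a mistake.
        weights = [w + rate * error * x for x, w in zip(features, weights)]
        bias += rate * error

# After training, the network labels all four examples correctly.
print([1 if sum(x * w for x, w in zip(f, weights)) + bias > 0 else 0
       for f, _ in training_data])  # [1, 0, 1, 0]
```

Real driving networks have millions of weights rather than two, but the principle is the same: every labelled image is one small correction.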
50 trillion operations per second
Tesla engineers say a self-driving car needs a neural network computer capable of performing at least 50 trillion operations per second (50 TOPS). In comparison, the human brain can manage about ten at most. The new Tesla computer consumes no more than 100 watts of power, so it can be installed in an existing car. Bosch and NVIDIA are developing a similar "brain" for autonomous cars, due to be ready by 2020: the Bosch AI self-driving computer.
Is the public willing to share the roads with self-driving cars?
Do semi-autonomous systems make cars safer?
Why the arrival of fully autonomous cars reflects the slow progress of robotics