Tesla Autonomy Day (Day 1)


So today Tesla held their first Autonomy Day for analysts. They described their own custom neural network chip for self driving. Design began in 2016; Dec. 2017 – first chip back from manufacturing; July 2018 – chip in full production; Dec. 2018 – started to retrofit employee cars; March 2019 – added to new Model S and X; April forward – in the new Model 3s. So about three years from design to scale across all their major vehicles.

The goals of the hardware team are:

The final chip looks like this:

A straightforward board: graphics, power, and compute. The board has redundant chips for safety and security, and the whole thing fits behind the glove compartment. The point of the redundancy is that the chance of the computer fully failing is less than the chance of someone losing consciousness while driving.

The chips use a consensus algorithm to check that they agree before actually sending a command to the car.
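To make that concrete, here is a rough sketch of what such an agreement check could look like. The plan format, function names, and tolerances are my own illustration, not Tesla's implementation:

```python
# Hypothetical sketch of a lockstep agreement check between two redundant
# compute units. Plan representation and tolerances are illustrative only.
from dataclasses import dataclass

@dataclass
class ControlPlan:
    steering_angle: float  # degrees
    acceleration: float    # m/s^2

def plans_agree(a: ControlPlan, b: ControlPlan,
                steer_tol: float = 0.5, accel_tol: float = 0.1) -> bool:
    """Return True if the two independently computed plans match within tolerance."""
    return (abs(a.steering_angle - b.steering_angle) <= steer_tol and
            abs(a.acceleration - b.acceleration) <= accel_tol)

def issue_command(plan_a: ControlPlan, plan_b: ControlPlan):
    if plans_agree(plan_a, plan_b):
        return plan_a   # send the agreed-upon command to the actuators
    return None         # disagreement: re-plan / fall back instead of acting

# Both chips produce nearly identical plans, so the command is issued.
print(issue_command(ControlPlan(2.0, 0.5), ControlPlan(2.1, 0.5)))
```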

14 nm FinFET CMOS:

It also has a safety system on the chip so it will ONLY run software signed by Tesla. The chip can handle 2,100 frames per second. They developed their own neural network compiler for the chip – no big surprise there.
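The "signed by Tesla" idea is classic secure boot: verify the image against a trusted public key and refuse to run anything that doesn't check out. A minimal sketch of that pattern (the key handling and image format here are assumptions, not Tesla's actual scheme):

```python
# Illustrative signed-firmware check using Ed25519 via the 'cryptography' library.
# This is a generic secure-boot sketch, not Tesla's implementation.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

def verify_and_boot(firmware: bytes, signature: bytes, pubkey: Ed25519PublicKey) -> bool:
    """Only 'boot' the firmware if its signature verifies against the trusted key."""
    try:
        pubkey.verify(signature, firmware)
    except InvalidSignature:
        return False  # refuse to run unsigned or tampered software
    return True       # signature valid: hand off to the firmware

# Demo with a freshly generated key pair standing in for the vendor key.
vendor_key = Ed25519PrivateKey.generate()
image = b"autopilot firmware v42"
sig = vendor_key.sign(image)
print(verify_and_boot(image, sig, vendor_key.public_key()))         # True
print(verify_and_boot(image + b"!", sig, vendor_key.public_key()))  # False
```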

Results:

  • Runs at 72 W (the goal was to stay under 100 W)
  • Chip cost is about 80% of the prior generation's
  • Can handle 2,300 frames per second (21x faster than the last chip); the prior NVidia solution is 21 TOPS(1) and they are at 144 TOPS (quick math below)
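Taking the quoted figures at face value, a quick back-of-the-envelope check:

```python
# Back-of-the-envelope math using the figures quoted in the presentation.
tops = 144          # trillion operations per second, new FSD computer
watts = 72          # measured power draw
prior_tops = 21     # figure quoted for the prior NVidia-based computer

print(f"Efficiency: {tops / watts:.1f} TOPS per watt")       # ~2.0 TOPS/W
print(f"Raw TOPS ratio vs prior: {tops / prior_tops:.1f}x")  # ~6.9x
```

Note the 21x claim above is about frames processed per second, not raw TOPS, which is why the two ratios differ.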

According to Elon – “all Tesla cars being produced now have all the hardware required for full self driving.”

1) TOPS – Trillion Operations Per Second

Software discussion:

Andrej created the computer vision class at Stanford and has been training neural networks at Google, Stanford, and other places.

So the basic problem the network solves is visual recognition. He explained how a network actually works by walking through training and backpropagation. To help the training, they sometimes have a human annotate images for things like lane lines.
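A toy version of that training/backpropagation loop, just to illustrate the mechanics he described (the data, model, and labels below are stand-ins, not anything Tesla showed):

```python
# Toy supervised training loop: forward pass, loss against human annotations,
# backpropagation, and a weight update.
import torch
import torch.nn as nn

# Stand-in data: 64 "images" of 128 features each, with binary labels
# (think human-annotated "lane line present" vs "not present").
x = torch.randn(64, 128)
y = torch.randint(0, 2, (64,)).float()

model = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    logits = model(x).squeeze(1)   # forward pass
    loss = loss_fn(logits, y)      # compare prediction to the human annotation
    opt.zero_grad()
    loss.backward()                # backpropagation: compute gradients
    opt.step()                     # update the weights
print(f"final loss: {loss.item():.3f}")
```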

This is key for when there are unusual conditions: night time, wet roads, shadows, etc. The key message is that more data makes the networks better. And while simulation can help, real-world data is needed to address the “long tail” of reality.

Using bounding boxes, Andrej took us through a concrete example of how the training works. A great example of fixing problems in learning: a bike on the back of a car. They had to use training sets showing bikes on the back of cars to help the network realize they are one item from a driving perspective, not two. Finding things lying on the road is also critical, so they source examples of objects on the road from the fleet for the network to learn from.
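One way to picture the "one object, not two" outcome is a post-processing heuristic like the one below. To be clear, this is my illustration: Tesla's actual fix was adding labeled training data, not a hand-written rule.

```python
# Illustrative heuristic: drop a "bike" detection when its box sits almost
# entirely inside a "car" box, treating the pair as one object for driving.
# Boxes are (x1, y1, x2, y2) in pixels.

def containment(inner, outer) -> float:
    """Fraction of the inner box's area that lies inside the outer box."""
    ix1, iy1 = max(inner[0], outer[0]), max(inner[1], outer[1])
    ix2, iy2 = min(inner[2], outer[2]), min(inner[3], outer[3])
    overlap = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    inner_area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return overlap / inner_area if inner_area else 0.0

def merge_attached_bikes(detections, threshold=0.9):
    cars = [d for d in detections if d["label"] == "car"]
    merged = []
    for d in detections:
        if d["label"] == "bike" and any(
                containment(d["box"], c["box"]) >= threshold for c in cars):
            continue  # bike is attached to a car: not a separate road user
        merged.append(d)
    return merged

dets = [{"label": "car",  "box": (100, 100, 300, 220)},
        {"label": "bike", "box": (220, 120, 290, 200)}]
print(merge_attached_bikes(dets))  # only the car remains
```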

The software identifies when people have to intervene to fix things, captures that data, and sends it to Tesla, which fixes the issue and adds it to the unit test cases. Tesla can also request the “fleet” to send them certain types of data, e.g. people merging in from the right or left lane to the center. They then run the new model in “shadow mode” to test it in production. They check the results, and can then deploy the fix by flipping a bit between shadow mode and production mode. This is basically A/B testing. It happens regardless of whether the car is using Autopilot or not – people’s driving is effectively annotating the data for Tesla.
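A rough sketch of the shadow-mode idea (the names and thresholds are mine, not Tesla's code): the candidate model predicts alongside the human driver, and only the frames where they disagree become interesting clips to send back.

```python
# Conceptual shadow mode: the candidate model predicts, the human drives,
# and disagreements get flagged for upload and labeling. Illustrative only.
from dataclasses import dataclass

@dataclass
class Frame:
    human_steering: float   # what the driver actually did (degrees)
    model_steering: float   # what the shadow model would have done

def shadow_mode_triggers(frames, disagreement_threshold=5.0):
    """Return indices of frames where model and driver disagree enough to upload."""
    return [i for i, f in enumerate(frames)
            if abs(f.human_steering - f.model_steering) > disagreement_threshold]

frames = [Frame(0.0, 0.4), Frame(12.0, 1.5), Frame(-3.0, -2.8)]
print(shadow_mode_triggers(frames))  # [1] -> this clip gets sent back to Tesla
```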

Augmented Vision in the vehicles will show you what the car is actually “seeing”. Path prediction is used to handle cloverleafs on highways.
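Path prediction on something like a cloverleaf amounts to extrapolating the lane's curvature ahead of the car. A toy version of that idea (a simple polynomial fit, not Tesla's actual predictor):

```python
# Toy path prediction: fit a quadratic to recently observed lane-center points
# (x = distance ahead, y = lateral offset) and extrapolate the path forward.
import numpy as np

observed_x = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # meters ahead
observed_y = np.array([0.0, 0.3, 1.1, 2.6, 4.5])      # lateral offset, curving right

coeffs = np.polyfit(observed_x, observed_y, deg=2)    # fit y = ax^2 + bx + c
future_x = np.array([25.0, 30.0, 35.0])
predicted_y = np.polyval(coeffs, future_x)

for x, y in zip(future_x, predicted_y):
    print(f"{x:4.0f} m ahead -> predicted offset {y:5.2f} m")
```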

Love the comment that most of us don’t shoot lasers out of our eyes to see around, this is why visual recognition is critical for level 4 and level 5 self driving cars. Tesla is going all in on visual.

Elon believes they can be feature complete for self driving this year, that people will trust it by the 2nd quarter of next year, and that it will be ready for regulators by year-end 2020. Of course it will take longer to get regulators to actually approve it.

The final presentation, by Stuart, was about driving at scale with no intervention:

How do you package the hardware and computer vision together to drive this as a production system at scale? They actually run multiple simulations at the same time, and use them to challenge each other to get to a single set of real-world truth.
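One way to picture "challenging each other": several independent estimates of the same quantity are cross-checked, and the agreed value only counts as truth when they cluster. This structure is my illustration, not a description of Tesla's pipeline.

```python
# Illustrative cross-check: independent estimates of one quantity (say,
# distance to the lead car) challenge each other; the consensus is the median
# of the estimates that agree. Not Tesla's actual method.
from statistics import median

def reconcile(estimates, max_spread=1.0):
    """Return the consensus value, or None if the estimators disagree too much."""
    mid = median(estimates)
    agreeing = [e for e in estimates if abs(e - mid) <= max_spread]
    return median(agreeing) if len(agreeing) >= 2 else None

print(reconcile([24.8, 25.1, 25.0]))   # ~25.0, the estimators agree
print(reconcile([24.8, 31.0, 18.0]))   # None, no consistent "truth"
```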

A deeper discussion of how shadow mode works.

Then Controlled Deployment:

They get clips, play them back through new simulations, and tune and fix the algorithms.
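In spirit this is a regression harness: replay recorded clips through the new code and check its outputs against the previously validated behavior before anything ships. A minimal sketch with an entirely hypothetical clip format:

```python
# Minimal replay/regression harness sketch: run recorded clips through a new
# planner and compare against expected (previously validated) behavior.
# The clip format, planner, and tolerance are assumptions for illustration.

def replay(clips, planner, tolerance=0.5):
    failures = []
    for clip in clips:
        predicted = [planner(frame) for frame in clip["frames"]]
        if any(abs(p - e) > tolerance
               for p, e in zip(predicted, clip["expected_steering"])):
            failures.append(clip["name"])
    return failures

# Toy planner and one recorded clip.
new_planner = lambda frame: frame["lane_offset"] * 2.0
clips = [{"name": "cloverleaf_merge_001",
          "frames": [{"lane_offset": 0.1}, {"lane_offset": 0.3}],
          "expected_steering": [0.2, 0.6]}]
print(replay(clips, new_planner))  # [] -> no regressions on this clip
```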

Then they confirm vehicle behavior and let the system validate that it is working correctly. Based on this data collection, they decide when code is ready to be turned on.
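That go/no-go decision can be pictured as a simple gate over the collected metrics. The metric names and thresholds below are placeholders, not Tesla's actual criteria:

```python
# Placeholder rollout gate: only enable the feature once the collected
# shadow-mode metrics clear the bar. Names and thresholds are illustrative.
def ready_to_enable(metrics: dict) -> bool:
    return (metrics["shadow_miles"] >= 1_000_000 and
            metrics["disagreement_rate"] <= 0.001 and
            metrics["regression_failures"] == 0)

print(ready_to_enable({"shadow_miles": 2_500_000,
                       "disagreement_rate": 0.0004,
                       "regression_failures": 0}))  # True -> turn it on
```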

So based on their lifecycle they are able to increase the speed of moving fixes into production.

Elon ended by talking about how redundancy has been added in since 2016, and how the cars have been designed to be robo-taxis since Oct. 2016. Cars prior to this will not be upgraded, as it is more expensive to upgrade a car than to just make a new one.

Reviewing the master plan, he says Tesla is on track. They expect to have the first Robo Taxis out next year, with no one in them. The challenge is for people to realize that the exponential improvement curve is getting us there faster than most expect. The biggest challenge is getting regulatory approval. It will become an Uber or Airbnb model, and where not enough people share their vehicles, Tesla will deploy vehicles themselves.

Goal of Robo Taxis will be to smooth out the demand curve of the fleet.

If you lease a Model 3 you won’t have the option to buy it (based on this new model).

The current battery pack is designed for 300,000 miles; the new one will be good for 1M miles with minimal maintenance.

They compared the cost of owning/driving a car versus a Robo Taxi, and then looked at the gross profit of a Robo Taxi.

It looks like Tesla will take this seriously; they actually have a clause in their contracts that people can’t use the car with other “networks”, so you have to use their Robo Taxi network.

A very interesting presentation: technology (hardware, software, and scale) plus business models.