# Is Autopilot actually "Learning" on V9?



## 3V Pilot (Sep 15, 2017)

I've noticed some interesting new Autopilot behavior since upgrading to V9 and I'm wondering if anyone else has had similar experiences. I know there is a lot of talk about Tesla's "Neural Net" and how it will help make Autopilot better, but does that mean our cars now "learn" on their own and in collaboration with other Teslas on the road? From what I've seen over the first few days of V9, I'm starting to think the learning features are now active. Let me explain...

On V8 and prior, Autopilot behavior was always pretty static; by that I mean it would always react pretty much the same way to any given road. For example, on my morning commute there is one spot where the fast lane of the freeway splits and becomes two lanes. V8 Autopilot would always veer left as the lane lines widened, then snap back right to stay in the lane it started in. I just started disengaging auto-steer before that section so I didn't look like a half-crazed maniac...lol.

On V9, the first day, I left Autopilot engaged, and in that same spot there was much less of a left input and only a minor correction back to the right. Then, the very next time, I noticed there was almost no movement at all, like the car "knew" what was coming and didn't swerve. This has never happened before: Autopilot changing behavior without a software update.

I also have another example: one day V9 was getting much too close to a curb on a right-hand bend and I had to disengage auto-steer twice. The next day, around the same bend, it stayed right in the middle of the lane and seemed smoother at it. Again, same road, different behavior, no update.

I don't think I'm imagining things, and maybe this whole Neural Net thing is actually active now, but I'd just like to know if anyone else has seen similar events. Or did I just miss the fact that this is what Elon has been talking about all along?


----------



## Gavyne (Jul 7, 2018)

Neural net was always working, V9 just made it a whole lot better.


----------



## 3V Pilot (Sep 15, 2017)

Gavyne said:


> Neural net was always working, V9 just made it a whole lot better.


I was under the impression that the Neural Net was helping Tesla come out with better Autopilot upgrades, basically back at the factory. Was it your impression that the cars actually got better in between updates? This is the first time I've noticed it, and that is why I'm assuming it actually is integrated into the car now and having a direct effect.


----------



## babula (Aug 26, 2018)

If what you reported is true, that's truly incredible.

Not sure if it always existed in V8 but I recall reading something about it getting 500x better in V9.


----------



## Gavyne (Jul 7, 2018)

The neural net is about processing the data; V9, for instance, increased the capacity at which the data can be processed. But a neural net is also about training on the data. Each driver using Autopilot helps train it, and the newly learned behavior is then applied to all cars. If your question is whether Autopilot is learning, then yes, it most definitely is. What we don't know is their process for validation: who's validating the data, how much of it is done manually, etc.


----------



## GDN (Oct 30, 2017)

Neural nets are something I don't understand, but from reading about them I've been led to believe the system will get smarter on its own. I'm not sure if that happens in each individual car, or if the feedback from cameras and car paths is just fed back to the mothership at night, which then improves maps or other tidbits that could be fed back to the navigation system to improve its behavior. I am definitely led to believe, however, that it doesn't take the next release to make the car/nav smarter; it is an ongoing improvement at all times. I just don't know the details of how.


----------



## Bernard (Aug 3, 2017)

Gavyne said:


> The neural net is about processing the data; V9, for instance, increased the capacity at which the data can be processed. But a neural net is also about training on the data. Each driver using Autopilot helps train it, and the newly learned behavior is then applied to all cars. If your question is whether Autopilot is learning, then yes, it most definitely is. What we don't know is their process for validation: who's validating the data, how much of it is done manually, etc.


In the case discussed, there cannot have been any person validating -- Tesla does not have tens of thousands of employees just monitoring Tesla drivers ;-) But the system can learn perfectly well on its own, through corrections. If you take over from Autopilot (and do not have an accident and do not end up in the path of another car ;-), the system can learn from that; in fact, it could also learn from your not taking any action -- by concluding it is doing fine and reinforcing its current settings. Given that the task is car driving, where errors can be lethal, human corrections are likely to be given a lot more weight than no correction at all -- in fact, I would guess that reinforcement through lack of action is not happening, since it may be a bit too risky.
But, yes, doing that requires a lot of data -- the system has to have enough information to characterize the situation that led you to take over, and, more challengingly, it must infer what the correct action should have been -- next time, it cannot just mimic exactly what happened, since that would be jerking you around (at the time when you took over in the first occurrence), and that takes a lot of computing power, even if (as I describe below) it is just recomputing a few calibration values rather than adjusting the net's weights.
A neural net is a very simple concept (originally inspired by early -- and somewhat mistaken -- descriptions of the brain as a collection of neurons, each neuron connected to a fairly large number of others, using connections that can be "weighted" -- some are more influential than others); the concept is quite old and the first "artificial" neural nets date back 50 years, but the issue has always been one of scale: the brain has billions of neurons and early research was using neural nets with tens, or at best hundreds of nodes, because anything larger would have taken forever to train (and also because data were scarce). I would not be surprised if Tesla turned up the rate of per-vehicle learning (through recalibration).
I would be rather surprised if individual cars regularly recompute the weights of their own nets -- this is a very time-consuming computation and it would be risky to allow modification of the entire net based on some non-validated data such as a correction. More likely, what is happening is some local recalibration. For instance, it would be much simpler to have some parameters describing how close to the inside of a turn the car should drive and recalibrate those parameters after the driver corrects the trajectory in a curve rather than rerun the computation of marginal probabilities using the correction data. (Another inexpensive measure would be to memorize each correction so as to be ready to deviate from the otherwise automatic course of action when presented with the same circumstances -- a sort of patch between firmware updates.)
In any case, it's great and fascinating!
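To make the local-recalibration idea above concrete, here is a minimal sketch, with invented names and numbers (nothing here is known Tesla code), of nudging a single calibration parameter toward a driver's correction instead of retraining any net:

```python
# Hypothetical sketch of "local recalibration": instead of retraining the
# neural net, the car nudges a small calibration parameter (here, lateral
# offset in curves) toward what the driver actually did. All names and
# values are made up for illustration.

class CurveOffsetCalibrator:
    def __init__(self, offset_m=0.0, learning_rate=0.1):
        self.offset_m = offset_m          # lateral bias from lane center, meters
        self.learning_rate = learning_rate

    def on_driver_correction(self, driver_offset_m):
        """The driver took over and drove at driver_offset_m from lane center.
        Move our calibration a fraction of the way toward that, rather than
        jumping straight to it (which would feel jerky)."""
        error = driver_offset_m - self.offset_m
        self.offset_m += self.learning_rate * error

cal = CurveOffsetCalibrator()
for _ in range(5):                    # driver repeatedly hugs center-left
    cal.on_driver_correction(-0.3)
# cal.offset_m has drifted toward -0.3 without retraining any network
```

Each correction closes a fixed fraction of the remaining gap, so repeated corrections converge smoothly on the driver's behavior.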


----------



## Gavyne (Jul 7, 2018)

I believe I read an earlier article that mentioned people going through the data, so I think there are people looking at it. What we don't know are the exact details: what data they are looking at, how much of it, and what they're doing with it. They won't release the details either, just as Google won't release their search engine algorithm.

I do agree the machine is learning on its own. But at this point in time I believe Tesla is still tweaking and feeding data into the neural net.


----------



## KFORE (May 19, 2018)

Each car is certainly not learning on its own. The trained NNs are included in OTA updates and your car runs the trained nets locally, but it does not self-learn for each specific car. That would actually be a nightmare: an entire fleet learning in a non-homogeneous way, with no two cars reacting the same. Tesla doesn't want that. Instead, your car may flag an issue automatically and send it to Tesla once it gets back on WiFi, which they can then use to train the larger NN that all cars get with each OTA update.

My guess is that what you're experiencing is that the NNs are quite new in V9 and are going to produce some dynamic results as Tesla works out the kinks of using all cameras as part of the network.


----------



## kort677 (Sep 17, 2018)

Autopilot is always learning. Many of us share the data from the car with the mothership, and the mothership receives vast amounts of data that is then used to make adjustments. I have noticed changes in the AP system. Most of us who drive regular routes have certain spots where AP can get a bit wonky, and sometimes that wonkiness is lessened. Why? Because AP refined itself and "figured out" whatever was causing that wonkiness.


----------



## garsh (Apr 4, 2016)

KFORE said:


> Each car is certainly not learning on its own. The trained NNs are included in OTA updates and your car runs the trained nets locally, but it does not self-learn for each specific car. That would actually be a nightmare: an entire fleet learning in a non-homogeneous way, with no two cars reacting the same. Tesla doesn't want that. Instead, your car may flag an issue automatically and send it to Tesla once it gets back on WiFi, which they can then use to train the larger NN that all cars get with each OTA update.


KFORE's got it. 

"Machine learning" is an extremely CPU-intensive _offline_ process. Tesla will use _thousands_ of machines running in parallel for this. You feed in a ton of data (collected from all the cars of the fleet), and out comes a new neural network (aka neural net, or NN).

The neural net is then included in the next version of vehicle software. Running the neural net is also CPU-intensive, but it can be handled by the car's single computer. This allows the car to make decisions based upon all of its inputs. But there is no "learning" happening at this point; the car is just making decisions based on the programming of the neural net.

If you notice any difference in behavior in a single car from one drive to the next, then I think it's more likely due to calibrating the cameras. Each car's cameras will be aimed slightly differently, and it may take the car a little while to determine exactly how each camera is aimed and compensate for the differences.
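The offline-training / in-car-inference split described above can be illustrated with a toy forward pass; the tiny network and random weights below are stand-ins for illustration, not anything from Tesla's actual software:

```python
import numpy as np

# Toy illustration: training happens offline, and the car only runs a
# *frozen* network forward. These random weights stand in for whatever an
# offline training farm would have produced and shipped in an update.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)   # "shipped" layer 1
W2, b2 = rng.normal(size=4), 0.0                # "shipped" layer 2

def steer(camera_features):
    """Forward pass only -- no gradients, no weight updates in the car."""
    h = np.maximum(camera_features @ W1 + b1, 0.0)   # ReLU hidden layer
    return float(np.tanh(h @ W2 + b2))               # steering in [-1, 1]

x = rng.normal(size=8)     # pretend camera-derived feature vector
angle = steer(x)           # deterministic: the same input always gives the
assert angle == steer(x)   # same output, because the net never changes
```

Between updates the weights are constant, so any change in behavior has to come from somewhere other than the net itself.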


----------



## MelindaV (Apr 2, 2016)

babula said:


> If what you reported is true, that's truly incredible.
> 
> Not sure if it always existed in V8 but I recall reading something about it getting 500x better in V9.


You are thinking of the AP3 hardware (next year) being capable of that much better performance than AP2.5.


----------



## Ed Woodrick (May 26, 2018)

KFORE said:


> Each car is certainly not learning on its own. The trained NNs are included in OTA updates and your car runs the trained nets locally, but it does not self-learn for each specific car. That would actually be a nightmare: an entire fleet learning in a non-homogeneous way, with no two cars reacting the same. Tesla doesn't want that. Instead, your car may flag an issue automatically and send it to Tesla once it gets back on WiFi, which they can then use to train the larger NN that all cars get with each OTA update.
> 
> My guess is that what you're experiencing is that the NNs are quite new in V9 and are going to produce some dynamic results as Tesla works out the kinks of using all cameras as part of the network.


I'll agree, and will translate slightly: each car does not learn. What happens is that driving information is accumulated in the Tesla cloud, and with that data they use neural nets to create sets of parameters that then become part of a specific release. So everyone's car drives essentially the same, but with each update they drive better, using information from other drivers.

Would we really want cars to learn the bad driving habits of some drivers?


----------



## Rick59 (Jul 20, 2016)

Everyone is making very persuasive arguments so let me put in my nickel’s worth.
I doubt that Tesla is using valuable central processing time to correct a small aberration in one location, in one car.
There is no reason to think that the car’s computer can’t process some simple transactions and remember them for the next time. I don’t see why a world-wide update would be needed to remedy a situation in my remote corner of the planet.


----------



## Jay79 (Aug 18, 2018)

If your car will auto-raise the suspension on the S or X, or open your garage door based on geographical data by remembering your input, it's plausible to assume it can remember exactly where you disengaged Autopilot and what manual correction was made at the time and place of the incident. Perhaps V9 has opened up this kind of self-calibration and fine-tuned refinement. I think the NN will be an overall data push to continue to smooth AP out and implement enhanced AP features. Given the million scenarios each driver will experience, a fine-tune adjustment within each car makes perfect sense. FYI, this is all guesses and speculation.


----------



## JWardell (May 9, 2016)

In my understanding, the entire system learns as a whole: if there are a high number of interventions or corrections in a spot, it will learn the proper behavior, then encode that in the high-resolution maps that are updated and downloaded from time to time. The round trip could take weeks or months, but the car should eventually stop veering into an exit ramp or braking for a bridge once the map tells it not to.


----------



## 3V Pilot (Sep 15, 2017)

Wow, some really great responses here from people who really understand this stuff. Thanks everyone! It all just fascinates me and I like to see the car improving all the time. How it all works, well, that's mirrors and magic to me! Sounds like my first assumption was correct in that the main learning happens back at Tesla's top secret underground lair where they lock away the most knowledgeable computer geeks and never let them see the light of day.

I'm just glad it does learn somehow, and since I like to believe the car is my own personal AI droid, I'll just keep on believing that its little brain is learning all on its own....lol.


----------



## PNWmisty (Aug 19, 2017)

Jay79 said:


> If your car will auto-raise the suspension on the S or X, or open your garage door based on geographical data by remembering your input, it's plausible to assume it can remember exactly where you disengaged Autopilot and what manual correction was made at the time and place of the incident. Perhaps V9 has opened up this kind of self-calibration and fine-tuned refinement.


That's a good point. While I agree very much with those who say the NN learns as a fleet, I think it might be overstating what we know to assume every car will behave exactly the same in every situation. I think it's likely that individual cars could set "flags" at certain points. These points would probably be at spots where the EAP has a decision to make and either decision seems relatively safe and somewhat equally suitable. The "flag" could then develop a preference for one behavior over the other, over time, based upon whether EAP was disengaged or whether it was necessary to swerve shortly after the flag or not.
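Purely as speculation, the "flag" mechanism described above could be as simple as a per-location score table. This sketch invents every name and is not meant as a claim about Tesla's implementation:

```python
# Speculative sketch of per-location "flags": at ambiguous decision points,
# keep a tally of which choice avoided a disengagement or swerve, and
# prefer the better-scoring option next time. All names are invented.

from collections import defaultdict

class DecisionFlags:
    def __init__(self):
        # (location, option) -> score; positive means the option went well
        self.scores = defaultdict(int)

    def record(self, location, option, disengaged):
        """Score the option down if the driver had to disengage, up if not."""
        self.scores[(location, option)] += -1 if disengaged else 1

    def prefer(self, location, options):
        """Pick the option with the best track record at this spot."""
        return max(options, key=lambda o: self.scores[(location, o)])

flags = DecisionFlags()
flags.record("lane_split_exit_42", "follow_left_line", disengaged=True)
flags.record("lane_split_exit_42", "hold_course", disengaged=False)
choice = flags.prefer("lane_split_exit_42", ["follow_left_line", "hold_course"])
# choice is now "hold_course": the flag developed a preference over time
```

Note that this kind of table sits entirely outside any neural net; it only biases which of two already-safe behaviors gets picked.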


----------



## PNWmisty (Aug 19, 2017)

3V Pilot said:


> I don't think I'm imagining things and maybe this whole Neural Net thing is actually active now but I'd just like to know if anyone else has seem similar events? Or did I just miss the fact that this is what Elon has been talking about all along?


I don't have enough repeat miles to say whether each car is actively learning, but I can say the capabilities of version 9 are miles ahead of the previous version, especially in terms of successfully navigating curvy, single-lane roads/highways requiring varied speeds. I was in a 55 mph zone that had two consecutive 90 degree turns (the highway navigates around farmers' fields). Locals normally take these corners at 25-30 mph, speed up to about 45 mph in the middle section and then resume 55 mph driving after the second corner. On version 8 the EAP would enter the corners too fast and the car would cross wide into the oncoming lane (tested when there was no oncoming traffic). Yesterday, version 9 drove that section just like a local would. It also drove 8-9 miles of the same curvy rural highway, slowing down appropriately for tighter corners and accelerating back to speed at corner exit, all without driver intervention.

So, while there is plenty to be unhappy with in terms of dashcam file corruption, etc, the real upgrade here was the capabilities of EAP.


----------



## Bernard (Aug 3, 2017)

garsh said:


> KFORE's got it.
> 
> "Machine learning" is an extremely CPU-intensive _offline_ process. Tesla will use _thousands_ of machines running in parallel for this. You feed in a ton of data (collected from all the cars of the fleet), and out comes a new neural network (aka neural net, or NN).
> 
> ...


The NN does not change except through firmware updates, clearly -- a neural net of the size used by Tesla takes enormous computational power to train.
But there is somewhat more to the car's Autopilot than just the monolithic NN.

For instance, we know Autopilot uses a number of calibration parameters (although we don't know what those are -- Tesla refers to initial camera calibration and little else). Some of these parameters could be adjusted as owners drive the car -- this is a form of learning if it is done in reaction to driving inputs. (Again, we know it occurs in the first 20-50mi or so for parameters affecting the cameras, but we do not know whether additional system parameters may be involved nor whether additional changes can be made after these initial 20-50mi.)
Automated Autopilot calibration for a particular driver (not just for a particular car) is of course desirable, so I would expect that Tesla is at least thinking about it and perhaps even introducing some minor aspects of it in new firmwares. (Even once we have true level 5 autonomy, not everyone will want to be driven around in the same manner, however safe; and while driver input remains essential, interpreting that input is important.)

There are other ways to modify the Autopilot that do not require recomputing the parameters of the NN, but I doubt that Tesla would use them, as frequent firmware updates almost always provide a better solution.
(For instance, memorizing certain special cases where the driver overrode the software, so as not to repeat the error that had to be corrected. Think of it as a patch on the current firmware, until a later firmware release fixes the problem by providing new weights for the NN itself. This is doable, but it might slow down Autopilot, since it would have to check whether one of these cases applies, and it is also somewhat risky.)


----------



## KenF (Jul 3, 2018)

Another point not mentioned is that Tesla is continually refining the high-resolution map data that it uses to support Autopilot. If your Tesla doesn't have the latest high-resolution map data for a particular area, then Autopilot performance in that area could be subpar until your car downloads the appropriate data. If your Tesla is on WiFi at home, then you should receive regular high-resolution map updates - as often as every week depending on your location. If your Tesla isn't on WiFi, then you may receive the latest high-resolution map data for a particular area only after you drive through that area.


----------



## garsh (Apr 4, 2016)

PNWmisty said:


> I think it's likely that individual cars could set "flags" at certain points.





Bernard said:


> Automated Autopilot calibration for a particular driver (not just for a particular car) is of course desirable, so I would expect that Tesla is at least thinking about it and perhaps even introducing some minor aspects of it in new firmwares.


I really doubt that anything like this is happening. If Tesla truly believes that they can produce a "good driving car" through machine learning, then they will be concentrating their efforts on improving that. Additional "tweaks" like this would simply be overriding what the neural net says to do. As a software developer who would want to implement a feature like this, you'd have to make the argument (to Elon) that you think these sorts of tweaks are required because you don't believe that the neural net can be improved to handle these cases properly.

I think that's the kind of argument that gets you fired from Tesla.


----------



## Mike (Apr 4, 2016)

Well, something as anodyne as the automatic high beams seems to have improved with V9.

I have to guess more cameras are being used, so as not to blind oncoming drivers but also not be tricked by the same reflected (speed limit) sign that would always momentarily turn them off in V8.


----------



## PNWmisty (Aug 19, 2017)

garsh said:


> As a software developer who would want to implement a feature like this, you'd have to make the argument (to Elon) that you think these sorts of tweaks are required because you don't believe that the neural net can be improved to handle these cases properly.


Not really. The argument could be as simple as: the current neural net can become more useful in a shorter timeframe with this kind of common-sense tweak. Musk is not going to fire someone who has a practical plan to improve the functionality of the neural net on a faster timeline.


----------



## PNWmisty (Aug 19, 2017)

One more thought on whether every car will behave the same in the same situation. The discussion about HOV lane markings and whether EAP will cross them got me thinking. Different states have different laws regarding whether solid lines can be legally crossed. The neural net will have to, at some point if not already, take into account the jurisdiction it's driving in and modify its behavior. It would be impractical to develop a different neural network for every state or jurisdiction. The logical conclusion is that there needs to be another layer added on top of the neural network to handle local situations, much like a human driver drives instinctually but modifies that behavior depending upon which local rules may be in place (or their own personal preferences).

This layer, or another layer above it, could be used to modify behavior specific to an individual car based on previous experience. The ability to customize the neural network to geographical or individual vehicle considerations doesn't detract from the effectiveness or usefulness of the NN, it adds to it.
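As a hedged sketch of that extra layer, per-jurisdiction rules could sit between the net's proposed action and the control output. The states, rules, and action names below are made up purely for illustration:

```python
# Illustrative "layer above the neural net": the same net output is filtered
# through local rules (e.g., whether a solid HOV line may be crossed).
# The jurisdictions and rule values here are invented examples, not law.

LANE_RULES = {
    "state_A": {"cross_solid_hov": True},    # assumption for illustration
    "state_B": {"cross_solid_hov": False},   # assumption for illustration
}

def apply_local_rules(nn_action, jurisdiction):
    """Veto a maneuver the net proposes if local rules forbid it."""
    rules = LANE_RULES.get(jurisdiction, {"cross_solid_hov": False})  # default: no
    if nn_action == "cross_solid_hov" and not rules["cross_solid_hov"]:
        return "stay_in_lane"
    return nn_action

apply_local_rules("cross_solid_hov", "state_A")   # allowed, passes through
apply_local_rules("cross_solid_hov", "state_B")   # vetoed -> "stay_in_lane"
```

One net, many jurisdictions: only the small rule table differs, which is exactly why such a layer avoids training a separate network per state.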


----------



## L. David Roper (Apr 19, 2018)

Where is the computer located in the Model 3?


----------



## GDN (Oct 30, 2017)

L. David Roper said:


> Where is the computer located in the Model 3?


Which one? There are multiple. I believe the main one, in the middle of the dash area, is called VCFront, and then there are VCLeft and VCRight and various other components. Check out this thread and watch his videos; you'll find more than you can digest. https://teslaownersonline.com/threa...world-to-fix-totaled-teslas.8225/#post-138240


----------



## JWardell (May 9, 2016)

L. David Roper said:


> Where is the computer located in the Model 3?


The autopilot computer is under the glove compartment and designed to be easily swapped and upgraded.


----------



## MelindaV (Apr 2, 2016)

the Tesla Daily podcast recently had a guest on (Jimmy_D from TMC) with a great conversation about the AI back at the mothership vs. the in-car processing. The show was about an hour long and explained some of what the cars are doing and how.
https://itunes.apple.com/us/podcast...-d-10-29-18/id1273643094?i=1000422830102&mt=2


----------



## 3V Pilot (Sep 15, 2017)

MelindaV said:


> the Tesla Daily podcast recently had a guest on (Jimmy_D from TMC) with a great conversation about the AI back at the mothership vs. the in-car processing. The show was about an hour long and explained some of what the cars are doing and how.
> https://itunes.apple.com/us/podcast...-d-10-29-18/id1273643094?i=1000422830102&mt=2


Thanks for posting; I haven't had time to listen yet but I will. For now I've come to the conclusion that the system is not actually learning or making adjustments in real time, because it has shown behavior similar to the first drive and the issues it had then. It does seem to have more leeway in how to drive than earlier releases, because I've found it doesn't always perform the same in any given situation. I'm guessing they are giving the AI a little more "leash" to learn and see where it goes, but then again I know nothing about programming and it's all just a guess on my part.


----------



## Bokonon (Apr 13, 2017)

There's a new CleanTechnica article out this morning that references the podcast above, and aims to explain Tesla's approach to autonomous driving in a way that's accessible to non-technical readers:

Deep Dive Into Tesla's Autopilot & Self-Driving Architecture vs Lidar-Based Systems

(I haven't read the whole thing yet.)


----------



## ADK46 (Aug 4, 2018)

I agree with those saying the learning process - very difficult - is done by Tesla. Our cars get an immutable copy of the trained network. 

At least, I think that is the simple truth. I imagine there are actually many NNs involved. For example, there may be a dedicated NN for finding the painted lines in the image from a front camera. That greatly simplifies things for other NNs that need to know where the lines are. Train the "Lines NN", then leave it alone. There must be a lot of regular procedural computer code, too. User preferences must be implemented. I'd like some procedural code that checks the Lines NN against the map data and resolves any disparities.

It would be nice if there were hooks into procedural code at some points of the overall process. The NN for dimming the headlights tells a bit of procedural code "I see a bright light! I'm gonna dim them!" The procedural code consults a table of prior errors and locations, and replies "Don't you remember? That's a big green sign, not a car. Sheesh."
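That hypothetical "table of prior errors" could look something like this toy sketch; the locations and the API are entirely invented:

```python
# Toy version of the hypothetical above: procedural code consults a table of
# known false positives (bright signs, keyed by location) before letting the
# headlight-dimming net act. Everything here is made up for illustration.

KNOWN_FALSE_POSITIVES = {
    ("I-87", "mile_114"): "big green sign",   # invented example entry
}

def should_dim(net_says_bright_light, location):
    """Override the net's 'bright light ahead' decision at locations where
    the 'light' has previously turned out to be a reflective sign."""
    if net_says_bright_light and location in KNOWN_FALSE_POSITIVES:
        return False          # "That's a big green sign, not a car. Sheesh."
    return net_says_bright_light

should_dim(True, ("I-87", "mile_114"))   # vetoed by the error table
should_dim(True, ("I-87", "mile_200"))   # no entry, net's decision stands
```

The net stays untouched; the table is just a cheap patch layer over its output, which is what makes the idea attractive between firmware releases.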

There's a particularly sharp curve on the interstate near me. At that point, I wish some bit of procedural code would intervene and kick the auto-steer NN in the ass. I'm sure clever software people know how to do that. 

I'll probably wish I'd read those links before posting this. I will, promise.


----------



## garsh (Apr 4, 2016)

garsh said:


> "Machine learning" is an extremely CPU-intensive _offline_ process. Tesla will use _thousands_ of machines running in parallel for this. You feed in a ton of data (collected from all the cars of the fleet), and out comes a new neural network (aka neural net, or NN).
> 
> The neural net is then included in the next version of vehicle software. It is also CPU-intensive, but can be handled by the car's single computer. This allows the car to make decisions based upon all of its input. But there is no "learning" happening at this point. The car will just be making decisions based on the programming of the neural net.


Here's a great overview article about how the machine learning works for Tesla's autopilot:

Deep Dive Into Tesla's Autopilot & Self-Driving Architecture vs Lidar-Based Systems

EDIT: And I just now noticed that @Bokonon posted the link yesterday. I did read through the whole article, and while I think it's a bit wrong when it starts talking about Waymo's approach to self-driving, it does a great job of simplifying how Tesla's system works for people who haven't had much exposure to machine learning.


----------



## ADK46 (Aug 4, 2018)

As promised, I've gone to the links. The Jimmy_d podcast is long but may contain the best description possible of the Tesla system outside of Tesla. He and others have been able to inspect the actual code and sleuth out some of its elements well enough to understand its basic architecture and find tantalizing details.

I was delighted to find that the Tesla system is what I and others here thought it must be. Jimmy_d also expressed surprise about how his guesses lined up with what he found. (I was a metallurgist, not a software engineer, but have dabbled in computers since 1968, including a bit of neural network data analysis in the late 1990s.)

_The clear answer to the original question is no - our cars certainly do not learn on their own_. For one thing, it takes a roomful of computers to handle that task. For another, Tesla can't have each of our cars driving differently, in an unverifiable manner. This is strictly _"fleet learning"_. (Jimmy_d did not rule out that our cars might be "learning" some small things themselves, but not necessarily by training a neural network. Electronically-controlled automatic transmissions have been adapting to drivers for decades.)

I took notes, but the nerds among us will definitely want to listen to the entire podcast:

1. Musk does not define "full self-driving", he does not reference the canonical "Levels". We don't know - he may not know - just how far Tesla's camera-only approach will take us, or when. Tesla started their efforts after a 2012 breakthrough in handling the large nets required for this approach, using GPUs (the "Inception" algorithm). Others seem stuck on the pre-Inception view of things, meaning a Lidar-based system. Who will be right? It depends on future increases in computing horsepower.

2. Not hitting stuff is relatively easy. Waymo cars don't hit things, but they don't drive like humans and end up _getting_ hit a lot, and pissing off other drivers. Learning to drive like humans is much harder.

3. Waymo, etc. are determined not to let their cars kill people, which they fear would end their programs. _Musk is willing to allow Teslas to kill people!_ His thinking is that the proper goal is to be the first company that saves 1000 lives, overall. (I understand the logic here, but have additional logic: _I_ _must not be the owner of a Tesla that kills someone._)

4. The car is not driven directly by the output of a neural net! There is not just one big neural net, anyway. The neural nets in the car are each dedicated to some specific task, mostly analyzing image data. Conventional computer algorithms are used wherever possible, and one of them does the steering based on the rest. I was happy to hear that: I think of procedural code as an adult in the room you can talk to. Neural nets are _intrinsically inscrutable_.

5. Image analysis first involves finding "kernels" in the images - it is necessary to quickly reduce the amount of data, to brutally reduce it to only what the subsequent analysis requires. A kernel might be a line segment, a gradient, an edge. I recently read a book on the human vision system - it does this also: a tremendous amount of data reduction takes place immediately in the retina itself. It sends along the angles of lines of contrast and that sort of information to areas of the brain that do more of the same. What we perceive in our mind's eye is a model built from this reduced data, not the "pixel" data from our eyes.

6. Teslas used to get a little lost while going over a rise - the lines in the road become foreshortened. Jimmy_d believes a "path prediction" algorithm was implemented that fixed this. This caught my attention since my car seems surprised by curves on level road, as though it is not following any such path. If there is such an algorithm, it needs refinement. Seems simple enough - look at the damn map. Cynically, I suspect the hill problem was solved by not looking ahead so far. We have all seen indications that the system does not look far ahead, where lanes merge or at intersections. I digress.

7. Version 9 has introduced some big changes in the visual neural network(s). Data from all 8 cameras are fed into one big net. Already reduced to kernels? He's not clear about this. The data include full color information; prior nets got only grayscale and one color. Two frames are fed into the net on each time step, which allows it to figure out movement, and also to distinguish items from the background, e.g., a photo of a person on a bus stop versus a real person. Jimmy_d is very impressed with this, saying "Nobody else has done this."
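As a crude stand-in for what a two-frame net can extract, plain frame differencing shows the idea. To be clear, this is my own toy sketch, not how Tesla's net works - a learned net can extract far richer motion cues than this:

```python
import numpy as np

def motion_mask(frame_prev, frame_curr, threshold=10):
    """Mark pixels that changed between two consecutive frames.

    A static photo of a person (say, on a bus-stop ad) produces no
    frame-to-frame difference; a real, moving person does. A neural
    net fed two frames can learn a far richer version of this cue.
    """
    diff = np.abs(frame_curr.astype(int) - frame_prev.astype(int))
    return diff > threshold

# Toy frames: a bright "pedestrian" blob moves one pixel to the right.
prev = np.zeros((5, 5), dtype=np.uint8); prev[2, 1] = 200
curr = np.zeros((5, 5), dtype=np.uint8); curr[2, 2] = 200

mask = motion_mask(prev, curr)
# mask is True where the blob left (2, 1) and where it arrived (2, 2).
```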

8. But Jimmy_d - he must be a clever guy - figures it takes this new net 300-400 msec to do the calculations for one time step - too long to help drive a car. He speculates that it's not actually being used yet. This is not as crazy as it seems since in the past he's found quite a few extra nets in the software, perhaps just for development purposes. And he's found that Version 9 redundantly includes the old nets that this "all camera" net is meant to replace. But note: a day or two ago, on sharp curves, I thought I detected steering corrections being made about every half-second - an interesting correspondence.

Despite all this apparent sophistication and Jimmy_d's enthusiasm, we still see goofy results at times, such as dancing trucks in the blindspot display. Is that what the car is seeing and acting upon? I hope it's from some ancillary net not involved in driving the car. And, to keep harping on this subject, I don't understand why my car goes around curves like a drunk.


----------



## 3V Pilot (Sep 15, 2017)

ADK46 said:


> As promised, I've gone to the links. The Jimmy_d podcast is long but may contain the best description possible of the Tesla system outside of Tesla. He and others have been able to inspect the actual code and sleuth out some of its elements well enough to understand its basic architecture and find tantalizing details.
> 
> I was delighted to find that the Tesla system is what I and others here thought it must be. Jimmy_d also expressed surprise about how his guesses lined up with what he found. (I was a metallurgist, not a software engineer, but have dabbled in computers since 1968, including a bit of neural network data analysis in the late 1990s.)
> 
> ...


Thanks for the post. I haven't had time to listen to the podcast, and that's a great summary.


----------



## Mike (Apr 4, 2016)

ADK46 said:


> Cynically, I have suspicions that the hill problem was solved by not looking ahead so far. We have all seen indications that the system does not look far ahead, where lanes merge or at intersections. I digress.


I agree with this observation 100%

My car will still hunt in the right lane of a freeway as the lane widens for an exit ramp if no painted lines exist on the starboard side of the vehicle.

Awesome post @ADK46, thanks for all that info.


----------



## ADK46 (Aug 4, 2018)

For my own amusement, I sketched out the architecture of a self-driving system before learning anything about them. I still don't know much, but it seems I was not wildly off. I present it here, hoping it provides a hint of what "architecture" means in this context.

This is just for the basic skill of driving down a road. I've paid no attention to a critical aspect of developing an architecture: anticipating additions.

I buried all the neural network components in the big "Vision" box, which magically and mysteriously supplies customized information to the other components. I did not wish to speculate about what must go into that box. I'm not sure my scheme is followed by Tesla - I thought there should be an intermediate point where a path for the next few seconds is defined, along with the necessary future inputs to the steering rack. This plan would be updated frequently, of course, but if that process went awry, the existing plan would still be available, and should not be far wrong. No need to panic! The last module before the steering rack need only make small corrections to the plan, based on errors detected by the Vision system and by the inertial measurement unit (which can detect wind gusts, etc.).
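A minimal sketch of that intermediate-plan idea, with names, gains, and numbers invented purely for illustration:

```python
class PathPlan:
    """A short-horizon plan: the intended steering angle (radians)
    for each of the next few time steps."""
    def __init__(self, steering_angles, dt=0.1):
        self.steering_angles = list(steering_angles)
        self.dt = dt

def corrected_steering(plan, step, lateral_error_m, gain=0.05, max_corr=0.03):
    """The last module before the steering rack: take the planned angle
    for this time step and nudge it by a small, bounded correction
    proportional to the measured lateral error. If the planner goes
    awry for a moment, the existing plan still gets followed."""
    planned = plan.steering_angles[min(step, len(plan.steering_angles) - 1)]
    correction = max(-max_corr, min(max_corr, -gain * lateral_error_m))
    return planned + correction

# A gentle right-hand curve planned over the next 0.4 s:
plan = PathPlan([0.00, 0.01, 0.02, 0.02])
# The car has drifted 0.2 m to the right; the planned 0.01 rad is
# nudged back by exactly 0.01 rad, giving 0.0.
angle = corrected_steering(plan, step=1, lateral_error_m=0.2)
```

The point of the bound on the correction is the "no need to panic" part: a wild error estimate can only perturb the plan a little on any one step.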


----------



## Mike (Apr 4, 2016)

ADK46 said:


> For my own amusement, I sketched out the architecture of a self-driving system before learning anything about them. I still don't know much, but it seems I was not wildly off. I present it here, hoping it provides a hint of what "architecture" means in this context.
> 
> This is just for the basic skill of driving down a road. I've paid no attention to a critical aspect of developing an architecture: anticipating additions.
> 
> ...


I think the dead reckoning aspects that are clearly needed for the system to act human are still sorely lacking.


----------



## ADK46 (Aug 4, 2018)

Mike - Starboard? Dead Reckoning? Seems we are both sailors.

Dead reckoning must be amazingly accurate from an IMU. A great deal of redundancy is available for predicting a path: mapping, GPS (dynamically corrected by map data, and vice versa), the lines on the road, cars ahead. The output of each of those modules should include a quality parameter, so the One True Path Module knows how to weight each input (and to report to the driver when things are getting dicey). Deviations from this path can be obtained from the IMU (very quickly), vision system (quickly), and GPS (depends). Raw GPS is not accurate enough, of course, but corrected GPS can be derived from map data - it can be established that the true position (relative to the lane) is, say, 13 feet to the left.
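Weighting each input by a quality parameter is essentially inverse-variance weighting. A toy sketch, with made-up names and numbers (not anything from Tesla's code):

```python
def fuse_estimates(estimates):
    """Combine several estimates of the same quantity, each tagged
    with a quality score (think: inverse variance). Higher quality
    gets more weight.

    estimates: list of (value, quality) pairs.
    Returns (fused_value, total_quality). A low total quality is the
    cue to warn the driver that things are getting dicey.
    """
    total_q = sum(q for _, q in estimates)
    if total_q == 0:
        raise ValueError("no usable estimates")
    fused = sum(v * q for v, q in estimates) / total_q
    return fused, total_q

# Three modules estimate lateral offset from lane center, in meters:
# lane lines (sharp), map-corrected GPS (decent), raw GPS (poor).
offset, quality = fuse_estimates([(0.10, 10.0), (0.20, 4.0), (1.50, 0.5)])
# The wildly-off raw GPS barely moves the answer, because its
# quality score is tiny.
```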


----------



## Mike (Apr 4, 2016)

ADK46 said:


> Mike - Starboard? Dead Reckoning? Seems we are both sailors.
> 
> Dead reckoning must be amazingly accurate from an IMU. A great deal of redundancy is available for predicting a path: mapping, GPS (dynamically corrected by map data, and vice versa), the lines on the road, cars ahead. The output of each of those modules should include a quality parameter, so the One True Path Module knows how to weight each input (and to report to the driver when things are getting dicey). Deviations from this path can be obtained from the IMU (very quickly), vision system (quickly), and GPS (depends). Raw GPS is not accurate enough, of course, but corrected GPS can be derived from map data - it can be established that the true position (relative to the lane) is, say, 13 feet to the left.


Retired CC-130 navigator, with thousands of sun/moon/star shots under my belt.

The dead reckoning I used to do was old fashioned watch, map, ground.......and before that, something called pressure pattern lines.

In all cases, logic is used to have an expected solution prior to an event happening.

Until that actually happens with these cars, the whole self-driving thing will still be very crude.


----------



## garsh (Apr 4, 2016)

One interesting tidbit from this talk. The ML stack is still not deciding "policy". That is, it's still primarily just recognizing things. Decisions such as "how much to turn the wheel" (policy) are still handled by human-written code, not by the neural net. It sounds like they do plan to eventually have the NN handle that part of self-driving as well.
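In caricature, that split looks like the following. Every name and number here is invented for illustration; it's the shape of the division of labor, not Tesla's actual code:

```python
def perceive(camera_frames):
    """Stand-in for the neural-net side: turn images into recognized
    facts. Here it just returns hard-coded detections; a real net
    would output lane geometry, vehicles, pedestrians, etc."""
    return {"lane_center_offset_m": 0.3,   # car is 0.3 m right of center
            "curvature_1pm": 0.001}        # gentle curve ahead (1/m)

def policy(perception, gain=0.1, curve_feedforward=20.0):
    """Hand-written control code: decide the steering command from the
    recognized facts. A proportional correction back toward lane
    center, plus a feedforward term for the curve ahead."""
    return (-gain * perception["lane_center_offset_m"]
            + curve_feedforward * perception["curvature_1pm"])

steering = policy(perceive(camera_frames=None))
```

The talk suggests the `policy` half is what Tesla eventually wants a net to absorb; for now, per garsh's summary, it stays human-written.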


----------



## TomT (Apr 1, 2019)

Based on my experience, if it is learning at all, it is doing it very slowly... I have a place on one state highway where it makes a serious mistake every time and I have to take manual control. This has happened at least two dozen times and still continues to happen...


----------

