Is HW3 required to fix the problems with AP / EAP / NoA / AutoHighBeam / AutoWipers?

  • It's OK to discuss software issues here but please report bugs to Tesla directly at servicehelpna@teslamotors.com if you want things fixed.

MNScott

Active member
Joined
Mar 16, 2019
Messages
48
Location
Minnesota
Tesla Owner
Model 3
Phantom braking was a myth to me...until it wasn't. Scared the crap out of me, was quite aggressive and I'm just lucky there was nobody close behind me. Driver should not need to cringe every time we come up to an overpass or signage over the road. I did bug reports the few times it has happened to me. Even if it does it 1 out of 1000 times - it's too much. Then Tesla says that FSD is "coming soon" and wants $5k-$7k of my money. Nope. Not until you can show me that the stuff I've already purchased can work. All. The. Time.
 

mswlogo

Top-Contributor
Joined
Oct 8, 2018
Messages
719
Location
MA
Tesla Owner
Model 3
Phantom braking was a myth to me...until it wasn't. Scared the crap out of me, was quite aggressive and I'm just lucky there was nobody close behind me. Driver should not need to cringe every time we come up to an overpass or signage over the road. I did bug reports the few times it has happened to me. Even if it does it 1 out of 1000 times - it's too much. Then Tesla says that FSD is "coming soon" and wants $5k-$7k of my money. Nope. Not until you can show me that the stuff I've already purchased can work. All. The. Time.
Exactly. The LDA/ELDA issues folks complain about are a "myth" to me too. Until....
 

Klaus-rf

Well-known member
Joined
Mar 6, 2019
Messages
293
Location
SoCal
Tesla Owner
Model 3
Had an interesting alarm / alert today. On 4-lane divided 45 MPH roadway with bike lane on right. Driving in right traffic lane. Using AP with TACC set to 47. First set of bicycles I passed AP didn't seem to notice - don't know if they showed up on the animated screen. I noticed, as usual, that AP still insisted on driving down the center of the lane while passing bikes. The AZ state law says we must leave 5 feet clearance.

I forced auto-steer off before passing the next [single] bike on the right to add more room while leaving TACC on. I was at least 7 feet away from the single bike. Car starts panic beeping / alerting as if I was about to hit something.

Didn't seem to care when AP was taking the car within 3 feet of bikes.

Unpredictable and inconsistent.
 

mswlogo

Top-Contributor
Joined
Oct 8, 2018
Messages
719
Location
MA
Tesla Owner
Model 3
Had an interesting alarm / alert today. On 4-lane divided 45 MPH roadway with bike lane on right. Driving in right traffic lane. Using AP with TACC set to 47. First set of bicycles I passed AP didn't seem to notice - don't know if they showed up on the animated screen. I noticed, as usual, that AP still insisted on driving down the center of the lane while passing bikes. The AZ state law says we must leave 5 feet clearance.

I forced auto-steer off before passing the next [single] bike on the right to add more room while leaving TACC on. I was at least 7 feet away from the single bike. Car starts panic beeping / alerting as if I was about to hit something.

Didn't seem to care when AP was taking the car within 3 feet of bikes.

Unpredictable and inconsistent.
Maybe it saw something on the left.
 

Mike

Legendary Member
Joined
Apr 4, 2016
Messages
2,449
Location
Batawa Ontario
Are they publishing all the accidents avoided by folks smart enough to stop using it? Or the stats from users canceling it at every bridge shadow, or canceling it when cars/trucks are traveling too close behind? And they are taking credit for the HUMAN who is making the judgement to only use it when it's "safe" (or safer) to use.

The problem is the data they are collecting is very biased.

The only way to truly get a measurement of how good it is would be to not let humans intervene.

They are implicitly getting “cherry picked” data. I suspect they know this. And I suspect there is in house testing that less intervening happens to get more accurate data. Public data is very uncontrolled.
You have hit the nail on the head.

This all goes back to my old RCAF days and the concept of what was then called cockpit resource management (CRM) as it applies to studying aircraft accidents OR incidents.

The root cause of the incident of the large truck having to slam on the brakes and change lanes to avoid a crash is the phantom braking episode.

Had an actual accident occurred, an identified contributing factor would have been the following distance between the truck and the phantom-braking Tesla.

This one example of a core flaw of autopilot, phantom braking, has been hidden, statistically, because the intervention of the truck driver is not recorded by Tesla.

Until CRM techniques are applied to the near-miss incidents, the product we are using will not improve to the point where in ALL scenarios it can correctly claim to be safer than human drivers.
 

Mike

Legendary Member
Joined
Apr 4, 2016
Messages
2,449
Location
Batawa Ontario
Had an interesting alarm / alert today. On 4-lane divided 45 MPH roadway with bike lane on right. Driving in right traffic lane. Using AP with TACC set to 47. First set of bicycles I passed AP didn't seem to notice - don't know if they showed up on the animated screen. I noticed, as usual, that AP still insisted on driving down the center of the lane while passing bikes. The AZ state law says we must leave 5 feet clearance.

I forced auto-steer off before passing the next [single] bike on the right to add more room while leaving TACC on. I was at least 7 feet away from the single bike. Car starts panic beeping / alerting as if I was about to hit something.

Didn't seem to care when AP was taking the car within 3 feet of bikes.

Unpredictable and inconsistent.
My broken record rant: until autopilot stops pedantically staying in the middle of a lane under all circumstances, it will never be ready for the real world......
 

DocScott

Well-known member
TOO Supporting Member
Joined
Mar 6, 2019
Messages
341
Location
Westchester, NY
Tesla Owner
Model 3
My question on the AP safety statistics is whether "accident occurred while on AP" includes being on AP a few seconds before. For example, suppose AP steers the car toward a barrier. The driver sees it and torques the wheel, disengaging AP, but not in time, and still hits the barrier. Technically, the car was under driver control when the collision occurred. But it is also clear that AP was involved in the circumstances of the accident. Reporting that as an accident while not on AP would be misleading.
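To make the question concrete, here is a hypothetical sketch in Python of the kind of attribution rule being asked about. The 5-second window and the function itself are my own assumptions for illustration, not anything Tesla has published about its safety reports.

```python
# Hypothetical attribution rule (an assumption, not Tesla's published method):
# count a crash as "Autopilot-involved" if AP was engaged at any point within
# a short window before impact, so a last-second disengagement does not move
# the event into the "human driving" column.

WINDOW_S = 5.0  # assumed grace window, in seconds

def autopilot_involved(last_ap_active_time, impact_time):
    """last_ap_active_time: when AP was last engaged (None if never engaged)."""
    if last_ap_active_time is None:
        return False
    return (impact_time - last_ap_active_time) <= WINDOW_S

# Driver torques the wheel 1.5 s before hitting the barrier:
print(autopilot_involved(last_ap_active_time=98.5, impact_time=100.0))  # True
```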
 

M3OC Rules

Top-Contributor
TOO Supporting Member
Joined
Nov 17, 2016
Messages
626
Location
Minneapolis, MN
Tesla Owner
Model 3
You have hit the nail on the head.

This all goes back to my old RCAF days and the concept of what was then called cockpit resource management (CRM) as it applies to studying aircraft accidents OR incidents.

The root cause of the incident of the large truck having to slam on the brakes and change lanes to avoid a crash is the phantom braking episode.

Had an actual accident occurred, an identified contributing factor would have been the following distance between the truck and the phantom-braking Tesla.

This one example of a core flaw of autopilot, phantom braking, has been hidden, statistically, because the intervention of the truck driver is not recorded by Tesla.

Until CRM techniques are applied to the near-miss incidents, the product we are using will not improve to the point where in ALL scenarios it can correctly claim to be safer than human drivers.
Tesla does not give us the data they have. They have said they do not give it out because it will be used against them. I don't think anyone can argue with that. But that doesn't mean they aren't using the data themselves to make the cars safer. How do you know what Tesla is recording and how they handle that data? They do specifically say they look at near misses.
 

Mike

Legendary Member
Joined
Apr 4, 2016
Messages
2,449
Location
Batawa Ontario
Tesla does not give us the data they have. They have said they do not give it out because it will be used against them. I don't think anyone can argue with that. But that doesn't mean they aren't using the data themselves to make the cars safer. How do you know what Tesla is recording and how they handle that data? They do specifically say they look at near misses.
I don't dispute what you are saying.

I do dispute the assumption that the truck driver's evasive actions were somehow captured and forwarded to the Tesla cloud for analysis.
 

aresal

New Member
Joined
Apr 23, 2019
Messages
3
Location
LA
My 2 cents on phantom braking. I have stretches of freeway where TACC is impossible to use. This is when I am on the 91 Express Lanes in SoCal. The two express lanes are separated from regular traffic by plastic poles. If the traffic in the regular lanes slows, the Model 3 somehow freaks out and panic brakes. I can't use NoA, because it wants to be in the right lane. Interestingly, if I use NoA with a carpool preference, it moves to the left and stays out of the lane next to stopped traffic. But NoA, I presume, can't be used when the Tesla in one lane is traveling much faster than the lane next to it. If someone has a contrary experience, let me know. Maybe one of my cameras is malfunctioning or out of alignment.

As an aside, some in SoCal may not realize that transponders are free (they work for all of California) and the 91 Express Lanes are mostly free (except during rush hour, when they are half-off) for EVs. Just EVs, not plug-in hybrids. Apply to the 91 Express Lanes authority for a Special Access plan, starting here: https://www.91expresslanes.com/getting-started/

[removed]
Expresslanes no longer free for CAVs: https://www.metroexpresslanes.net/en/faq/clean_air_vehicles.shtml
 

M3OC Rules

Top-Contributor
TOO Supporting Member
Joined
Nov 17, 2016
Messages
626
Location
Minneapolis, MN
Tesla Owner
Model 3
I do dispute the assumption that the truck driver's evasive actions were somehow captured and forwarded to the Tesla cloud for analysis.
How do you know that?

On the Ride The Lightning podcast this week, Ryan said Elon mentioned something about shadows in the E3 interview. I listened to most of that but must have missed it. Anyway, they could certainly automate capturing phantom braking events and near misses.
 

Mike

Legendary Member
Joined
Apr 4, 2016
Messages
2,449
Location
Batawa Ontario
How do you know that?

On the Ride The Lightning podcast this week, Ryan said Elon mentioned something about shadows in the E3 interview. I listened to most of that but must have missed it. Anyway, they could certainly automate capturing phantom braking events and near misses.
The phantom-braking-with-no-regard-to-what-the-following-distance/closure-rate-of-the-trailing-vehicle issue has existed, with zero improvement, since I took delivery 13 months ago.

That is what I base my point of view on: 13 months of experienced stagnation on this issue.

When this issue appreciably improves, I'll be the first to acknowledge that events behind the vehicle are now being considered prior to autopilot or TACC radically decelerating for no driver-discernible reason.
 
Joined
May 17, 2019
Messages
45
Location
Northeast USA
Tesla Owner
Model 3
As humans, the way we drive is to abstract visual information into a conceptual model of the world around us, and continually refine this model as new observations become available. When we gain information in conflict with this model, it causes negative feedback in the form of an unpleasant sensation we refer to as "surprise". In this way, our neural networks continuously improve the accuracy of their models in order to minimize the number of surprises we encounter. Using this model and a basic understanding of physics, laws, and the characteristics of our cars and the other cars and drivers around us, we plan what we judge to be a safe and efficient path up to the limit of our visual horizon. A part of this planning involves prediction; when we approach a merge, we understand that the other cars we see around us will soon share a lane, and that we must position ourselves well in advance so that we don't try to occupy the same space. In principle, the computational neural networks in the FSD computer must replicate this functionality in order to safely drive a vehicle.

If you pay any attention to the car's behavior, it is clear that the car is performing a very simple operation of keeping itself between the lines and behind other cars; it skips the entire model building and refining step. As an example, the car attempts to remain perfectly centered between the lines, even when lanes are merging. A human driver would look at the approaching merge, identify it as such, and plan a path through the area well in advance, while the autopilot computer doesn't seem to know about the merge until the moment it happens, immediately jumping to the new center as soon as the adjacent line disappears. If the lines are painted incorrectly and waver back and forth a bit in mid-corner, the autopilot will follow the lines and the car will also waver, whereas a human would have planned the entire route through the visual horizon of the corner with the understanding that it was unnecessary to follow the wiggly lines exactly as long as the car was positioned well between the lines. Even the infotainment screen showing other vehicles around the car betrays a complete lack of abstraction capability; the cars surrounding you move in erratic ways which we know to violate the laws of physics. I'm certain that our brain's inputs are just as noisy as the car's inputs are, but we build and refine a model of what must be there in a method somewhat reminiscent of a Kalman filter in control theory. We know the cars aren't bouncing all over the road even if our eyes and our ears provide conflicting inputs, because cars can't do that... the occasional glance is all that is needed to compare our model of where we expected the car to be at this moment based on prior observation to current observation, and refine our model of that object. So far, the FSD computer just can't do this.

The computer isn't so much driving as it is following a painted railroad track, something which requires no abstraction at all. There is a 0% chance of self-driving ever functioning properly without the ability to build and refine that kind of abstract model of the world.
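For anyone curious what that "predict, then correct on new observation" loop looks like in practice, here is a minimal sketch in Python using a 1-D constant-velocity Kalman filter. It is only an illustration of the concept; every number in it is invented, and it has nothing to do with Tesla's actual software.

```python
import numpy as np

dt = 0.1                                # seconds between frames (made up)
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # we only measure position
Q = np.diag([0.01, 0.1])                # process noise: how wrong the model may drift
R = np.array([[4.0]])                   # measurement noise: how noisy a detection is

x = np.array([[0.0], [0.0]])            # state estimate: [position, velocity]
P = np.eye(2) * 10.0                    # uncertainty in that estimate

def step(x, P, z):
    """One predict/update cycle. z is a measured position, or None if the
    detector lost the object this frame."""
    # Predict: advance the internal model even with no new observation.
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        # Update: the "surprise" (innovation) pulls the model toward reality.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P

# A car moving at 20 m/s whose detection drops out 3 frames out of every 10.
true_pos = 0.0
for frame in range(50):
    true_pos += 20.0 * dt
    z = true_pos + np.random.randn() * 2.0 if frame % 10 < 7 else None
    x, P = step(x, P, z)

print(f"estimate: {x[0, 0]:.1f} m at {x[1, 0]:.1f} m/s (truth: {true_pos:.1f} m)")
```

The point of the toy: even through dropped detections, the estimate keeps moving forward on the model's prediction instead of vanishing and reappearing.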
 

DocScott

Well-known member
TOO Supporting Member
Joined
Mar 6, 2019
Messages
341
Location
Westchester, NY
Tesla Owner
Model 3
As humans, the way we drive is to abstract visual information into a conceptual model of the world around us, and continually refine this model as new observations become available. When we gain information in conflict with this model, it causes negative feedback in the form of an unpleasant sensation we refer to as "surprise". In this way, our neural networks continuously improve the accuracy of their models in order to minimize the number of surprises we encounter. Using this model and a basic understanding of physics, laws, and the characteristics of our cars and the other cars and drivers around us, we plan what we judge to be a safe and efficient path up to the limit of our visual horizon. A part of this planning involves prediction; when we approach a merge, we understand that the other cars we see around us will soon share a lane, and that we must position ourselves well in advance so that we don't try to occupy the same space. In principle, the computational neural networks in the FSD computer must replicate this functionality in order to safely drive a vehicle.

If you pay any attention to the car's behavior, it is clear that the car is performing a very simple operation of keeping itself between the lines and behind other cars; it skips the entire model building and refining step. As an example, the car attempts to remain perfectly centered between the lines, even when lanes are merging. A human driver would look at the approaching merge, identify it as such, and plan a path through the area well in advance, while the autopilot computer doesn't seem to know about the merge until the moment it happens, immediately jumping to the new center as soon as the adjacent line disappears. If the lines are painted incorrectly and waver back and forth a bit in mid-corner, the autopilot will follow the lines and the car will also waver, whereas a human would have planned the entire route through the visual horizon of the corner with the understanding that it was unnecessary to follow the wiggly lines exactly as long as the car was positioned well between the lines. Even the infotainment screen showing other vehicles around the car betrays a complete lack of abstraction capability; the cars surrounding you move in erratic ways which we know to violate the laws of physics. I'm certain that our brain's inputs are just as noisy as the car's inputs are, but we build and refine a model of what must be there in a method somewhat reminiscent of a Kalman filter in control theory. We know the cars aren't bouncing all over the road even if our eyes and our ears provide conflicting inputs, because cars can't do that... the occasional glance is all that is needed to compare our model of where we expected the car to be at this moment based on prior observation to current observation, and refine our model of that object. So far, the FSD computer just can't do this.

The computer isn't so much driving as it is following a painted railroad track, something which requires no abstraction at all. There is a 0% chance of self-driving ever functioning properly without the ability to build and refine that kind of abstract model of the world.
There are cases, though, where AP is clearly using a model of the kind you're referring to, albeit a fairly simple one.

One notable example is when a car changes lanes into the Tesla's lane, in front of the Tesla, but travelling at least as fast as the Tesla. AP would normally never allow itself to be that close behind another car without slamming on the brakes, but in this case there's some "understanding" that the car in front will take care of the problem on its own.

There's also the situation where AP gets "nervous" about a car, travelling more slowly in a neighboring lane and a bit in front. It doesn't always brake in that situation, but it does brake if it suspects the car might change lanes; e.g., if it's drifting toward the lane divider, or maybe (?) if it has its turn signal on.

So there are some signs of the beginnings of predictive models.

To me, though, the challenge is greater than that.

A human driver builds a good conceptual model for a familiar environment: e.g., driving on roads around where they live and work in, say, Houston. Take that driver and put them in a snowstorm in Boston, and a lot is different: driving styles are different, road patterns are different, and physical characteristics like the slipperiness of the road are different. That driver will be very stressed and is likely to drive either very conservatively or very badly, perhaps both.

AP, right now, is trying to develop a conceptual model based in part on the driver in Houston that is also supposed to work in the Boston blizzard, and vice-versa. That's a harder task than what most humans have to do.
 

M3OC Rules

Top-Contributor
TOO Supporting Member
Joined
Nov 17, 2016
Messages
626
Location
Minneapolis, MN
Tesla Owner
Model 3
There are cases, though, where AP is clearly using a model of the kind you're referring to, albeit a fairly simple one.

One notable example is when a car changes lanes into the Tesla's lane, in front of the Tesla, but travelling at least as fast as the Tesla. AP would normally never allow itself to be that close behind another car without slamming on the brakes, but in this case there's some "understanding" that the car in front will take care of the problem on its own.

There's also the situation where AP gets "nervous" about a car, travelling more slowly in a neighboring lane and a bit in front. It doesn't always brake in that situation, but it does brake if it suspects the car might change lanes; e.g., if it's drifting toward the lane divider, or maybe (?) if it has its turn signal on.

So there are some signs of the beginnings of predictive models.

To me, though, the challenge is greater than that.

A human driver builds a good conceptual model for a familiar environment: e.g., driving on roads around where they live and work in, say, Houston. Take that driver and put them in a snowstorm in Boston, and a lot is different: driving styles are different, road patterns are different, and physical characteristics like the slipperiness of the road are different. That driver will be very stressed and is likely to drive either very conservatively or very badly, perhaps both.

AP, right now, is trying to develop a conceptual model based in part on the driver in Houston that is also supposed to work in the Boston blizzard, and vice-versa. That's a harder task than what most humans have to do.
I think you're right. They are adding AI in pieces. As they add in more AI pieces, the car becomes more capable and more unpredictable (at least initially).
 
Joined
May 17, 2019
Messages
45
Location
Northeast USA
Tesla Owner
Model 3
The most disconcerting thing by far is the car's lack of object permanence. Surrounding vehicles disappear and reappear constantly, apparently on an almost frame-by-frame basis. The AI should know that a car is still there, even if it can't see it for a moment. The AI should go further than that: it should continuously try to guess where the other car is.

I fear that the computer is doing way more work than is even needed, since humans only have to make these checks against the conceptual model once every few seconds in most situations... we don't even try to analyze every frame we grab. We have, in fact, a relatively low "framerate" and a VERY limited high-fidelity field of view that we have to continuously gimbal to make stereo images of objects of interest one at a time. In fact, fully 50% of our neurons used for image recognition are dedicated to the central 2° of our eyesight. We generally identify an object once, and then catalog this into our model and use the remaining wide-angle "peripheral" vision to track the object, but not continuously re-identify it, as evidenced by the fact that it takes humans a shocking amount of time to recognize when an object in their peripheral vision has been replaced with something else entirely. Our peripheral vision does a very simple type of pattern matching from moment to moment to determine where various patterns representing previously identified objects are presently located, and in this way we understand the position and velocity of objects in the surrounding environment. This should take a lot less horsepower than trying to re-identify everything in the entire scene from moment to moment.

In sum, from what I've seen, it looks like they are going about it all wrong. They are trying to build networks with enough power to identify all of the objects over and over again in each frame, which is orders of magnitude more difficult than what humans already do. What humans don't do is look at a truck and then forget that there is a truck less than two seconds later; we know there is a truck, we know where it was, and where it was headed, and we know where it should be in a few seconds, and that's all we need to know for now.
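To make the "identify once, then keep tracking cheaply" idea concrete, here is a toy track manager in Python. It is my own sketch under invented thresholds, not anything like Tesla's real perception stack: tracks coast on their last estimated velocity through missed detections instead of vanishing, and association is a cheap nearest-neighbour check rather than full re-identification.

```python
from dataclasses import dataclass

# All of these tuning values are invented for illustration.
MAX_MISSES = 15          # frames a track may coast before it is dropped
GATE_M = 5.0             # max distance (m) to associate a detection with a track
ALPHA, BETA = 0.8, 0.4   # alpha-beta filter gains

@dataclass
class Track:
    x: float             # longitudinal position, m
    v: float             # estimated velocity, m/s
    label: str           # identified once ("truck"), then simply reused
    misses: int = 0

def update_tracks(tracks, detections, dt):
    """detections: list of (position_m, label) seen this frame."""
    unmatched = list(detections)
    for t in tracks:
        t.x += t.v * dt                      # predict: the truck is still there
        best = min(unmatched, key=lambda d: abs(d[0] - t.x), default=None)
        if best is not None and abs(best[0] - t.x) < GATE_M:
            r = best[0] - t.x                # innovation ("surprise")
            t.x += ALPHA * r                 # correct position...
            t.v += BETA * r / dt             # ...and velocity
            t.misses = 0
            unmatched.remove(best)
        else:
            t.misses += 1                    # no detection: coast, don't forget
    tracks = [t for t in tracks if t.misses <= MAX_MISSES]
    # Anything left over is genuinely new: identify it once, then just track it.
    tracks += [Track(x=d[0], v=0.0, label=d[1]) for d in unmatched]
    return tracks
```

Fed a detection list every frame, a track survives a few dropped frames by extrapolating, which is exactly the object permanence the on-screen rendering appears to lack.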
 
Last edited:

MelindaV

☰ > 3
Moderator
Joined
Apr 2, 2016
Messages
10,010
Location
Vancouver, WA
Tesla Owner
Model 3
The most disconcerting thing by far is the car's lack of object permanence. Surrounding vehicles disappear and reappear constantly, apparently on an almost frame by frame basis. The AI should know that a car is still there, even if it can't see it for a moment. The AI should go further than that, it should continuously try to guess where the other car is.
Remember, what is rendered on the screen is not the limit of what the computer sees. The image on the screen is just to amuse us more than anything.
 
Joined
May 17, 2019
Messages
45
Location
Northeast USA
Tesla Owner
Model 3
Remember, what is rendered on the screen is not the limit of what the computer sees. The image on the screen is just to amuse us more than anything.
Perhaps, but the image it shows me hardly inspires confidence. I'm not sure why Tesla would show us anything but their best given the company's constant showmanship in trying to convince investors that FSD is just around the corner.
 
Joined
Sep 8, 2018
Messages
8
Location
Austin, TX
Tesla Owner
Model 3
What I see leads me to think the phantom braking is a conservative overreaction because the video/radar processing pipeline is too slow and too decimated (reduced frame rate) to accommodate the hardware. Then, to work around the slowness, there are events that cut through the pipeline to conservatively slow the car. Often it's reacting to someone cutting in who is far enough ahead to match speed by the time the car would intersect their position, but the car won't realize that until the processing pipeline is done, and it may miss information due to low frame rates. New HW 3.0 can't come soon enough!
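Here is a back-of-the-envelope Python sketch of that cut-in argument, with numbers I made up. Under these assumptions the gap to the cut-in car never actually closes, but a pipeline that only delivers a confirmed track after significant latency has to decide before it can know that.

```python
def gap_at(t, gap0, v_ego, v_cut, a_cut):
    """Gap (m) to a cut-in car after t seconds, assuming it accelerates at
    a_cut m/s^2 until it matches our speed and then holds that speed."""
    t_match = max(0.0, (v_ego - v_cut) / a_cut)      # time to match speed
    t_acc = min(t, t_match)
    closed = (v_ego - v_cut) * t_acc - 0.5 * a_cut * t_acc ** 2
    return gap0 - closed

# Invented scenario: 30 m gap, we're at 30 m/s, they cut in at 22 m/s
# and accelerate at 2 m/s^2.
gap0, v_ego, v_cut, a_cut = 30.0, 30.0, 22.0, 2.0

# The gap bottoms out at t_match = 4 s: 30 - (8*4 - 0.5*2*16) = 14 m,
# so no hard braking is ever needed -- if the car can work that out in time.
for latency in (0.1, 0.5, 1.0):                      # perception delay, s
    print(f"gap after {latency:.1f} s of latency: "
          f"{gap_at(latency, gap0, v_ego, v_cut, a_cut):.1f} m")
```

A slower, decimated pipeline is like the larger latency values: by the time it has a usable answer, the conservative "slow down now" shortcut has already fired.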
 
Last edited: