Suggestion: Phantom braking & false positive analysis suggestions to Tesla

  • It's OK to discuss software issues here but please report bugs to Tesla directly at servicehelpna@teslamotors.com if you want things fixed.

barjohn

Well-Known Member
Joined
Aug 31, 2017
Messages
200
Location
Riverside, CA
#1
As a scientist/engineer, I have worked on many problems where false positives were an issue to be resolved by allowing no more than one in 10K or 100K events. Given that other cars with TACC are not exhibiting this high a frequency of false positives, I think this should be a priority for Tesla to resolve. Unfortunately, they don't give the driver enough information to help solve the issue.

For example, one of the problems I worked on was chemical agent detection. We needed detectors that could detect quickly enough, and at low enough levels, to give our servicemen time to put on protective gear. Donning that gear on a false positive is a lot of work and very uncomfortable in high-heat desert environments. People naturally start to ignore the warning after a certain number of false positives, or they quit using your system and revert to the famous Kuwaiti chickens as a more reliable means of getting a warning (the canary in the coal mine is another). In our case, we needed to develop better and more reliable ways to test the systems and keep them properly calibrated.
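The cost of even a "low" per-event false positive rate compounds quickly with event count. A minimal sketch, using purely illustrative numbers (not measured rates for any real detector or car):

```python
# Illustrative sketch: why a per-event false positive rate that sounds
# tiny still produces frequent false alarms. Both numbers below are
# assumptions for illustration only.
events_per_hour = 3600        # assume one detection decision per second
fp_rate = 1 / 10_000          # the 1-in-10K target mentioned above

# Probability of at least one false alarm in an hour of operation
p_false_alarm_hour = 1 - (1 - fp_rate) ** events_per_hour
print(f"P(>=1 false alarm per hour) = {p_false_alarm_hour:.1%}")  # roughly 30%
```

At one decision per second, even a 1-in-10K rate produces a false alarm in roughly a third of operating hours, which is exactly the alert-fatigue regime described above.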

For example, Tesla could show us what the car thought it was reacting to and ask us to confirm whether the danger was real. Then it could learn to tell the ghosts from the real threats. Without this kind of real-time driver input, there is no way, just by looking at the data after the fact, that Tesla can identify and fix the problem.
 

MelindaV

☰ > 3
Moderator
Joined
Apr 2, 2016
Messages
9,046
Location
Vancouver, WA
Tesla Owner
Model 3
#2
For example, Tesla could show us what the car thought it was reacting to and ask us to confirm whether the danger was real. Then it could learn to tell the ghosts from the real threats.
The issue with that, though, is that many drivers are not attentive enough to their surroundings to realize when there was an actual threat the car was reacting to. They would end up with many replying that there was zero threat when, in reality, there was a car coming into their lane that they just failed to notice.
 

barjohn

Well-Known Member
Joined
Aug 31, 2017
Messages
200
Location
Riverside, CA
#3
So you are thinking that having no data is better than having some erroneous data? Remember, I said that the display would identify what the car thought the threat was. It would show the type of threat and its location. The driver could respond with: 1. No threat observed; 2. Confirmed threat observed; or 3. Not sure, as I wasn't paying attention at the time.

Tesla could then throw out the uncertain responses.
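A hypothetical sketch of that three-way confirmation scheme, with the "unsure" answers discarded. Every name and field below is invented for illustration; nothing here is Tesla's actual telemetry:

```python
# Hypothetical sketch of the proposed driver-confirmation loop:
# the car records what it reacted to, the driver answers, and only
# confident answers become training labels.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BrakeEvent:
    event_id: int
    perceived_threat: str    # what the car thought it saw
    driver_response: str     # "confirmed", "no_threat", or "unsure"

def to_label(event: BrakeEvent) -> Optional[Tuple[int, bool]]:
    """Turn a driver response into a training label, or None if unsure."""
    if event.driver_response == "confirmed":
        return (event.event_id, True)    # real threat: true positive
    if event.driver_response == "no_threat":
        return (event.event_id, False)   # phantom brake: false positive
    return None                          # "wasn't paying attention": discard

events = [
    BrakeEvent(1, "stopped vehicle ahead", "confirmed"),
    BrakeEvent(2, "overpass shadow", "no_threat"),
    BrakeEvent(3, "unknown", "unsure"),
]
labels = [lbl for e in events if (lbl := to_label(e)) is not None]
print(labels)  # the "unsure" event is thrown out
```

The design choice is that only responses 1 and 2 produce labels; throwing out the uncertain responses keeps inattentive answers from polluting the training set.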
 

lance.bailey

Well-Known Member
Joined
Apr 1, 2019
Messages
316
Location
cloverdale, BC
Tesla Owner
Model 3
#4
For example, Tesla could show us what the car thought it was reacting to and ask us to confirm whether the danger was real. Then it could learn to tell the ghosts from the real threats. Without this kind of real-time driver input, there is no way, just by looking at the data after the fact, that Tesla can identify and fix the problem.
In another thread I suggested a button the driver could press to indicate "autonomy didn't get things correct here," which would relay the current/relevant logs and images to the mothership for analysis.

This would keep an inattentive driver's misinterpretation out of what gets relayed to Tesla (because the logs and images, not the driver's interpretation, are used for analysis) and would get better beta-test data back to Tesla.

Step one of problem solving is understanding the problem. Often a problem can only be understood by watching it happen. After that you can solve it.
 

M3OC Rules

Top-Contributor
TOO Supporting Member
Joined
Nov 17, 2016
Messages
457
Location
Minneapolis, MN
Tesla Owner
Model 3
#5
So you are thinking that having no data is better than having some erroneous data? Remember I said that the display would identify what the car thought the threat was. It would show the type of threat and location. The driver could respond with 1. No threat observed, 2. Confirmed threat observed or 3. Not sure as I wasn't paying attention at the time.

Tesla could then throw out the uncertain responses.
If Tesla is smart, there is no reason they don't have every single false positive recorded. And I don't think it would take many resources to identify what the source was; many could be determined algorithmically in the car, I would think. I don't think user input is necessary.
 

lance.bailey

Well-Known Member
Joined
Apr 1, 2019
Messages
316
Location
cloverdale, BC
Tesla Owner
Model 3
#6
If Tesla is smart there is no reason they don't have every single false positive recorded ... I don't think user input is necessary.
And that is the trick. If the car were able to determine that a false positive had just happened, then it would not have reacted to it in the first place. The whole point of a false positive in any system is that the system detected something that was not true and could not tell that it was not true. The system (car) truly, but incorrectly, thought there was a positive answer to the question "is there something oncoming in my path?"

False negatives are easier to determine ("nope, nothing there," but wrong) because of the events that follow (the crash, the wheel wrenching, the successive proximity alert...), but false positives like phantom braking are harder to confirm without another system telling you that your positive reading is false.

That's why user input is so necessary for eliminating false positives: we are that additional system, and the need for us as that additional system is the reason AP/FSD/NOA is beta and why we are told to keep our hands on the wheel and stay alert.
 

M3OC Rules

Top-Contributor
TOO Supporting Member
Joined
Nov 17, 2016
Messages
457
Location
Minneapolis, MN
Tesla Owner
Model 3
#7
And that is the trick. If the car were able to determine that a false positive had just happened, then it would not have reacted to it in the first place. The whole point of a false positive in any system is that the system detected something that was not true and could not tell that it was not true. The system (car) truly, but incorrectly, thought there was a positive answer to the question "is there something oncoming in my path?"

False negatives are easier to determine ("nope, nothing there," but wrong) because of the events that follow (the crash, the wheel wrenching, the successive proximity alert...), but false positives like phantom braking are harder to confirm without another system telling you that your positive reading is false.

That's why user input is so necessary for eliminating false positives: we are that additional system, and the need for us as that additional system is the reason AP/FSD/NOA is beta and why we are told to keep our hands on the wheel and stay alert.
I disagree. At the time of the event, the car thinks something is wrong. If you stop there, then what you say is correct. But if you look at the data after that, you can verify whether it was. For example, if the GPS gets off and thinks it's on a different road, the car may brake, but soon after that it realizes it was wrong and jumps back to the right road. That seems pretty easy to identify after the fact. Same thing if it incorrectly identifies something in the road. You can also look at the driver's response: if the car brakes and the driver immediately hits the throttle, it's a good indicator something went wrong. Maybe they couldn't get all of the events, but I would think they could get the vast majority of what I've experienced pretty easily, especially the really bad ones.
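Those after-the-fact signals could be sketched as simple heuristics over logged events. Everything below (field names, thresholds) is invented for illustration and is not Tesla's actual log schema:

```python
# Rough sketch of the after-the-fact heuristics described above:
# immediate throttle override, a GPS snap-back to the correct road,
# and no persistently tracked object. All keys/thresholds are invented.
def looks_like_phantom_brake(event: dict) -> bool:
    """Flag an automatic braking event as a likely false positive."""
    # Driver overrode the brake with the accelerator almost immediately
    quick_override = (event.get("throttle_after_brake_s") is not None
                      and event["throttle_after_brake_s"] < 2.0)
    # GPS jumped back to the original road shortly after braking
    gps_snap_back = (event.get("gps_road_corrected_within_s") is not None
                     and event["gps_road_corrected_within_s"] < 5.0)
    # No object was tracked continuously through the event
    no_persistent_object = event.get("object_track_duration_s", 0.0) < 0.5
    return quick_override and (gps_snap_back or no_persistent_object)

event = {"throttle_after_brake_s": 0.8,
         "gps_road_corrected_within_s": None,
         "object_track_duration_s": 0.2}
print(looks_like_phantom_brake(event))  # True
```

A filter like this would not catch every case, but it matches the post's point: the combination of an immediate driver override plus a corroborating signal flags the "really bad ones" without any driver input.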
 

lance.bailey

Well-Known Member
Joined
Apr 1, 2019
Messages
316
Location
cloverdale, BC
Tesla Owner
Model 3
#8
GPS thinking it's on the wrong road and braking because of that, only to correct itself to the right road and then determine, based on the road correction, that the earlier braking was a false positive, is a bit of a corner case.

I would be happier if phantom braking on a freeway disappeared. So no GPS error, no obstacle in front of me, nothing but open asphalt, a false positive brake and a P.O.ed wife.
 

M3OC Rules

Top-Contributor
TOO Supporting Member
Joined
Nov 17, 2016
Messages
457
Location
Minneapolis, MN
Tesla Owner
Model 3
#9
I would be happier if phantom braking on a freeway disappeared. So no GPS error, no obstacle in front of me, nothing but open asphalt, a false positive brake and a P.O.ed wife.
I'm torn on this. I use AP all the time. If it was braking all the time I would not use it. But at the same time, I can totally relate to the P.O.ed wife and know others in the same boat. Meanwhile, Tesla is getting kudos for their active safety and trying to catch up and/or stay ahead of everyone else.
 

Dr. J

Private
TOO Supporting Member
Joined
Sep 1, 2017
Messages
1,056
Location
Fort Worth
Tesla Owner
Model 3
#10
@M3OC Rules may be on the right track. From the Investor Day presentation, I surmise that Tesla gathers data from the fleet (of drivers sharing data) to help the neural net learn from false positives. I don't know whether there's a room full of minions reviewing flagged instances of behavior described in this thread--seems doubtful, but possible--but somehow they are tracking, reviewing, and presumably fixing these issues. I don't know whether crowd sourcing this would help or hurt. The suggestions here are better than the clunky bug fix procedure, but would require at least a little driver attention, which opens up liability issues. I'm ambivalent.
 

Dr. J

Private
TOO Supporting Member
Joined
Sep 1, 2017
Messages
1,056
Location
Fort Worth
Tesla Owner
Model 3
#11
I'm torn on this. I use AP all the time. If it was braking all the time I would not use it. But at the same time, I can totally relate to the P.O.ed wife and know others in the same boat. Meanwhile, Tesla is getting kudos for their active safety and trying to catch up and/or stay ahead of everyone else.
My default is to wait on automation to catch up by not using certain features if someone else is in the car with me, and experiment with beta software when I'm alone and paying attention.
 

GeoJohn23

Active Member
Joined
Oct 16, 2018
Messages
30
Location
Pleasanton, CA
Tesla Owner
Model 3
#12
I agree, for the reason stated earlier, that relying solely on driver input that a braking event was unnecessary would not be a good idea. And I agree that the car/Tesla likely has sufficient data for a re-analysis and then tweaking of the response. The trick is, not all (and probably not even that high a percentage) of the braking events should be reviewed/analyzed for feature improvement... this is where I like the idea of highlighting what the car saw as the issue (like it does for collision warning) and providing for driver input. Perhaps this could even be a thumbs up ("that's it, Tesla, good catch") or a thumbs down ("what the H was that about?"), the thumbs down being the important input for the mothership to pull and review the data, like the VAR reviews they do in the World Cup (go ladies, awesome talent on display by all the teams).

Without some form of Thumbs down tagging, I don’t see how they could do very well at finding and thus fixing this.
 

lance.bailey

Well-Known Member
Joined
Apr 1, 2019
Messages
316
Location
cloverdale, BC
Tesla Owner
Model 3
#13
@M3OC Rules may be on the right track. From the Investor Day presentation, I surmise that Tesla gathers data from the fleet (of drivers sharing data) to help the neural net learn from false positives....
How would Tesla know that something is a false positive? If someone's car brakes unexpectedly and then drives on, what feedback is there to indicate a false positive? I suspect that, as far as the car is concerned, a necessary braking was done.

Let's tabletop a side-street squirrel incident.

  • car drives along
  • squirrel dashes out
  • car brakes
  • squirrel dashes back
  • driver presses the accelerator to override the braking (as the squirrel is gone)
This happens to me reasonably often (darn squirrels). Now let's tabletop a phantom brake.
  • car drives along
  • car "sees" something
  • car phantom brakes
  • driver swears/apologizes
  • driver presses the accelerator to override the phantom braking
In both cases, you have a drive, brake, accelerate sequence. In both cases, the car thought it was doing the right thing: there is no feedback to the car that the braking in the second case was imaginary, and I suspect that, as far as the car is concerned, a necessary braking was done. In the mind of the car there was no false positive, so there is nothing to analyse and fix. You can't fix what you don't know happened.


The reason I suspect the car cannot determine false positives is the ongoing phantom-braking complaints that we have here on the forum. I truly do not believe that enough false positives are being fed to Tesla to give them enough data to address what continues to be an issue in a lot of posts.
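The tabletop comparison above can be sketched in a few lines: viewed only through the car's own signals, the squirrel brake and the phantom brake produce the same trace. The signal names and speeds below are invented for illustration:

```python
# Sketch of the tabletop argument: the car's own log of a justified
# brake (squirrel) and a phantom brake look identical. Each tuple is an
# invented (signal, speed_mph) log entry.
squirrel_trace = [("cruise", 45), ("object_detected", 45),
                  ("auto_brake", 20), ("driver_throttle", 45)]
phantom_trace  = [("cruise", 45), ("object_detected", 45),
                  ("auto_brake", 20), ("driver_throttle", 45)]

# Any classifier that sees only the trace must give both events the same
# answer; the ground truth (real squirrel vs. nothing) never appears in
# the data the car records.
print(squirrel_trace == phantom_trace)  # True
```

That identity is the whole argument: without an outside source of ground truth (the driver), the car's logs alone cannot separate the two cases.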
 

MelindaV

☰ > 3
Moderator
Joined
Apr 2, 2016
Messages
9,046
Location
Vancouver, WA
Tesla Owner
Model 3
#14
The reason I suspect the car cannot determine false positives is the ongoing phantom-braking complaints that we have here on the forum. I truly do not believe that enough false positives are being fed to Tesla to give them enough data to address what continues to be an issue in a lot of posts.
For those getting the phantom braking, are you all recording a bug report with each?
 

Dr. J

Private
TOO Supporting Member
Joined
Sep 1, 2017
Messages
1,056
Location
Fort Worth
Tesla Owner
Model 3
#15
How would Tesla know that something is a false positive? If someone's car brakes unexpectedly and then drives on, what feedback is there to indicate a false positive?
From the video being recorded by the car. Someone, or some machine, would have to collect the video [edit: of all AP braking events] and review the evidence.

Edit:
Dacia 12:44 PT: We ask the fleet to send us data focused on a problem to be solved, and that’s used to train the neural network further, Andrej explains. He previously mentioned “tunnel” problems as an example, here is using cut ins from cars coming from other lanes. The false positives and false negatives are then analyzed, used for retraining.
https://www.teslarati.com/tesla-autonomy-day-livestream-updates/

Final edit [everyone is hoping!]:
Not to beat the dead horse, but it's not that the car can figure out whether a past automatic braking event was warranted or not. Clearly, the car thought the braking event was warranted. But the car is also recording all kinds of data about the event (place, time, speed, video before, during, and after) that Tesla can access and use for training the neural network for future improvements. Those improvements are incorporated into future firmware releases, which your car will then receive. Then you will find out whether the flaw has been fixed.
 

lance.bailey

Well-Known Member
Joined
Apr 1, 2019
Messages
316
Location
cloverdale, BC
Tesla Owner
Model 3
#17
from the video being recorded by the car. Someone or some machine would have to collect the video [edit: of all AP braking events] and review the evidence.

... Clearly, the car thought the braking event was warranted. But the car is also recording all kinds of data about the event--place, time, speed, video before during and after--that Tesla can access and use for training the neural network for future improvements....
Are you suggesting that every single braking event be analyzed and reviewed, including video and logs?

Doing that in order to learn about phantom braking is a bit of a needle-in-a-haystack search. I have dozens of braking events every commute, and only a few would I consider "bad." I am not sure that someone else (AI, deep learning, or flesh and blood) would agree with the driver about whether the braking was warranted, and I am not sure that examining that much more data is... efficient.

Doctors do not stand on the street and examine each person that walks past on the chance of finding a person with an ailment. People with ailments come to the doctor and make them aware of the problem. Much more efficient.

Remember: Tesla already has millions and millions of miles and exabytes of data being analyzed, yet drivers are still complaining of unnecessary braking. By definition, the driver is the best judge of whether the driver experienced unnecessary braking, so why not feed this information into the research system?
 

Dr. J

Private
TOO Supporting Member
Joined
Sep 1, 2017
Messages
1,056
Location
Fort Worth
Tesla Owner
Model 3
#18
are you suggesting that every single braking event be analyzed and reviewed, including video and logs?
I wrote "AP braking," so yes, every automated braking event. I expect this is true.
Doctors do not stand on the street and examine each person that walks past on the chance of finding a person with an ailment. People with ailments come to the doctor and make them aware of the problem. Much more efficient.
That is efficient for people, yes, but computers don't face that constraint. It would be mind-blowing to have a computer positioned on a street corner scanning each passer-by, diagnosing them, and referring the sick ones to physicians. That would really do a lot for preventive medicine.
remember - Tesla already has millions and millions of miles of data and exabytes of data being analyzed, yet the drivers are still complaining of unnecessary braking. By definition, the drivers are the best judge to determine if the driver experienced unnecessary braking, why not feed this information into the research system?
I don't have a great answer for why it's taking so long, but keeping in mind that humans are error-prone, I would respectfully disagree that drivers are by definition the best judge of anything related to driving. Eyewitnesses often identify the wrong person as a suspect in crime, people are often notoriously bad at choosing life partners, and drivers in accidents will often attribute blame to anything but their own actions.
 

bwilson4web

Active Member
TOO Supporting Member
Joined
Mar 4, 2019
Messages
158
Location
Huntsville, AL
Tesla Owner
Reservation
#19
I have a loop that reproduces the phantom braking when the Sun is low on the horizon.
With cruise control set for 18 mph, the GPS (text in the lower middle of the video) showed a dip to 9 mph before the car resumed 18 mph. Autosteer was not engaged; only TACC was used. I captured this with a separate dash cam; ignore the sound of the dash cam rattling against the car. This is also a loop where I've replicated phantom braking with our BMW i3-REx with 'magic eye.'

This is my test loop:
[Image: west_park_topo.jpg, a topographic map of the test loop]
Because of the slope, in the morning I can get direct sunlight on the car as well as shadows.

Bob Wilson