# Tesla Autonomy Day Official Discussion



## TrevP (Oct 20, 2015)

Today is Tesla Autonomy Day, where they will show off the capabilities of their new FSD computer and we get to see how far they've progressed since the last demo, which happened in 2016.

You can watch the live stream on YouTube starting at 2 PM Eastern Time here.






Please keep all related discussions in this thread


----------



## pdgpereira (Mar 23, 2019)

Thx Trev! What are you expecting to see?


----------



## garsh (Apr 4, 2016)

nit: Tesla Autonomy *Investor* Day
I guess Tesla themselves hadn't figured out exactly what to name this event.

https://ir.tesla.com/news-releases/news-release-details/tesla-host-autonomy-investor-day


----------



## joakimus (Mar 16, 2019)

I ordered a Model Y today just to lock in the FSD price. For regulatory reasons, I can't take delivery before 2025 or 2026 though. I am a wheelchair user and FSD is going to help me a lot with my driving. Hope to see it in the EU ASAP.
Thank you @TrevP for your guidance, much appreciated!


----------



## 3PHASE (Apr 13, 2016)

Not really sure what to expect today, but the event could potentially be more important than any hardware introduction (although I suspect the new hardware for FSD should impress).


----------



## Jeremy Rosser (Jul 30, 2017)

I have heard of and seen issues with AP and EAP related to rain and snow, with the car asking the driver to take over. I only have AP and would not consider buying Full Self Driving for $10k+ (expecting the cost to go up) without some redundancy.


----------



## Bokonon (Apr 13, 2017)

Going to be airborne with limited wi-fi during this event, so I'm very much looking forward to reading everyone's commentary here. 

Hoping for pages of wows and a cameo by the robot snake!


----------



## gary in NY (Dec 2, 2018)

I plan on watching live. I'm not sure what to expect, but wouldn't think they would do this without some important news and product developments.


----------



## iChris93 (Feb 3, 2017)

gary in NY said:


> I plan on watching live. I'm not sure what to expect, but wouldn't think they would do this without some important news and product developments.


Less than an hour!


----------



## MelindaV (Apr 2, 2016)

Bokonon said:


> Going to be airborne with limited wi-fi during this event, so I'm very much looking forward to reading everyone's commentary here.
> 
> Hoping for pages of wows and a cameo by the robot snake!


likewise - have a standing meeting every monday at 11a, and then followed today by a client meeting. So likely will not have a chance to catch up until the event is over.


----------



## jasonm163 (Sep 12, 2018)

Don't worry, they never start on time. We probably have another hour, even though it was supposed to start 7 minutes ago lol


----------



## Darrenf (Apr 5, 2016)

Just 8 minutes of B roll so far.


----------



## iChris93 (Feb 3, 2017)

Darrenf said:


> Just 8 minutes of B roll so far.


Don't forget the music!


----------



## Darrenf (Apr 5, 2016)

And now the b roll has started over.


----------



## garsh (Apr 4, 2016)




----------



## NEO (Jun 28, 2017)

B roll has been on loop since 1 ET


----------



## jasonm163 (Sep 12, 2018)

"guys where is Elon?"

"idk ill call him.....hey Elon you have the presentation for the autonomy day right now, where are you?"

"oh ****, that was today? alright give me like 30min, using the bathroom then ill head over there"


----------



## timtesla (May 9, 2018)

2 PM EST, Elon time. That means I still have enough time to go grab lunch.


----------



## tivoboy (Mar 24, 2017)

Once again, they can never seem to get an event to start on time. Why on earth don't they at least have a live anchor or two to do some discussion or evangelising about the company, sustainable energy, saving the environment, etc.? I mean, THIS IS EARTH DAY; they could have had some standby MCs try to educate people about the EARTH rather than just leaving us sitting and watching looping B roll of past promotions.


----------



## Gabzqc (Oct 15, 2016)

29 mins late and counting...


----------



## skygraff (Jun 2, 2017)

You want on time and commentary? Stick with SpaceX.
Flash and delayed starts (so they can gather viewer data and raise the suspense level; that music helps)? You've got good old Tesla.
Slow and steady with Gary firmly in the lead? Go for the Boring Company.

Okay, how many of us still have the video looping on our second screen (30 minutes in)?


----------



## Kizzy (Jul 25, 2016)

I skipped an errand so as not to pull an Elon.

We haven’t even switched to a crowd/stage view yet. Maybe they needed to run a software update on their demo car.


----------



## Gabzqc (Oct 15, 2016)

True something must be really wrong for this late a start...


----------



## iChris93 (Feb 3, 2017)

Gabzqc said:


> True something must be really wrong for this late a start...


Or herding investors is like herding cats?


----------



## tivoboy (Mar 24, 2017)

When they completely cancel this Autonomy event, it’s really going to look bad.


----------



## billionaiire (Apr 16, 2019)

Keeping the same theme going across the board. Need service? Weeks! Corporate level presentation for investors? Wait indefinitely!


----------



## shareef777 (Mar 10, 2019)

I hopped on late and came here to see if I'd actually missed an extremely short presentation. Guess not.


----------



## billionaiire (Apr 16, 2019)

tivoboy said:


> When they completely cancel this Autonomy event, it's really going to look bad.


Late is canceled at this level. Have a good night lol


----------



## GDN (Oct 30, 2017)

They are just putting the finishing touches on the S/X refresh so they can roll them out at the end, for just one more thing.


----------



## ehendrix23 (Jan 30, 2019)

GDN said:


> They are just putting the finishing touches on the S/X refresh so they can roll them out at the end, for just one more thing.


I thought Apple trademarked "One more thing"


----------



## tivoboy (Mar 24, 2017)

GDN said:


> They are just putting the finishing touches on the S/X refresh so they can roll them out at the end, for just one more thing.


If only they actually practiced. "One more thing"...


----------



## garsh (Apr 4, 2016)

starting!
maybe?


----------



## ehendrix23 (Jan 30, 2019)

It's ON


----------



## iChris93 (Feb 3, 2017)

Something is happening!


----------



## NEO (Jun 28, 2017)

Here we go


----------



## garsh (Apr 4, 2016)

Looks like a potted plant is going to talk before Elon


----------



## tivoboy (Mar 24, 2017)

WOW, they got a TON of press and analysts there..


----------



## GDN (Oct 30, 2017)

ehendrix23 said:


> I thought Apple trademarked "One more thing"


Actually they just lost a battle to Swatch. Their second defeat in a month or so.

https://appleinsider.com/articles/1...ny-swatch-australian-one-more-thing-trademark


----------



## Mad Hungarian (May 20, 2016)

WOOHOO!
Finally...


----------



## jasonm163 (Sep 12, 2018)

DING DING DING, ITS TIME.


----------



## skygraff (Jun 2, 2017)

40 minutes!

Pretty close to on time for Tesla. Guess they decided it wasn’t worth waiting for more than 41k viewers.


----------



## Smokey S (Sep 30, 2018)

Elon must be tied up with NASA over the SpaceX incident this weekend


----------



## tivoboy (Mar 24, 2017)

Took a while, but worth every minute..

I mean, we’re all geeks, right?


----------



## victor (Jun 24, 2016)

H265 hardware codec for video!


----------



## jasonm163 (Sep 12, 2018)

Well... I'm all for nerding out, but this is like SUPER detailed about the chip lol. Show us something shiny!


----------



## garsh (Apr 4, 2016)

jasonm163 said:


> Well....im all for nerding out but this is like SUPER detailed about the chip lol show us something shiny!


Turn in your nerd badge, heathen!


----------



## jasonm163 (Sep 12, 2018)

garsh said:


> Turn in your nerd badge, heathen!


lol I can't be the only one; I'm an engineer and all I wanna see is a car drifting around a corner with no one inside

however, Elon's random "hah hah"s are pretty funny


----------



## Needsdecaf (Dec 27, 2018)

jasonm163 said:


> Well....im all for nerding out but this is like SUPER detailed about the chip lol show us something shiny!


Seriously.

ZZzzzzzzzzzzz


----------



## JWardell (May 9, 2016)

jasonm163 said:


> Well....im all for nerding out but this is like SUPER detailed about the chip lol show us something shiny!


FSD chip die is literally something shiny


----------



## shareef777 (Mar 10, 2019)

Loving all the details being provided, but I just wish they had someone a bit more charismatic presenting. Sorry, Pete, but they might as well have just handed everyone the PowerPoint file and let us read it in silence.


----------



## victor (Jun 24, 2016)

The next-gen chip is two years away (they're about halfway through). It will be three times better.


----------



## NEO (Jun 28, 2017)

Hopefully my Model Y will get the next Gen chip


----------



## victor (Jun 24, 2016)

Made by Samsung in Austin, TX.


----------



## GDN (Oct 30, 2017)

Love that it is coming right from Austin. We'll have to have Willie meet Elon there and smoke it up a bit!!


----------



## skygraff (Jun 2, 2017)

shareef777 said:


> Loving all the details being provided, but just wish they had someone a bit more charismatic to present. Sorry, Pete, but might as well have just handed everyone the powerpoint file and let us read it in silence.


Just imagine if Elon had been reading it. At least Pete got through the script without so many of the genius-evident verbal pauses.


----------



## jasonm163 (Sep 12, 2018)

That dude wants to get off the stage so badly lol "so should we.....wait uh wuh....."


----------



## skygraff (Jun 2, 2017)

Knew Elon was going to bring up the simulated-world argument in that answer about the validity of simulations!


----------



## GDN (Oct 30, 2017)

I have no idea just how well these demo cars are going to drive themselves in a few minutes, but let's just say that Tesla has peeled back the cover and put some pressure on all of the other guys to show how and why they can do this anywhere near as well, much less better.


----------



## shareef777 (Mar 10, 2019)

I'm starting to think that they're just explaining how complex FSD is to justify why it's being delayed.


----------



## skygraff (Jun 2, 2017)

They’re actually mining bitcoin with all of our cars!!!

That’ll pay for the next gen chips to be installed for free which will benefit them more than us. Mark my words.


----------



## slasher016 (Sep 12, 2017)

Elon gave his three stages of FSD:

1. FSD feature complete by end of this year. This means the car can do everything it needs to do, but requires human supervision.
2. FSD where the human does not need to pay attention (no more steering wheel warnings): end of Q2 2020.
3. FSD approved by regulators in some jurisdictions: end of 2020.


----------



## jasonm163 (Sep 12, 2018)

Does anyone know the speakers' last names as well? If so, would you post the first and last names of the three people?


----------



## slacker775 (May 30, 2018)

slasher016 said:


> Elon said his three stages of FSD:
> 
> 1. FSD feature complete by end of this year. This means the car can do everything it needs to do, but requires human supervision.
> 2. FSD where the human does not need to pay attention (no more steering wheel warnings): end of Q2 2020.
> 3. FSD approved by regulators in some jurisdictions: end of 2020.


With the added caveat of his opinion that long haul trucking would likely be approved first, with a physical driver present in the lead truck with a few trucks following in a convoy.


----------



## jvmoore1 (May 20, 2016)

jasonm163 said:


> Does anyone know the speakers lasts names as well? If so would you post first and last names of the three people?


Peter Bannon
Andrej Karpathy


----------



## shareef777 (Mar 10, 2019)

slasher016 said:


> Elon said his three stages of FSD:
> 
> 1. FSD feature complete by end of this year. This means the car can do everything it needs to do, but requires human supervision.
> 2. FSD where the human does not need to pay attention (no more steering wheel warnings): end of Q2 2020.
> 3. FSD approved by regulators in some jurisdictions: end of 2020.


I still don't see #1 happening this year. The current AP system is still too erratic as-is with NoA on the highway, where the environment isn't even as dynamic as on local streets.

#2 is what I'm waiting for. I don't need to sleep, nor would I, but I'm really annoyed by the number of times I need to "nudge the wheel" even when going straight for miles. Maybe my children will grow up comfortable with the premise of sleeping behind the wheel, but for now #2 alone would be the biggest change to the way we drive since the inception of driving.

#3 involves politicians. That's enough to realize how long it'll take before it gets done.


----------



## jasonm163 (Sep 12, 2018)

I don't think that guy understands. Elon doesn't care if you buy 10 Model 3s and run your own taxi fleet... it's making Tesla a ton of money and will continue to do so.


----------



## AugustaDriver (Jul 21, 2017)

So my Model Y is going to last a million miles?


----------



## shareef777 (Mar 10, 2019)

jasonm163 said:


> I dont think that guy understands. Elon doesnt care if you buy 10 model 3s and have your taxi fleet......its making tesla a ton of money and will continue to do so.


Yeah, I got confused by what that guy was trying to say. Buy however many you want. Buy 10, and you're giving Tesla 20-30% of the revenue of those 10 vehicles whenever they're on the Tesla Network.

I think his initial line of questioning was along the lines of: what if Uber buys a ton of them for THEIR ride network and it puts your network out of business? Elon answered that pretty clearly: Uber can't do that, as it's against the purchase agreement.


----------



## Darrenf (Apr 5, 2016)

Anyone beginning to think that we aren’t going to see the cars drive themselves today? I feel like the people there will experience it, but we won’t see it until they get to make their own reports that we will have to chase down.


----------



## jasonm163 (Sep 12, 2018)

Darrenf said:


> Anyone beginning to think that we aren't going to see the cars drive themselves today? I feel like the people there will experience it, but we won't see it until they get to make their own reports that we will have to chase down.


I'm thinking we will get to see it. Surely they wouldn't have all this, with him saying several times that everyone will get to test drive it, and then show us nothing at all. That would be disappointing.


----------



## tivoboy (Mar 24, 2017)

jasonm163 said:


> I dont think that guy understands. Elon doesnt care if you buy 10 model 3s and have your taxi fleet......its making tesla a ton of money and will continue to do so.


Well, if you are referring to the "why can't I just use MY Model 3 for Uber" question, the answer really is: no reason. If it is being operated and driven by YOU, then there should be no restriction on YOU being able to drive your car for Uber, as people do today.

In the future, when you put your car on the Tesla Network, you're going to be charging riders for pickups on the Tesla Network and they are going to be paying you some fee in return for the ride. One could theoretically ALSO have the car listed on the Uber network and pick up riders, but in order to do the pickup and drop-off with your car AUTONOMOUSLY, it's going to have to be on the Tesla Network: where to pick up, where to drop off, how, etc. One probably A) wouldn't want to pay both networks for the same ride, and B) to indemnify YOU as the owner and make Tesla liable for anything that happens with the self-driving car (this IS the liability the car companies have chosen to take on when the cars are either driving themselves or providing driving assistance), I'm sure they are certainly not going to let a third party, Uber in this case, generate any revenue while carrying none of the liability.


----------



## skygraff (Jun 2, 2017)

jasonm163 said:


> I'm thinking we will get to see it. Surely they wont have all this and him say several times that everyone will get to test drive it and we wont see anything at all. That will be disappointing.


Yeah, I agree, but the market got spooked by all the dry tech talk.

I think they should've started with the demo and then done the explanation part after.


----------



## Darrenf (Apr 5, 2016)

jasonm163 said:


> I'm thinking we will get to see it. Surely they wont have all this and him say several times that everyone will get to test drive it and we wont see anything at all. That will be disappointing.


I hope you are correct.


----------



## jasonm163 (Sep 12, 2018)

Hopefully they are switching to the other cameras? lol


----------



## tivoboy (Mar 24, 2017)

Anyone think they will have any livestream from the cars and the drives?


----------



## skygraff (Jun 2, 2017)

To be clear, when I said “I agree,” I meant it will be disappointing.


----------



## Darrenf (Apr 5, 2016)

Ugh. So frustrating.


----------



## gary in NY (Dec 2, 2018)

Some pretty interesting content today. It seems ambitious, but the new computer and the whole neural net plan make sense to me. I would really like to see what the demo drives look like.


----------



## Darrenf (Apr 5, 2016)

Live stream is over. Not one demonstration!


----------



## skygraff (Jun 2, 2017)

And disappointed..

but still impressed.


----------



## skygraff (Jun 2, 2017)

I’m sure they’ll put out an official video (with a white Acura digitally removed) and there’ll probably be some unofficial videos from the test rides (provided the NDAs allow that this time).

When I finish reading the unredacted 448 pages, I’ll decide whether to invest in this company, buy FSD hardware, and/or impeach Elon.


----------



## PNWmisty (Aug 19, 2017)

I found the entire presentation to be fascinating and informative, with the exception of most of the analysts' questions, which I found mostly boneheaded.

Elon and Andrej did a great job at multiple points of the presentation explaining why LIDAR is a waste of time for FSD. They absolutely decimated the position that it would be a good "back-up" to the cameras. Elon also has a very dim view of using precision GPS to help guide the car. I really think the car needs to be like an animal, navigating old and new environments alike in a natural manner that can handle new environments, environments that have changed, objects in the road, etc., and Elon made a strong case that vision is far and away the best way to achieve that. He has a low opinion of "FSD" that needs to be geo-fenced.

After watching the presentation it became readily apparent just how different this FSD technology is from what is imagined by the FSD naysayers, who don't seem to understand the true nature of the technology and how quickly it's learning. The presentation made clear just how fast they are progressing and how much of the learning is automated. I think people betting against Tesla are in for a rude awakening, perhaps sooner than even most of the fanboys think.


----------



## JWardell (May 9, 2016)

The autonomous driving demo was taking place in the first 40 minutes, but some jerk in a Volkswagen was blocking the camera


----------



## Metz123 (May 8, 2018)

PNWmisty said:


> I found the entire presentation to be fascinating and informative, with the exception of most of the analysts' questions, which I found mostly boneheaded.
> 
> Elon and Andrej did a great job at multiple points of the presentation explaining why LIDAR is a waste of time for FSD. They absolutely decimated the position that it would be a good "back-up" to the cameras. Elon also has a very dim view of using precision GPS to help guide the car. I really think the car needs to be like an animal, navigating old and new environments alike in a natural manner that can handle new environments, environments that have changed, objects in the road, etc., and Elon made a strong case that vision is far and away the best way to achieve that. He has a low opinion of "FSD" that needs to be geo-fenced.
> 
> After watching the presentation it became readily apparent just how different this FSD technology is from what is imagined by the FSD naysayers, who don't seem to understand the true nature of the technology and how quickly it's learning. The presentation made clear just how fast they are progressing and how much of the learning is automated. I think people betting against Tesla are in for a rude awakening, perhaps sooner than even most of the fanboys think.


That may be true; it may not be. One thing I've learned throughout my life dealing with engineers is that there are always multiple ways to solve a problem. The other thing I know is that I won't consider Autopilot, much less FSD, ready for prime time until I can make a road trip without my wife asking me to turn it off because the herky-jerky nature of TACC and NoA is making her sick. It needs to drive as naturally and comfortably as a good human driver to move out of beta.


----------



## epmenard (Mar 5, 2019)

skygraff said:


> I'm sure they'll put out an official video (with a white Acura digitally removed) and there'll probably be some unofficial videos from the test rides (provided the NDAs allow that this time).
> 
> When I finish reading the unredacted 448 pages, I'll decide whether to invest in this company, buy FSD hardware, and/or impeach Elon.


FAKE VIEWS!!


----------



## epmenard (Mar 5, 2019)

jasonm163 said:


> I dont think that guy understands. Elon doesnt care if you buy 10 model 3s and have your taxi fleet......its making tesla a ton of money and will continue to do so.


Also, chances are, our cars would only be scheduled for runs if no Tesla-owned vehicles are available in the area or free to take passengers. In other words: why split the profit if you don't have to?


----------



## JWardell (May 9, 2016)

Metz123 said:


> That may be true; it may not be. One thing I've learned throughout my life dealing with engineers is that there are always multiple ways to solve a problem. The other thing I know is that I won't consider Autopilot, much less FSD, ready for prime time until I can make a road trip without my wife asking me to turn it off because the herky-jerky nature of TACC and NoA is making her sick. It needs to drive as naturally and comfortably as a good human driver to move out of beta.


This made me laugh, as my wife does the same with NoA... but then I realized she also makes the same comments when I'm driving! I guess it already is as good as a human driver!


----------



## Guy Weathersby (Jun 22, 2016)

epmenard said:


> Also, chances are, our cars would only be scheduled for runs if no Tesla-owned vehicles are available in the area or free to take passengers. In other words: why split the profit if you don't have to?


Tesla tends to take the long view on most things, and favoring Tesla-owned vehicles would cut way into the number of customer-owned cars on the network.


----------



## PNWmisty (Aug 19, 2017)

epmenard said:


> Also, chances are, our cars would only be scheduled for runs if no Tesla-owned vehicles are available in the area or free to take passengers. In other words: why split the profit if you don't have to?


I think that's an unnecessarily pessimistic view of Tesla, with nothing to support it. Certainly there will be rules for who gets the fare, and I suspect it will naturally go to the car with the shortest estimated arrival time (for spontaneous fares, which I suspect will make up the bulk of the business).


----------



## Guy Weathersby (Jun 22, 2016)

On the autonomy side I was very impressed, but I think some details of the Tesla Network may need rethinking, unless I misunderstood what Mr. Musk said.

- In a robotaxi, the rider could steer the car. I might consider allowing strangers to ride in my car, but there is no chance I would allow them to drive it. Also, the legal question of allowing someone to drive the car without checking their driver's license and insurance seems like a nightmare.
- Robotaxis would not be geofenced. Although autonomy might not be geofenced, ride-hailing services are banned in quite a few places, so I don't think they can avoid restricting the Tesla Network.
- Apparently the charging snake is still around, but no hint of when they might start deploying it.


----------



## PNWmisty (Aug 19, 2017)

Metz123 said:


> The other thing I know is that I won't consider autopilot. much less FSD, ready for prime time until I can make a road trip without my wife asking me to turn it off because the herky-jerky nature of TACC and NOA is making her sick. It needs to drive as naturally and comfortably as a good human driver to move out of beta.


True. I think that day will come much, much sooner than the naysayers think. Some naysayers will still be naysaying even as it's approved and actively reducing the death and injury rate on our roads, lowering insurance costs and improving the lives of millions of people.


----------



## PNWmisty (Aug 19, 2017)

Guy Weathersby said:


> On the autonomy side I was very impressed, but I think that some details on the Tesla Network may need rethinking, unless I misunderstood what Mr Musk said.
> 
> 
> In a robotaxi the rider could steer the car. I might consider allowing strangers to ride in my car, but there is no chance that I would allow them to drive it. Also, the legal questions of allowing someone to drive the car without checking driver's license and insurance seems like a nightmare.


I agree, Musk's answer could have been clearer on this point. But there is a difference between the rider driving the car and it merely being physically possible to take over. And it was very clear that Musk believes this would only be an interim situation; as the autonomy continues to improve, the steering wheel would be removed/capped.


----------



## Bokonon (Apr 13, 2017)

Elon just tweeted out a YouTube link to the whole presentation:






... and demo drives will be shared shortly too:


https://twitter.com/i/web/status/1120481285766623232


----------



## Guy Weathersby (Jun 22, 2016)

PNWmisty said:


> I agree, Musk's answer could have been clearer on this point. But there is a difference between the rider driving the car and it merely being physically possible to take over. And it was very clear that Musk believes this would only be an interim situation; as the autonomy continues to improve, the steering wheel would be removed/capped.


The rider being able to take over means they can drive. As long as that is possible, the car is not suitable for a ride-hailing service. They need a way to disable driving in taxi mode.


----------



## Bokonon (Apr 13, 2017)

FSD / HW3 demo drive:


----------



## JWardell (May 9, 2016)

Hyperchange has a great report on the event and describes the drive demos, which showed much more interesting stuff on screen than what was in Tesla's video:


----------



## NEO (Jun 28, 2017)

TeslaFi is now reporting AP3 cars


----------



## M3OC Rules (Nov 18, 2016)

Totally loved that presentation. Love that they were so open on the chip details, and quite open about what is being done with AI versus what has been done heuristically. I've always wondered about that. It's an important detail now because it explains how they get from where they are to where they need to be. It was a little confusing exactly which parts of the new FSD software use AI, but they made it pretty clear Autopilot mostly uses AI to detect objects and lanes.

Their strategy does sound good. Relying on map data does seem very limiting and causes lots of problems now. Heavy reliance on AI should make it more human-like, which matters. The hardware sounds great. I believe they have a huge advantage with the Tesla fleet gathering data. It all sounds great, and they were dripping with pride and confidence.

The demo video looks good, but it's kind of hard to tell how good it is without being in the car, even with the video slowed down. In reality, a demo video doesn't really mean anything; there are many out there, including Tesla's. Can't wait to try it, though!

I didn't think we would see new software on HW3 for quite a while, but now they've got me excited.


----------



## AugustaDriver (Jul 21, 2017)

The most unnerving thing I heard today was when Elon was describing the robotaxi fleet and said that all it takes is one over-the-air update to activate the fleet. I hope Elon will be a benevolent overlord.


----------



## PNWmisty (Aug 19, 2017)

JWardell said:


> Hyperchange has a great report on the event and describes the drive demos, which showed much more interesting stuff on screen than what was in Tesla's video:


I'm super impressed with Galileo Russell's grasp of what's going on and his fast-paced delivery style. It makes for an informative, interesting and entertaining show. I think he's, by far, the best presenter out there on the subject of Tesla. The quickness of presentation is key because time is of the essence, change is accelerating and if you blink you might miss it. I guess that's why he calls his show "Hyperchange".

It's the only Youtube channel I don't have to run at 1.5X (or higher) speed.


----------



## BSElectrons (Dec 2, 2018)

Anyone heard when current owners with the FSD option can expect the HW update?


----------



## Kizzy (Jul 25, 2016)

BSElectrons said:


> Anyone heard when current owners with the FSD option can expect the HW update?


In a few months.

Electrek:


> Musk said that Tesla will start offering retrofits to current Tesla owners who bought the 'Full Self-Driving package' in the next few months.


----------



## garsh (Apr 4, 2016)

Bokonon said:


> FSD / HW3 demo drive:


I performed an analysis of the first FSD demo back in Feb 2017. Tesla FSD was quite raw back then, and I spotted a lot of issues. Since there's a new video, I thought I'd take another look. A sped-up video like this tends to hide a lot of issues, so I watched it at 1/4 speed.

0:07 It begins a left turn before coming to a stop, so it appears that it might be over the center line. It may still be within its lane, though.
0:11 It appears to hit something on the road with the left tires. I couldn't tell what it was. Obstacle avoidance may be an issue for a while.
0:16 Fails to maintain its lane completely where there are two left-turn lanes (the driver in the car in front did an even worse job).
0:27 Doesn't panic when approaching a car whose tires start to go over the line into its travel lane.
0:42 Gets into the right lane right beside a car that had pulled over. Should have waited until it was past that car.
0:46 More apparently un-avoided potholes.
0:48 Crosses into the left lane at the end of a turn, before committing to switching into that lane. Not sure if it actually knew it was safe to begin with, or just took the turn wide.
1:12 Starts turning the wheel left to prepare for a left turn, then reverses steering strongly to the right before coming to a stop. It looks like the ultrasonics were complaining about being too close to something at the left rear; maybe that's the reason? After coming to a complete stop, the steering wheel goes strongly left again to complete the turn.
1:16 Crosses a solid white line to change lanes. Technically, it should have waited for it to become dotted.

Overall, it appeared to do a MUCH better job this time around. Most of the issues I listed above are relatively minor. Others might be better explained with more information available. I'm pretty excited about how this is progressing.


----------



## Tmo6 (Jul 3, 2018)

Can anyone tell where this stretch of road is? It's apparent that they're driving a loop, but I'd love to figure out exactly where. It handled stop signs, stop lights, left turns, on-ramps, off-ramps, and yield signs.


----------



## BostonPilot (Aug 14, 2018)

TrevP said:


> What you're seeing is the after-purchase price ($CAD) since you didn't get it when you bought your car. There was a short window in early March where the price dropped to $2000 US ($2600 CAD). I jumped in on that firesale. Prices are set to increase May 1, so I'd advise finding a way to buy it before it goes up if you want it


Also, given their history of pricing, I suspect that at some point, since all the hardware is already in the (newer) cars, they'll be willing to let people upgrade for far less. I mean, if they have 1,000 customers who won't pay $6,000 but would pay $2,000, that's $2 million they can make for essentially no cost (except maybe lost future sales @ $6K).

I know that I personally won't pay for it given how badly AP currently works in my Model 3... I still think they have a long way to go. It works okay on a highway, but really lousy on back roads. I don't think I've ever been able to let it drive for more than 10 minutes without intervention on back roads. A week or two ago I had it on with my wife in the car as we approached the Concord rotary here in Mass. There was a lot of stop and go waiting to go through the rotary, and it does very well in stop-and-go leading up to a light, or in this case the rotary. I decided to see how it would do on the rotary. Big mistake! It started to turn right to join the rotary, and then suddenly yanked the steering wheel so hard to the left that I lost grip of the wheel for a split second. Basically it decided that instead of following the traffic circle it would turn hard left... not sure if it decided to try to reverse direction or what. Scared the crap out of my wife, definitely startled me even though I was expecting it to possibly screw up. It's not the only event like that I've experienced, thus my feeling that it's far from ready for prime time.

Unlike some of you guys who found the chip description boring, I've worked on several supercomputers, so I found it very interesting. In the mid 90s I worked on software for the Intel Paragon, which was the largest supercomputer in the world at the time: 143 GFlops. This was a computer used at the national labs for all the stuff you'd expect to run on that kind of hardware. The Tesla computer is almost exactly 1,000 times faster than the Paragon, so color me impressed by the hardware (obviously the software still needs some work ).

One thing that surprised me was the dual redundancy. I've done work with Boeing on the 777 and worked with their safety certification guy at the time. Rule of thumb is that with 2 processors all you know is that one of them is bad (but not which one). You really need 3 to vote out the bad one. So, say the car is driving itself on an offramp and the two chips realize they disagree... What are you going to do? You don't know which one to believe... but you can't just tell the driver to take over in a level 5 car... I'll be interested to hear what their strategy is. I can think of a few, but hopefully the new computer they said they're halfway done designing will have triple redundancy.

If anyone is curious, the Boeing 777 has three processors for the flight control computer. Those 3 computer systems are constantly compared and will vote out the one that doesn't agree with the other two. But on each of those three computers, there are actually 3 different processor chips made by different manufacturers, so that even a chip bug can't cause a bad result. If any geeks are interested, there are several good write-ups about the design of the system, here's one: here
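For the geeks, the 2-of-3 voting idea fits in a few lines. This is just a toy illustration of majority voting in general (the function name, tolerance, and values are made up), not the 777's actual flight control logic:

```python
def vote(outputs, tolerance=1e-6):
    """Toy 2-of-3 voter: return the majority value among three
    redundant channel outputs, plus the index of the voted-out
    channel (None if all three agree). With only two channels you
    can detect a disagreement but can't tell which one is wrong."""
    a, b, c = outputs

    def agree(x, y):
        return abs(x - y) <= tolerance

    if agree(a, b):
        return a, (None if agree(a, c) else 2)  # channel 2 voted out (or none)
    if agree(a, c):
        return a, 1  # channel 1 voted out
    if agree(b, c):
        return b, 0  # channel 0 voted out
    raise RuntimeError("no majority: total channel disagreement")

# Channel 1 has drifted; the other two out-vote it.
value, bad = vote([12.5, 13.9, 12.5])
```

With two channels, the equivalent function could only raise the "disagreement" error; it would never know which value to trust, which is exactly the concern raised above.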


----------



## dmbooth (Nov 9, 2017)

AugustaDriver said:


> Most unnerving thing heard today was when Elon was describing the robotaxi fleet and he said that all it takes is one over the air update to activate the fleet. I hope Elon will be a benevolent overlord.


----------



## tivoboy (Mar 24, 2017)

Tmo6 said:


> Can anyone tell where this stretch of road is? It's apparent that they're taking a loop, but would love to figure out where exactly this is. It handled stop signs, stop lights, left turns, on ramps, off ramps, and yield signs.


I'm pretty sure this is from the Palo Alto HQ just east of 280. They take a back route to 280, then north, then do an exit, butterfly across the highway and back. (I'm 100% sure that's Sand Hill Rd., where they do the turnaround back onto 280.) Part of me thinks that all those cars in the 2nd lane from the right are helpers, the ones with the lights on? They seem to sort of hold the test car in the right lane before it exits.

I will say that this is easily a route Tesla has driven more than a million times before


----------



## MelindaV (Apr 2, 2016)

garsh said:


> 1:12 Starts turning wheel left to prepare for a left turn, then reverses steering strongly to right before coming to a stop.


In the Q&A Elon answered a question regarding adding side radar for cross traffic visibility. His answer was essentially what you are seeing here. The car will position itself coming to an intersection in a way to use the cameras to look for cross traffic.


----------



## MelindaV (Apr 2, 2016)

here's the place in the video he answers about cross traffic


----------



## SoFlaModel3 (Apr 15, 2017)

I love Tesla


----------



## Needsdecaf (Dec 27, 2018)

PNWmisty said:


> True. I think that day will come much, much sooner than the naysayers think. Some naysayers will still be naysaying even as it's approved and actively reducing the death and injury rate on our roads, lowering insurance costs and improving the lives of millions of people.


I'd be interested in seeing how this pans out in 2-3 years. Because I simply don't see it happening by then.


----------



## AugustaDriver (Jul 21, 2017)

dmbooth said:


>


That's exactly what I was thinking, so we should be aware of what we're getting into when we accept Update 2020.66.1


----------



## garsh (Apr 4, 2016)

MelindaV said:


> In the Q&A Elon answered a question regarding adding side radar for cross traffic visibility. His answer was essentially what you are seeing here. The car will position itself coming to an intersection in a way to use the cameras to look for cross traffic.


Ah, that does make great sense! Thanks for pointing that out - it makes the behavior seem less random.
The car would start turning left because the lane actually starts curving that way. But then it decides that it would like to be 90° to the side traffic to better aim the cameras to watch for other cars.

I sometimes do something similar myself when approaching an intersection that isn't at right angles.
In particular, here's one where I do that every day. I always turn sharply to the right to get myself at a right angle before turning left onto this road.
Otherwise, it can be too hard to look back over your right shoulder to see cars coming from the right.


----------



## slasher016 (Sep 12, 2017)

AugustaDriver said:


> That's exactly what I was thinking, so we should be aware of what we're getting into when we accept Update 2020.66.1


If there's a week 66 in 2020, there is a serious problem in the first place...


----------



## MelindaV (Apr 2, 2016)

a 5 minute summary of yesterday's presentation

https://twitter.com/i/web/status/1120673893923131394


----------



## Needsdecaf (Dec 27, 2018)

Needsdecaf said:


> I'd be interested in seeing how this pans out in 2-3 years. Because I simply don't see it happening by then.


@PNWmisty Laugh all you want. You can continue to be amazed by the tech. But from a technical standpoint, and more importantly a legislative one, I simply don't see it in that time frame. Eventually, absolutely, because the market is pushing it that way. But I think we have a long way to go. Maybe I've just spent too much time interacting with public officials and living in the DC Metro area, but changes like this that are 1. not universally publicly popular and 2. have a direct impact on life safety aren't going to be easily or quickly allowed.

Sorry if you (or others) see me as a Debbie Downer. I certainly appreciate my car for what it does, but my engineering background and mindset are unwilling to let me simply buy in without more facts lining up than have currently been presented. Certainly from my direct experience, there are too many issues with NOA / EAP every day for me to have much hope that it will improve in that timeframe.


----------



## JWardell (May 9, 2016)

BostonPilot said:


> Also, given their history of pricing, I suspect that at some point, since all the hardware is already in the (newer) cars, they'll be willing to let people upgrade for far less. I mean, if they have 1,000 customers who won't pay $6,000 but would pay $2,000, that's $2 million they can make for essentially no cost (except maybe lost future sales @ $6K).
> 
> I know that I personally won't pay for it given how badly AP currently works in my Model 3... I still think they have a long way to go. It works okay on a highway, but really lousy on back roads. I don't think I've ever been able to let it drive for more than 10 minutes without intervention on back roads. A week or two ago I had it on with my wife in the car as we approached the Concord rotary here in Mass. There was a lot of stop and go waiting to go through the rotary, and it does very well in stop-and-go leading up to a light, or in this case the rotary. I decided to see how it would do on the rotary. Big mistake! It started to turn right to join the rotary, and then suddenly yanked the steering wheel so hard to the left that I lost grip of the wheel for a split second. Basically it decided that instead of following the traffic circle it would turn hard left... not sure if it decided to try to reverse direction or what. Scared the crap out of my wife, definitely startled me even though I was expecting it to possibly screw up. It's not the only event like that I've experienced, thus my feeling that it's far from ready for prime time.
> 
> ...


Do. Not. Use. Autopilot. In. Rotaries!
Especially the Concord rotary, that will chew up and spit out most human drivers!
As I've always said, AP will be FSD in California years before it can handle the roads around Boston.

Automotive processors are commonly only 2x redundant. A car can pull over and stop if something goes very wrong. In many cases, they are barely over 1x, with just a watchdog to detect a failure of a single processor.
Aircraft need 3x redundancy with voting, and are often 5x redundant...they can't just pull over!

Or get around this with automotive 2x redundant parts...plus a parachute
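For anyone wondering what "just a watchdog" means in practice: the control loop has to keep resetting a deadline timer, and missing the deadline is treated as a processor failure. A toy sketch (the class and the safe-state action are made up for illustration, not any real automotive code):

```python
import time


class Watchdog:
    """Toy software watchdog: the control loop must 'kick' the timer
    every cycle. If a check finds the deadline missed, we assume the
    processor hung and trigger a safe-state action (e.g. pull over)."""

    def __init__(self, timeout_s, on_timeout):
        self.timeout_s = timeout_s
        self.on_timeout = on_timeout
        self.last_kick = time.monotonic()

    def kick(self):
        # Called by the healthy control loop each iteration.
        self.last_kick = time.monotonic()

    def check(self):
        # Called by the supervisor; fires if the loop went silent.
        if time.monotonic() - self.last_kick > self.timeout_s:
            self.on_timeout()


faults = []
wd = Watchdog(0.05, lambda: faults.append("enter safe state"))

wd.kick()
wd.check()        # kicked in time: no fault recorded
time.sleep(0.06)  # simulate a hung control loop
wd.check()        # deadline missed: safe-state action fires
```

Note this only detects a *silent* failure; a processor that keeps kicking the watchdog while computing wrong answers sails right through, which is why voting between independent channels comes up at all.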


----------



## PNWmisty (Aug 19, 2017)

Needsdecaf said:


> Certainly from my direct experience, there are too many issues with NOA / EAP every day for me to have much hope that it will improve in that timeframe.


3 years is a long time with the rate things are improving. I wouldn't make predictions constrained to a gradual, linear improvement based on the current state of the released product. The hard work has been done and now the training data is pouring in at an accelerating rate. I liken Tesla's AP to a newborn baby. At first, it doesn't have the tools to learn very fast. But once it starts to make sense of the world around it the learning process accelerates at an exponential rate. By the age of 3, it's soaking up information all around it like a sponge.


----------



## Love (Sep 13, 2017)

Needsdecaf said:


> @PNWmisty
> Sorry if you (or others) see me as a Debbie Downer.


I gotta say, I don't see you as Debbie downer, quite the contrary. I've always felt like you were one that...



...needs decaf.


----------



## timtesla (May 9, 2018)

Needsdecaf said:


> @PNWmisty Laugh all you want. You can continue to be amazed by the tech. But from a technical standpoint, and more importantly a legislative one, I simply don't see it in that time frame. Eventually, absolutely, because the market is pushing it that way. But I think we have a long way to go. Maybe I've just spent too much time interacting with public officials and living in the DC Metro area, but changes like this that are 1. not universally publicly popular and 2. have a direct impact on life safety aren't going to be easily or quickly allowed.
> 
> Sorry if you (or others) see me as a Debbie Downer. I certainly appreciate my car for what it does, but my engineering background and mindset are unwilling to let me simply buy in without more facts lining up than have currently been presented. Certainly from my direct experience, there are too many issues with NOA / EAP every day for me to have much hope that it will improve in that timeframe.


I'm really excited about the tech, but I have to agree with you here. The timeline is just so ambitious, and it assumes there will be a quick turnaround with regulations and that software will improve exponentially faster than it has historically. I'm still skeptical about this sudden jump to level 5 autonomy when current EAP still struggles with seemingly basic things like parking and lane changing. Both features "work", but not nearly often enough. I have no doubt that Tesla will get there eventually, but end of 2019/2020 just seems way too hopeful. I hope I'm proven wrong.

The other thing that really stood out to me is when they were discussing snow. The engineer suggested that the car knows where the lane should be based on previous data, and that a human can tell where the lane markings are even in snow. This isn't really true, though. Sometimes the tire marks in the snow are way off from where the actual lane markings are. Sometimes when there's enough snow, there aren't even tire marks to follow; it's just a mess of white, and drivers will treat a 4-lane road like a 2-lane road since nobody can tell where the lanes are. Also, when it snows a lot, my car complains about radar visibility and disables Autopilot. I think there are some real hurdles regarding snow that they just aren't taking seriously enough.


----------



## evannole (Jun 18, 2018)

What I would like to know is if the new HW3 will enable a step-function improvement in EAP and NOA. I have no need or even a particular desire for FSD on city streets, but use EAP on the highway daily. As much as I like it, it still needs substantial improvement, and NOA is virtually useless, in metro Atlanta, at least. Elon's previous tweets indicated that HW3 was all about FSD and that 2.5 was more than adequate for full EAP and NOA functionality, and this is the primary reason I didn't go for FSD during the $2,000 sale.

Now, with all the hype surrounding HW3, I wonder if that's still Tesla's line of thinking. If a step-function improvement in on-highway EAP and NOA is in the cards and can be achieved only with HW3, then I might actually bite, particularly if they run another sale. If not, then I'll probably sit tight.


----------



## MarkB (Mar 19, 2017)

timtesla said:


> Also, when it snows a lot my car complains about radar visibility and disables autopilot. I think there are some real hurdles regarding snow that they just aren't taking seriously enough.


My NoA is regularly disabled due to bad weather (heavy rain). AP typically remains on. It takes much longer than expected after the weather clears before NoA resumes.

Living in greater Vancouver, rain is a frequent issue!

I'm hoping that the NN will solve this eventually.


----------



## Nautilus (Oct 10, 2018)

JWardell said:


> Do. Not. Use. Autopilot. In. Rotaries!
> Especially the Concord rotary, that will chew up and spit out most human drivers!
> As I've always said, AP will be FSD in California years before it can handle the roads around Boston.


I was thinking the same thing when I read about @BostonPilot taking a car into the Concord rotary. It was scary when I'd navigate Nautilus I through it in the late 70s, and it certainly wasn't designed for the volume of traffic it must get these days.

For lower volumes of traffic, roundabouts work quite well. I happen to live in the Roundabout Capital of the US. I think we're up to about 125 of them, and honestly they really do cut down on the time to get from point A to point B. It will be interesting to see how Tesla teaches FSD to navigate roundabouts since it is part science, part art. Here's an example near one of our hospitals:

Carmel residents used to get mocked by their friends in the neighboring towns for all the roundabouts, but I've noticed lately that those towns are now all installing roundabouts as well, so their city planners must agree with the benefits.

Of course nothing can compare with the "Magic Roundabout" in Swindon, UK:


----------



## JWardell (May 9, 2016)

Here's the FSD video slowed down and display blown up:


https://www.reddit.com/r/teslamotors/comments/bgb9or


----------



## Needsdecaf (Dec 27, 2018)

PNWmisty said:


> 3 years is a long time with the rate things are improving. I wouldn't make predictions constrained to a gradual, linear improvement based on the current state of the released product. The hard work has been done and now the training data is pouring in at an accelerating rate. I liken Tesla's AP to a newborn baby. At first, it doesn't have the tools to learn very fast. But once it starts to make sense of the world around it the learning process accelerates at an exponential rate. By the age of 3, it's soaking up information all around it like a sponge.


The tech will improve. I've seen it improve in the 4 months I've had my Model 3. But I haven't yet seen one release that's 100% perfect.

But assume the tech gets there in 3 years: are the regulators going to allow it? I don't think they will in that time frame. Thousands and thousands of people jump into an Uber or Lyft without much second thought, but is the general public going to immediately trust jumping into a driverless car? I doubt it.

I truly believe we will get there eventually. But I don't think it'll be in the timeframes suggested by Tesla.



Lovesword said:


> I gotta say, I don't see you as Debbie downer, quite the contrary. I've always felt like you were one that...
> 
> 
> 
> ...needs decaf.


I made up the name a long time ago in a moment where I needed to choose an online name and, at that moment, really needed a cup of decaf. Over the years it has been fairly accurate, which is ironic because I drink at most 2 cups of coffee a day and would probably be the last person who'd be asked to settle down. 



timtesla said:


> I'm really excited about the tech, but I have to agree with you here. The timeline is just so ambitious, and it assumes there will be a quick turnaround with regulations and that software will improve exponentially faster than it has historically. I'm still skeptical about this sudden jump to level 5 autonomy when current EAP still struggles with seemingly basic things like parking and lane changing. Both features "work", but not nearly often enough. I have no doubt that Tesla will get there eventually, but end of 2019/2020 just seems way too hopeful. I hope I'm proven wrong.
> 
> The other thing that really stood out to me is when they were discussing snow. The engineer suggested that the car knows where the lane should be based on previous data, and that a human can tell where the lane markings are even in snow. This isn't really true, though. Sometimes the tire marks in the snow are way off from where the actual lane markings are. Sometimes when there's enough snow, there aren't even tire marks to follow; it's just a mess of white, and drivers will treat a 4-lane road like a 2-lane road since nobody can tell where the lanes are. Also, when it snows a lot, my car complains about radar visibility and disables Autopilot. I think there are some real hurdles regarding snow that they just aren't taking seriously enough.


There are a lot of variables, and it's a complex decision set. The human neural interface is pretty freaking amazing when you think about it. All of this processing power and decision making ability contained within a lump of mush that's powered by pizza and beer (or kale and kombucha if that's your thing). Not to get all trippy-hippy but sometimes the science part of my brain thinks about it and thinks through the details and goes....


----------



## MelindaV (Apr 2, 2016)

JWardell said:


> Here's the FSD video slowed down and display blown up:
> 
> 
> https://www.reddit.com/r/teslamotors/comments/bgb9or


Interesting that the top-right corner of the display, where we're used to seeing the speed limit sign, changes over to a stop sign or traffic signal when coming up to each of those.


----------



## TrevP (Oct 20, 2015)

I sure hope the augmented display makes it to production. It inspires confidence as to what the car is actually seeing!


----------



## BostonPilot (Aug 14, 2018)

Nautilus said:


> Of course nothing can compare with the "Magic Roundabout" in Swindon, UK:


Lol, I was in Swindon on business a while back and had to drive through that without any knowledge of how I was supposed to navigate it (while driving a stick and shifting with the wrong hand, on top of it!).

Basically I got on someone's bumper and followed so closely that nobody would try to cut in between the two cars... and then made a break for the exit I thought I needed. Not something I would repeat without proper instruction on "how-the-hell-do-I..."

Ah, Swindon.... the last refuge of terrible UK food (at least when I was there).


----------



## BostonPilot (Aug 14, 2018)

JWardell said:


> Automotive processors are commonly only 2x redundant. A car can pull over and stop if something goes very wrong. In many cases, they are barely over 1x, with just a watchdog to detect a failure of a single processor.
> Aircraft need 3x redundancy with voting, and are often 5x redundant...they can't just pull over!


The thing is, how does an FSD car pull over if it can't sense what the road is doing? What if you're in the middle of an entrance or exit ramp? What if you're on a twisty back road? What if you're in the middle of the Concord rotary!!! How do you not pull over into a parked car? I can think of some strategies, but none that are foolproof. I think you basically have to lock up the brakes and not change the steering angle and hope the car wasn't about to need a big steering correction to avoid a crash. I don't think you can "pull over".

Believe it or not (especially with all the 737 Max stuff), the airplane situation is much simpler. Airplanes do not commonly fly inches from other airplanes! And none of them are FSF (full self-flying); there's always a human to take over. And an airplane almost never has to make a hard left to avoid flying into a parked aircraft at 34,000 feet!


----------



## PNWmisty (Aug 19, 2017)

BostonPilot said:


> Airplanes do not commonly fly along inches from other airplanes! And none of them are FSF (full self flying). There's always a human to take over.


Don't blink, you might miss it:

https://www.wired.com/story/boeing-autonomous-plane-autopilot/


----------



## JWardell (May 9, 2016)

BostonPilot said:


> The thing is, how does a FSD car pull over if it can't sense what the road is doing? What if you're in the middle of an entrance or exit ramp? What if you're on a back twisty road? What if you're in the middle of the Concord Rotary!!! How do you not pull over into a parked car? I can think of some strategies, but none that are foolproof. I think you basically have to lock up the brakes and not change the steering angle and hope the car wasn't about to need a big steering correction to avoid a crash. I don't think you can "pull over".
> 
> Believe it or not (especially with all the 737 Max stuff) the airplane situation is much simpler. Airplanes do not commonly fly along inches from other airplanes! And none of them are FSF (full self flying). There's always a human to take over. And the airplane almost never has to make a hard left to avoid flying into a parked aircraft at 34,000 feet!


That's exactly why this is so difficult to accomplish. This is not a 1970s aircraft autopilot, which has very few inputs about what is around it (if any at all... just keep this heading). And it's also why full dual redundancy is required: if there is a failure, the failure is detected, and the secondary system can still handle pulling over and stopping.


----------



## BostonPilot (Aug 14, 2018)

JWardell said:


> That's exactly why this is so difficult to accomplish. This is not 1970s aircraft autopilot, which has very few inputs of what is around it (if any at all...just keep this heading). And also why full dual redundancy is required so if there is a failure, the failure is detected, and the secondary system can still handle pulling over and stopping.


Right, but the point is, when the two processors disagree, which processor is wrong? The problem isn't when one stops working... it's when they calculate an answer, but they get different answers. Then what do you do?


----------



## lance.bailey (Apr 1, 2019)

joakimus said:


> I am a wheelchair user and FSD is going to help me a lot with my driving  Hope to see it in the EU ASAP.


this kind of statement makes me happy and should make Tesla proud. I recently had a relative's tremors attributed to Parkinson's (a true diagnosis only comes postmortem) and I keep thinking... Elon: as we age and lose abilities, Tesla is uniquely positioned to step in to help. At a minimum I'm hoping for kick-*ss electric mobility scooters.


----------



## Needsdecaf (Dec 27, 2018)

PNWmisty said:


> Don't blink, you might miss it:
> 
> https://www.wired.com/story/boeing-autonomous-plane-autopilot/


That article has not aged well.  I don't think the public appetite for autonomous planes is going to be too high right now, especially from Boeing!


----------



## MarkB (Mar 19, 2017)

BostonPilot said:


> And the airplane almost never has to make a hard left to avoid flying into a parked aircraft at 34,000 feet!


There's TCAS. With TCAS, the units coordinate with each other such that when 2 aircraft approach at the same altitude, one goes up and one goes down, and/or that they turn so as to increase the separation.

I eventually see something like this for vehicles, not so much for accident prevention (cars can brake and come to a stop, planes cannot), but for efficiency.

When 2 vehicles are "tied" on a merge, who goes first? Some sort of electronic "coin toss" minutes prior (before each vehicle is even in visual range of the other), where one vehicle speeds up by a tenth of a km/h and the other slows by the same amount, would eliminate the tie and make the resulting merge seamless. It would help prevent phantom traffic jams.
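That electronic "coin toss" could be as simple as a tie-break both cars compute independently from their IDs, so no extra negotiation round trips are needed. A toy sketch (the VINs and the hashing scheme are made up for illustration; no such V2V protocol actually exists yet):

```python
import hashlib


def merge_tiebreak(id_a: str, id_b: str) -> str:
    """Toy deterministic tie-break: both vehicles hash the sorted
    pair of IDs the same way, so each independently arrives at the
    same winner with no extra messages. The winner holds speed or
    nudges up slightly; the loser slows by the same amount."""
    ids = sorted([id_a, id_b])
    digest = int(hashlib.sha256("|".join(ids).encode()).hexdigest(), 16)
    return ids[digest % 2]


# Both cars run the same computation and agree on who merges first,
# regardless of which order the arguments arrive in.
winner = merge_tiebreak("5YJ3E1EA7KF000001", "YV1AB1234J0000002")
```

Sorting the pair before hashing is what makes the result order-independent; hashing (rather than, say, "lowest VIN wins") keeps the outcome roughly fair across repeated encounters between the same two IDs.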

All vehicles would require it though....


----------



## lance.bailey (Apr 1, 2019)

MarkB said:


> All vehicles would require it though....


absolutely @MarkB, there needs to be cross-brand agreement on interface (API), medium (WiFi/Bluetooth) and protocol, so that an autonomous Volvo can negotiate with an autonomous Tesla, with an autonomous "and so on"

only when the cars start communicating and working together can we start to see advanced driving skills such as zipper merging (which is sadly far too advanced for the average bear on the freeway right now).


----------



## PNWmisty (Aug 19, 2017)

BostonPilot said:


> Right, but the point is, when the two processors disagree, which processor is wrong? The problem isn't when one stops working... it's when they calculate an answer, but they get different answers. Then what do you do?


Humans have this problem all the time in dangerous, near-crash situations. And they only have one processor that's giving them multiple different answers. It's called "indecision". Humans just pick one option and go with it, or sometimes they freeze up and do absolutely nothing but grip the steering wheel more tightly. Sometimes they pick the right thing to do, other times it was the wrong thing to do and they crash. It happens thousands of times every day of the year.

People who cast doubt on self-driving systems based upon the fact that something will eventually go wrong, are simply spreading fear, uncertainty and doubt about self-driving systems while ignoring the very real carnage that human drivers are certain to cause every day of the year. In the end, it will be statistics that inform which is safer. It would be a real shame to end up with more death, pain, suffering and expense simply because people had an irrational fear of "what if's".


----------



## Needsdecaf (Dec 27, 2018)

PNWmisty said:


> People who cast doubt on self-driving systems based upon the fact that something will eventually go wrong, are simply spreading fear, uncertainty and doubt about self-driving systems while ignoring the very real carnage that human drivers are certain to cause every day of the year. In the end, it will be statistics that inform which is safer. It would be a real shame to end up with more death, pain, suffering and expense simply because people had an irrational fear of "what if's".


You bring up a very good point. Overall, humans are NOT infallible when it comes to driving, by any means. Statistically speaking, we will get to a point where autonomous vehicles are better than humans at avoiding crashes. Probably soon, too.

However, fear, uncertainty and doubt are very real things, and they need to be accounted for and dealt with in order to get people to adopt new technologies, especially in this circumstance. I believe that the heart of your point comes down to responsibility and blame. When a person is at the wheel of a vehicle, no matter how good a driver they are or are not, they are liable for the actions of that vehicle. If they freeze or make a stupid decision, people realize that THEY are responsible, and more to the point, other people recognize that that individual driver is responsible. Our governments and our courts have recognized that basic fact: you take the wheel, you are responsible for what happens. (I would say that our driver training in this country is a joke, that there are way too many people with a license who shouldn't have one, and that we should have better driver training, stricter standards for testing, and mandatory re-testing over a certain age, but that's a different discussion.) Of course, people have blamed manufacturers for incidents caused by alleged "defective vehicles", and that gets into a greyer area of responsibility.

But that grey area leads us to the bigger point. No matter how accurate or good the computers are, one of two things is going to happen: either the computer will run up against a scenario it cannot process, or it will run into a scenario in which an accident is completely unavoidable under the laws of physics. The computers WILL have accidents at some point. It's just the law of statistics.

So then what happens? Who is to blame? You, the owner of the vehicle? Tesla, the manufacturer of the vehicle? The programmers who put forth the software contained within the FSD program? It's murkier here. This is what needs to be sorted out. And that's just from a legal standpoint. What happens from a personal standpoint if you are in one of those autonomous cars and it kills someone? If you're driving, you know deep down what happened. It's easier to process and get over mentally. But if you're not driving, then what do you think? It's not like being a passenger, where you literally have no control, or being in a fully autonomous vehicle where there is no opportunity for you to take control (like a train, for instance). If you had the opportunity to take control, and you didn't, should you have? You'll have to live with that. Back to the legal side: what if you do have the opportunity to take control, and you panic and override the computer, and then crash? The manufacturer might say that "if the driver hadn't taken control, the car would have avoided the crash, but the driver panicked and grabbed the wheel mid-maneuver, too late to do anything other than what the computer would have done, so they crashed". Then what? Can Tesla wash their hands and say "well, user intervention, not our fault. If the car had been left alone, everything would have been fine, but they grabbed the wheel / hit the gas / hit the brakes and that caused the crash"?

Volvo has already stated that they will accept responsibility for any crash that happens while their future vehicles are operating autonomously. To my knowledge, they are the only ones who have said so. But in the case of a user override like the one above, does that count as "operating autonomously"?

All of this needs to be sorted out before AVs are fully running around. And because you have emotion involved, as well as legal liability, these things won't easily be solved any time soon. At least in my opinion, judging by how we've interacted with machinery from this standpoint over the last 100 years.


----------



## PNWmisty (Aug 19, 2017)

Needsdecaf said:


> The computers WILL have accidents at some point. It's just the laws of statistics.
> 
> So then what happens? Who is to blame? You the owner of the vehicle? Tesla, the manufacturer of the vehicle?


The manufacturer will be responsible which is why Tesla has a huge opportunity here. The disruption of the multi-billion dollar auto insurance industry. Currently, a new car buyer with a loan is required to insure the car and that cost is rolled into the monthly payment. By becoming the "insurance agent" at a time when crashes are declining they can simultaneously lower "insurance rates" and reap huge profits in a new industry. There are many ways the mechanics of this could play out so I don't care to discuss the finer points, suffice to say, there is a lot of profit to be had if you make cars that are safer than the current status quo by orders of magnitude. That's the bottom line.


----------



## Needsdecaf (Dec 27, 2018)

PNWmisty said:


> The manufacturer will be responsible which is why Tesla has a huge opportunity here. The disruption of the multi-billion dollar auto insurance industry. Currently, a new car buyer with a loan is required to insure the car and that cost is rolled into the monthly payment. By becoming the "insurance agent" at a time when crashes are declining they can simultaneously lower "insurance rates" and reap huge profits in a new industry. There are many ways the mechanics of this could play out so I don't care to discuss the finer points, suffice to say, there is a lot of profit to be had if you make cars that are safer than the current status quo by orders of magnitude. That's the bottom line.


Again, that's only possible with L5 Autonomy WITHOUT the human being able to interact with the vehicle. You're talking about a box on wheels with no controls inside whatsoever. As long as there is the possibility that human interaction can interfere with the system, there is no way to argue that the manufacturer can be solely responsible for the actions of the vehicle. At least not the way our current legal system is set up.

As far as the $ possibility goes, I can see that by becoming the insurer, manufacturers will absorb that profit. I'm not sure that is fully likely anytime soon either, as I'll still want to have insurance for when I am driving a non-autonomous vehicle, and I'll want to have umbrella insurance just because this country is litigious by nature.

Where is the rest of the profit coming from?


----------



## PNWmisty (Aug 19, 2017)

Needsdecaf said:


> As long as there is the possibility that human interaction can interfere with the system, there is no way to argue that the manufacturer can be solely responsible for the actions of the vehicle. At least not the way our current legal system is set up.


Disruption does not play by the current rules, that's why it's called disruption. With connected cars, the insurer can tell how many miles are autonomous, how risky the driving, whether the car was in autonomous mode or human controlled at the time of any crash, etc, etc. etc.

The bottom line is that autonomy will dramatically reduce deaths, injuries, disabilities and property damage to such an extent that the insurer will base their rates on the percentage of time the car remains in autonomous mode. And because the manufacturer will also be the insurer, they will accept the liability for all accidents. Lower insurance rates are powerful motivators of behavior. This will also drive sales of autonomous cars.
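Just to illustrate the shape of it (toy Python, numbers completely invented; not a claim about how Tesla would actually price anything):

```python
def monthly_premium(base, autonomous_fraction, av_risk_ratio=0.1):
    """Toy premium model: human-driven time pays the base rate,
    autonomous time pays base * av_risk_ratio, reflecting an
    assumed 10x lower accident rate in autonomous mode."""
    human = base * (1 - autonomous_fraction)
    autonomous = base * autonomous_fraction * av_risk_ratio
    return round(human + autonomous, 2)

assert monthly_premium(100.0, 0.0) == 100.0   # never autonomous: full rate
assert monthly_premium(100.0, 1.0) == 10.0    # always autonomous: 10x cheaper
assert monthly_premium(100.0, 0.5) == 55.0    # half and half
```

The more the car stays in autonomous mode, the lower the rate, which is exactly the behavioral incentive I'm talking about.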

I don't care to discuss the mechanics of exactly how this will all play out, suffice to say, capitalism works and billions of dollars in savings will benefit Tesla and their customers once autonomy is developed sufficiently for it to be approved by regulators. I'm in the camp that thinks this will happen sooner rather than later (although not on Elon's very optimistic schedule). Once autonomy is statistically safer by a significant margin, regulators would have to approve it. Doing otherwise would basically amount to negligent manslaughter on a massive scale.


----------



## tivoboy (Mar 24, 2017)

BostonPilot said:


> Right, but the point is, when the two processors disagree, which processor is wrong? The problem isn't when one stops working... it's when they calculate an answer, but they get different answers. Then what do you do?


It comes down to whichever has been RIGHT more often than not. There is an algo for that, as well as a reporting metric to indicate which one, which elements should be replaced.

Not you of course, but people often tend to think that every processor is the SAME. Every SSD or disk drive is the SAME. Every element of I/O is the SAME. It's not. There are production and model and version deficiencies that can make their way into the OUTPUT side of the equation (as many bits are woven together). In the FUTURE, the systems themselves will be able to say "this f..king part over there isn't pulling its weight relative to what its specs say". And we'll get RECALLS and UPDATES.

One would think that the organic material in this world would have developed the ability to do the same by now, but they haven't and they don't. Well, unless it's about politics and well then that is all just b.s. anyway.


----------



## Needsdecaf (Dec 27, 2018)

PNWmisty said:


> Disruption does not play by the current rules, that's why it's called disruption. With connected cars, the insurer can tell how many miles are autonomous, how risky the driving, whether the car was in autonomous mode or human controlled at the time of any crash, etc, etc. etc.


Disruption still has to follow the legal rules. Business rules are different. And legal rules are much slower and more difficult to get changed.

I'm not doubting that this will happen. I can see the economic benefits and so does the rest of the automotive industry. Or at least they are moving in this direction at the behest of their shareholders, which is the same thing these days. I feel that the tech needs to mature, but it won't be terribly long before it's ready. But the legality? Given the current state of politics in this country, I don't see regulatory approval happening in the near future.


----------



## BostonPilot (Aug 14, 2018)

tivoboy said:


> It comes down to whichever has been RIGHT more often than not. There is an algo for that, as well as a reporting metric to indicate which one, which elements should be replaced.


I've worked on safety critical systems, and fault tolerant systems, and I can tell you that this is not really correct, especially for non-rollback systems. We can break the systems down into two broad categories: realtime and non-realtime. For a non-realtime system (like the fault tolerant computer system I worked on, Sequoia) you can roll back the calculations and try again and hope that you make it through the calculation this time. This will help when the error was a transient error. (but then if the error persists you still have to have redundant hardware). For a realtime system, that's not generally going to work, although in the case of the Tesla computer system I could see them skipping a frame and seeing if it could continue on with subsequent frames. But if it's a hard failure (happens every time) then you don't know which one is correct. So, for a realtime system you would use triple redundancy with a voting system.
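To make the 2-of-3 voting concrete, here's a toy sketch (Python just for illustration; the names and structure are invented, not taken from any real flight or FSD computer). Three replicas produce a frame output; a majority vote picks the answer and flags the dissenting unit:

```python
from collections import Counter

def vote(outputs):
    """Majority-vote over three replica outputs for one frame.

    Returns (winner, suspect_indices). With 2-of-3 agreement the
    dissenter is identified; with three-way disagreement we can only
    report a fault, not localize it.
    """
    counts = Counter(outputs)
    winner, n = counts.most_common(1)[0]
    if n >= 2:
        suspects = [i for i, out in enumerate(outputs) if out != winner]
        return winner, suspects
    return None, [0, 1, 2]  # total disagreement: no usable answer this frame

# 2-of-3: replica 1 is voted out, computation continues
assert vote(["brake", "steer", "brake"]) == ("brake", [1])
# unanimous: no suspects
assert vote(["brake", "brake", "brake"]) == ("brake", [])
```

That's the whole point of the third unit: with only two, a mismatch tells you something failed but not which half to trust.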

(if you're interested in the fault tolerant system I worked on: you could read this article. The chief scientist who designed that system previously designed fault tolerant computer systems for spacecraft.) It wasn't a realtime system, it was used for transaction processing for banks, etc, so it could use the rollback recovery mechanism which interestingly allowed it to recover from certain kinds of software bugs as well as hardware faults.



tivoboy said:


> Not you of course, but people often tend to this that every processor is the SAME. Every SSD or disk drive is the SAME. Every element of I/O is the SAME. It's not. There are production and model and version difficiencies that can make their way into the OUTPUT side of the equation


I'll try to clarify this statement a little. There are always going to be differences between different mask levels of an electronic component. At Sequoia each "processor" actually had two processor chips which were run in lock step with their outputs compared. They had to be the same mask revision because some of the internal micro-state of the processor can make it to memory (for instance, when an exception or interrupt causes internal micro-state to be saved to the stack). For that system to work, the micro-state had to be identical (or the comparators would have signaled a fault). And certainly you are right that if they fix a bug in the chip with a new revision of the chip, that can affect the macro state. But again, the solution is that you always run identical mask revisions when you are using a hardware comparator.

If you have a microprocessor that can move non-deterministic data to memory (say, some unused bits in a register that are undefined) then generally you can't use that microprocessor in a lock-step hardware compared system because you can't really tell which differences are significant and which aren't. You can still use a processor like that if the software is written a certain way, i.e. typically you would break your work up into "frames", and at the end of each frame you have certain data output. You can then just compare the output to make sure the output from each processor is identical. That doesn't work when you are trying to make a general purpose fault tolerant system, but it works for a dedicated hardware/software system like the Boeing 777 flight control computer, or the Tesla FSD system.
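For the curious, the frame-output comparison scheme I described can be sketched like this (toy Python, invented names; a real system compares in hardware, not like this):

```python
def run_frame(compute_a, compute_b, sensor_frame):
    """Dual-redundant frame step: run identical code on identical data
    on two units and compare only the defined end-of-frame outputs.

    A mismatch tells you THAT something failed, not WHICH unit failed.
    """
    out_a = compute_a(sensor_frame)
    out_b = compute_b(sensor_frame)
    if out_a != out_b:
        return None, "FAULT"   # hand off to whatever the fallback strategy is
    return out_a, "OK"

# Stand-ins for the real planner; `glitchy` simulates a bad unit.
plan = lambda frame: ("steer", round(sum(frame), 3))
glitchy = lambda frame: ("steer", round(sum(frame), 3) + 1)

assert run_frame(plan, plan, [0.1, 0.2]) == (("steer", 0.3), "OK")
assert run_frame(plan, glitchy, [0.1, 0.2]) == (None, "FAULT")
```

Note that only the defined frame outputs are compared, which is what lets you tolerate non-deterministic internal state between frames.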

But still, you need a third processor if you want to know which one is broken when they don't compare, or if you want to be able to continue computing when you hit a non-transient error.

Final (joking) note: My first law of computer errors: "It's always the power supply". (because you would be amazed at how many glitchy crashes etc. in computer systems are all traceable to bad power supplies).


----------



## BostonPilot (Aug 14, 2018)

PNWmisty said:


> Humans have this problem all the time in dangerous, near-crash situations. And they only have one processor that's giving them multiple different answers. It's called "indecision". Humans just pick one option and go with it, or sometimes they freeze up and do absolutely nothing but grip the steering wheel more tightly. Sometimes they pick the right thing to do, other times it was the wrong thing to do and they crash. It happens thousands of times every day of the year.


Right, but we can't rewire the human to work 100% of the time. We CAN produce fault tolerant computer systems that are extremely reliable. There might be reasons for Tesla to not want to do that: it might blow their power budget for the computer system, it might cost more money, it might no longer fit in the space they reserved for it, etc. etc. but the problem with NOT going triple redundant is product liability. They will get their asses handed to them in court when the FSD car plows into a crowd of pedestrians, or runs down the little kid. It's not "can they produce a perfect system" it's "could they have produced a better system and chose not to in order to save money" that will get them killed in court.

BTW, I'm not saying triple redundant is the only solution. They could, for instance, have a second computer system in "hot standby" mode, so that when the first computer system detects a fault it switches over to the second computer system long enough for that computer to get the car to a safe parking location. This has the problems of cost (doubles the cost of the system) and space (needs twice as much space), but might help with the power budget (the standby system could use very little power... during startup it runs some power-on self-tests and then basically goes to sleep like your phone, using almost no power until it's needed).
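A rough sketch of that hot-standby idea (purely illustrative toy Python, nothing to do with Tesla's actual design):

```python
class HotStandby:
    """Primary/standby sketch: the standby stays idle until the primary
    reports a fault, then takes over just long enough to pull over."""

    def __init__(self, primary, standby):
        self.primary = primary
        self.standby = standby
        self.active = "primary"

    def step(self, frame):
        if self.active == "primary":
            out, status = self.primary(frame)
            if status == "FAULT":
                self.active = "standby"   # one-way switch: limp to safety
                return self.standby(frame)
            return out
        return self.standby(frame)

healthy = lambda f: (f"drive:{f}", "OK")
failing = lambda f: (None, "FAULT")
pull_over = lambda f: "pull_over"

car = HotStandby(failing, pull_over)
assert car.step(1) == "pull_over"   # fault detected, standby takes over
assert car.active == "standby"      # and stays in charge
```

The standby doesn't need the full FSD capability, just enough to reach a safe stop, which is why it can be cheap and low-power.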

My point is that dual-redundancy isn't sufficient for a level 5 autonomous car. They need a way to safely get the car off the road when they experience a computer failure.


----------



## M3OC Rules (Nov 18, 2016)

BostonPilot said:


> My point is that dual-redundancy isn't sufficient for a level 5 autonomous car. They need a way to safely get the car off the road when they experience a computer failure.


Great discussion. Getting off the road safely seems really important. I've always been surprised that current systems just stop in the road, like when you stop responding to the Autopilot nag.

That being said if FSD is twice as safe as a human driver why would the product liability be more than insurance of the human driver?


----------



## PNWmisty (Aug 19, 2017)

BostonPilot said:


> Right, but we can't rewire the human to work 100% of the time. We CAN produce fault tolerant computer systems that are extremely reliable.


Neither humans nor computers can drive with 100% reliability. More extensive driver training is a scientifically proven way to lower the accident rate but we don't require that. In the end, the safety of any system can be appraised statistically. Safer is safer regardless of how you get there.



> My point is that dual-redundancy isn't sufficient for a level 5 autonomous car.


If it's statistically safer than human drivers it would be criminally negligent not to approve it.


----------



## Wooloomooloo (Oct 29, 2017)

BostonPilot said:


> My point is that dual-redundancy isn't sufficient for a level 5 autonomous car. They need a way to safely get the car off the road when they experience a computer failure.


Agreed, although as you imply, you don't need to have a hot/hot/hot situation with identical capabilities - just enough capability to get the car to safety. As long as the backup/redundancy can bring the vehicle to a safe stop in the event of a failure of some kind, then the solution is just fine, and in fact most critical systems have a hot/hot swap and then a kind of down-step hot swap. In most situations, dropping from a Level 5 capable machine to even just a level 3 capable machine, minus the human intervention requirement, could be enough, and obviously the hardware requirements are much lower.

If you think about something as critical as a parachute, there's the main chute, a backup (which is usually just as capable) and then an emergency chute, which lacks the ability to maneuver and will probably bounce you off the ground at 30 mph with a few broken bones, but you'll live.

Also consider scenarios where the Computer isn't the thing that fails. The front camera could fail, or the screen hit by debris, or the feed simply stops to the processor due to hardware failure. What then? Redundant cameras, redundant wiring, redundant power supply, you could go on and on. The car simply has to be able to stop quickly and get to the safest place possible.

All interesting discussion though!


----------



## Wooloomooloo (Oct 29, 2017)

BostonPilot said:


> Also, given their history of pricing, I suspect that at some point, given that all the hardware is already in the (newer) cars, they'll be willing to let people upgrade for far less. I mean, if they have 1,000 customers who won't pay $6,000 but would pay $2,000, that's $2 million dollars they can make for essentially no cost (except maybe lost future sales @ 6K).
> 
> I know that I personally won't pay for it given how badly AP currently works in my Model 3... I still think they have a long way to go. It works okay on a highway, but really lousy on back roads. I don't think I've ever been able to let it drive for more than 10 minutes without intervention on back roads. A week or two ago I had it on with my wife in the car as we approached the Concord rotary here in Mass. There was a lot of stop and go waiting to go through the rotary, and it does very well in stop-and-go leading up to a light, or in this case the rotary. I decided to see how it would do on the rotary. Big mistake! It started to turn right to join the rotary, and then suddenly yanked the steering wheel so hard to the left that I lost grip of the wheel for a split second. Basically it decided that instead of following the traffic circle it would turn hard left... not sure if it decided to try to reverse direction or what. Scared the crap out of my wife, definitely startled me even though I was expecting it to possibly screw up. It's not the only event like that I've experienced, thus my feeling that it's far from ready for prime time.
> 
> Unlike some of you guys who found the chip description boring, I've worked on several supercomputers so I found it very interesting. In the mid 90s I worked on software for the Intel Paragon which was the largest supercomputer in the world at the time. 143 GFlops. This was a computer used at the national labs for all the stuff you expect to run on that kind of hardware. The Tesla computer is almost exactly 1,000 times faster than the Paragon, so paint me impressed by the hardware (obviously the software still needs some work ).


The AP is only supposed to be used on dual-carriage highways, not single-lane backroads. The fact that you can get away with it sometimes is a testament to how well it works, not how badly, in my opinion.



> One thing that surprised me was the dual redundancy. I've done work with Boeing on the 777 and worked with their safety certification guy at the time. Rule of thumb is that with 2 processors all you know is that one of them is bad (but not which one). You really need 3 to vote out the bad one. So, say the car is driving itself on an offramp and the two chips realize they disagree... What are you going to do? You don't know which one to believe... but you can't just tell the driver to take over in a level 5 car... I'll be interested to hear what their strategy is. I can think of a few, but hopefully the new computer they said they're halfway done designing will have triple redundancy.
> 
> If anyone is curious, the Boeing 777 has three processors for the flight control computer. Those 3 computer systems are constantly compared and will vote out the one that doesn't agree with the other two. But, on each of those three computer, there are actually 3 different processor chips made by different manufacturers so that even a chip bug can't cause a bad result. If any geeks are interested, there are several good write-ups about the design of the system, here's one: here


Interesting point, and I read the replies after this but it seems some responses didn't really get what you were saying. I don't know anything about the Boeing 777, but I've worked in the AI field for some time and what I know about ML and autonomous systems is that most of them are not designed to make binary decisions, or take binary actions. In the Tesla (and I'm honestly guessing here) I suspect the two processors will both model the scenario and model multiple possible actions, with the action having the highest level of confidence between the two being the one that is acted upon.

It would be exceptionally unlikely that the two processors would model two opposing actions based on the same inputs with exactly the same level of confidence (for example one says brake, the other says accelerate).
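If my guess is right, the arbitration might look roughly like this (pure speculation, toy Python; the action names and confidence scheme are made up):

```python
def arbitrate(candidates_a, candidates_b):
    """Pick the single action with the highest confidence across both
    units. Each argument maps action -> confidence in [0, 1]."""
    merged = {}
    for scores in (candidates_a, candidates_b):
        for action, conf in scores.items():
            merged[action] = max(merged.get(action, 0.0), conf)
    return max(merged, key=merged.get)

# Two units modeling several candidate actions with confidences
unit_a = {"brake": 0.90, "swerve": 0.40}
unit_b = {"brake": 0.85, "accelerate": 0.10}
assert arbitrate(unit_a, unit_b) == "brake"
```

An exact confidence tie between two opposing actions would be the degenerate case, and as I said, I'd expect that to be exceptionally rare.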

Again, I don't know about your Boeing, but I assume the computer is deciding things like angle of attack, or power to the engines, and the redundancy is more to ensure that one of the computers isn't taking bad readings rather than each having its own algos and coming up with different responses to the same inputs, such as nose-down vs. increased power in the event of a stall.

Again - educated guess and my guess is as good or bad as yours.


----------



## BostonPilot (Aug 14, 2018)

Wooloomooloo said:


> The AP is only supposed to be used on dual-carriage highways, not single-lane backroads. The fact that you can get away with it sometimes is a testament to how well it works, not how badly, in my opinion.


I don't disagree, but as far as I know, the EAP isn't a different algorithm from FSD... so when I look at how poorly EAP works on backroads, I don't see how they're going to make FSD radically better in a short period of time (1-1.5 years!) so that it can work on backroads. Also, I've used it on the highway and even there I frequently have to take over. There are so many problems I don't want to get stuck on the minutiae, but on a recent drive with a friend who also has a Model 3 I had to take over maybe half a dozen times while on the highway. There is no question in my mind that they'll make the algorithms better, it's just the timeframe that Elon specified that I'm skeptical about. I think Tesla is at least 10 years away from Level 5 FSD, not 1-2 years like Elon said. That's a separate issue from the redundancy of the processor system, though.



Wooloomooloo said:


> In the Tesla (and I'm honestly guessing here) I suspect the two processors will both model the scenario and model multiple possible actions, with the action having the highest level of confidence between the two being the one that is acted upon.


No, they talked about this... the two processors are running identical code on identical data, and the results are being compared to be sure they got the exact same answer. Classic dual redundancy. This is perfectly adequate for Level 2/3 because you have a driver to take over if the computer system detects an error. For Level 5 it's not adequate because the car still needs to get to a safe place to stop, IMHO.

I can think of some strategies... for instance if you know you can stop the car at the current speed it's traveling at in 2.5 seconds (by slamming on the brakes) then maybe you always do a future path prediction of 2.5 seconds so that if the computer system detects a fault you slam the brakes on and follow the computed future path. This has the disadvantage that if things change (a child steps off the curb, say) that you won't take that into account... But the thing is, if you already have dual redundant hardware it's not THAT expensive to make it triple redundant (or have a hot spare) so the question is why wouldn't you just add the necessary hardware to be able to continue running your FSD algorithm in the face of one processor going bad?
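Here's a toy version of that precomputed-stop-path idea (all invented, just to show the shape of it): every healthy frame refreshes an emergency braking path, and on a detected fault you replay the last known-good one.

```python
class FallbackPath:
    """Keep a ~2.5 s emergency stop path computed while healthy, so a
    detected comparator fault still has something safe to execute."""

    def __init__(self):
        self.emergency_path = None  # last known-good braking trajectory

    def step(self, live_plan, emergency_plan, fault):
        if not fault:
            self.emergency_path = emergency_plan  # refresh every frame
            return live_plan
        # Fault: brake along the path computed while we still trusted
        # the computer. Downside: it can't react to anything new.
        return self.emergency_path

fb = FallbackPath()
assert fb.step("cruise", ["brake@t0", "stop@t2.5"], fault=False) == "cruise"
assert fb.step(None, None, fault=True) == ["brake@t0", "stop@t2.5"]
```

The child-steps-off-the-curb problem is exactly that last comment: the replayed path is blind to anything that happens after the fault.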



Wooloomooloo said:


> It would be exceptionally unlikely that the two processors would model two opposing actions based on the same inputs with exactly the same level of confidence (for example one says brake, the other says accelerate).


There's no basis for saying that. When processors fail, you can't predict in which ways they are going to fail. They can fail hard (which is actually good because then you know they failed) but they can fail in subtle ways, so that you think they're producing good computations when in fact they are producing garbage. But in any case, we already know that Tesla is running the two processors in parallel and comparing the results. The question is, when there's no steering wheel what will the car do when the computer system detects that there is an error, but it doesn't know which processor is bad?

This is probably all moot because I really doubt FSD will be ready in time to run on this processor system. Elon stated that they're already halfway through the design of the next computer system (that he said would be 3 times faster than the HW3 one). Maybe that one will have triple redundancy, or maybe a hot spare. I guess we'll have to wait and see!


----------



## Wooloomooloo (Oct 29, 2017)

BostonPilot said:


> I don't disagree, but as far as I know, the EAP isn't a different algorithm from FSD... so when I look at how poorly EAP works on backroads, I don't see how they're going to make FSD radically better in a short period of time (1-1.5 years!) so that it can work on backroads. Also, I've used it on the highway and even there I frequently have to take over. There are so many problems I don't want to get stuck on the minutia, but on a recent drive with a friend who also has a Model 3 I had to take over maybe half a dozen times while on the highway. There is no question in my mind that they'll make the algorithms better, it's just the timeframe that Elon specified that I'm skeptical about. I think Tesla is at least 10 years away from Level 5 FSD, not 1-2 years like Elon said. That's a separate issue from the redundancy of the processor system, though.


Before I start, I want to get this out of the way: I am an FSD skeptic like you, within the timeframes and with the hardware Tesla are talking about. My background is in NLP as well as statistical modeling in risk management within financial services, so I have some idea of the sheer compute needed to model even fairly basic scenarios with any degree of confidence. I said on another forum that I thought anything close to Level 5 autonomy is 10-12 years away, so again, agreed.

Anyway - the EAP and FSD algos will be quite different, if only for the fact that FSD will be using all 8 cameras, not just the front-facing one that EAP uses. If you remember, when Tesla went from AP1 to AP2, most people said it was worse.



> No, they talked about this... the two processors are running identical code on identical data, and the results are being compared to be sure they got the exact same answer. Classic dual redundancy. This is perfectly adequate for Level 2/3 because you have a driver to take over if the computer system detects an error. For Level 5 it's not adequate because the car still needs to get to a safe place to stop, IMHO.
> 
> I can think of some strategies... for instance if you know you can stop the car at the current speed it's traveling at in 2.5 seconds (by slamming on the brakes) then maybe you always do a future path prediction of 2.5 seconds so that if the computer system detects a fault you slam the brakes on and follow the computed future path. This has the disadvantage that if things change (a child steps off the curb, say) that you won't take that into account... But the thing is, if you already have dual redundant hardware it's not THAT expensive to make it triple redundant (or have a hot spare) so the question is why wouldn't you just add the necessary hardware to be able to continue running your FSD algorithm in the face of one processor going bad?


I hadn't watched the presentation until last night, so I didn't know that. However a third processor with independent inputs, software and power, would obviously be 50% more expensive or use 50% more capacity at the fab. For Tesla, who seem to have basic issues in being an efficient company, that's probably a big deal. Elon did say that the chances of one of the CPUs failing is lower than the chance of someone losing consciousness, and from a hardware perspective that's probably true.



> There's no basis for saying that. When processors fail, you can't predict in which ways they are going to fail. They can fail hard (which is actually good because then you know they failed) but they can fail in subtle ways, so that you think they're producing good computations when in fact they are producing garbage. But in any case, we already know that Tesla is running the two processors in parallel and comparing the results. The question is, when there's no steering wheel what will the car do when the computer system detects that there is an error, but it doesn't know which processor is bad?


This is where you lose me a little. Of course there is a basis for what I said, for two reasons. Firstly, CPU failures (or associated hardware failures) are almost always complete. It's extremely rare for a processor to be able to execute code and get the wrong answer without basic checksums flagging the issue beforehand. Yes, I know of examples of microcode bugs where that occurs, but those are universal across all processors with that microcode; since these two chips are identical, something would need to occur post-installation that is subtle enough not to be noticed by integrity checks, yet enough to, say, drop a carry somewhere and come up with a different answer than the other processor. I think it's fair to say that's going to be exceptionally unlikely, and with the right software checks on startup, even if it were possible, it could be easily detected before starting any journey.

The second reason is pretty much what I said above, that almost all autonomous systems constantly consider many paths/decisions continuously and actions are not binary, so I think the chances of a hardware failure in one chip resulting in a bad decision is very small - far lower than a driver doing the same thing, or a faulty camera or some other optical issue.

Your question was specifically, if the processors come up with different answers, which one wins? In a three-way you have a "minority report" type scenario where a 2-1 situation results in the 2 overriding the 1. I honestly don't know, but if I was designing the system, I'd have a few reference algos and have both CPUs calculate against it and compare the answers to the reference before every journey, so you're limiting your risk to a very precise and subtle failure during a journey.
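The kind of pre-journey reference check I have in mind might look like this (toy Python with invented numbers; a real check would run frozen perception/planning test vectors with known answers through each unit):

```python
import math

# (input, expected) pairs with precomputed known-good answers
REFERENCE_CASES = [
    ((3, 4), 5.0),
    ((6, 8), 10.0),
]

def self_test(unit):
    """Run a compute unit against the reference cases before departure.
    Any miss grounds that unit (and with only two units, the car)."""
    return all(abs(unit(case) - want) < 1e-9 for case, want in REFERENCE_CASES)

good_unit = lambda p: math.hypot(*p)        # computes correctly
bad_unit = lambda p: math.hypot(*p) + 0.5   # simulated subtle fault

assert self_test(good_unit) is True
assert self_test(bad_unit) is False
```

This doesn't catch a failure that develops mid-journey, of course, which is why I said you're limiting the risk to a very precise and subtle in-trip failure.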



> This is probably all moot because I really doubt FSD will be ready in time to run on this processor system. Elon stated that they're already halfway through the design of the next computer system (that he said would be 3 times faster than the HW3 one). Maybe that one will have triple redundancy, or maybe a hot spare. I guess we'll have to wait and see!


Agreed. I think it's very possible for Tesla to vastly improve on EAP with the new hardware and using more cameras, which is something to look forward to.


----------



## PNWmisty (Aug 19, 2017)

BostonPilot said:


> No, they talked about this... the two processors are running identical code on identical data, and the results are being compared to be sure they got the exact same answer. Classic dual redundancy. This is perfectly adequate for Level 2/3 because you have a driver to take over if the computer system detects an error. For Level 5 it's not adequate because the car still needs to get to a safe place to stop, IMHO.


You are missing the larger picture.

The computer replaces a fallible human. The chance of one processor failing is LOWER than the chance of the human losing consciousness! By many orders of magnitude.

You shouldn't spread Fear, Uncertainty and Doubt when the alternative will kill and injure more people. It's about RELATIVE safety, there are no absolute guarantees in this life.


----------



## TrevP (Oct 20, 2015)

I’ve said it before, but the end game of Autopilot is safety, but also continuous improvement. Tesla is already working on the next-gen chip and will continue to do so after that, and so forth. I actually believe that their systems will get so good as to predict possible conflicts well in advance and act even more safely. This is already in its infancy with pedestrian behaviour, as stated in their presentation. Combine that with triple-redundancy systems, faster CPUs and better software and you have an outcome that will eventually outperform humans. Don’t believe me? Watch any YouTube video where AIs are tasked with learning video games and eventually become way better than humans. I can see a day not too far from now where AI cars race on the track instead of humans. The focus will change from driver skill to software coding skill.

Eventually FSD will be seen as a standard safety feature, much like seatbelts, airbags, AEB, anti-lock brakes, etc. Tesla will be the first to have it standard on their cars and the rest of the industry will follow suit; it might even become legislated as a requirement in due time.

However I don’t see steering wheels going away anytime soon. Not until the systems have been proven for years and legislation has been passed to allow it so one can ignore all these renders of cars with retractable wheels or none at all.


----------



## Needsdecaf (Dec 27, 2018)

PNWmisty said:


> You are missing the larger picture.
> 
> The computer replaces a fallible human. The chance of one processor failing is LOWER than the chance of the human losing consciousness! By many orders of magnitude.
> 
> You shouldn't spread Fear, Uncertainty and Doubt when the alternative will kill and injure more people. It's about RELATIVE safety, there are no absolute guarantees in this life.


Source for these orders of magnitude statistics?

Also, questioning the claims that Elon has made is not spreading FUD, sorry. What I see above is a logical and fact based discussion. That is the opposite of FUD. FUD is based on emotions and without facts and statistics. The discussion above is being given by people with experience in the type of systems and statistics which are crucial to the implementation of any kind of autonomous driving. That's not FUD.



TrevP said:


> However I don't see steering wheels going away anytime soon. Not until the systems have been proven for years and legislation has been passed to allow it so one can ignore all these renders of cars with retractable wheels or none at all.


Agreed. And until such time as the driver is completely unable to take control of the machine, you don't have true autonomous driving. And you won't be able to place 100% of the liability on the manufacturer, or get lower insurance premiums.


----------



## PNWmisty (Aug 19, 2017)

Needsdecaf said:


> Also, questioning the claims that Elon has made is not spreading FUD, sorry. What I see above is a logical and fact based discussion. That is the opposite of FUD. FUD is based on emotions and without facts and statistics.


"FUD" stands for "fear, uncertainty and doubt". When someone starts saying that two redundant computers are not reliable enough to replace a fallible human, a human that can pass out, have a heart attack, become terrified, become distracted, zone out, etc., I think that qualifies as spreading fear, uncertainty and doubt.


----------



## Needsdecaf (Dec 27, 2018)

PNWmisty said:


> "FUD" stands for "fear, uncertainty and doubt". When someone starts saying that two redundant computers are not reliable enough to replace a fallible human, a human that can pass out, have a heart attack, become terrified, become distracted, zone out, etc., I think that qualifies as spreading fear, uncertainty and doubt.


I quite understand what FUD stands for. And I will disagree with you. Having a rational discussion regarding the computing requirements needed to get to full autonomy does not qualify as FUD in my book. They seem to be pretty well versed in the technology of automated systems. I haven't seen any part of their discussion trying to spread FUD.

Just because it doesn't agree with your viewpoint doesn't mean it's FUD. You keep reiterating that humans are fallible. I won't dispute that and I don't think most here would either. But computing systems aren't perfect either. Hell there are dozens and dozens of posts on this forum discussing the bugs in the Tesla software. Buggy backup cameras, video, displays glitching, etc. No one wants a glitch in the programming causing an accident. I'm sure you don't either. I don't claim to be any kind of expert on these systems so I enjoy listening to those who have some knowledge discuss it. I don't see anyone of the opposing view jumping in with any technical knowledge refuting what they have said so far.


----------



## PNWmisty (Aug 19, 2017)

Needsdecaf said:


> I haven't seen any part of their discussion containing anyting trying to spread FUD.


We will have to disagree on that. IMO, simply arguing that two redundant computers don't provide adequate safety is FUD. We don't even require humans to have a back-up in case of heart attack or passing out, yet that is perfectly acceptable? It's FUD. That's my opinion.


----------



## BostonPilot (Aug 14, 2018)

PNWmisty said:


> You are missing the larger picture.
> 
> The computer replaces a fallible human. The chance of one processor failing is LOWER than the chance of the human losing consciousness! By many orders of magnitude.
> 
> You shouldn't spread Fear, Uncertainty and Doubt when the alternative will kill and injure more people. It's about RELATIVE safety, there are no absolute guarantees in this life.


I actually don't think I'm missing the bigger picture. The question is not whether FSD will or will not be safer than human beings; it's whether the manufacturer took reasonable and prudent steps to make the system safe. If I can point to an $8,000 piece of avionics that has triple redundancy, and a manufacturer points to a $40,000 car that only has dual redundancy and claims they had no choice because triple redundancy would have been too expensive, they will get laughed out of court. The point is, the technology is well understood from the milspec and aviation industries, so why wouldn't you use it in automotive?

The existing types of protection systems we see in automotive, like ABS and AEB, are additive systems: if your ABS fails, your brakes still work, you just won't have ABS helping you avoid wheel lockup. If AEB fails, it wasn't helping you avoid hitting something, but it didn't remove your ability to steer and brake the vehicle yourself to avoid an accident. Notice that nobody uses steer-by-wire or brake-by-wire in automotive applications, and "throttle-by-wire" has cost companies dearly. Critical systems shouldn't be purely electronic unless a very high standard of fault tolerance is built in.

The measure won't be "is this better than a human?", it will be "is this as good as the FSD coming from Google, Jaguar, whoever?" That's where the product liability comes in and the trial lawyers start rubbing their hands together. If Google uses triple redundancy or a hot spare and Tesla doesn't, Tesla will probably have large judgements against it when there is an accident.
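
The failure modes differ too. Here's a toy sketch of the distinction (nothing like a real avionics voter, which compares far richer state, but it shows the principle): dual lockstep can only *detect* a disagreement, while triple redundancy can *outvote* a single bad unit and keep driving:

```python
# Toy illustration: fault DETECTION (dual lockstep) vs fault MASKING
# (triple modular redundancy). Real voters compare much richer state
# than a single steering value.

def dual_check(a, b):
    """Dual lockstep: a mismatch tells you *something* failed,
    but not which unit -- so the system must hand off or stop."""
    return a if a == b else None  # None => fault detected, no safe output

def tmr_vote(a, b, c):
    """Triple modular redundancy: majority vote masks one faulty unit."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None  # all three disagree: unrecoverable

# One unit glitches and outputs a bad steering angle:
print(dual_check(12.5, 99.0))      # fault detected, must disengage
print(tmr_vote(12.5, 99.0, 12.5))  # faulty unit is simply outvoted
```

That's why aviation likes odd numbers of channels: detection alone still forces a disengagement, which is exactly what a Level 5 car can't afford mid-maneuver.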



PNWmisty said:


> The chance of one processor failing is LOWER than the chance of the human losing consciousness! By many orders of magnitude


Again, it doesn't matter. The test in court won't be "ladies and gentlemen of the jury, the fact that the car drove onto the sidewalk and killed 15 people isn't the point... I have a reliability study here that shows this was much less likely to happen than if a human had been driving and lost consciousness".

The cross examination would be:

Lawyer: "Does Google use triple redundancy?"
Expert: "Yes."
Lawyer: "Why didn't Tesla?"
Expert: "Our analysis said it wasn't necessary."
Lawyer: "But in this case, it would have prevented this accident, right?"
Expert: "Probably, yes."
Lawyer: "No more questions!"

(I have no idea whether Google is using triple redundancy, or whether they will when they finally ship self-driving cars. But the point is, if any of the competition uses triple redundancy or a hot backup and Tesla doesn't... even in the face of decades of milspec and aviation experience... they'll be successfully sued for product liability, because the measure will be "could they reasonably have made it more reliable?" and of course the answer will be "yes".)

Here's some reference material you might find interesting:

Here's an article looking at desktop processor reliability 
AEC-Q100 here
JEDEC JESD89A here and an overview here (alpha particle type errors, probably not significant for the Tesla FSD computer)
JEDEC JEP122C here which IS probably relevant because it addresses wearout/corrosion etc. that you expect in automotive environments. They are also the sorts of problems that can cause hard failures (i.e. not transient) that can produce bad data.
And here's a great overview of reliability in aviation by the Chinese Journal of Aeronautics (they're hot to compete with Boeing and Airbus) here (but it's not a short read).


----------



## TrevP (Oct 20, 2015)

It's getting a little hot in here with differing opinions and legal issues. May I suggest we keep it on track with technical discussions and let the lawyers, authorities and Tesla deal with the legalities; after all, that's why they say "regulatory approval".

Just a reminder about our posting guidelines to keep things civil and not letting things turn into a personal "soap box".

https://teslaownersonline.com/threads/rules-policies-disclaimers-for-too-forums.2430/

Thank-you


----------

