# Autopilot: Seeing the road in ways that we can't



## KarenRei (Jul 27, 2017)

So, I've generally been an Autopilot / FSD pessimist. There are just so many edge cases in self-driving - for example, how long do you think it'll be before AP realizes that "lamb on one side of the road, ewe on the other" is more dangerous than two lambs, two ewes, or a lamb and ewe on the same side of the road? We use so much logic when driving in adverse conditions - and where I am, adverse conditions are the name of the game.

However, one thing that I used to think would be a very tough nut to crack, I now have the opposite view: that is, _reading the road_.

It's not enough to wait until you're slipping to react to it. In bad conditions, a driver needs to see the road ahead and adjust their speed or lane positioning to be prepared for what's coming. Visually, this is an incredibly difficult task, even for humans. And the way "hazards" look can vary tremendously from place to place. But today, something occurred to me:

*[SAR images of Titan]*

These are SAR (Synthetic Aperture Radar) images of Saturn's moon Titan. Stop for a second and ask yourself what exactly you're looking at - what do the colours mean? They're not height maps, or optical brightness - they're the "brightness" of radar returns. The returns depend in part on the material being imaged, but also on the texture of the reflecting surface at the scale of the beam's wavelength: the shorter the wavelength, the finer the texture being examined; the longer the wavelength, the coarser. Black means "very smooth" (in this case, methane seas). White means "very rough".
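For a sense of scale, the standard Rayleigh criterion says a surface reads as "smooth" to a radar when its RMS height variation stays below roughly λ/(8·cos θ), with θ the incidence angle from the surface normal. A quick back-of-envelope in Python - the 77 GHz figure is typical of automotive radar, and the grazing geometry is just illustrative:

```python
import math

def smooth_threshold_mm(freq_ghz: float, incidence_deg: float) -> float:
    """Max RMS surface roughness (mm) that still reads as 'smooth'
    under the Rayleigh criterion: h < wavelength / (8 * cos(theta))."""
    wavelength_mm = 299_792_458e3 / (freq_ghz * 1e9)  # c in mm/s over Hz
    return wavelength_mm / (8 * math.cos(math.radians(incidence_deg)))

# 77 GHz automotive radar, near-grazing geometry (incidence ~80 deg):
print(round(smooth_threshold_mm(77, 80), 2))  # -> 2.8
```

So at automotive radar wavelengths, millimetre-scale texture is what separates "smooth" from "rough" returns - exactly the scale that distinguishes glare ice from wet or dry asphalt.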

Here's a SAR image of Venus:

*[SAR image of Venus]*

The squiggly line is Baltis Vallis, the longest riverbed in the solar system (carved by some unknown fluid in the past); the dark colours show its smooth riverbed (relative to the chosen frequency). Various fractures in the terrain around it show up as bright, indicating rough material disturbed by tectonic activity.

These surface texture effects, of course, don't apply just to SAR; they apply to any radar. But stop and think what this means for road analysis: if properly designed, and with a proper software stack, a car's radar could analyze the surface texture of the road ahead of you on multiple scales. Things like sheet ice, standing water, loose gravel, dust, potholes, etc., should all have characteristic reflections. And even where ambiguity exists, the vehicle could correlate returns along the lines of "I drove over this sort of return recently, and here's how much my wheels slipped", and use that to interpret other such returns in the area.

In short, the vehicle could have a _lot more_ - not less - information about how slippery the road is going to be ahead of it than a human driver does, and react accordingly in advance.
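To make the "correlate returns with measured slip" idea concrete, here's a toy sketch. Every number in it (the class centroids, the slip threshold, the learning rate) is invented for illustration; a real system would learn all of this from fleet data:

```python
# Classify the road patch ahead from radar backscatter at two texture
# scales, then refine the class centroid using slip actually measured
# when the wheels cross that patch. All numbers are made up.

# Hypothetical features: (fine_scale_return, coarse_scale_return), 0..1.
CENTROIDS = {
    "sheet_ice":    (0.05, 0.10),  # very smooth at both scales
    "wet_asphalt":  (0.20, 0.35),
    "dry_asphalt":  (0.45, 0.50),
    "loose_gravel": (0.80, 0.75),  # rough at both scales
}

def classify(fine: float, coarse: float) -> str:
    """Nearest-centroid guess at the surface type."""
    return min(CENTROIDS, key=lambda k: (CENTROIDS[k][0] - fine) ** 2
                                      + (CENTROIDS[k][1] - coarse) ** 2)

def calibrate(label: str, measured_slip: float, rate: float = 0.1) -> None:
    """Nudge a centroid using slip observed while driving over the patch.
    High slip pulls the fine-scale feature toward the 'icy' corner."""
    f, c = CENTROIDS[label]
    target = 0.05 if measured_slip > 0.3 else f
    CENTROIDS[label] = (f + rate * (target - f), c)

print(classify(0.07, 0.12))  # -> sheet_ice
```

The point isn't this particular classifier - it's that slip measurements give the car labeled training data for free, every time it drives over a return it has already seen.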

There's still an awful lot of logic to incorporate - e.g. on a country road you may want to outright drive down the middle in adverse conditions. When there's a cliff off to one side of the road with no guardrail you'll want to be more cautious than when the road is in the middle of a field. Etc. But in general, I see a lot of reason to be optimistic about a self-driving vehicle's potential ability to "read the road" in terms of reading radar returns.

The other issue I'd been thinking about is "standing water". It's a very serious issue. You don't want a self-driving car happily driving into deep water, while you also don't want it slamming on the brakes on a highway due to a puddle. As humans, we use complicated means to try to assess how deep water is - and still sometimes get it wrong. We look at how much the land is sloping on all sides of the puddle and mentally approximate where it might bottom out. We look at how waves move on the surface. We pay attention to other vehicles that might have tried to go through within sight of us. All sorts of things.

Most radar sensors won't be of use here. What's a self-driving car to do?

It then hit me, though... self-driving vehicles are data collectors, including the height at each point they drive over. Properly implemented, assuming people have driven that road before it became flooded, _the topography of that road is a known factor_. The vehicle need only assess the current water height, and it then knows how deep the water is at the deepest point.
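A minimal sketch of that depth calculation, assuming the car already holds a previously recorded elevation profile for the stretch of road and an estimate of the current water surface elevation (both hypothetical inputs here):

```python
def max_water_depth(road_profile_m, water_surface_m):
    """Deepest water along a stored road elevation profile (m), 0.0 if dry."""
    return max(max(water_surface_m - h, 0.0) for h in road_profile_m)

# Elevation profile recorded on a dry day (meters above some datum);
# the road dips 0.4 m below the observed waterline at its lowest point:
profile = [10.3, 10.1, 9.8, 9.6, 9.9, 10.2]
print(round(max_water_depth(profile, water_surface_m=10.0), 2))  # -> 0.4
```

The hard part in practice is estimating `water_surface_m` - but that only needs the elevation of the water's *edge*, which the car can see, not the invisible bottom.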

Some factors aren't as easy, of course. There's no way to assess road damage or submerged obstacles. Assessing currents requires nontrivial visual or radar analysis. It's of no use in places that people don't go frequently and/or that change frequently, such as unbridged river crossings. Etc. But it should be more than sufficient for the general case - and more to the point, do the job much better than people do. Where it feels it doesn't have enough information to make a yes-or-no decision, it can stop, tell the user what it knows, and ask them to make the call.

So don't get me wrong - I'm still very much a FSD pessimist, and expect much longer timelines / more problems than most people. But in terms of these aspects - road condition and flooding assessment - I'm now convinced that there are very good, practical options available for them.


----------



## 3V Pilot (Sep 15, 2017)

So, the one thing that really hit me after reading your post was this... Is everyone in Iceland a genius because you spend so much time reading books since, well, it is called ICEland after all?... LOL, just kidding, but seriously, those are all great points. One thing I wonder about, though, is that the radar has a very low "cross section" of the road because it's mounted low in the front bumper. It won't have the kind of detail you see looking straight down at a planet. Also, I doubt the quality of the radar units currently installed for TACC would have the kind of definition needed, even if the cross-section limitation weren't there. Maybe in the future as technology advances, but I'd be very surprised if today's automotive radars were capable. But then again, I'm just taking a guess - I don't live in Iceland, so I probably don't know what I'm talking about.


----------



## Audrey (Aug 2, 2017)

3V Pilot said:


> It won't have the kind of detail you see looking straight down at a planet.


Iceland is a misnomer, but that's beside the point. You're incorrect that the car can't see the world from a top-down view. It may not have live data of everywhere on the planet on the fly, but the car can use satellite image data for the kinds of assessments @KarenRei described. She's spot on: human sight is limited, but computer sight already exceeds ours and will continue to improve. Most importantly, computers do not have bias, which is an advantage in most situations if the programming (the lamb-or-ewe scenario) is solid.


----------



## SoFlaModel3 (Apr 15, 2017)

I am a FSD pessimist. I didn’t spend the extra $3,000 and I don’t expect it to be actually available until my next car purchase or beyond. 

I am very big on the added safety of EAP over me behind the wheel. The car sees what I can't (and that's the car in front of the car in front of me). That's huge! That, mixed with me remaining alert and ready to take over, is a great combination.

My driving stress is vastly reduced and I get where I’m going in a much safer fashion!


----------



## Guest (Apr 1, 2018)

Narrow AI must learn the puddle only once, and can then make predictions according to rainfall, snowmelt, and other variables for any specific GPS-saved location. I know every inch of the road surface near my house; you know every pothole around yours. Now Tesla's AI must know every inch of everywhere it has ever driven to be as good as me around my house. I know where the pothole is, even if it's filled with water.
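A toy version of "learn the puddle once" might look like the sketch below. The grid size and the rain threshold are made up for illustration, and the GPS coordinates are just placeholders:

```python
# Remember hazards by map cell, then re-activate them when current
# conditions (here, recent rainfall) match. All thresholds invented.
HAZARDS = {}  # (lat_cell, lon_cell) -> hazard record

def cell(lat: float, lon: float):
    """Quantize GPS coordinates to a coarse grid cell (~10 m)."""
    return (round(lat * 10_000), round(lon * 10_000))

def remember(lat, lon, kind, min_rain_mm=0.0):
    """Store a hazard learned at this location, with the conditions
    under which it is expected to reappear."""
    HAZARDS[cell(lat, lon)] = {"kind": kind, "min_rain_mm": min_rain_mm}

def expected_hazard(lat, lon, recent_rain_mm):
    """Predict a previously learned hazard if conditions match."""
    h = HAZARDS.get(cell(lat, lon))
    if h and recent_rain_mm >= h["min_rain_mm"]:
        return h["kind"]
    return None

remember(64.1265, -21.8174, "deep_puddle", min_rain_mm=5.0)
print(expected_hazard(64.1265, -21.8174, recent_rain_mm=12.0))  # -> deep_puddle
print(expected_hazard(64.1265, -21.8174, recent_rain_mm=0.0))   # -> None
```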

I agree, Level 5 autonomy (FSD) will not be possible with AP2.0 hardware.
Image resolution is insufficient. The human eye has a narrow field of extreme resolution.
I don't know why Tesla has not implemented a camera that acts like a human eye:
an extremely narrow field of vision that can change direction at a very rapid rate
(as fast as the human eye or faster). In other words, a camera that can read this
forum post from a distance of at least 1.5 meters.

Elon has not taken enough reasonable steps to make FSD work soon. For example, since day one,
AFAIK, Tesla has never gathered data from the vehicle's accelerometer. Speed bumps, potholes,
and other defects must be learned, even for sufficient Level 4 autonomy.


----------



## mservice (Jul 29, 2017)

I agree that the self driving car isn’t around the corner, and I doubt that we will truly see it for years. But, in my opinion it won’t necessarily be because of the tech. The one constant that is not discussed much, and is also in my opinion the biggest issue is people. 

Musk and Tesla have been telling people that Autopilot does not make the car autonomous. But this seems to fall on deaf ears; case in point, the unfortunate loss of life in the California Model X crash. Tesla has since reported from telemetry that the driver was driving above the speed limit and did not have his hands on the steering wheel after a number of warnings.

Others have tried to override the car's warnings, such as by placing oranges or tennis balls in the spokes of the steering wheel to simulate their hands being on the wheel.

If and when totally autonomous cars do become more common, the one thing that will need to be taken out of the equation is people. Until then we will see continued issues. Every car driving on its own will be dogged by someone doing something stupid in a human-driven car that a computer won't understand.


----------



## DrPhyzx (Nov 20, 2017)

I think you underestimate human vision: at least it operates at wavelengths that can be used to assess surface texture. Radar does not - its resolution is useless for this. Lidar... someday.


----------



## c2c (Sep 19, 2017)

I am reminded of Elon's point that self-driving doesn't have to be perfect in order to save tens of thousands of lives, plus many times that number of serious injuries. We shouldn't wait for perfect.

But I am encouraged by looking beyond a single car. Vehicle-to-vehicle communications will happen "soon."
The risks and obstacles noted above are usually presented to dozens or hundreds of vehicles as the problems develop. Ice forms as temperatures drop; standing water grows over time. So long as the changing situation is uploaded to a networked high-definition digital map, many problems can be avoided.
I understand that today's Teslas do not read speed limit signs for their own use. But they could upload that info to a digital map. As temperatures drop or traction gets squirrelly, let the map know. If we have a mix of Teslas and lidar cars updating a map, things should get safer.
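As a rough sketch of how such crowd-sourced reports could be aggregated - the half-life numbers are invented, but the design point is that reports should decay over time, since ice melts and standing water drains:

```python
import time

REPORTS = []  # (map_cell, hazard, unix_time) uploaded by passing cars

# Invented decay constants: how long a report stays half-credible.
HALF_LIFE_S = {"ice": 3 * 3600, "standing_water": 6 * 3600}

def report(map_cell, hazard, t=None):
    """A car uploads a hazard observation for a map cell."""
    REPORTS.append((map_cell, hazard, t if t is not None else time.time()))

def confidence(map_cell, hazard, now=None):
    """Sum of age-decayed votes for a hazard in a map cell."""
    now = now if now is not None else time.time()
    hl = HALF_LIFE_S[hazard]
    return sum(0.5 ** ((now - t) / hl)
               for c, h, t in REPORTS if c == map_cell and h == hazard)

report("cell_42", "ice", t=0)
report("cell_42", "ice", t=0)
# Two reports, each one ice half-life old -> confidence 2 * 0.5 = 1.0:
print(round(confidence("cell_42", "ice", now=3 * 3600), 1))  # -> 1.0
```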
Thus, things could change faster than we might guess.
I think Autopilot 2.5 is a step in the right direction. I'm still a couple months from configuring my 3, but I'm inclined to buy the full package. I'm not getting any younger, and any help I can get for my reflexes is likely worth it. But I am still the pilot in command.


----------



## Gorillapaws (Jul 30, 2017)

The self driving component I have the biggest issue with is the inability to reason/anticipate situations and drive defensively. For example, if I'm driving down the road in the middle lane and I see a guy in a tricked-out sports car racing down the onramp, I can anticipate that he's probably going to cut off that truck in the right lane who might need to come into my lane abruptly. I'll change lanes or increase the distance in anticipation. Full self driving won't be able to anticipate/interpret these types of scenarios likely for a very long time. I do think it'll be able to react faster than a human, but I don't see it avoiding scenarios like a good human driver might. That said, there are plenty of terrible human drivers out there...


----------



## John (Apr 16, 2016)

I guess I'm a FSD optimist.

1. I think people are in general not that great at driving, all things considered
2. I think machine learning can quickly outstrip human capabilities, like it has so far in other things

Say what you want about what a human CAN SOMETIMES do (like assess the family structure of a sheep herd on the fly), but in reality most people see two animals, freak out, slam on the brakes too hard, spin around and take out both animals like they were bowling to pick up a split spare.

Seriously—I think in a dozen years people will laugh about how ****ty and distracted and confused and slow reacting human drivers used to be. And no more "confronting grandma to take away her keys because honestly it's a miracle she hasn't killed someone already."

And just like many people now think "programming" is typing recipes into Facebook, people in the future will think "driving" is telling your car where to take you.


----------



## mservice (Jul 29, 2017)

DrPhyzx said:


> I think you underestimate human vision: at least it operates at wavelengths that can be used to assess surface texture. Radar does not - its resolution is useless for this. Lidar... someday.


Nope, I don't underestimate human vision. I played baseball (not professionally) for a number of years and have followed the game for decades; the human eye needs to determine within 50 milliseconds what a pitched ball will do. But my point isn't how well the human eye can determine things, it's that humans do dumb things. Computers use logic; humans do not, regardless of their eyesight. Not paying attention to the road and the people around you increases the possibility of disaster. As I pointed out, Tesla has been telling drivers to keep their hands close to the steering wheel and remain diligent while driving on Autopilot, but do they?


----------



## m3_4_wifey (Jul 26, 2016)

There's a lot of visual and radar information for the sensors to take in. For puddles, animals, or other moving vehicles, additional, more detailed scans are going to occur, just like your eyes would do. I'm curious how many of the yearly accidents occur for the simple reason that the person looked away at the wrong time and their reaction time was not fast enough. Is it higher than 50%? Video cameras are always looking, they never blink, their reaction time should be faster than a human's roughly 200 ms, and their anticipation should be just as good as a human's given the proper programming. I would be very curious what scenarios Tesla and other self-driving companies are trying in their video-game-like simulations of their latest FSD software.

I would hope that when FSD is enabled, you tell the car your destination and how urgent it is for you to get there quickly. It will then tell you that it is not possible to make your meeting on time, or do the best it can if you are willing to take the chance that you will have to pay for a speeding ticket. I would hope that the car would be allowed to speed some, but choose where to speed and where to be cautious (speed more on the highway rather than on a neighborhood road). The car can make statistical judgement calls, including about the death of humans or animals, like "I can't speed on this section of road, because this is the season when deer cross it often." This environmental information could be a mixture of road signs and statistical recent or seasonal history from your car or other cars.

FSD cars will get in accidents. It will be interesting to see whether liability can be a clear-cut case if someone gets killed in or by a FSD vehicle. There will be so much black-box data about an accident that you would hope no accidents need to be processed through a courtroom, but I'm sure humans will want to muddy the waters for a long time to get a piece of the litigation pie.


----------



## Audrey (Aug 2, 2017)

Gorillapaws said:


> The self driving component I have the biggest issue with is the inability to reason/anticipate situations and drive defensively. For example, if I'm driving down the road in the middle lane and I see a guy in a tricked-out sports car racing down the onramp, I can anticipate that he's probably going to cut off that truck in the right lane who might need to come into my lane abruptly. I'll change lanes or increase the distance in anticipation. Full self driving won't be able to anticipate/interpret these types of scenarios likely for a very long time. I do think it'll be able to react faster than a human, but I don't see it avoiding scenarios like a good human driver might. That said, there are plenty of terrible human drivers out there...


I think your scenario is a perfect example of when FSD excels. The computer lacks emotions or moods for such antics (as the aggressive driver in the scenario). I think mixing FSD with human drivers on the road is incredibly dangerous and will not last long. Once a critical mass of FSD vehicle options exist, I believe regulations will change so that highways or other arterial roadways do not allow any human driving.


----------



## John (Apr 16, 2016)

"Honey, be careful and watch out for people driving their own cars."


----------



## Soda Popinski (Mar 28, 2018)

Any speculation on how well Tesla's FSD research is going compared to other systems, such as Waymo? Or is the consensus the current AutoPilot is Tesla's "state of the art"?

I would hope there are further algorithms being developed to detect stopped obstacles that we haven't yet seen implemented in AP, not to mention the obvious surface street specific things, like recognizing stop lights and signage.


----------



## John (Apr 16, 2016)

Word is that the recent jump in ability is the first taste of the capabilities of the new framework that Andrej Karpathy installed when he came over from OpenAI to lead the effort, and that there are a broader set of features going through beta that are coming soon. I don't know what they are, but I'd guess sign reading, more sophisticated lane changes, maybe on/off ramp driving. Stop lights and stop signs would be huge. Differentiation on the screen of vehicle types would be reassuring, too. It's already cool seeing two vehicles ahead of you, and how well it tracks them changing lanes ahead of you.


----------



## Gorillapaws (Jul 30, 2017)

Audrey said:


> I think your scenario is a perfect example of when FSD excels.


Well, yes, if we replace all bad/aggressive/dangerous drivers with FSD, that of course would be an improvement. My point is that good, alert human drivers are capable of certain types of situational awareness that are a long way from being solved in AI. Making inferences such as "that driver looks drunk," "he's texting and driving," or "that couch doesn't look well secured on the back of that pickup," and then making appropriate decisions on the part of the AI, likely won't happen for a very long time.

I'm really looking forward to EAP to allow me to focus my attention on the road/traffic situation and less on trying to keep my speed and lane position correct. That said, I'm less optimistic about FSD in the medium-term. I do think EAP will get to be very good with the current hardware and will likely react to bad situations once they happen faster/better than a human, but I still believe that I'll be better than the AI at avoiding those situations entirely through defensive driving.

I say all of this because I'm likely an outlier in terms of driver safety. I've never caused an accident and I have 0 moving violations in my 20+ years on the road. I'm probably much more cautious than the typical driver. By definition half of all drivers are below average (and yet I suspect a good number of them probably believe themselves to be much better than they are). I certainly appreciate the logic from the other perspective.


----------



## Audrey (Aug 2, 2017)

Soda Popinski said:


> Any speculation on how well Tesla's FSD research is going compared to other systems, such as Waymo? Or is the consensus the current AutoPilot is Tesla's "state of the art"?


It depends on who you ask. A report out in January 2018 lambasted Tesla's autonomous progression and system thus far. Here are some articles about it:

New Report on Self-Driving Cars Ranks Tesla Dead Last
GM is leading the self-driving car race while Tesla lags far behind, report says
However, Elon defended radar and Tesla's direction for self-driving rather articulately during a call in February.


----------



## John (Apr 16, 2016)

Gorillapaws said:


> Well, yes, if we replace all bad/aggressive/dangerous drivers with FSD, that of course would be an improvement. My point is that good, alert, human drivers are capable of certain types of situational awareness that is a long way from being solved in AI. Making inferences such as "that driver looks drunk," "he's texting and driving" or "that couch doesn't look secured well on the back of that pickup," and then making appropriate decisions on the part of the AI likely won't happen for a very long time.
> 
> I'm really looking forward to EAP to allow me to focus my attention on the road/traffic situation and less on trying to keep my speed and lane position correct. That said, I'm less optimistic about FSD in the medium-term. I do think EAP will get to be very good with the current hardware and will likely react to bad situations once they happen faster/better than a human, but I still believe that I'll be better than the AI at avoiding those situations entirely through defensive driving.
> 
> I say all of this because I'm likely an outlier in terms of driver safety. I've never caused an accident and I have 0 moving violations in my 20+ years on the road. I'm probably much more cautious than the typical driver. By definition half of all drivers are below average (and yet I suspect a good number of them probably believe themselves to be much better than they are). I certainly appreciate the logic from the other perspective.


You make good points, but as a safe driver you can appreciate how nice it would be if there was a lot less distracted, panicky, drunk driving going on. You know that thing where someone suddenly realizes they are about to miss their exit and blindly darts across lanes to get there? Or isn't paying constant attention and rear-ends you? Easy to see how a self-driving car could improve those.

I guess it's natural to think of cases where you might be better than autopilot. But it's actually much easier to think of cases where OTHER people might be much worse than Autopilot.

Perhaps this is a little like inoculations; even if you don't think you need them, we're all better off if we all have them.


----------



## John (Apr 16, 2016)

Also, I can't wait for the day that those traffic slow downs caused by "compression waves" of people accelerating and braking come to an end.


----------



## Soda Popinski (Mar 28, 2018)

Audrey said:


> It depends on who you ask. A report out in January 2018 lambasted Tesla's autonomous progression and system thus far. Here are some articles about it:
> 
> New Report on Self-Driving Cars Ranks Tesla Dead Last
> GM is leading the self-driving car race while Tesla lags far behind, report says
> However, Elon defended radar and Tesla's direction for self-driving rather articulately during a call in February.


It appears the report mentioned is comparing AutoPilot (not actually FSD) to GM, Waymo, and Uber's FSD programs. I don't think that's a fair comparison IF (big IF), Tesla's FSD program is more than AP.


----------



## Michael Russo (Oct 15, 2016)

Sorry for a stupid question - since I am about to get my paws on a CPO with EAP1, should I consider that AP functionality to be at its optimum with the less sophisticated hardware, or can we reasonably assume it may still improve via OTA upgrades too?


----------



## garsh (Apr 4, 2016)

Michael Russo said:


> Sorry for a stupid question - since I am about to get my paws on a CPO with EAP1, should I consider that AP functionality to be at the optimum with the less sophisticated hardware or can we reasonably assume it may still improve via OTA upgrades too...


Let it hereby be known that @Michael Russo has posted a question which is most definitely NOT about the Model 3 into a thread within the Model 3-specific subforum. We'll try to get him back on his meds ASAP.

But to answer your question, I think Elon Musk replied to a tweet a few months back saying that AP1 cars would continue to get AP updates.


----------



## Michael Russo (Oct 15, 2016)

garsh said:


> Let it hereby be known that @Michael Russo has posted a question which is most definitely NOT about the Model 3 into a thread within the Model 3-specific subforum. We'll try to get him back on his meds ASAP.
> (...)


Thanks! I know, it does feel like I lost it ...
Yet it's because I found it!!!


----------



## mishakim (Sep 13, 2017)

A related question I've been pondering lately is whether road signs and highway markings need to change to facilitate FSD. The recent crash in California is a worst-case example, where the markings were bad for human and AI alike. But what I was thinking is that our current markings are designed to be easily understood by a minimally-trained human, and sometimes they don't even meet that mark. If we remove the need for _human_ understanding, signs and markings could be designed that are better suited for an AI to understand unambiguously, while conveying more detailed information, like a QR code. Of course connected highways, car-to-car communication, and a GIS database of road metadata will do this someday, but markings are a quick and easy way to deploy local data.


----------



## Gorillapaws (Jul 30, 2017)

mishakim said:


> ...But if we remove the need for _human_ understanding, signs and markings could be designed that are better suited for an AI to unambiguously understand, while conveying more detailed information, like a QR code.


I don't see why you can't have both. You could have QR codes embedded in traditional signs, and even include them on temporary signs like construction signs (e.g. "merge left, construction ahead" + QR code). One downside I can see would be the possibility of faking these systems out, with pranksters printing out QR codes and putting them in their windows to mess with the AI. We'd probably need new laws to address those kinds of dangerous situations.


----------



## garsh (Apr 4, 2016)

No, don't need QR codes.

Computers have no problems reading English (or any other language) nowadays. There's no need for anything else. And humans absolutely suck at reading QR codes.


----------



## John (Apr 16, 2016)

Best if there’s “one version of the truth” in terms of signs. Human version = machine version. 

As for road quality, if it becomes abundantly clear that paint on pavement can save lives, paint will be bought.


----------



## garsh (Apr 4, 2016)

John said:


> Best if there's "one version of the truth" in terms of signs. Human version = machine version.


Plus, if signs had QR codes that computers relied upon, how long do you think it would take some kids to print out their own QR codes and tape them onto signs? People wouldn't notice that the QR code says "speed limit 25" instead of 75. Oh, the hijinks that would ensue...


----------



## John (Apr 16, 2016)

garsh said:


> Plus, if signs had QR codes that computers relied upon, how long do you think it would take some kids to print out their own QR codes and tape them onto signs? People wouldn't notice that the QR code says "speed limit 25" instead of 75. Oh, the hijinks that would ensue...


Although that's a little more involved than just painting over a number...


----------



## garsh (Apr 4, 2016)

John said:


> Although that's little more involved than just painting over a number...


Not really. They probably only need to replace one or two pixels in the code. A little piece of electrical tape might do the job.


----------



## John (Apr 16, 2016)

garsh said:


> Not really. They probably only need to replace one or two pixels in the code. A little piece of electrical tape might do the job.


Maybe if QR didn't have error correction. Even at its coarsest level, it can actually correct about 7% damage.

I'd actually think that would be a great project for a science class: "What's the fewest number of black squares (of any size) that you can use to change the speed limit by at least 15 MPH?"

Versus someone changing a "7" to a "9" on a regular sign.


----------



## KarenRei (Jul 27, 2017)

You would never choose QR codes for something that you want to function basically as a transponder. You use a transponder for that. They can be actively powered (such as by solar and batteries), or passively powered (such as a cavity resonator responding to the car's radar).

Very hard to mess with. Much harder to damage, particularly the latter. If you don't have to have a power source, you could literally embed transponders in the concrete.


----------



## Vladimír Michálek (Sep 24, 2017)

On a larger scale, I think it's easier to develop FSD first, wait for wide acceptance, then upgrade it (easy) to standardized augmented reality road system, then we can remove the legacy road signs while handling the relatively-rare old vehicles with proper tools like augmented reality visor for the human driver.


----------



## Vladimír Michálek (Sep 24, 2017)

mservice said:


> Computers use logic, humans do not regardless of their eye sight. Not paying attention to the road, and people around you increases the possibility of disaster. As I pointed out Tesla has been telling drivers to keep their hands close to the steeringwheel, and remain diligent while driving in autopilot, but do they?


I think the claim "computers use logic" is not true, and even if it were, it wouldn't be any help.
The "logic" of consistently following whichever line is brighter, and thereby leading the car into a crash, is an example of logic that doesn't help.

Current machine learning systems are something of a black box, too complex to understand the inner workings of. In effect we create a human-brain-like system which can process a huge amount of input to derive a simple car-driving output. The huge benefit is that the AI is not distracted by other tasks, and it's easier to add more input channels to the AI: IR, GPS, radar, sonar, precise accelerometers, radio receivers. Or just one camera in the top left and one in the top right corner of the windshield: still better depth resolution than human eyes.
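The two-corner-camera point can be made quantitative with the standard stereo relation depth = focal × baseline / disparity, which gives a depth uncertainty of roughly Z²·Δd/(f·B). The camera numbers below are illustrative, but the scaling is the point: widening the baseline proportionally shrinks the depth error at a given range.

```python
def depth_error_m(range_m, baseline_m, focal_px, disparity_err_px=0.25):
    """Approximate stereo depth uncertainty: dZ ~ Z^2 * dd / (f * B)."""
    return range_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Human eyes: ~6.5 cm apart. Windshield corners: ~1.2 m apart.
# Depth error at 50 m range (assuming a 1000 px focal length camera):
for baseline in (0.065, 1.2):
    print(round(depth_error_m(50, baseline, focal_px=1000), 2))
# -> 9.62 (eye-like baseline), then 0.52 (windshield-corner baseline)
```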

But we're not there yet. 
The Autopilot's Traffic-Aware Cruise Control seems to be only "Moving Traffic"-Aware, and maybe "Lane Assist says there's a sharp turn ahead"-Aware, but it's not "Suspicious obstacle ahead"-Aware, and especially not "Lane Assist system is not exactly sure why half of the lane looks a little like washed out gore hashing and why is the lane suddenly widening, when the map says there should be a road divide gore"-Aware.
Also, "We're heading into danger stripes sign, better slow down"-Aware Cruise Control would help.


----------



## John (Apr 16, 2016)

It's pretty cool seeing Autopilot correctly navigate highways where the pavement lines and covered up old lane lines go in a different direction from the new lane lines. And that recent video of Autopilot navigating around a construction zone was pretty cool.
https://electrek.co/2018/03/19/tesla-autopilot-handle-construction-zone-new-update/


----------



## John (Apr 16, 2016)

Autopilot already tracks cars in other lanes on Model 3.
It just doesn't clutter up the screen with them.
Try a lane change, and watch as cars in other lanes are shown to you:


----------



## M3MS (Jun 24, 2018)

I'm an amateur AI/ML/deep-learning enthusiast. Yet I consciously chose not to pay for the FSD option because, aside from skepticism over the timeframe that it would take to achieve even level 4 autonomy, I believe attempting to achieve level 5 using the current approaches (by the entire industry, not just Tesla) is actually undesirable:
(a) with today's and tomorrow's deep learning algorithms and AI hardware, it's an intractable problem - there are just too many variables due to the haphazard out-of-control nature of the city-street and highway environments. The best we can hope for is to solve 99% of the problem and rely on integrated safety envelopes to handle the remaining 1% scenarios (emergency pull-overs / remote human driver / etc.).
(b) even if something superior to humans that eliminated the driver were eventually achieved, it would have an adverse impact on range, as deep learning algorithms consume massive amounts of power - a sub-optimal utilization of resources (in this case battery power) given (IMO) the better alternatives below.
(c) alternatively, if one defined simplifying constraints and architected an integrated system of smart and dedicated roadways and switching systems and sensor-equipped cars (in a manner conceptually similar to the Japanese Shinkansen bullet train system), the implementation of an autonomous system is greatly simplified, as the complex machine vision problem is virtually eliminated. Essentially a "Railroads 2.0" without physical rails and with software smarts. The challenge here is, of course, the human element: realizing such a system requires elaborate teamwork between cities, governments and auto manufacturers.

That said, I do use EAP carefully but extensively, and when and if Tesla offers FSD that works reasonably well and doesn't sacrifice range, I'll pay the premium and upgrade my M3 to FSD in a heartbeat.

My 2 cents worth.


----------



## John (Apr 16, 2016)

M3MS said:


> I'm an amateur AI/ML/deep-learning enthusiast. Yet I consciously chose not to pay for the FSD option because, aside from skepticism over the timeframe that it would take to achieve even level 4 autonomy, I believe attempting to achieve level 5 using the current approaches (by the entire industry, not just Tesla) is actually undesirable:
> (a) with today's and tomorrow's deep learning algorithms and AI hardware, it's an intractable problem - there are just too many variables due to the haphazard out-of-control nature of the city-street and highway environments. The best we can hope for is to solve 99% of the problem and rely on integrated safety envelopes to handle the remaining 1% scenarios (emergency pull-overs / remote human driver / etc.).
> (b) even if something superior to humans that eliminated the driver were eventually achieved, it would have an adverse impact on range, as deep learning algorithms consume massive amounts of power - a sub-optimal utilization of resources (in this case battery power) given (IMO) the better alternatives below.
> (c) alternatively, if one defined simplifying constraints and architected an integrated system of smart and dedicated roadways and switching systems and sensor-equipped cars (in a manner conceptually similar to the Japanese Shinkansen bullet train system), the implementation of an autonomous system is greatly simplified, as the complex machine vision problem is virtually eliminated. Essentially a "Railroads 2.0" without physical rails and with software smarts. The challenge here is, of course, the human element: realizing such a system requires elaborate teamwork between cities, governments and auto manufacturers.
> ...


Good perspective, thanks for sharing.

My two cents is that people are really, really bad at driving. I doubt a self-driving car will ever:

- Say,"Oh, crap!" And cut across three lanes without looking so as not to miss an exit.

- Rear end me and total my car because it was checking texts (true, last October)

- Back into me without looking in a parking lot (also happened)

- See a Mustang and start racing it, weaving in and out of traffic

- Fall asleep

- Get drunk

- Drive 40 mph on a 65 mph freeway

- Drive 95 mph on a 65 mph freeway

- "Forget" to use turn signals

- Pass on a two lane road on a curve and almost run traffic off the road coming the other way (also happened last week to me)

Stuff like that. I could go on.

Also, we need FSD before I get too old and the kids have to roshambo to figure out who takes my keys away.


----------



## 3V Pilot (Sep 15, 2017)

John said:


> Good perspective, thanks for sharing.
> 
> My two cents is that people are really, really bad at driving. I doubt a self-driving car will ever:
> 
> ...


Very true and I'll add:

It will never drive while....

Tired
Drunk
Angry
Sad
or just after its significant other broke up with it.

I could go on and on as well, and that is why I'm all in favor of, and purchased, Full Self Driving. I love to drive but certainly don't "need" to drive when the car can do most of the major work. Even if Tesla starts to release some minor FSD improvements, like the car parking itself after I get out at the movies or a shopping center, that would be awesome! I want to be able to take advantage of everything the car can do. I also look forward to the day when all these systems are mandatory on every car and we don't have to worry as much about other distracted/bad drivers.


----------



## SoFlaModel3 (Apr 15, 2017)

John said:


> Good perspective, thanks for sharing.
> 
> My two cents is that people are really, really bad at driving. I doubt a self-driving car will ever:
> 
> ...





3V Pilot said:


> Very true and I'll add:
> 
> It will never drive while....
> 
> ...


To take this to another level. Last night I was at a concert. In the parking lot (before the concert), I saw a woman driving with her head down rolling at 5 MPH. She was probably looking for makeup, phone, tickets, etc. Well sure enough she rolled right into a parked car.


----------



## avoigt (Sep 30, 2017)

This is an interesting topic.

People judge their ability to achieve something based on the experience and social environment they grew up with. Since humans have never seen any other animal drive (at least not better than we do), our frame of reference tells us we are great drivers - the best that exist in the universe, even. That is fully true within our frame of reference, but not in other frames of reference.

Having more and different senses or sensors in our bodies would allow us to be better drivers, if we could process and interpret those signals effectively in coordination with the information we already have. That would be a new frame of reference, and a new experience we could relate to.

Since we do not have these additional senses, sensors, and information, we are unable as human beings to compare the two - but we still try to. This leads to more confusion than clarity, and unfortunately to a conversation full of wrong conclusions.

The issue I see here is that we try to understand a complex process by using the rather simple process we believe we already know.

Allow me to give you an example: 

I am a scuba diver, and I often asked myself why a big fish does not eat the colorful small fish right in front of it, even though it is obviously hungry. I learned that the fish that looks colorful to our eyes cannot be recognized by the big one, because the big fish sees into the infrared, and the colors in its eyes are very different from the colors in mine - the small fish is simply not visually detectable to it. In my frame of reference that coloring would get the small fish killed, yet in the fish's world it actually protects him against predators.

You get the picture?!

My prediction is that autonomous cars will be much better drivers once the technology is able to make sense of enough of the information it collects. The challenge we face is the transition phase, where people will judge something they do not really understand.

When that will be, and when the transition phase will be mostly over, remains to be seen. Until that point we will unfortunately experience accidents that we as humans believe we could easily have prevented - and we will claim to be better drivers without even comprehending what happened.

Fatalities that we do not understand in detail create fear, and fear is likely the biggest opponent FSD has right now.


----------



## Bokonon (Apr 13, 2017)

avoigt said:


> My prediction is that autonomous cars will be much better drivers once the technology is able to make sense of enough of the information it collects. The challenge we face is the transition phase, where *people will judge something they do not really understand*.


I think this is the crux of the issue, because the "frame of reference" problem goes both ways.

Imagine those big predator fish looking at a scuba diver, thinking, "Wow, that strange masked creature can see things that I cannot! It is also clearly equipped with technology that my feeble mind cannot possibly understand. Therefore, this creature must be utterly perfect and infallible in its ability to see and perceive all ocean life, and in its ability to avoid all underwater obstacles and predators!"

Now imagine the predator fish sees the scuba diver scouring the ocean floor at such a great depth that there is no light on the visible spectrum. (Let's ignore the bone-crushing pressure down there.) The predator fish sees the scuba diver clumsily bump into thermal vents and other structures that give off an infrared signature, but no visible light. "But I don't understand!" wonders the confused predator fish. "This creature can see things that I cannot and is obviously far more intelligent than I am. Why does it keep crashing into those rock pillars? Can't it navigate a simple passage?!"

I think we as humans have a tendency to simplify abstract, multi-dimensional concepts like "intelligence" and "perception" into a one-dimensional, linear scale, like mass or volume. Furthermore, the logical extension of reasoning along the lines you mentioned ("we are the only drivers, therefore we must be great drivers") is that any technology that is described as being "more intelligent" or "better than a human" at a task at which humans purportedly excel must necessarily be near-perfect at that task. Even if a large data sample conclusively shows that it can perform the task better than a human 95%+ of the time, if it fails at any point where a human would have succeeded, we call into question its capabilities, its reliability, and whether it should exist at all. Our egos are so wrapped up in the notion of humans as the apex species in all functional domains that we are blind to many opportunities to achieve incremental progress and improvement.


----------



## Soda Popinski (Mar 28, 2018)

M3MS said:


> (b) even when something reasonable could inevitably be achieved that was superior to humans and eliminated the driver, it would have an adverse impact on range, as deep learning algorithms consume massive amounts of power, which constitutes a sub-optimal utilization of resources (in this case battery power) given (IMO) better alternatives per below.


I just wanted to address this specific point. Teslas use Nvidia GPUs (a modified Drive PX) for autonomous driving. Nvidia rates the Drive PX2 at 250 watts max. Current Nvidia chips running full bore for crypto mining use about that much power. Say Tesla realizes they need to put in a second Nvidia computer (the Model 3 has space for two, with one slot sitting empty). That's 500 watts. Let's double that to 1 kW to be conservative, to account for secondary chips, cameras, etc.

So one hour of FSD would use 1 kWh of energy, which is about 4 miles of range. Depending on speed, that's a different percentage of range lost. At 60 mph, you lose 4 miles of range over those 60 miles - about a 7% loss. Significant, but manageable. Sadly, my daily commute is 20 miles each way and takes an hour, so the loss there is much bigger: 20%. Good thing I have free charging at work.
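The arithmetic above can be generalized. A quick sketch, assuming the same ~250 Wh/mile consumption implied by the 4-miles-per-kWh figure:

```python
# Range overhead of a constant compute load, assuming ~250 Wh/mile consumption
# (i.e. ~4 miles of range per kWh, per the estimate above).

WH_PER_MILE = 250.0  # assumed Model 3 consumption

def range_loss_pct(compute_watts, avg_speed_mph):
    """Compute power as a percentage of propulsion power - i.e. range lost."""
    drive_watts = WH_PER_MILE * avg_speed_mph  # power spent moving the car
    return 100.0 * compute_watts / drive_watts

print(f"{range_loss_pct(1000, 60):.1f}%")  # 1 kW at 60 mph highway -> ~6.7%
print(f"{range_loss_pct(1000, 20):.1f}%")  # 1 kW at 20 mph commute crawl -> 20.0%
```

At highway speed the compute overhead is diluted by the large propulsion draw; at crawling commute speeds it dominates, which is why slow stop-and-go driving is the worst case for FSD range.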


----------

