Autopilot: Seeing the road in ways that we can't

KarenRei

Top-Contributor
Joined
Jul 27, 2017
Messages
1,647
Location
Reykjavík
Country
#1
So, I've generally been an Autopilot / FSD pessimist. There are just so many edge cases in self-driving - for example, how long do you think it'll be before AP realizes that "lamb on one side of the road, ewe on the other" is more dangerous than two lambs, two ewes, or a lamb and ewe on the same side of the road? We use so much logic when driving in adverse conditions - and where I am, adverse conditions are the name of the game.

However, one thing that I used to think would be a very tough nut to crack, I now have the opposite view: that is, reading the road.

It's not enough to wait until you're slipping to react to it. In bad conditions, a driver needs to see the road ahead and adjust their speed or lane positioning to be prepared for what's coming. Visually, this is an incredibly difficult task, even for humans. And the way "hazards" look can vary tremendously from place to place. But today, something occurred to me:



These are SAR (Synthetic Aperture Radar) images of Saturn's moon Titan. Stop for a second and ask yourself what exactly you're looking at - what do the colours mean? They're not height maps, or optical brightness - they're the "brightness" of radar returns. That brightness is partly due to the material being imaged. But it also depends on the texture of the reflecting surface, at the scale of the beam's wavelength: the shorter the wavelength, the finer the texture being probed; the longer the wavelength, the coarser. Black means "very smooth" (in this case, methane seas). White means "very rough".

Here's a SAR image of Venus:



The squiggly line is Baltis Vallis, the longest riverbed in the solar system (carved by some unknown fluid in the past); the dark colours show its smooth riverbed (relative to the chosen frequency). Various fractures in the terrain around it show up as bright, indicating rough material disturbed by tectonic activity.

These surface texture effects, of course, don't apply just to SAR; they apply to any radar. But stop and think of what it means in terms of road analysis: if properly designed, and with a proper software stack, a car's radar could analyze the surface texture of the road on multiple scales ahead of you. Things like sheer ice, standing water, loose gravel, dust, potholes, etc, should all have characteristic reflections. And even where ambiguity exists, the vehicle could correlate returns along the lines of "I drove over this sort of return recently, and here's how much my wheels slipped", and use that to interpret other such returns in the area.
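The "correlate returns with measured slip" idea can be sketched in a few lines. To be clear, this is purely a hypothetical illustration - the class, the feature choices, and every number in it are invented, and real automotive radar processing would be far more involved:

```python
# Hypothetical sketch: log the radar "signature" of road patches the car has
# already driven over together with the wheel slip it measured there, then
# estimate slip for patches ahead by matching their signatures.
from math import dist

class SlipEstimator:
    def __init__(self):
        self.samples = []  # (feature_vector, measured_slip) pairs

    def record(self, features, measured_slip):
        # Log a patch the car has actually driven over.
        self.samples.append((features, measured_slip))

    def predict(self, features, k=3):
        # Estimate slip from the k logged patches with the closest signatures.
        if not self.samples:
            return None
        nearest = sorted(self.samples, key=lambda s: dist(s[0], features))[:k]
        return sum(slip for _, slip in nearest) / len(nearest)

est = SlipEstimator()
# Invented features: (mean return power, return variance) -> measured slip ratio
est.record((0.90, 0.05), 0.02)  # rough dry asphalt: bright return, good grip
est.record((0.15, 0.01), 0.45)  # smooth dark return: sheet ice, heavy slip
est.record((0.50, 0.20), 0.10)  # loose gravel: in between

print(est.predict((0.20, 0.02), k=1))  # 0.45 - treat this patch like the ice
```

The appealing part is the feedback loop: every metre driven produces a new (signature, slip) pair for free, so the mapping keeps improving without anyone having to label anything.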

In short, the vehicle could have a lot more - not less - information about how slippery the road is going to be ahead of it than a human driver does, and react accordingly in advance.

There's still an awful lot of logic to incorporate - e.g. on a country road you may want to outright drive down the middle in adverse conditions. When there's a cliff off to one side of the road with no guardrail you'll want to be more cautious than when the road is in the middle of a field. Etc. But in general, I see a lot of reason to be optimistic about a self-driving vehicle's potential ability to "read the road" in terms of reading radar returns.

The other issue I'd been thinking about is "standing water". It's a very serious issue. You don't want a self-driving car happily driving into deep water, while you also don't want it slamming on the brakes on a highway due to a puddle. As humans, we use complicated means to try to assess how deep water is - and still sometimes get it wrong. We look at how much the land is sloping on all sides of the puddle and mentally approximate where it might bottom out. We look at how waves move on the surface. We pay attention to other vehicles that might have tried to go through within sight of us. All sorts of things.

Most radar sensors won't be of use here. What's a self-driving car to do?

It then hit me, though... self-driving vehicles are data collectors - including road height at each point they drive through. Properly implemented, assuming people have driven that road before it became flooded, the topography of that road is a known factor. The vehicle needs only assess the current water height, and it should know how deep the water is at the deepest point.
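As a toy illustration of that depth calculation - the function and all the numbers are invented, and a real system would still need to locate the waterline itself:

```python
# Minimal sketch of the flooded-road idea: the road's elevation profile is
# already known from earlier traversals, so once the elevation of the water
# surface is observed, the depth at the deepest submerged point follows.

def max_water_depth(road_profile, water_level):
    """road_profile: (distance_m, elevation_m) samples from prior drives.
    water_level: observed elevation of the water surface, same datum."""
    lowest = min(elevation for _, elevation in road_profile)
    return max(0.0, water_level - lowest)  # 0.0 means the road is dry

# Elevation samples through a dip in the road (metres, invented values)
profile = [(0, 10.2), (5, 10.0), (10, 9.4), (15, 9.7), (20, 10.1)]
print(max_water_depth(profile, water_level=9.9))  # ~0.5 m at the bottom of the dip
print(max_water_depth(profile, water_level=9.0))  # 0.0 - water is below the road
```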

Some factors aren't as easy, of course. There's no way to assess road damage or submerged obstacles. Assessing currents requires nontrivial visual or radar analysis. It's of no use in places that people don't visit frequently or that change frequently, such as unbridged river crossings. Etc. But it should be more than sufficient for the general case - and more to the point, do the job much better than people. Where the vehicle feels it doesn't have enough information to make a yes-or-no decision, it can stop, tell the user what it knows, and ask them to make the call.

So don't get me wrong - I'm still very much an FSD pessimist, and expect much longer timelines / more problems than most people. But in terms of these aspects - road condition and flooding assessment - I'm now convinced that there are very good, practical options available for them.
 

3V Pilot

Top-Contributor
Joined
Sep 15, 2017
Messages
1,246
Location
Oro Valley, AZ
Tesla Owner
Model 3
Country
#2
So, the one thing that really hit me after reading your post was this... is everyone in Iceland a genius because you spend so much time reading books since, well, it is called ICEland after all????:)....LOL, just kidding, but seriously, those are all great points. One thing I wonder about, though, is that the radar has a very low "cross section" of the road because it's mounted low in the front bumper. It won't have the kind of detail you see looking straight down at a planet. I also doubt the quality of the radar units currently installed for TACC would have the kind of definition needed, even if the cross-section limitation weren't there. Maybe in the future as technology advances, but I'd be very surprised if today's automotive radars were capable. But then again, I'm just taking a guess - I don't live in Iceland, so I probably don't know what I'm talking about:D
 

Audrey

SR3NTY
TOO Supporting Member
Joined
Aug 2, 2017
Messages
190
Location
Beautiful Pacific Northwest
Tesla Owner
Model 3
Country
#3
It won't have the kind of detail you see looking straight down at a planet.
Iceland is a misnomer, but that's beside the point. You're incorrect that the car can't see the world from a top-down view. It may not have live data of everywhere on the planet on the fly, but the car can use satellite image data for the kinds of assessments @KarenRei described. She's spot on: human sight is limited, but computer sight does exceed ours - and will continue to improve. Most importantly, computers do not have bias, which is an advantage in most situations if the programming (the lamb-or-ewe scenario) is solid.
 

SoFlaModel3

@Teslatunity
Moderator
TOO Supporting Member
Joined
Apr 15, 2017
Messages
9,788
Location
Florida
Tesla Owner
Model 3
Country
#4
I am an FSD pessimist. I didn’t spend the extra $3,000 and I don’t expect it to be actually available until my next car purchase or beyond.

I am very big on the added safety of EAP over me behind the wheel. The car sees what I can’t (and that’s the car in front of the car in front of me). That’s huge! That, mixed with me remaining alert and ready to take over, is a great combination.

My driving stress is vastly reduced and I get where I’m going in a much safer fashion!
 

4701

Guest
#5
Narrow AI must learn the puddle only once, and then make predictions according to rainfall, snow melt, and other variables for any specific GPS-saved location. I know every inch of the road surface near my house. You know every pothole around your house. Now Tesla's AI must know every inch of everywhere it has ever driven to be as good as me around my house. I know where the pothole is, even if it's filled with water.

I agree, Level 5 autonomy (FSD) will not be possible with AP2.0 hardware. Image resolution is insufficient. The human eye has a narrow field of extreme resolution. I don't know why Tesla has not implemented a camera that acts like a human eye - one with an extremely narrow field of vision that can change direction at a very rapid rate (human-eye speed or faster). In other words, a camera that can read this forum post at a distance of at least 1.5 meters.

Elon has not taken enough reasonable steps to make FSD work soon. For example, AFAIK, Tesla has never gathered data from the vehicle's accelerometer. Speed bumps, potholes, and other defects must be learned - even for sufficient Level 4 autonomy.
 

mservice

Active Member
Joined
Jul 29, 2017
Messages
115
Location
falls church, va
Tesla Owner
Model 3
Country
#6
I agree that the self-driving car isn’t around the corner, and I doubt that we will truly see it for years. But in my opinion it won’t necessarily be because of the tech. The one constant that is not discussed much, and in my opinion the biggest issue, is people.

Musk and Tesla have been telling people that Autopilot does not make the car autonomous. But this seems to fall on deaf ears; case in point, the unfortunate loss of life in the California Model X crash. Tesla has now reported from telemetry that the driver was driving above the speed limit and did not have his hands on the steering wheel after a number of warnings.

Others have tried to override the cars so as not to be bothered by the warnings, such as by placing oranges or tennis balls in the spokes of the steering wheel to simulate their hands being on the wheel.

If and when totally autonomous cars do become more common, the one thing that will need to be out of the equation is people. Until then we will see continued issues. Every car driving on its own will be dogged by someone doing something stupid in a human-driven car that a computer won’t understand.
 

DrPhyzx

Active Member
Joined
Nov 20, 2017
Messages
49
Location
Menlo Park, CA
Tesla Owner
No
Country
#7
I think you underestimate human vision: at least it operates in wavelengths that can be used to assess surface texture. Radar does not - its resolution is useless for this. Lidar... someday.
 

c2c

Active Member
TOO Supporting Member
Joined
Sep 19, 2017
Messages
189
Location
Seattle, WA
Tesla Owner
Model 3
Country
#8
I am reminded of Elon's mention that self-driving doesn't have to be perfect in order to save tens of thousands of lives, plus many times that number of serious injuries. We shouldn't wait for perfect.

But I am encouraged by looking beyond a single car. Vehicle-to-vehicle communications will happen "soon." The risks and obstacles noted above are usually presented to dozens or hundreds of vehicles as the problems develop. Ice forms as temperatures lower; standing water grows over time. So long as the changing situation is updated to a networked high-definition digital map, many problems can be avoided.
I understand that today's Teslas do not read speed limit signs for their own use. But they could upload that info to a digital map. As temperatures drop or traction gets squirrelly, let the map know. If we have a mix of Teslas and lidar cars updating a map, things should get safer.
Thus, things could change faster than we might guess.
I think Autopilot 2.5 is a step in the right direction. I'm still a couple months from configuring my 3, but I'm inclined to buy the full package. I'm not getting any younger, and any help I can get for my reflexes is likely worth it. But I am still the pilot in command.
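The shared-map idea above could be as simple as hazard reports keyed to a coarse GPS grid with an expiry time. Everything in this sketch - the cell size, the expiry policy, the API - is invented for illustration, not anything Tesla has described:

```python
# Toy sketch of a shared hazard map: one car reports a hazard at a coarse
# GPS grid cell; following cars query cells along their route. Reports
# expire so the map doesn't keep warning about last week's ice.
import time

CELL = 0.001  # grid cell size in degrees (~100 m), purely illustrative

def cell_key(lat, lon):
    return (round(lat / CELL), round(lon / CELL))

class HazardMap:
    def __init__(self, ttl_s=3600):
        self.ttl_s = ttl_s   # how long to trust a report, in seconds
        self.reports = {}    # cell -> (hazard, reported_at)

    def report(self, lat, lon, hazard, now=None):
        t = time.time() if now is None else now
        self.reports[cell_key(lat, lon)] = (hazard, t)

    def query(self, lat, lon, now=None):
        t = time.time() if now is None else now
        entry = self.reports.get(cell_key(lat, lon))
        if entry and t - entry[1] < self.ttl_s:
            return entry[0]
        return None  # nothing known here, or the report is too old to trust

hmap = HazardMap()
hmap.report(64.1466, -21.9428, "ice")  # one car hits ice outside Reykjavík
print(hmap.query(64.1467, -21.9426))   # a following car nearby is warned: ice
```

The time-to-live matters as much as the reports themselves: ice and standing water are transient, so a map that never forgets would soon be crying wolf everywhere.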
 

Gorillapaws

Active Member
Joined
Jul 30, 2017
Messages
47
Location
richmond, va
Country
#9
The self-driving component I have the biggest issue with is the inability to reason about and anticipate situations and drive defensively. For example, if I'm driving down the road in the middle lane and I see a guy in a tricked-out sports car racing down the onramp, I can anticipate that he's probably going to cut off that truck in the right lane, which might then need to come into my lane abruptly. I'll change lanes or increase my following distance in anticipation. Full self-driving likely won't be able to anticipate/interpret these types of scenarios for a very long time. I do think it'll be able to react faster than a human, but I don't see it avoiding scenarios the way a good human driver might. That said, there are plenty of terrible human drivers out there...
 

John

Tech Founder
TOO Supporting Member
Joined
Apr 15, 2016
Messages
1,634
Location
California
Tesla Owner
Model 3
Country
#10
I guess I'm an FSD optimist.

1. I think people are in general not that great at driving, all things considered
2. I think machine learning can quickly outstrip human capabilities, like it has so far in other things

Say what you want about what a human CAN SOMETIMES do (like assess the family structure of a sheep herd on the fly), but in reality most people see two animals, freak out, slam on the brakes too hard, spin around and take out both animals like they were bowling to pick up a split spare.

Seriously—I think in a dozen years people will laugh about how ****** and distracted and confused and slow reacting human drivers used to be. And no more "confronting grandma to take away her keys because honestly it's a miracle she hasn't killed someone already."

And just like many people now think "programming" is typing recipes into Facebook, people in the future will think "driving" is telling your car where to take you.
 

mservice

Active Member
Joined
Jul 29, 2017
Messages
115
Location
falls church, va
Tesla Owner
Model 3
Country
#11
I think you underestimate human vision: at least it operates in wavelengths that can be used to assess surface texture. Radar does not - its resolution is useless for this. Lidar... someday.
Nope, I don’t underestimate human vision. I played baseball, not professionally, for a number of years and have followed the game for decades; the human eye needs to determine within 50 milliseconds what a pitched ball will do. But my point isn’t how well the human eye can determine things, it is how humans do dumb things. Computers use logic; humans do not, regardless of their eyesight. Not paying attention to the road and the people around you increases the possibility of disaster. As I pointed out, Tesla has been telling drivers to keep their hands close to the steering wheel and remain vigilant while driving on Autopilot, but do they?
 

m3_4_wifey

Active Member
Joined
Jul 26, 2016
Messages
96
Location
Vermont
Tesla Owner
Model 3
Country
#12
There's a lot of visual and radar information for the sensors to take in. For puddles, animals, or other moving vehicles, additional, more detailed scans are going to occur - just like your eyes would do. I'm curious how many of the yearly accidents occur for the simple reason that the person looked away at the wrong time and their reaction time was not fast enough. Is it higher than 50% of the time? Video cameras are always looking, they never blink, their reaction time should be faster than a human's, and their anticipation should be just as good as a human's given the proper programming. I would be very curious what scenarios Tesla and other self-driving companies are trying in their video-game-like simulations on their latest FSD software.

I would hope that when FSD is enabled, you tell the car your destination and how urgent it is for you to get there quickly. It will then tell you that it is not possible to make your meeting on time, or do the best it can if you are willing to take the chance that you will have to pay for a speeding ticket. I would hope that the car would be allowed to speed some, but choose where to speed and where to be cautious (speed more on the highway rather than a neighborhood road). The car can make statistical judgement calls, including about the death of humans or animals, like "I can't speed on this section of road because this is the season when deer often cross." This environmental information could be a mixture of road signs and statistical recent or seasonal history from your car or other cars.

FSD cars will get in accidents. It will be interesting to see whether liability can be a clear-cut case if someone gets killed in or by an FSD vehicle. There will be so much black-box data about an accident that you would hope no accidents need to be processed through a courtroom, but I'm sure humans will want to muddy the waters for a long time to get a piece of the litigation pie.
 

Audrey

SR3NTY
TOO Supporting Member
Joined
Aug 2, 2017
Messages
190
Location
Beautiful Pacific Northwest
Tesla Owner
Model 3
Country
#13
The self driving component I have the biggest issue with is the inability to reason/anticipate situations and drive defensively. For example, if I'm driving down the road in the middle lane and I see a guy in a tricked-out sports car racing down the onramp, I can anticipate that he's probably going to cut off that truck in the right lane who might need to come into my lane abruptly. I'll change lanes or increase the distance in anticipation. Full self driving won't be able to anticipate/interpret these types of scenarios likely for a very long time. I do think it'll be able to react faster than a human, but I don't see it avoiding scenarios like a good human driver might. That said, there are plenty of terrible human drivers out there...
I think your scenario is a perfect example of where FSD excels. The computer lacks the emotions or moods behind such antics (like those of the aggressive driver in the scenario). I think mixing FSD with human drivers on the road is incredibly dangerous and will not last long. Once a critical mass of FSD vehicle options exists, I believe regulations will change so that highways and other arterial roadways do not allow any human driving.
 

Soda Popinski

Active Member
Joined
Mar 28, 2018
Messages
25
Location
Winnetka
Tesla Owner
Reservation
Country
#15
Any speculation on how well Tesla's FSD research is going compared to other systems, such as Waymo? Or is the consensus that the current Autopilot is Tesla's "state of the art"?

I would hope there are further algorithms being developed to detect stopped obstacles that we haven't yet seen implemented in AP, not to mention the obvious surface street specific things, like recognizing stop lights and signage.
 

John

Tech Founder
TOO Supporting Member
Joined
Apr 15, 2016
Messages
1,634
Location
California
Tesla Owner
Model 3
Country
#16
Word is that the recent jump in ability is the first taste of the capabilities of the new framework that Andrej Karpathy installed when he came over from OpenAI to lead the effort, and that a broader set of features is going through beta and coming soon. I don't know what they are, but I'd guess sign reading, more sophisticated lane changes, maybe on/off-ramp driving. Stop lights and stop signs would be huge. Differentiation of vehicle types on the screen would be reassuring, too. It's already cool seeing two vehicles ahead of you, and how well it tracks them changing lanes ahead of you.
 

Gorillapaws

Active Member
Joined
Jul 30, 2017
Messages
47
Location
richmond, va
Country
#17
I think your scenario is a perfect example of when FSD excels.
Well, yes, if we replaced all bad/aggressive/dangerous drivers with FSD, that of course would be an improvement. My point is that good, alert human drivers are capable of certain types of situational awareness that are a long way from being solved in AI. Making inferences such as "that driver looks drunk," "he's texting and driving," or "that couch doesn't look well secured on the back of that pickup," and then making appropriate decisions, likely won't happen in AI for a very long time.

I'm really looking forward to EAP to allow me to focus my attention on the road/traffic situation and less on trying to keep my speed and lane position correct. That said, I'm less optimistic about FSD in the medium-term. I do think EAP will get to be very good with the current hardware and will likely react to bad situations once they happen faster/better than a human, but I still believe that I'll be better than the AI at avoiding those situations entirely through defensive driving.

I say all of this because I'm likely an outlier in terms of driver safety. I've never caused an accident and I have 0 moving violations in my 20+ years on the road. I'm probably much more cautious than the typical driver. By definition half of all drivers are below average (and yet I suspect a good number of them probably believe themselves to be much better than they are). I certainly appreciate the logic from the other perspective.
 

Audrey

SR3NTY
TOO Supporting Member
Joined
Aug 2, 2017
Messages
190
Location
Beautiful Pacific Northwest
Tesla Owner
Model 3
Country
#18
Any speculation on how well Tesla's FSD research is going compared to other systems, such as Waymo? Or is the consensus the current AutoPilot is Tesla's "state of the art"?
It depends on who you ask. A report out in January 2018 lambasted Tesla's autonomous progress and system thus far. However, Elon defended radar and Tesla's direction for self-driving rather articulately during a call in February.
 

John

Tech Founder
TOO Supporting Member
Joined
Apr 15, 2016
Messages
1,634
Location
California
Tesla Owner
Model 3
Country
#19
Well, yes, if we replaced all bad/aggressive/dangerous drivers with FSD, that of course would be an improvement. My point is that good, alert human drivers are capable of certain types of situational awareness that are a long way from being solved in AI. Making inferences such as "that driver looks drunk," "he's texting and driving," or "that couch doesn't look well secured on the back of that pickup," and then making appropriate decisions, likely won't happen in AI for a very long time.

I'm really looking forward to EAP to allow me to focus my attention on the road/traffic situation and less on trying to keep my speed and lane position correct. That said, I'm less optimistic about FSD in the medium-term. I do think EAP will get to be very good with the current hardware and will likely react to bad situations once they happen faster/better than a human, but I still believe that I'll be better than the AI at avoiding those situations entirely through defensive driving.

I say all of this because I'm likely an outlier in terms of driver safety. I've never caused an accident and I have 0 moving violations in my 20+ years on the road. I'm probably much more cautious than the typical driver. By definition half of all drivers are below average (and yet I suspect a good number of them probably believe themselves to be much better than they are). I certainly appreciate the logic from the other perspective.
You make good points, but as a safe driver you can appreciate how nice it would be if there were a lot less distracted, panicky, drunk driving going on. You know that thing where someone suddenly realizes they are about to miss their exit and blindly darts across lanes to get there? Or isn't paying constant attention and rear-ends you? It's easy to see how a self-driving car could improve on those.

I guess it's natural to think of cases where you might be better than autopilot. But it's actually much easier to think of cases where OTHER people might be much worse than Autopilot.

Perhaps this is a little like inoculations; even if you don't think you need them, we're all better off if we all have them.
 

John

Tech Founder
TOO Supporting Member
Joined
Apr 15, 2016
Messages
1,634
Location
California
Tesla Owner
Model 3
Country
#20
Also, I can't wait for the day that those traffic slow downs caused by "compression waves" of people accelerating and braking come to an end.