# Tractor trailer invisible



## bwilson4web (Mar 4, 2019)

So I did some daylight and night tests with my Model 3, version 8.5, and realized that forward object detection does not use radar:





- At no time does a 'ghost image' show up in front of the car; sides, yes, but not in front.
- Dynamic cruise control was defeated by the need for parallel lines toward the test object.
- In ordinary traffic, a crossing car or truck will trigger a hard brake. Why those are detected while the tractor trailer remains invisible is a mystery.

Bob Wilson


----------



## Mr. Spacely (Feb 28, 2019)

Have you sent your video to Tesla?


----------



## garsh (Apr 4, 2016)

The car is generally not good at detecting stationary things. If the trailer is moving with respect to the rest of the background, then it will be more likely to detect it.

This is also why there have been two fatalities with people not paying attention and hitting trailers broadside.


----------



## bwilson4web (Mar 4, 2019)

FYI, I'm testing to find the limits of Autopilot and agree stationary objects like a wall or store front appear to be a hard problem. IMHO, a working radar unit should have detected both.

Bob Wilson


----------



## garsh (Apr 4, 2016)

bwilson4web said:


> IMHO, a working radar unit should have detected both.


It's not that the radar is faulty. It's just that radar has trouble distinguishing a big trailer from a Coke can that happens to be closer. So you generally don't rely on it alone, or you get even more phantom braking.
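The Coke-can point maps directly onto the radar range equation: received power falls off with the fourth power of range, so a small, close object can return as much energy as a huge, distant one. A toy sketch in Python (the radar cross-section figures are rough illustrative values, not measured data):

```python
def relative_return(rcs_m2, range_m):
    """Relative received power per the radar range equation: P_r ∝ RCS / R^4
    (constant factors dropped)."""
    return rcs_m2 / range_m ** 4

# Illustrative (not measured) radar cross sections:
can = relative_return(0.01, 5.0)        # ~0.01 m^2 soda can, 5 m away
trailer = relative_return(100.0, 50.0)  # ~100 m^2 trailer broadside, 50 m away
```

With these numbers the two returns come out essentially identical, which is why raw return strength alone can't do the disambiguation and you lean on Doppler and vision instead.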


----------



## RonAz (Oct 16, 2018)

The walls of this trailer are aluminum, which the radar would see if its beam angle is tall enough. The fairing underneath is plastic, which the radar is not able to see. Plastic does not reflect radar waves. That's the downside of being in a fiberglass boat in the fog.
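The aluminum-vs-plastic point can be roughly quantified with normal-incidence reflection off a lossless dielectric. A quick sketch (the permittivity value for the plastic skirt is an assumed, typical figure):

```python
def power_reflectance(rel_permittivity):
    """Normal-incidence power reflectance of a thick, lossless dielectric in
    air: n = sqrt(eps_r), Gamma = (1 - n) / (1 + n), R = Gamma**2."""
    n = rel_permittivity ** 0.5
    gamma = (1 - n) / (1 + n)
    return gamma ** 2

plastic_skirt = power_reflectance(2.5)   # eps_r ~2.5 assumed for the plastic
# a metal wall reflects essentially all incident power (R ~ 1)
```

Only around 5% of the power comes back from the plastic, versus essentially 100% from an aluminum wall, so a skirt alone gives a weak return rather than none at all.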


----------



## bwilson4web (Mar 4, 2019)

Thanks!


RonAz said:


> The walls of this trailer are aluminum, which the radar would see if its beam angle is tall enough. The fairing underneath is plastic, which the radar is not able to see. Plastic does not reflect radar waves. That's the downside of being in a fiberglass boat in the fog.


I had noticed some skirted trailers were "invisible" and others easily detected. I'll have to get out and inspect the material to find out which ones were in my ad hoc tests.

Bob Wilson


----------



## JWardell (May 9, 2016)

bwilson4web said:


> So I did some daylight and night tests with my Model 3, version 8.5, and realized that forward object detection does not use radar:
> 
> 
> 
> ...


There is no malfunction here; the system is programmed to ignore stationary objects off road.
You need to perform the same tests with a trailer that is moving, or at least one sitting in the middle of a lane.

I highly suggest you follow greentheonly, who is constantly digging into this and has access to the code.


__ https://twitter.com/i/web/status/1105996443658129408

__ https://twitter.com/i/web/status/1118330606146998272


----------



## SR22pilot (Aug 16, 2018)

bwilson4web said:


> FYI, I'm testing to find the limits of Autopilot and agree stationary objects like a wall or store front appear to be a hard problem. IMHO, a working radar unit should have detected both.
> 
> Bob Wilson


All have the same Doppler shift. How do you separate a wall from a rapidly rising road? Expensive radars can do it, but I think it is problematic for inexpensive units. You don't want to brake for an overhead sign either. More important than radar, the video shows that the camera object recognition has a long way to go.
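The Doppler point can be made concrete: every stationary object (road surface, wall, overhead sign, broadside trailer) closes at exactly the ego speed, so they all produce the same Doppler shift. A minimal sketch, assuming a 77 GHz automotive carrier:

```python
def doppler_hz(closing_speed_mps, carrier_hz=77e9, c=3.0e8):
    """Two-way Doppler shift for a target closing at the given speed."""
    return 2 * closing_speed_mps * carrier_hz / c

ego = 30.0                 # m/s (~67 mph)
road = doppler_hz(ego)     # stationary road clutter closes at ego speed
wall = doppler_hz(ego)     # so does a wall, an overhead sign, or a crossing trailer
# road == wall: in Doppler alone they are the same target
```

At 30 m/s everything stationary lands in the same ~15.4 kHz Doppler bin; only angular resolution can separate a wall ahead from the road rising toward it.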


----------



## Magnets! (Jan 10, 2019)

Robotaxis next year ;-)


----------



## bwilson4web (Mar 4, 2019)

Then perhaps the problem is the system does not calculate the clearance above the road?

Bob Wilson


----------



## SR22pilot (Aug 16, 2018)

bwilson4web said:


> Then perhaps the problem is the system does not calculate the clearance above the road?
> 
> Bob Wilson


There are several things going on. The first is that Tesla still relies heavily on RADAR. Next, it is cheap RADAR. Consider how it looks to RADAR if you are taking an off ramp onto an overpass, i.e. one that is rapidly rising. The RADAR sees something in front of it with the same Doppler shift as the road it is on. The same goes for an overhead sign. Now consider a stopped car or a semi going crossways (L to R or R to L). They have the same Doppler shift. Now if you had a very expensive RADAR, it would paint an image, and image recognition could detect that the object was vertical and not sloped. It would "see" that the truck was in the way. All of this takes fine angular resolution combined with processing power to "see" precisely what is where. I don't think the Tesla RADAR system has this capability.

Now consider what happens when following a car. The Doppler of the car is different. As the car slows to a stop, its Doppler shift moves closer to the road's, but this can be easily interpreted as a car coming to a stop. When the car in front starts up again, a Doppler shift apart from the road is detected and your car moves forward. No problem. Now consider that you are following a car at 60 and he swerves to an adjacent lane because a car in front of him is stopped. The stopped car just looks like road-surface reflection that increased when the car in front changed lanes. IF (big if) the RADAR had fine angular resolution and lots of processing power, it would paint a solid vertical object in front and your car would stop. That doesn't happen.

This issue is huge when you are on a road with a traffic light. It is common to be following a car that moves into a turn lane and exposes a car already stopped at the light. All adaptive cruise control systems I have seen have a big problem with this scenario. Sometimes they work, but sometimes they don't.


----------



## garsh (Apr 4, 2016)

SR22pilot said:


> This issue is huge when you are on a road with a traffic light. It is common to be following a car that moves into a turn lane and exposes a car already stopped at the light. All adaptive cruise control systems I have seen have a big problem with this scenario. Sometimes they work, but sometimes they don't.


Yep, exactly. It should be possible to use the cameras as well to better tell the difference between a stopped car and a clear road ahead, but it sounds like Tesla isn't quite there yet. While Autopilot doesn't often encounter this scenario, FSD is going to have to deal with this all the time, so Tesla will have to get this working well before new FSD features begin to roll out.


----------



## JasonF (Oct 26, 2018)

I can't prove it, but from some software development experience, it looks to me like the computer vision part of the equation is why AP isn't detecting trucks properly.

Imagine a truck moving very slowly across a road. If you start blinking slowly, looking at the truck - reducing the number of visual "frames" you get per second - it looks like the truck is standing still.

Of course, _you_ have an understanding that it's a truck, and that it has the capability to move - or at the very least, that it has wheels, and it can move; but the computer vision software isn't capable of making that leap of logic yet.

The point is, you need _way_ more visual samples to see movement in something that is moving slowly. In computer vision, more samples means higher frame rate, which means more expensive cameras and more processing power.

Tesla is having to balance the expense of higher-end equipment with the capabilities of computer vision software. That means breaking the traditional rules of what's required (i.e. using a substitute for higher frame rate). Any time you do that, it takes extensive trial-and-error to get it working right, and maybe it never will work right. If it were me working on it, I would try to keep a longer history on forward video frames, and compare new frames to ones that are a few seconds old. That would take a little more processing power, but it definitely would catch objects that are moving very slowly.
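The longer-history idea above can be shown with arithmetic alone: displacement between consecutive frames can sit below pixel noise while displacement against a frame a few seconds old is unmistakable. A sketch with assumed numbers (frame rate, noise floor, and object speed are all illustrative):

```python
FPS = 30                     # assumed camera frame rate
PIXEL_NOISE = 1.0            # displacements below ~1 px are lost in noise (assumed)
speed_px_per_frame = 0.2     # a truck creeping across the frame

# Displacement between consecutive frames vs. against a frame 3 seconds old
consecutive = speed_px_per_frame
long_baseline = speed_px_per_frame * FPS * 3

moving_short = consecutive > PIXEL_NOISE     # looks stationary frame-to-frame
moving_long = long_baseline > PIXEL_NOISE    # unmistakable over 3 s
```

The cost is buffering roughly 90 frames of comparison data, which is presumably the extra processing and memory the approach would take.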


----------



## garsh (Apr 4, 2016)

JasonF said:


> The point is, you need _way_ more visual samples to see movement in something that is moving slowly.


IIRC, Tesla originally made decisions based on just a single frame from all the cameras.
I think that was later increased to two frames per camera. But now they've maxed out the v2.5 hardware's ability to process that much data.
I wish I could find a reference for where I had read that, but I was unable to do so just now. Apologies if I'm mis-remembering.

Now, even though it only processes one (or two) frames at a time, it can do so several thousand times per second. I was just surprised to find out that it was classifying things in autopilot and making decisions about roads based on static images rather than considering how things shift over time.


----------



## JasonF (Oct 26, 2018)

garsh said:


> Now, even though it only processes one (or two) frames at a time, it can do so several thousand times per second. I was just surprised to find out that it was classifying things in autopilot and making decisions about roads based on static images rather than considering how things shift over time.


From my limited experience with OpenCV, it pretty much depends on still images to recognize things. The trick is making it process still images fast enough so it can keep up with real world motion.


----------



## JWardell (May 9, 2016)

SR22pilot said:


> There are several things going on. The first is that Tesla still relies heavily on RADAR. Next, it is cheap RADAR. Consider how it looks to RADAR if you are taking an off ramp onto an overpass, i.e. one that is rapidly rising. The RADAR sees something in front of it with the same Doppler shift as the road it is on. The same goes for an overhead sign. Now consider a stopped car or a semi going crossways (L to R or R to L). They have the same Doppler shift. Now if you had a very expensive RADAR, it would paint an image, and image recognition could detect that the object was vertical and not sloped. It would "see" that the truck was in the way. All of this takes fine angular resolution combined with processing power to "see" precisely what is where. I don't think the Tesla RADAR system has this capability.
> 
> Now consider what happens when following a car. The Doppler of the car is different. As the car slows to a stop, its Doppler shift moves closer to the road's, but this can be easily interpreted as a car coming to a stop. When the car in front starts up again, a Doppler shift apart from the road is detected and your car moves forward. No problem. Now consider that you are following a car at 60 and he swerves to an adjacent lane because a car in front of him is stopped. The stopped car just looks like road-surface reflection that increased when the car in front changed lanes. IF (big if) the RADAR had fine angular resolution and lots of processing power, it would paint a solid vertical object in front and your car would stop. That doesn't happen.
> 
> This issue is huge when you are on a road with a traffic light. It is common to be following a car that moves into a turn lane and exposes a car already stopped at the light. All adaptive cruise control systems I have seen have a big problem with this scenario. Sometimes they work, but sometimes they don't.


Tesla's radar is NOT cheap simple radar. It has 2D X-Y location capabilities. It does determine the position horizontally and vertically, relative speed, and return signal strength of each object. Tesla gave some insight a few years ago that the data from the radar manufacturer didn't give enough information so they started looking at raw data instead. It may be that Tesla still has some more smarts to implement here. But it is already much smarter than radar in most other vehicles.

I like @JasonF 's thinking. We know Tesla combines the radar info with visual info to make a final object determination. Radar might just return center of mass on the truck. That mass being large white and no edges to detect, while not moving with relation to the ground in one direction (but moving across), might mean that visual determines that it is not a moving object, radar shows it approaching at the same rate as the surrounding road, and therefore it is ignored.
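The scenario above, radar lumping the trailer with stationary clutter while vision sees a featureless white surface, is essentially a fusion-confidence problem. A toy sketch (the weights, confidences, and threshold are invented for illustration; Tesla's actual fusion logic is not public):

```python
BRAKE_THRESHOLD = 0.6   # invented decision threshold

def fused_confidence(radar_conf, vision_conf, w_radar=0.5):
    """Toy late fusion: weighted average of per-sensor confidence that an
    obstacle occupies the driving path. Weights are invented for illustration."""
    return w_radar * radar_conf + (1 - w_radar) * vision_conf

# Broadside white trailer: radar lumps it with stationary clutter (low conf),
# vision sees a large featureless surface with no edge motion (low conf).
trailer = fused_confidence(0.2, 0.3)    # below threshold -> ignored
lead_car = fused_confidence(0.9, 0.8)   # above threshold -> braking target
```

When both sensors independently under-report the same object, no reasonable weighting rescues it, which matches the "therefore it is ignored" outcome.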


----------



## SR22pilot (Aug 16, 2018)

garsh said:


> Yep, exactly. It should be possible to use the cameras as well to better tell the difference between a stopped car and a clear road ahead, but it sounds like Tesla isn't quite there yet. While Autopilot doesn't often encounter this scenario, FSD is going to have to deal with this all the time, so Tesla will have to get this working well before new FSD features begin to roll out.


There's a lot FSD will have to deal with that I doubt it is close to doing. For example, I went to my daughter's graduation last night. As I exited the event parking lot and had gone a couple of blocks, the nav system had me turning right but the police were forcing all traffic to turn left. FSD will have to be able to recognize police rerouting traffic.


----------



## SR22pilot (Aug 16, 2018)

JWardell said:


> Tesla's radar is NOT cheap simple radar. It has 2D X-Y location capabilities. It does determine the position horizontally and vertically, relative speed, and return signal strength of each object. Tesla gave some insight a few years ago that the data from the radar manufacturer didn't give enough information so they started looking at raw data instead. It may be that Tesla still has some more smarts to implement here. But it is already much smarter than radar in most other vehicles.
> 
> I like @JasonF 's thinking. We know Tesla combines the radar info with visual info to make a final object determination. Radar might just return center of mass on the truck. That mass being large white and no edges to detect, while not moving with relation to the ground in one direction (but moving across), might mean that visual determines that it is not a moving object, radar shows it approaching at the same rate as the surrounding road, and therefore it is ignored.


So what is the angular resolution of the RADAR? That is usually one advantage of LIDAR i.e. the ability to paint a detailed picture of the environment.


----------



## JasonF (Oct 26, 2018)

From a software design point of view, I'd bet that it works like this:

Computer Vision (via rapid still images) is _predictive_. That's what tells the car where it needs to go in order to follow the road, obey the signs, and watch out for upcoming visually clear obstructions. The car can technically drive just based on CV telling it where to go, if the road were flat and there were no other moving objects in the path whatsoever.

Radar is _reactive_. It helps the car react to unpredictable and non-visible changes in the environment by seeing what the cameras can't, by adding dimension to what's visible. It fills in the gaps where CV fails, mostly.

Lidar is both _predictive_ and _reactive_. It's sort of both CV and radar in one. Its faults are that it's expensive, and it doesn't handle sudden change very well because you can't increase the frames processed per second; that's limited by how fast the infrared lasers can physically scan the area.


----------



## MelindaV (Apr 2, 2016)

JasonF said:


> Lidar is both _predictive_ and _reactive_. It's sort of both CV and radar in one.


As long as there isn't fog, heavy rain, snow, or other things in the air that make it unusable.


----------



## bwilson4web (Mar 4, 2019)

SR22pilot said:


> There are several things going on. The first is that Tesla still relies heavily on RADAR. Next, it is cheap RADAR. Consider how it looks to RADAR if you are taking an off ramp onto an overpass, i.e. one that is rapidly rising. The RADAR sees something in front of it with the same Doppler shift as the road it is on. The same goes for an overhead sign. Now consider a stopped car or a semi going crossways (L to R or R to L). They have the same Doppler shift. Now if you had a very expensive RADAR, it would paint an image, and image recognition could detect that the object was vertical and not sloped. It would "see" that the truck was in the way. All of this takes fine angular resolution combined with processing power to "see" precisely what is where. I don't think the Tesla RADAR system has this capability.


I'm pretty sure it is an ARS4-B by Continental because of an eBay listing and front image of a Model 3 without the bumper.

My testing shows trailers without skirts are invisible while those with skirts are detected. Also, the rear-wheel assembly is reliably detected. This suggests more is going on. I'm still researching the usual engineering sources.

Ok, now I've got a clue. The current radars have an array of parallel, vertically oriented transmitter and receiver antennas. These generate a 'fan'-shaped field, narrow in width but tall. This provides high accuracy in left-to-right (azimuth), relative return strength, relative velocity, and distance, but almost no vertical (elevation) information. One paper mentioned the problem of metal construction plates giving a very strong return, leading to a false indication of a stopped object. But back to the specs of the ARS4-B:

- Distance accuracy: +/- 0.13 m
- Speed accuracy: 0.1 km/h
- Azimuth accuracy: +/- 0.2 degrees
- Elevation field of view: 18 degrees

This suggests two ARS4-B units mounted adjacent to each other but rotated 90 degrees would be able to map the field in front of the car more accurately.
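Those specs make the asymmetry easy to quantify: ±0.2° of azimuth is sub-half-meter laterally at 100 m, while an 18° unscanned elevation fan is tens of meters tall at the same range. A quick calculation:

```python
import math

def span_m(range_m, angle_deg):
    """Linear extent subtended at a given range by a (small) angle."""
    return range_m * math.tan(math.radians(angle_deg))

R = 100.0                              # meters ahead
lateral_err = span_m(R, 0.2)           # ~0.35 m: azimuth is well resolved
vertical_span = 2 * span_m(R, 18 / 2)  # ~31.7 m: the whole unscanned elevation fan
```

Everything from the road surface to an overhead sign lands inside that one ~32 m tall beam, which supports the idea that elevation ambiguity, not azimuth, is the weak axis.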

Bob Wilson


----------



## JWardell (May 9, 2016)

SR22pilot said:


> So what is the angular resolution of the RADAR? That is usually one advantage of LIDAR i.e. the ability to paint a detailed picture of the environment.


I don't know, probably only someone on the inside does.
It does go up pretty high though, and will even see street lights.

__ https://twitter.com/i/web/status/1133366754774851584


----------



## SR22pilot (Aug 16, 2018)

bwilson4web said:


> I'm pretty sure it is an ARS4-B by Continental because of an eBay listing and front image of a Model 3 without the bumper.
> 
> My testing shows trailers without skirts are invisible while those with skirts are detected. Also, the rear-wheel assembly is reliably detected. This suggests more is going on. I'm still researching the usual engineering sources.
> 
> ...


If I am interpreting your data correctly, this hints at the issue. If the elevation isn't resolved with the same resolution as the azimuth, then you have a tradeoff between false positives and missing something. The azimuth is much better than I expected. Shows what I know (or more correctly, don't know). I do think metallic side skirts on trucks would help, and they would help prevent deaths in these types of accidents. The argument against them is the issue of going over train tracks, but I think that is a solvable problem. The trucking industry resisted reflective side striping for decades.


----------



## JasonF (Oct 26, 2018)

I don't think radar is the answer to this problem. From what I've read, there is too much error involved in detecting the difference between slow moving and stationary objects with radar.

The visual CV part might be able to resolve it, if they can find some loophole to overcome the limited frame processing rate. If I were to attempt that, I might try having the CV system keep comparison data longer, a few seconds' worth, so it can compare the data as you travel. Then it could catch the difference between slow-moving and stationary _every time_. Maybe they've tried that, and maybe found it to be too processor intensive; I don't know the parameters they're working with.

That leaves the problem with tall objects being seen as possible to travel under. The above kind of solves that too, _if_ the object is moving, because a billboard or sign can't move (or shouldn't, at least). That's a tough one to solve without causing a lot of panic stop false alarms. Radar isn't much help, because there are too many kinds of trucks that radar can penetrate through, like an empty flatbed. This might take some clever photo recognition similar to recognizing humans. Something like the CV system being able to figure out what a truck chassis looks like.

Perhaps something like looking for a tall or long object, and then checking to see if there are wheels, and drawing a box containing all wheels below the object. If the CV sees wheels, it knows it can't travel under the stationary object, and would stop. That might still create momentary false alarms where a truck is driving under a bridge or a sign, but radar should be able to offset that by separating the two by speed.
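The wheel-box idea sketches out naturally as a rule layered on top of whatever the detector emits. Everything here is hypothetical (the function name, the `wheel_boxes` representation, and the clearance threshold are all invented for illustration):

```python
def can_drive_under(bottom_edge_height_m, wheel_boxes, clearance_needed_m=1.6):
    """Hypothetical rule: an elevated object is passable only if no wheels are
    detected beneath it AND its bottom edge clears the car. All names and
    thresholds here are invented, not from any real system."""
    if wheel_boxes:                     # anything with wheels is a vehicle
        return False
    return bottom_edge_height_m > clearance_needed_m

overpass = can_drive_under(5.0, [])                         # clear to pass under
trailer = can_drive_under(1.3, [(2.0, 0.3), (14.0, 0.3)])   # trailer deck + wheels
```

A crossing trailer fails the check because of its wheels even though its deck is elevated; a bridge passes because nothing with wheels sits beneath it.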

By the way, I think part of why Tesla chose a visual based system is because it's far easier to prototype. You don't _have_ to build a vehicle with the detection systems to test it - you don't need a vehicle at all, in fact. Just a cheap webcam and a laptop is all you need to try out different types of CV detection.


----------



## JWardell (May 9, 2016)

Read through this 19+ tweet thread...lots of great experiments and video just posted by Greentheonly:


__ https://twitter.com/i/web/status/1134489966799704065


----------



## bwilson4web (Mar 4, 2019)

Looking at the engineering specs for the Continental ARS4-B, the Model 3 radar unit, reveals that the beam pattern sweeps side-to-side:








There is no elevation scan, which begins to explain why ambiguous targets can be detected.

A better fix would be another radar unit rotated 90 degrees to scan in elevation. Then collect the beams into a matrix and the problem is solved.

Bob Wilson


----------

