Tesla Owners Online Forum
81 - 100 of 235 Posts
I can't remember where I heard this (maybe Nvidia at CES), but they were saying that it's very difficult to generate the situations the system needs to handle, and that simulation is much more effective. Eventually you need to test on the road, but it makes me wonder whether disengagement reports are a very good indicator. Disengagements are not all equal, and they can be biased by who is driving. You could have one problem that causes 100 disengagements and then fix that problem; fixing it doesn't erase those disengagements from the record. If all that mattered were disengagements, you'd be incentivized to drive less or drive on easier streets, which is the opposite of what an engineer designing the system would want. And videos are definitely not a good indicator unless you're going to watch thousands of hours, which they don't provide. It's entertaining, but it's not going to be used to validate safety.

I think the rubber hits the road when they open it to the public. Waymo will likely be first. Who knows who will end up being best; I'm rooting for all of the above. Competition will help prices and safety, plus it will help get the technology out there faster. I can't wait to cut the $500/month it takes to own my 10-year-old car, and maybe not even own a car at all. In the meantime, bring on the Model 3!
Simulation is OK for algorithm development, but essentially useless for developing better machine vision and object recognition, which is half the problem. Simulations also contain only the scenarios we imagine and put into them, and you only find out what the most difficult scenarios are when they are encountered (taco truck, anyone?).

Disengagement reports aren't just about the rate of disengagements, but an account of how much experience is being gained. An effort that is logging hundreds of thousands of miles is learning a lot more than one that isn't logging any. Certainly, a focus on disengagements does encourage driving on easier streets, and indeed we won't see public services offered in places like SF and NYC until long after there are services in places like Chandler, AZ, where Waymo is currently operating. This is a reason I have to give props to those like GM who are testing in the hardest environments even if it doesn't generate the most favorable publicity.

Going deeper on the 2017 reports: it's hard to tell how far GM is behind Waymo because Waymo does mostly suburban miles, but I'm guessing maybe a year behind. Meanwhile, there is at least one startup that tests mostly in SF that now appears to be about a year behind GM based on these reports. However, those guys have no serious plan for how to build a fleet of cars, even though that is their stated intent. This is where any of the manufacturers (including Tesla) have an advantage:

http://www.businessinsider.com/why-automakers-have-advantage-in-building-self-driving-car-race-2018-1
 
Simulation is OK for algorithm development, but essentially useless for developing better machine vision and object recognition, which is half the problem. Simulations also contain only the scenarios we imagine and put into them, and you only find out what the most difficult scenarios are when they are encountered (taco truck, anyone?).
I think when Tesla says "simulation," at least part of those simulations use real data from real situations captured by real Teslas' cameras and sensors.
 
I think when Tesla says "simulation," at least part of those simulations use real data from real situations captured by real Teslas' cameras and sensors.
Yes, of course, that data can be used to improve the fidelity of simulations. I was responding under the assumption that we were talking about Nvidia (as the poster stated), who have their own self-driving program but no fleet of cars with which to gather the kind of data you are talking about.

As far as Tesla goes, I'm very surprised that they don't put a more public face on their progress in L4/L5 driving. They're not generally shy about showing off works in progress or failing in public in order to get to their goals faster. Frankly, it's worrisome.
 
Simulation is OK for algorithm development, but essentially useless for developing better machine vision and object recognition, which is half the problem.
You don't need an autonomous car to take pictures, record video, develop machine vision, develop object recognition, or do mapping. There are many different ways to get the data being employed and with cloud computing you don't need to build much infrastructure.

I don't have any inside info here, but simulation seems better because you can create situations that are unsafe or difficult to generate in real life. The Nvidia demo makes it look pretty sophisticated at generating different lighting conditions and simulating actual sensor/camera data given their location on the simulated vehicle. You can simulate sensors failing. You can let accidents happen. Waymo does 1000 times more simulated miles than road miles.

I wonder if there is a possibility that some small company seemingly comes out of nowhere with a superior solution. They would get snatched up by a manufacturer or large auto supplier and then take a little while to make their way into the manufacturing line, but this is basically what happened with Cruise. There are many companies like Cruise out there.
 
As far as Tesla goes, I'm very surprised that they don't put a more public face on their progress in L4/L5 driving. They're not generally shy about showing off works in progress or failing in public in order to get to their goals faster. Frankly, it's worrisome.
I agree this is interesting and maybe worrisome. Part of it may be that they already have egg on their face with no cross-country demo. But it's hard not to think things aren't going as well as they would like.
 
You don't need an autonomous car to take pictures, record video, develop machine vision, develop object recognition, or do mapping. There are many different ways to get the data being employed and with cloud computing you don't need to build much infrastructure.

I don't have any inside info here, but simulation seems better because you can create situations that are unsafe or difficult to generate in real life. The Nvidia demo makes it look pretty sophisticated at generating different lighting conditions and simulating actual sensor/camera data given their location on the simulated vehicle. You can simulate sensors failing. You can let accidents happen. Waymo does 1000 times more simulated miles than road miles.

I wonder if there is a possibility that some small company seemingly comes out of nowhere with a superior solution. They would get snatched up by a manufacturer or large auto supplier and then take a little while to make their way into the manufacturing line, but this is basically what happened with Cruise. There are many companies like Cruise out there.
The fundamental problem with development using only simulation (setting aside machine vision/perception issues) is that the system (traffic) is extremely non-linear. In real life, as soon as the machine decides what to do and takes some action, that action has an effect on the perceptions and actions of others - and so on. So, without putting the decision making of the car into action with possible errors, precision, latency, etc., the scene changes in a way that cannot be simulated by the time you get a few seconds beyond that initial decision. It's like trying to figure out what the board will look like many moves ahead in a chess game while watching someone else play your side. That kind of learning - watching someone else play while you think about what you would do - only gets you so far.
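The closed-loop point above can be illustrated with a toy model: a policy learned purely from watching a driver can carry a tiny per-step bias that looks harmless offline but compounds once the car is actually steering. Everything here (the constant-bias model, the numbers) is illustrative, not taken from any real system.

```python
import math

def closed_loop_drift(heading_bias_rad_per_s, speed_mps, horizon_s, dt=0.1):
    """Toy kinematics: integrate a small constant heading bias in closed loop.

    Per step the error is tiny, but the lateral drift compounds with the
    horizon -- the kind of error you never see when only watching a driver.
    """
    theta = 0.0   # heading error (rad)
    y = 0.0       # lateral displacement from the intended straight path (m)
    steps = int(horizon_s / dt)
    for _ in range(steps):
        theta += heading_bias_rad_per_s * dt
        y += speed_mps * math.sin(theta) * dt
    return y

# A 0.5 deg/s heading bias at 15 m/s: negligible per step, but after 10 s
# the car has drifted several metres -- out of its lane.
drift = closed_loop_drift(math.radians(0.5), 15.0, 10.0)
```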

Accidents certainly do happen (just ask GM and Waymo) but you are correct that there are a lot of disengagements that perhaps precede a major learning opportunity, and without letting the scenario play out, you don't really know what would have happened. Here again, those bold enough to let the machines do as much as possible will progress faster.

As far as Nvidia goes, you can check their disengagement report at the link I posted, and it's not pretty - about 100 disengagements in only 500 miles. Their simulation of lighting and atmospheric effects may look great, but human vision is pretty special in screening away artifacts that can drive machine vision crazy without you even being aware of it, so what looks "real" to you may be a far cry from real lighting, atmospheric conditions, and textural effects as far as machine vision goes. It's even more complicated in lidar space, where the artifacts are non-intuitive for us and it's difficult to judge the fidelity required for useful simulation.
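The headline metric in those reports is simple enough to compute. A quick sketch using the rough Nvidia figure quoted above; the second set of numbers is a hypothetical mature program, made up purely for contrast:

```python
def miles_per_disengagement(miles, disengagements):
    """The headline metric from the California DMV reports."""
    return miles / disengagements if disengagements else float("inf")

# Rough Nvidia 2017 figure quoted above: ~100 disengagements in ~500 miles.
nvidia = miles_per_disengagement(500, 100)       # 5.0 mi per disengagement
# Hypothetical mature program, three orders of magnitude better.
mature = miles_per_disengagement(500_000, 100)   # 5,000 mi per disengagement
```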

As for Cruise, you can see the improvement they achieved between the 2016 and 2017 reports. When they started public testing they were basically completely in the weeds (about where Nvidia is now) compared to where they are today: almost a factor of 1000 difference in both total miles and miles per disengagement.

There are other small startups similar to Cruise, and as I mentioned, I know some people involved in one such effort that is about where GM/Cruise was roughly a year ago. They insist that their road testing is critical. Some of these guys will probably still get bought up, but others profess to want to go all the way with their ideas. The window for startups is closing fast, though. I'm not sure any of the major manufacturers are still interested in buying up outside IP, and even the best funded among them can't go up against the big players in building and deploying large fleets of AVs for ridesharing services, which is coming up fast.
 
I agree this is interesting and maybe worrisome. Part of it may be that they already have egg on their face with no cross-country demo. But it's hard not to think things aren't going as well as they would like.
Yeah... I'd like to see some L4/L5 progress demonstrated from Tesla soon. My worry is that the next time we see this, whether sooner or later, it's going to be with a car that has lidar, since that would be an admission that they need a more complete sensor package as many others have claimed.
 
I agree this is interesting and maybe worrisome. Part of it may be that they already have egg on their face with no cross-country demo. But it's hard not to think things aren't going as well as they would like.
Word on the street says things are going fine. I'm excited for what's ahead.

EDIT: Jolly good, it made the news so I don't have to play coy! The update pertains to EAP, not specifically to FSD, but it's absolutely going in the right direction.

https://electrek.co/2018/02/02/tesla-autopilot-new-update-elon-musk/
 
The fundamental problem with development using only simulation (setting aside machine vision/perception issues) is that the system (traffic) is extremely non-linear.
The point I'm trying to make is that you don't need to have an autonomous car in autonomous mode to collect the data. For example, Tesla drove 0 autonomous miles in California last year. That doesn't mean they aren't able to develop and test their object recognition, deep learning, etc. on real-world data, because they are collecting it with their customers' cars and their own cars driving non-autonomously. Furthermore, because of this, unless you have inside information, like you say you have with that startup, you don't know how far along they are. If we went by the disengagement report, you'd say they aren't working on it at all.
 
The point I'm trying to make is that you don't need to have an autonomous car in autonomous mode to collect the data. For example, Tesla drove 0 autonomous miles in California last year. That doesn't mean they aren't able to develop and test their object recognition, deep learning, etc. on real-world data, because they are collecting it with their customers' cars and their own cars driving non-autonomously. Furthermore, because of this, unless you have inside information, like you say you have with that startup, you don't know how far along they are. If we went by the disengagement report, you'd say they aren't working on it at all.
I understand your point and have agreed that it's a valid one. I think you're choosing to ignore my point: there is only so much the machine can learn about driving by watching someone else drive.

Two things stand out in examining these reports from all of the different players who have reported miles:
- Not a single one of these efforts operates with much success when it starts testing on public roads.
- Those efforts that become highly successful see exponential growth in success after they begin testing.
I believe that indicates such testing is fundamental to becoming highly successful, and I do hope Tesla is doing this someplace, because I think it's important.

As far as inside information, the startup I know did report public miles in CA for 2017. That's why I can say it looks like they are about a year behind GM: they are about where GM was in their previous report from 2016 in terms of both miles and disengagements per mile under the same (urban SF) conditions.

Let's see what the new update brings. I'd be happy to see a big leap from Tesla here!
 
I understand your point and have agreed that it's a valid one. I think you're choosing to ignore my point: there is only so much the machine can learn about driving by watching someone else drive.
I believe that indicates such testing is fundamental to becoming highly successful, and I do hope Tesla is doing this someplace, because I think it's important.
Fair enough. I just feel like I don't know enough about what everyone is doing to say who is ahead of whom.

Here is an analysis of the latest report.
https://spectrum.ieee.org/cars-that-think/transportation/self-driving/have-selfdriving-cars-stopped-getting-better

I think it brings up a lot of interesting points as do some of the comments. I agree with the author's skepticism on the data.

I'm a little more optimistic but maybe that's because I really want everyone to succeed.
 
Reading the IEEE article, one guy's comments stuck out. I read some of his other comments, such as this one:

Michael DeKort (guest):
Yes but only if they do 99.99% of the work using aerospace level simulation and have a Scenario Matrix.
That simulation would need to be where they test the AV systems with a human in an integrated simulation. I mention this because the industry separates these two and then doesn't do integrated testing until they use a real vehicle on a track. That is a mistake. They need to integrate AV testing with man-in-the-loop testing in one simulation/simulator system.

Regarding the Scenario Matrix. Driving around to find all of them will not work. They need to augment that with a massive top down effort that includes experts from several domains and data sources.
He was a lead systems engineer at Lockheed Martin according to his Wikipedia page, but it's not clear how much he knows about these autonomous driving projects. I might have to check out the documentary on whistleblowers that he is featured in.
 
I believe there are now only two companies that have said they may have a commercially available level 4 system in 2018: Waymo and Tesla.

Elon said, "I think we could probably do a coast-to-coast drive in three months, six months at the outside," and then came a great follow-up question: "And then is it available for customers immediately? Or is there a lag?" Elon responded with "Yes, that would be something that's available for customers."

Not quite as definitive an answer as you'd like, but hopeful for 2018. My prediction: odds of 1:100,000 that there is a firmware update enabling it in 2018. But he's telling me there's a chance.
 
Here's the thing I keep coming back to regarding full self-driving. I don't see it getting released a feature or two at a time the way Enhanced Autopilot can be. From a liability standpoint, it can either completely take over the driving or it can't. I see it as an all-or-nothing thing, at least from a "who's at fault in an accident" standpoint. This ties into Elon's comments about the limitations of LIDAR and why he has chosen a neural-network image recognition approach. LIDAR can't read road signs or red lights, and LIDAR-based systems seem to be completely dependent on specific geographic areas or routes. He is betting that the image recognition approach is a much more general system that works in all situations. That is why it is proving so time-consuming to develop. Once the system has proven an order-of-magnitude improvement in safety over an average driver, then I think it will be released.

Will that happen this year? Next year? That is the question, but as long as it comes out in the next 5 years or so (and I feel it definitely will), then I will order it up front.

Dan
 
Here's the thing I keep coming back to regarding full self-driving. I don't see it getting released a feature or two at a time the way Enhanced Autopilot can be. From a liability standpoint, it can either completely take over the driving or it can't. I see it as an all-or-nothing thing, at least from a "who's at fault in an accident" standpoint.
Agreed. But it will still start off limited. It'll probably only work in good weather to start: no snow, rain, fog, or standing water. They could also limit it geographically to start, while it's in beta.
This ties into Elon's comments about the limitations of LIDAR and why he has chosen a neural-network image recognition approach. LIDAR can't read road signs or red lights, and LIDAR-based systems seem to be completely dependent on specific geographic areas or routes.
Be careful of reading too much into this statement. Other companies use cameras in addition to their LIDAR, much as Tesla uses cameras in addition to RADAR. RADAR can't read road signs either. They all make use of cameras for image recognition. RADAR is much, much cheaper than LIDAR right now, but a little harder for humans to figure out how to interpret the "images," so Tesla has a harder job making use of it. But RADAR continues to work reasonably well in bad weather, whereas LIDAR becomes much less effective.

Also, all of these companies will want to build up maps to use as a baseline. That way, when they're at a particular intersection, they know to expect a sign, and they can perform a quick match to confirm that it's the yield sign that always appears there. It takes a lot of computing power to create these maps, but it results in a lot less computing power needed on the vehicle itself. They'll still have to use a more complicated neural net in the vehicle to interpret new signs (like construction signs), but that information should then be uploaded and added to the "map" so that the next car traveling through that area can take advantage of the new information.
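The map-as-baseline scheme described above might be sketched like this. The lookup table and the `quick_match`/`classify_sign` helpers are hypothetical stand-ins, not any company's real API:

```python
# Sketch of the map-prior idea: confirm the expected sign cheaply against
# a prebuilt map, fall back to the full classifier for surprises, and feed
# new detections back into the map for the next car through.

HD_MAP = {("5th", "Main"): "yield"}   # intersection -> expected sign

def quick_match(observation, expected_label):
    """Stand-in for a cheap template/geometry check against the map prior."""
    return observation["label"] == expected_label

def classify_sign(observation):
    """Stand-in for the expensive onboard neural-net classifier."""
    return observation["label"]

def detect_sign(intersection, observation):
    expected = HD_MAP.get(intersection)
    if expected is not None and quick_match(observation, expected):
        return expected, "map-confirmed"       # cheap path
    label = classify_sign(observation)         # expensive path
    HD_MAP[intersection] = label               # "upload" the new info
    return label, "classified"
```

The first car past a new construction sign pays the full classification cost; cars behind it get the cheap map-confirmed path.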
 
I think they will make LIDAR cheaper and smaller, so that over time it's integrated in a way that doesn't look bad or add too much cost. But does adding more sensors automatically make the problem easier? There are other companies working on LIDAR-free solutions, like Comma.ai and AutoX.ai. AI chips will rapidly get better over time as well. It will be fascinating to see what happens. LIDAR is cool, but I really hope it's not needed. That should really help the resale value of the Model 3 and bring a lot of cash into Tesla.
 
I think they will make LIDAR cheaper and smaller, so that over time it's integrated in a way that doesn't look bad or add too much cost. But does adding more sensors automatically make the problem easier? There are other companies working on LIDAR-free solutions, like Comma.ai and AutoX.ai. AI chips will rapidly get better over time as well. It will be fascinating to see what happens. LIDAR is cool, but I really hope it's not needed. That should really help the resale value of the Model 3 and bring a lot of cash into Tesla.
Also... to be clear... all of these efforts, whether Waymo, GM, Tesla, or anyone else, are using various AI techniques. Generally speaking, that means a convolutional neural network. AI and machine learning are not unique to Tesla's effort.

Some guys I work with recently developed an ASIC for a lidar manufacturer that makes their lidar product much cheaper, and GM is moving in the direction of making LIDAR cheaper and sleeker by buying a highly respected LIDAR startup (Strobe). Meanwhile, lidar resolution improves with each successive generation, as does the ability of the algorithms to "stitch together" lidar information and visual information from the full complement of both devices surrounding these cars. Many of these cars use radar as well, but radar resolution is pretty poor, making it difficult to correlate with visual information. This is not a limitation that is easily overcome with a reasonably sized detector array, since it arises from the wavelength of radar signals, which is macroscopic.
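The wavelength argument at the end can be made concrete with the diffraction limit: angular resolution scales roughly as wavelength over aperture. The 10 cm aperture below is an assumed round number, and 77 GHz / 905 nm are typical automotive radar and lidar figures:

```python
# Back-of-envelope for the wavelength argument: diffraction-limited
# angular resolution ~ wavelength / aperture. Same 10 cm aperture for
# both sensors; all numbers are typical/assumed, not from any datasheet.

C = 299_792_458.0  # speed of light, m/s

def angular_resolution_rad(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution, small-angle approximation."""
    return wavelength_m / aperture_m

aperture = 0.10                      # assumed 10 cm antenna/optic
radar_wavelength = C / 77e9          # 77 GHz automotive radar -> ~3.9 mm
lidar_wavelength = 905e-9            # 905 nm lidar

radar_cell_100m = 100 * angular_resolution_rad(radar_wavelength, aperture)
lidar_cell_100m = 100 * angular_resolution_rad(lidar_wavelength, aperture)
# At 100 m range the radar cell is metres across while the lidar cell is
# under a millimetre -- which is why radar "images" are so hard to
# correlate with camera pixels.
```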
 
Here's the thing I keep coming back to regarding full self-driving. I don't see it getting released a feature or two at a time the way Enhanced Autopilot can be. From a liability standpoint, it can either completely take over the driving or it can't. I see it as an all-or-nothing thing, at least from a "who's at fault in an accident" standpoint. This ties into Elon's comments about the limitations of LIDAR and why he has chosen a neural-network image recognition approach. LIDAR can't read road signs or red lights, and LIDAR-based systems seem to be completely dependent on specific geographic areas or routes. He is betting that the image recognition approach is a much more general system that works in all situations. That is why it is proving so time-consuming to develop. Once the system has proven an order-of-magnitude improvement in safety over an average driver, then I think it will be released.

Will that happen this year? Next year? That is the question, but as long as it comes out in the next 5 years or so (and I feel it definitely will), then I will order it up front.

Dan
I think that Tesla will get most of the way to level 5 with increasingly powerful additions to Autopilot. These do not present a legal issue, since the driver is still totally responsible for the car, and they will probably not present a regulatory problem if steps are taken to ensure driver attention.

The two most obvious next steps are dealing with traffic signs and signals, and connecting to navigation. I don't think that regulators will have a problem with this sort of improvement. This would still be a very advanced level 3, since the driver would still be involved and responsible. You tell the car where to go, and it goes there, with increasingly less frequent need for driver intervention.

Eventually, disengagements will be so rare that it will be time to apply for level 5 certification. The only real difference between a fully advanced level 3 and level 5 will be that you will no longer need to be involved, or even in the car.

I think that level 4 might be used on a limited basis before level 5. I could see requesting level 4 for certain known parking lots, so that the car could drop you off. The low speeds should mean little danger of death or serious injury.

One aspect required for autonomy is handling degradation, not only of the self driving system itself but of the whole car. It has to be able to deal with every car problem. In serious cases it needs to be able to get itself to as safe a location as possible and call Tesla for service.

By the way, if anyone seriously thinks that you cannot have level 5 with cameras because they get dirty, then you also cannot have level 5 on a car with pneumatic tires, because they go flat. A flat tire and a blocked sensor have the same solution.
 
The one thing that is unique to Tesla is that they will have billions of miles of real world data when others have millions. I think that may make a huge difference.
Training effectiveness is not linear in the size of the training dataset. There are diminishing returns as more data is added. It's impossible to say what proportionality constant governs this asymptotic approach, but it may well be that billions of miles are no better than millions.
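One common way to model those diminishing returns is a power law in dataset size, error ~ A * N^(-alpha). The exponent below is purely illustrative, not a measured value:

```python
# Error ~ A * N**(-alpha) is a common empirical model for how model error
# falls with training-set size N. alpha = 0.3 here is illustrative only.

def relative_error(n_miles, alpha=0.3):
    return n_miles ** (-alpha)

improvement = relative_error(1e6) / relative_error(1e9)
# Under this model, 1000x more data buys only ~8x lower error, which is
# the asymptotic-returns point made above.
```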
 