# Thoughts on Level 5 autonomy



## shareef777 (Mar 10, 2019)

This is a good (and quick) read on the subject. I've had the exact same thoughts, both before and after owning my 3. I just don't see it happening in the next five years, and I'm just as doubtful about it happening in the next decade.

https://europe.autonews.com/automakers/apple-co-founder-ive-really-given-level-5


----------



## Mr. Spacely (Feb 28, 2019)

shareef777 said:


> This is a good (and quick) read on the subject. I've had the same exact thoughts well before and after owning my 3. I just don't see it happening in the next 5 years, and am just as timid about it happening in the next decade.
> https://europe.autonews.com/automakers/apple-co-founder-ive-really-given-level-5


It is happening as we speak. City driving with stop-sign and stoplight recognition has been working, as demonstrated on Autonomy Day in April. They will roll it out to a few more folks by year end to meet the "feature complete" goal, and then the rest of us will probably get it in early 2020. Sure, it will take a year or two before it is out of beta, truly Level 5, and then approved by regulators...


----------



## shareef777 (Mar 10, 2019)

Mr. Spacely said:


> It is happening as we speak. City driving with stop signs and stop light recognition has been working as demonstrated on Autonomy Day in April. They will roll it out to a few more folks by year end to meet the "feature complete" goal, and then the rest of us will probably get it early 2020. Sure it will take a year or two before it is out of beta and truly Level 5 and then approved by regulators...


Being tested by a few people vs. being widely available across every city in the US is a very big distinction. Summon is still in beta, and I don't see it coming out of beta for another year or more.


----------



## JasonF (Oct 26, 2018)

Where Woz is right is that there are no autonomous driving systems that understand context. They have the same problem as Alexa, Siri, etc.

As an example, stand right over your Amazon device while it's playing music. Then simply say, "Can you turn it down?" The device won't know you're talking to it, nor will it know what you're referring to. That's context. Even humans aren't so good at it sometimes, but most of the time, if you ask someone to "turn it down", they'll automatically know you mean the loudest thing in the room under their control - i.e. the music.

I'll give you a very good and hazardous example of where context is _very_ important. Let's say you're driving along a highway in the rain, and you reach a section that's flooded. As a human driver, you'll see that it's flooded, and make a decision based on what you see as to whether it's safe to continue. That decision is based on what you can see of the roadway and what's beside it (if you can see how deep it is); the condition of other cars that have attempted it (whether they are stuck or damaged); and whether there is someone in some kind of authority to guide you through it.

An autonomous car would just plunge right through the water, continuing in whatever lane it was in, at close to maximum speed, because it only knows it momentarily lost the lane markers ahead and that there is a "puddle" in front of it. It will not be able to understand that water covering the roadway is dangerous, possibly deadly, or that it could severely damage the car. The A.I. would be more concerned with trying to solve the problem of getting _past_ the water and continuing on. It wouldn't care what the cars around it are doing, or that there's a police officer trying to urge everyone to the right side where the water is much shallower.

So in order to have _true_ autonomous driving, the autonomous driving A.I. would have to figure out the context of what's going on around it. Can it work without it? Sure, as long as the roads stay well mapped, and the weather isn't at extremes. But there will still be days you will have to take over for now. Don't throw away the steering wheel just yet.


----------



## sduck (Nov 23, 2017)

I don't see any version of autonomy working in Manhattan.


----------



## shareef777 (Mar 10, 2019)

My thought is that they really should focus on Level 4 autonomy and map out the major cities and highways. That's an exponentially simpler problem to work on than Level 5. Level 5 needs supercomputer-level hardware and true AI.


----------



## Klaus-rf (Mar 6, 2019)

Mr. Spacely said:


> It is happening as we speak. City driving with stop signs and stop light recognition has been working as demonstrated on Autonomy Day in April. They will roll it out to a few more folks by year end to meet the "feature complete" goal, and then the rest of us will probably get it early 2020. Sure it will take a year or two before it is out of beta and truly Level 5 *and then approved by regulators...*


Unfortunately, that "and then approved by regulators..." will probably take the better part of a decade AFTER code freeze, which is itself several years away at best.

So maybe it will be ready when my grandkids get out of college.


----------



## TI3T (Mar 30, 2019)

There's a pretty interesting episode of NOVA that I watched recently dealing with the same subject. The thing that resonates with me most is an expert in the field admitting that "computers make mistakes." However, humans make more mistakes, so we have to decide whether we can accept the occasional "mistake" from a computer if it ultimately means we will all be safer.

I feel like the first place we need to put Level 5 technology to work is the trucking industry, which already can't find enough drivers, a problem that will only get worse. Unfortunately, the "mistakes" I'm speaking of can be even more catastrophic when a semi is involved.


----------



## slacker775 (May 30, 2018)

I always get rather mixed feelings in these discussions. Without a doubt, the long tail for 100% autonomy is long. Things like the deep-water example, all the various construction-zone scenarios, the funky intersections that city planners on crack come up with, etc. will always be a challenge. Making an AI fully aware of context and all of the various inputs will always be a significant hurdle.

As the technology progresses, however, there should come a time when we actually start designing roadways and signage to accommodate autonomous vehicles, so that they are able to get the appropriate context clues to make the right decisions. In the end, it's actually a bit foolish to insist that a car work like a human instead of designing the entire system to cater to the vehicle. In the short term, of course, it's what we have to do, because there are and will be non-autonomous vehicles on the road for some time.


----------



## garsh (Apr 4, 2016)

All I really care about right now is "full autonomy" on divided highways. If I can check emails, read a book, or take a nap while the car handles the "easy" part of my trip, that would be a HUGE life changer for me, and I'm sure for many other people.

I agree that the long-tail of corner cases will take a very long time to adequately solve. And full autonomy everywhere is very important to help those who cannot drive to become more independent (ex - blind people). But in the short term, I'm happy to drive the car to the highway myself in order to let it take over. Road trips will become so much better at that point.

When that happens, I'll probably choose to drive rather than fly to all sorts of destinations, just to avoid the entire airport hassle and dealing with economy seating. I could actually get work done remotely while the car handles all of the highway driving!


----------



## Mr. Spacely (Feb 28, 2019)

On any given day, people:

1. text while driving
2. drive after drinking
3. drive with sleep disorders
4. steer with their knee while eating
5. miss an exit and then back up on the highway
6. look out the window at scenery and distractions
7. go the wrong way on the interstate

We will all be safer with self-driving...


----------



## MelindaV (Apr 2, 2016)

Mr. Spacely said:


> On any given day people: 1. text while driving 2. drive after drinking 3. drive with sleep disorders 4. steer with their knee while eating 5. miss an exit and then back up on the highway 6. look out the window at scenery and distractions. 7. go the wrong way on the interstate. We will all be safer with self driving...


For multiple of these reasons, I already try to get in front of another Tesla when I can in stop-and-go traffic, to lessen the chance of being rear-ended, knowing the car, even at this point, will attempt to correct for our stupidities.


----------



## shareef777 (Mar 10, 2019)

garsh said:


> All I really care about right now is "full autonomy" on divided highways. If I can check emails, read a book, or take a nap while the car handles the "easy" part of my trip, that would be a HUGE life changer for me, and I'm sure for many other people.
> 
> I agree that the long-tail of corner cases will take a very long time to adequately solve. And full autonomy everywhere is very important to help those who cannot drive to become more independent (ex - blind people). But in the short term, I'm happy to drive the car to the highway myself in order to let it take over. Road trips will become so much better at that point.
> 
> When that happens, I'll probably choose to drive rather than fly to all sorts of destinations, just to avoid the entire airport hassle and dealing with economy seating. I could actually get work done remotely while the car handles all of the highway driving!


This is exactly what I meant by Tesla focusing on Level 4 autonomy. I take an annual trip from Chicago to NC (800 mi each way). Being able to hop on a highway that's a couple miles from my house, have my 3 take over (hands-free) for the next 795 mi, and only take control again when arriving at my destination would truly revolutionize the travel industry. I can't even imagine what would happen to the airline industry. Tesla really should focus on the major highways and get that part sorted out to be hands-free.


----------



## John (Apr 16, 2016)

Sounds just like "a computer will never beat a chess Grandmaster," then "a computer will never beat a Go master" [more game combinations than molecules in the universe], then "a computer won't beat a champion e-gamer."

People look at last year, they look at this year, and they linearly extrapolate. "Improvement is slow! They'll never get there at this rate!"

No one "expects" an exponential improvement to happen. Instead, it's a "stunning, unexpected improvement."

Self-driving will make a series of increasingly stunning improvements. And the first stunning improvements won't prevent people from doubting the next.

This is more about psychology than technology.


----------



## shareef777 (Mar 10, 2019)

John said:


> Sounds just like "a computer will never beat a chess Grandmaster," then "a computer will never beat a Go master" [more game combinations than molecules in the universe], then "a computer won't beat a champion e-gamer."
> 
> People look at last year, they look at this year, and they linearly extrapolate. "Improvement is slow! They'll never get there at this rate!"
> 
> ...


No one is doubting that autonomy will happen. The timeline is what's being questioned. Elon has already been wrong on countless occasions about having full autonomy (Level 5), and logically speaking, the sheer number of possible scenarios out on the road puts this in the territory of requiring true-AI-level computing.


----------



## M3OC Rules (Nov 18, 2016)

shareef777 said:


> This is exactly what I meant by Tesla focusing on Level4 autonomy. I take an annual trip from Chicago to NC (800mi each way). For me to be able to hop on a highway that's a couple miles from my house and have my 3 take over (hands free) for the next 795mi, for me to take control again when arriving at my destination is something that would truly revolutionize the travel industry. I can't even imagine what would happen to the airline industry. Tesla really should focus on the major highways and get that parted sorted out to be handsfree.


They are looking at robotaxi competition versus other automakers when it comes to autonomy. They say they will get there by the end of 2020, and it's a race among robotaxi competitors. The huge advantage Tesla has, which George Hotz points out, is that Tesla makes money along the way and doesn't have to pay drivers. So if no one really figures it out soon, there will probably be a lot of consolidation, since at some point it will be clear that some of these players will never make back their investments.

I think Tesla will evaluate where they stand at the end of 2020, as well as the competition. If they aren't where Elon says they will be, they may start looking at interim Level 3-type features and start to blame regulators. Perhaps they will need to, in order to keep up with traditional automaker competition. But to some degree, they have boxed themselves in: they essentially can't do what you're suggesting because of their promises. Delivering Level 3 is by no means going to let them off the hook, and there is lots of pushback against the idea of Level 3. At the end of 2020, they will be four years into promised FSD without delivering, and lawsuits will mount at some point.

If you're talking Level 4 on the freeway, then that's basically what they are working on. Perhaps focusing on freeways could get them to what you suggest faster, but maybe not. It's not necessarily a problem where more resources get it done faster.

Let's just hope they blow our minds and get there so it's a moot point.


----------



## John (Apr 16, 2016)

shareef777 said:


> No one is doubting that autonomy will happen. The timeline is what's being questioned. Elon has already been wrong on countless occasions for having full autonomy (Level5), and logically speaking, the amount of possible scenarios out on the road puts things into the requiring AI level of computing.


My comment WAS about timing. It will happen shockingly quickly.


----------



## John (Apr 16, 2016)

shareef777 said:


> No one is doubting that autonomy will happen. The timeline is what's being questioned.


I realize people hate it when other people tell them what they are thinking, but let me gently suggest this: 

_When people express extreme skepticism about seeing FSD any time soon, they are mostly expressing skepticism about FSD._​


----------



## shareef777 (Mar 10, 2019)

John said:


> I realize people hate it when other people tell them what they are thinking, but let me gently suggest this:
> ​_When people express extreme skepticism about seeing FSD any time soon, they are mostly expressing skepticism about FSD._​


I work in IT, so I see how technology evolves on a daily basis. Ten years ago I managed a data center loaded with 50+ cabinets of equipment. We now run the same compute/storage in 4 cabinets. What took weeks to deploy is now done in minutes. All that progression took over a decade. I believe FSD will happen, just not in a couple of years.


----------



## John (Apr 16, 2016)

shareef777 said:


> I work in IT. So I see how technology evolves on a daily basis. I managed a data center loaded with 50+ cabinets of equipment 10 years ago. We now run the same compute/storage in 4 cabinets. What took weeks to deploy is now some in minutes. All that progression took over a decade. I believe FSD will happen, just not in a couple years.


But that is all based on software people write. This is a new paradigm: machine learning. It happens in machine time, assuming the training infrastructure and data are readily available.

In the future, people won't write much software, so our hard-won instincts about software progress and possibilities will need to adapt. What you feel is a product of the last two decades. Chuck it out; this is weird and new and wonderful: software that is trained, not coded.


----------



## shareef777 (Mar 10, 2019)

John said:


> But that is all based on software people write. This is a new paradigm: machine learning. It happens in machine time, assuming the training infrastructure and data is readily available.
> 
> In the future, people won't write much software, so our hard-won instincts about software progress and possibilities will need to adapt. What you feel is a product of the last two decades. Chuck it out, this is weird and new and wonderful: software that is trained, not coded.


You do realize ML is still software that people write and teach. People have to load the system with data, and there are so many corner cases in driving; that is what's going to take time. You don't tell an ML algorithm, "Get me from point A to point B and don't hit anything on the way." You need to teach it the most efficient, safest, and legal route between A and B, and then there's the dilemma of teaching it what "anything" is when instructing it not to hit anything. We don't have any AI that'll do that for us. At the end of the day, FSD will still take a lot of talented people, and there are limits to what they can accomplish.
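To put my point in toy form (a sketch of supervised learning in general, not anything Tesla actually runs; the features and labels here are made up): a trained model only "knows" the situations humans have labeled for it, and it will force anything new into a category it was taught.

```python
import math

def nearest_label(train, query):
    """1-nearest-neighbor: return the label of the closest labeled example."""
    best = min(train, key=lambda item: math.dist(item[0], query))
    return best[1]

# Hand-labeled examples a human had to curate: (feature vector, label).
train = [
    ((0.9, 0.1), "puddle"),
    ((0.8, 0.2), "puddle"),
    ((0.1, 0.9), "flooded_road"),
]

# A query unlike anything labeled still gets forced into a known class;
# the model has no built-in way to say "I've never seen this before".
print(nearest_label(train, (0.5, 0.5)))
```

That "forced into a known class" behavior is exactly the corner-case problem: every new scenario has to be anticipated and labeled by people first.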


----------



## John (Apr 16, 2016)

shareef777 said:


> You need to teach it the most efficient, safest, and legal route between A and B...


ML isn't software in the traditional sense of using programming code written by people. The framework it runs on is (for now), but that's not important to the discussion.

And you don't train it how to go from A to B; you teach it to drive a car. You could build a new road, and it would work fine on it.

These discussions remind me of the (sexist) joke where a mathematician and an engineer are offered the chance to meet Miss Universe, under the condition that they have to start at the opposite end of the bench, and can only move halfway closer each time they move.

"Ha!," says the mathematician. "It's a trick! We will never reach her! It's mathematically impossible!"

But the engineer accepted the offer, because he was confident he could get close enough for practical purposes.


----------



## kataleen (Jan 28, 2019)

John said:


> ML isn't software in the traditional sense of using programming code written by people. The framework it runs on is (for now), but that's not important to the discussion.


While I agree with you, AI right now is far from what we like to think AI is. A true AI would start from almost nothing and, with only one instruction, be capable of completing tasks. And that instruction would be "Survive."

What we have right now is infant-level AI that can only achieve anything if it's taught heavily by humans along the way. Yes, it's not software in the traditional sense of programming code written by people, but those same people hold its hand and teach it. And those same people are obviously prone to making mistakes.

One thing is for sure: AI is growing up, and I'm looking forward to its teenage years. I seriously doubt, though, that we'll be around when it reaches adulthood.


----------



## garsh (Apr 4, 2016)

John said:


> And you don't train it how to go from A to B, you teach it to drive a car.


Tesla isn't teaching AI how to drive a car, as far as I've been able to discern.

Tesla is teaching an AI how to recognize roads, signs, and obstacles. The actual driving part is still being coded by programmers.
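To sketch the split I mean (this is my assumption about the architecture, illustrated with invented function names, not Tesla's actual code): a perception stage produces detections, and a separate hand-written policy decides what to do with them.

```python
def perceive(frame):
    """Stand-in for the trained neural net: maps a camera frame to a list
    of detected objects. Faked with a lookup here; in reality, a model."""
    return frame["objects"]

def plan(detections, speed_limit):
    """Hand-coded driving policy written by programmers, not learned."""
    if "stop_sign" in detections:
        return {"action": "stop"}
    if "lead_car" in detections:
        # Simple hand-tuned rule: follow a bit under the limit.
        return {"action": "follow", "speed": speed_limit - 5}
    return {"action": "cruise", "speed": speed_limit}

frame = {"objects": ["lead_car", "lane_lines"]}
print(plan(perceive(frame), speed_limit=65))
```

The point is that only `perceive` is learned; every behavior in `plan` had to be thought of and coded by a person.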


----------



## shareef777 (Mar 10, 2019)

garsh said:


> Tesla isn't teaching AI how to drive a car, as far as I've been able to discern.
> 
> Tesla is teaching an AI how to recognize roads, signs, and obstacles. The actual driving part is still being coded by programmers.


That was my thought as well. ML is being leveraged for object detection only, not for determining how to get to a destination. That's still humans, and hence it feels like it's going to be a while before we get FSD.

I really wish they'd focus on hands-free driving for highways. Personally, I think that would be the greatest revolution in travel since the automobile itself was created. Imagine a fleet of Model Xs in major cities, ready to take a group of up to six people comfortably from, say, Chicago to New York. Normally that'd have been at least $1200 in round-trip airfare (absolute best price). In a Tesla it'd be about $100. On top of that, there's no security line to wait through, and the trip would be more comfortable and personal.


----------



## garsh (Apr 4, 2016)

shareef777 said:


> That was my thought as well. ML is being leveraged for object detection only, not determining how to get to a destination.


To clarify, _routing_ most likely also makes use of machine learning. What we're saying is that the particulars of how to drive a car along a given route are currently determined by human programmers.


----------



## John (Apr 16, 2016)

Andrej gave an interesting talk (sometime before Autonomy Day at an AI symposium) where he talked about how the nets were progressively taking over the Tesla autonomy stack. When he got to Tesla, it was mostly handwritten code, with nets for free space, lanes, and objects (vehicles, signs, cars, people).

But increasingly, the nets are taking over the driving-behavior parts too. One simple example is cut-in detection, when the car interprets a neighboring vehicle's movement as an attempt to merge. But there are many of them: not just "this is a car," but "this is a parked car," and "I can't see the road ahead, but I predict it curves this way."

And though they still have huge resources (people, software) devoted to training, the labeling/feedback/annotation is increasingly automated. Any chance they get, they turn to automation to speed things up.

When I was in grad school at Berkeley working on control systems, one of the other grad students was focusing on bipedal locomotion. He could never quite get it to work robustly enough for anything practical. Fast forward, and now people train two-legged robots to walk quite robustly just via the (automated, in a sense) feedback of falling down. Spooky to watch a device learn to walk on its own.

It's limiting to think of NNs as just recognizers, or even as statistical analyzers. I think the best way to think of them is as predictors. They observe the world, predict what will happen next, and act accordingly. Just like we do when we drive.
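One toy way to see the predictor framing (my own illustration, nothing from Tesla; the constant-velocity rule here is a stand-in for what a net would learn): the planner acts on where it predicts a neighbor will be, not just where it is now.

```python
def predict_next(pos, vel, dt=0.1):
    """Predict a neighbor's position one step ahead.
    A trained net would learn this mapping; constant velocity stands in."""
    return pos + vel * dt

def should_brake(my_pos, neighbor_pos, neighbor_vel, safe_gap=5.0):
    """Decide based on the *predicted* gap, not the observed one."""
    predicted = predict_next(neighbor_pos, neighbor_vel)
    return (predicted - my_pos) < safe_gap

# Neighbor is 6 m ahead but closing fast: the observed gap looks safe,
# the predicted gap does not.
print(should_brake(my_pos=0.0, neighbor_pos=6.0, neighbor_vel=-20.0))
```

Observe, predict, act: the same loop whether the predictor is a hand-tuned rule or a learned net.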

If other companies are making 3D HD maps of roads and just servo-ing around on them, they are going to be in a world of hurt. Compared to a car that can effectively interpret a road that is under construction, or a road that's never been mapped, and predict what everything around it will do, they will look really dumb. It won't take too many instances of them just stopping and giving up, or plowing through something on the road, before people want to stop using them.


----------



## garsh (Apr 4, 2016)

John said:


> Andrej gave an interesting talk (sometime before Autonomy Day at an AI symposium) where he talked about how the nets were progressively taking over the Tesla autonomy stack.


Excellent, I hadn't heard about that! Do you know of any links for that talk?


----------



## John (Apr 16, 2016)

garsh said:


> Excellent, I hadn't heard about that! Do you know of any links for that talk?


----------



## garsh (Apr 4, 2016)

John said:


>


At one point, he talks about recognizing when a turn signal is activated. I know that Tesla at one point was only using one or two frames from each camera as input to the NN. That isn't nearly enough frames to recognize when a turn signal is activated. Two frames can be enough to distinguish a moving car from a stationary one, but that precludes recognizing a lot of "longer-lived" actions. I guess the new FSD computer will have enough power to digest a lot more camera frames at once. That's great news.
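A toy sketch of why the window size matters (my own illustration with made-up brightness values, not how the real net works): a blinker is a temporal on/off pattern, so a two-frame window can't distinguish it from a steady light.

```python
def blinker_active(brightness_seq, flashes_needed=2):
    """Count on->off transitions in a window of per-frame blinker brightness."""
    on = [b > 0.5 for b in brightness_seq]
    transitions = sum(1 for a, b in zip(on, on[1:]) if a and not b)
    return transitions >= flashes_needed

two_frames = [0.9, 0.9]                    # looks like a steady light
window = [0.9, 0.1, 0.9, 0.1, 0.9, 0.1]    # clearly blinking

print(blinker_active(two_frames), blinker_active(window))
```

With only two frames, there's at most one transition to observe; a longer window is what makes the pattern visible at all.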


----------



## kataleen (Jan 28, 2019)

garsh said:


> At one point, he talks about recognizing when a turn signal is activated.


Aren't they doing that now? I've noticed many times in traffic that once a car in an adjacent lane starts signaling to change into my lane, and AP notices the intent to change lanes, it treats that car as the new reference car. The other car is still in its own lane when that happens.

Perhaps they check the angle plus one frame that has the blinker on.


----------



## garsh (Apr 4, 2016)

kataleen said:


> Aren't they doing that now? I noticed that many times in traffic, once a car in an adjacent lane starts signaling to change lanes into my lane and the AP starts noticing an intent to change lanes from that car, it considers it the new reference car. The other car is still in its lane when that happens.
> 
> Perhaps they check the angle and one frame that has the blinker on.


I've seen AP treat cars with no turn signal the same way. So it might be based on nothing more than how close the car is to the line, or on the car appearing to approach the line across two camera frames.


----------



## M3OC Rules (Nov 18, 2016)

Here is another more recent video from a Karpathy talk. Here he gives some insight into the complexity of the design and process. Fun stuff. I think it comes down to your faith in the strategy, hardware, and team.

https://slideslive.com/38917690/multitask-learning-in-the-wilderness


----------



## soupdogs (May 10, 2020)

My personal belief is that L5 won't happen until most of the cars on the road are L5 capable.


----------



## lairdb (May 24, 2018)

garsh said:


> All I really care about right now is "full autonomy" on divided highways. If I can check emails, read a book, or take a nap while the car handles the "easy" part of my trip, that would be a HUGE life changer for me, and I'm sure for many other people.
> [...]
> When that happens, I'll probably choose to drive rather than fly to all sorts of destinations, just to avoid the entire airport hassle and dealing with economy seating. I could actually get work done remotely while the car handles all of the highway driving!


This. The focus on Level 5 is silly -- Level 5 is napping while the car handles this:

...during a rain of frogs, with no one in the car.

Somewhere between 3+ and 4- is a society changer by itself; 4-, just on-ramp to off-ramp, is front-page news and market domination. If I can aim the car at San Francisco, get on the on-ramp, and go to sleep until it rings an alarm a couple of miles before the Supercharger exit, that's the ball game.


----------



## M3OC Rules (Nov 18, 2016)

Tesla is taking an incremental approach. The question is: if you limited the scope, would that change the design and development approach? How much does freeway-only really limit the scope once you get to Level 4? You can't sleep with a Level 3 system.

Edit: For this thought experiment, try to think of things you need for city driving that you don't need for freeway driving. For example, stoplights and curbs. But there are curbs at toll booths. And there are stoplights at freeway entrances, other lights for tunnels, and I'm sure many more. And there are sometimes stop signs in construction areas. Lanes are typically much clearer, but that's not guaranteed. You still need to know which objects you can and can't hit. While there shouldn't be pedestrians, that doesn't mean there aren't. There are definitely fewer edge cases if you limit to freeways, but does that change the design and development approach? I know people are trying this for trucking. It would be interesting to see how different the approaches and data collection are.


----------

