Levandowski drives coast to coast without touching the steering wheel

@Bballou Thank you for the correction: yes, a person with a bicycle, and I believe the bicycle also had a basket full of stuff in the front (blocked in this image). My point was that the shape was a little odd...

Indeed, Uber failed because their vision was insufficient, but that's really the question with car-responsible driving: how quickly will vision-only be reliable enough, and even then, would it benefit from Lidar's redundancy? It will have to recognize everything at least to the degree that it knows to stop when it sees an obstacle.
 
You aren't going to have a self-driving car without cameras...


Redundancy does not mean that the secondary system can drive the car itself. It means it can save the situation in corner cases. It can validate the information from the primary system. And it can provide extra information.

Some sensors are better than humans, so I expect that later on there will be conditions the self-driving car can handle better. The cameras currently in use are worse than human vision.
 
The thing that separates Lidar from a camera is that it has practically no false negatives. A camera must identify an object to act on it, but when Lidar says nothing is there, nothing is there, out to a certain distance. Lidar is very accurate at saying, for example, that nothing is in front of the car.

This is why Uber failed. Their vision failed to recognize the oddly shaped obstacle, a pedestrian pushing a bicycle loaded with bags, in the middle of the road, and automatic emergency braking was disabled. Lidar would have known with very high reliability that something was there. Plus it has none of the negatives of radar and ultrasonics, which miss certain types of targets or exaggerate them.

This is what makes Lidar useful for redundancy. There are other benefits, but this is just from a redundancy point of view.
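
To make the redundancy argument concrete, here is a minimal sketch in Python, with made-up numbers and function names (not any company's actual stack), of how a lidar return could force a stop even when the camera classifier hasn't identified anything:

```python
from dataclasses import dataclass

@dataclass
class LidarReturn:
    distance_m: float   # range to the return, measured along the driving corridor
    in_corridor: bool   # whether the point falls inside the planned path

def braking_distance_m(speed_mps: float, decel_mps2: float = 6.0) -> float:
    # Simple v^2 / (2a) stopping-distance estimate, ignoring reaction time.
    return speed_mps ** 2 / (2.0 * decel_mps2)

def should_brake(camera_sees_obstacle, lidar_returns, speed_mps, margin_m=10.0):
    # Camera branch: acts only on a positive identification.
    if camera_sees_obstacle:
        return True
    # Lidar branch: presence alone is enough, no classification needed.
    limit = braking_distance_m(speed_mps) + margin_m
    return any(r.in_corridor and r.distance_m < limit for r in lidar_returns)

# Camera misses the oddly shaped object; lidar still forces a stop.
print(should_brake(False, [LidarReturn(35.0, True)], speed_mps=20.0))  # True
```

The point is only that the lidar branch needs presence, not recognition, which is exactly the false-negative argument above.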

Maybe it wasn't clear from my post, but I was mostly quoting Anthony's post from Medium, in response to the OP. The full article is here:
Pronto Means Ready – Pronto AI – Medium

It was funny seeing his first DARPA attempt with a bike.

Personally, I'm rooting for all of them, from Waymo to Tesla. This is Waymo's 10th year, and Tesla's maybe 2nd or 3rd? Here's to more years of aspiring to truly driverless cars beyond demos and half-fulfilled promises... cheers!
 
Some sensors are better than humans, so I expect that later on there will be conditions the self-driving car can handle better. The cameras currently in use are worse than human vision.

I want to dispute this. XKCD provides a nice graphic illustrating many of the problems with human vision. Look at that and imagine trying to build a self-driving car using human eyes for sensors.
  • no low-light detection at the center of the field of view
  • large blind spots within 10 degrees of the center
  • no color detection beyond 10 degrees of the center of view
  • debris floating everywhere
  • rapidly decreasing resolution outside of the center of the field of view
Our brains do a tremendous amount of work processing the input from these sensors. Even a cheap camera with decent resolution provides a far superior image to what our vision produces.

The difference is all in the processing. LIDAR avoids the processing problem enough to allow for minimally capable self-driving cars in well-mapped areas, which is great, but I remain unconvinced that it is necessary to make a fully-featured self-driving car.
 
This is another blow to Tesla, Elon and the fanbase. It shows you how easy it is to do a cross-country drive, yet Tesla is still struggling.
The biggest rebuttal to this is mass-producing it and getting it to a simple enough level that it can either be used widely as an autonomous ride-sharing platform or sit in people's driveways.
Assuming it becomes fully autonomous, that is.
Tesla is a step ahead of everyone in that regard, since you can go out and purchase a car with Autopilot equipped.

This shows you that the state of the art is well beyond what AP is. NOA actually reveals how far behind Tesla is.

Although it's evident that this system did something Autopilot can't, didn't Elon Musk say that they could "game" the route to make it possible? I haven't read into it, but could that be the case here? Also, from what I can read, the system was disengaged for exits.

But allow me to use an analogy here:
Just as a cheap car can be souped up to beat a Ferrari in a drag race for a fraction of the price...
Could it be the same with this system?

Sure, it's shown up Autopilot in this regard, but who knows what Musk and the team are cooking up in the developer kitchen...

I guess what I'm saying is MAYBE we're looking at a system that may or may not be reliable enough (like the Ferrari alternative) to be put into several vehicles, and we're comparing it to Autopilot, which has to be put into hundreds of thousands of cars, many of whose drivers have been seen to be idiots.
Is this system idiot-proof, reliable enough, and was it staged like the initial AP demo?
Not asking this as an attack or to start an argument, but actually curious
 
I want to dispute this. XKCD provides a nice graphic illustrating many of the problems with human vision. Look at that and imagine trying to build a self-driving car using human eyes for sensors.
  • no low-light detection at the center of the field of view
  • large blind spots within 10 degrees of the center
  • no color detection beyond 10 degrees of the center of view
  • debris floating everywhere
  • rapidly decreasing resolution outside of the center of the field of view
Our brains do a tremendous amount of work processing the input from these sensors. Even a cheap camera with decent resolution provides a far superior image to what our vision produces.

The difference is all in the processing. LIDAR avoids the processing problem enough to allow for minimally capable self-driving cars in well-mapped areas, which is great, but I remain unconvinced that it is necessary to make a fully-featured self-driving car.

I recommend you check all of his claims. I did. I compared my eyes to an 18 MP camera. In the focused spot my eyes are equivalent to a 24 MP camera. Tesla uses a 1 MP camera now. I compared my vision in low-light conditions. I checked the colors at high angles (walk backwards and you can clearly tell what colors are coming up at the edges).
A lot of BS regarding this subject.
 
> "State of the art.."
Do you really believe that? I forgot you're always correct.

Anyway, @Bladerskb said this is the state of the art.

Side note, according to state of the art:
"Deespite the vast sums of money and time dedicated to developing and rolling out autonomous vehicles, there are no real autonomous vehicles today. There are only increasingly complex and expensive demonstrations of those visions I mentioned above.

Second, true level 4 or 5 vehicles will not arrive for many more years."

Edit to add Levandowski's post here: Pronto Means Ready – Pronto AI – Medium

I said state of the art for one guy with a small team. Meaning, if Tesla can't beat a one-man army, how can they compete?

That's like saying you can 100% fly a plane, even if you can't take off or land it.

So if he drove around the block before starting the drive, it would give him more credibility? Because that's what 10 miles consists of.
 
This was a trained engineer driving a system he engineered, with a car full of specialized instruments, specially designed for this single trip. Not production-ready or cost-effective.

That's a lot different from what Tesla is developing: a system that is cost-effective, aesthetically pleasing, and works well on urban roads and at freeway interchanges.

No matter; the state of the art is advanced by what Levandowski has accomplished.
 
So if he drove around the block before starting the drive, it would give him more credibility? Because that's what 10 miles consists of.

Yes.

But for full credit, the car should dynamically navigate to refueling spots, refuel itself, and return to the highway. It should also find a spot and park itself at the destination.

If Tesla did a cross-country trip similar to this one, I just wouldn't be impressed. My Tesla can drive on highways just fine. This would represent a modest improvement in lane changing and highway junction navigation. I would actually be bothered that they were acting like this was a big deal.

The whole point of Tesla's cross-country trip milestone is to demonstrate that it would be possible for the car to drive across the country without a driver.
 
I recommend you check all of his claims. I did. I compared my eyes to an 18 MP camera. In the focused spot my eyes are equivalent to a 24 MP camera. Tesla uses a 1 MP camera now. I compared my vision in low-light conditions. I checked the colors at high angles (walk backwards and you can clearly tell what colors are coming up at the edges).
A lot of BS regarding this subject.
Interesting to note that Levandowski uses low-resolution cameras... lower resolution than smartphones. I think the key is the neural net. How much resolution do you need to see a car or a person or a dog? (You don't need to see squirrels... dangerous to stop for squirrels.)
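
As a back-of-the-envelope check (the camera numbers below are my own assumptions, not anything from the article), here is roughly how many horizontal pixels a car-width object covers with a ~1 MP camera:

```python
import math

def pixels_on_target(object_width_m, distance_m, image_width_px, hfov_deg):
    # Horizontal pixels an object of the given width occupies in the image.
    angle_deg = math.degrees(2 * math.atan(object_width_m / (2 * distance_m)))
    return image_width_px * angle_deg / hfov_deg

# Assumed: a 1.8 m wide car, a 1280x960 (~1.2 MP) camera, 50-degree horizontal FOV.
for d in (25, 50, 100, 200):
    print(f"{d} m: ~{pixels_on_target(1.8, d, 1280, 50):.0f} px wide")  # ~106, 53, 26, 13
```

Even at 100 m the car is still tens of pixels wide at that resolution, which is plenty for a detector. The squirrel, not so much.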
 
Tesla is a step ahead of everyone in that regard, since you can go out and purchase a car with Autopilot equipped.

True. Tesla has two obvious advantages, assuming their hardware ends up being sufficient for the task: one is validation/testing and the other is indeed deployment. If the hardware turns out to be sufficient, Tesla can just upload an update to the fleet.

None of this, of course, guarantees the hardware is sufficient or that they get there in software before others, but it is still fair to note.
 
True. Tesla has two obvious advantages, assuming their hardware ends up being sufficient for the task: one is validation/testing and the other is indeed deployment. If the hardware turns out to be sufficient, Tesla can just upload an update to the fleet.

None of this, of course, guarantees the hardware is sufficient or that they get there in software before others, but it is still fair to note.

I'm actually not worried about Tesla. AP might not be up to Elon's hype, but currently Autopilot is the most capable ADAS in the hands of customers.

I think people mostly tend to be frustrated with the system not living up to Elon's targets, e.g. coast to coast by such-and-such a date. This is especially true for those who tend to hang on Elon's every tweet, even after his admission of being too optimistic with deadlines.
 

I'm actually not worried about Tesla. AP might not be up to Elon's hype, but currently Autopilot is the most capable ADAS in the hands of customers.

I think people mostly tend to be frustrated with the system not living up to Elon's targets, e.g. coast to coast by such-and-such a date. This is especially true for those who tend to hang on Elon's every tweet, even after his admission of being too optimistic with deadlines.

I kind of separate the customer story from the autonomous one.

As an ADAS, Autopilot is interesting to own and will likely continue to be so. I find it likely Tesla will continue to improve here in interesting ways.

Who "wins" (and how) at actual car-responsible autonomy is a wholly different question... I am not at all as confident in Tesla compared to others.
 
This was a trained engineer driving a system he engineered, with a car full of specialized instruments, specially designed for this single trip. Not production-ready or cost-effective.

Specialized instruments?

Off-the-shelf GPUs and $10 (probably) cameras are not cost-effective?
And did you even read the description? It's not designed for this single trip.

Yes.

But for full credit, the car should dynamically navigate to refueling spots, refuel itself, and return to the highway. It should also find a spot and park itself at the destination.

So basically, anything Elon says is what counts; everything else is useless.
Gotcha.
 
Off-the-shelf GPUs and $10 (probably) cameras are not cost-effective?
And did you even read the description? It's not designed for this single trip.

Although you do have valid points, @Bladerskb, I have to disagree with this.

If you're a company that's making a self-driving car, the last type of camera you're going to buy is a $10 one. It definitely won't look good on the spec sheet.
Although the article did say the resolution was inferior to smartphone cameras, smartphone cameras have come a long way. That's like saying "oh, this car is so slow, it's slower than a P100D."

Specialized instruments?

I'm going to take this picture directly off the article you linked to:
[image from the article: the computer and camera rig installed in the car]

If this isn't specialized equipment, then I'd love for you to define specialized, please.
 
Although you do have valid points, @Bladerskb, I have to disagree with this.

If you're a company that's making a self-driving car, the last type of camera you're going to buy is a $10 one. It definitely won't look good on the spec sheet.
Although the article did say the resolution was inferior to smartphone cameras, smartphone cameras have come a long way. That's like saying "oh, this car is so slow, it's slower than a P100D."
Uhm...Mobileye uses $10 cameras... automotive cameras are very low resolution compared to smartphone cameras because they are trying to accomplish different things.


I'm going to take this picture directly off the article you linked to:
[image from the article: the computer and camera rig installed in the car]
If this isn't specialized equipment, then I'd love for you to define specialized, please.

That's called a dev kit. You know, the thing every company in every industry uses to prototype. It has nothing to do with the actual production system design. It allows for easy debugging and troubleshooting.

I know no one here is an actual engineer, but come on!
It's a motherboard with a GPU and CPU on it, in a computer chassis, connected to cameras using Ethernet cables... like, seriously?
 
So Anthony Levandowski didn’t touch the wheel while the car was on the highway, but he did every time the car needed to get on or off the highway? Is that correct?

The Guardian article says there were several failed attempts, meaning the system's true disengagement rate is more frequent than one per 3,099 miles (the length of Levandowski's trip from San Francisco to New York).
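
To put numbers on that (the count and length of the failed attempts are assumptions for illustration; the article doesn't give them):

```python
# Hypothetical: one completed 3,099-mile run plus three failed attempts that
# averaged 500 miles each before a disengagement.
completed_miles = 3099
failed_attempt_miles = [500, 500, 500]      # assumed, not reported figures
disengagements = len(failed_attempt_miles)  # at least one per failed attempt
total_miles = completed_miles + sum(failed_attempt_miles)
print(total_miles / disengagements)         # ~1,533 miles per disengagement
```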

Sounds like Pronto AI is using a neural network for perception and then feeding the mid-level representations (i.e. the sort of information you see represented by bounding boxes, labels, etc. in verygreen’s videos) outputted from the perception network into a separate neural network for path planning:

“One network recognizes lane markings, signs, obstacles and other road users, and extracts information about their position and speed. The second takes that information and controls the driving, using digital signals and mechanical actuators for the throttle, brake and steering.”
This is the same general approach that Waymo tried with ChauffeurNet, and that I suspect Tesla is working on (based on Amir Efrati’s reporting at The Information).
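
Roughly, the hand-off described in that quote looks like the sketch below (a toy outline with invented names, not Pronto's, Waymo's, or Tesla's actual code):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str                         # e.g. "lane_marking", "vehicle", "sign"
    position_m: Tuple[float, float]    # (x, y) relative to the ego vehicle
    speed_mps: float

@dataclass
class Controls:
    throttle: float                    # 0..1
    brake: float                       # 0..1
    steering_rad: float

class PerceptionNet:
    def __call__(self, camera_frames) -> List[Detection]:
        # Network 1: recognize lane markings, signs, obstacles and other road
        # users, and extract their position and speed (stubbed here).
        raise NotImplementedError

class PlanningNet:
    def __call__(self, detections: List[Detection]) -> Controls:
        # Network 2: turn that mid-level representation into throttle, brake
        # and steering commands for the actuators (stubbed here).
        raise NotImplementedError

def drive_step(frames, perceive: PerceptionNet, plan: PlanningNet) -> Controls:
    # The mid-level detections are the only thing passed between the two stages,
    # which is what makes them inspectable (the bounding boxes in verygreen's videos).
    return plan(perceive(frames))
```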

I haven’t followed the legal stuff around Levandowski beyond reading a few articles. But here’s an interesting asterisk to the whole brouhaha about lidar patents:

Vigilante engineer stops Waymo from patenting key lidar technology

Our brains do a tremendous amount of work processing the input from these sensors.

Our eyes also saccade and foveate several times a second!

saccade = dart around

foveate = centre the fovea (the high-res, centre part of the eye) on what we want to see
 
Although you do have valid points, @Bladerskb, I have to disagree with this.

If you're a company that's making a self-driving car, the last type of camera you're going to buy is a $10 one. It definitely won't look good on the spec sheet.
Although the article did say the resolution was inferior to smartphone cameras, smartphone cameras have come a long way. That's like saying "oh, this car is so slow, it's slower than a P100D."



I'm going to take this picture directly off the article you linked to:
[image from the article: the computer and camera rig installed in the car]
If this isn't specialized equipment, then I'd love for you to define specialized, please.
I think the point is that you don't need high-priced, high-resolution cameras. They might even be an impediment, since there is more data to process.
The crucial bit is the picture you posted of the box which houses the neural net. That's what makes the whole thing work. That is very specialized equipment. (And similar to the neural-net hardware Tesla is installing in its cars.)
 