Autonomy Investor Day - April 22 at 2pm ET

Our understanding went full circle: from "vision can be as good as Lidar" to "vision is better" to "vision is what's actually needed."

I’m afraid our "understanding" didn’t really go anywhere yesterday on Lidar. One-sided claims were made by those who have a vested interest in not using Lidar (the already deployed fleet).

OK it's settled. The guy who started the Lidar stuff at Google just said Elon was right about Lidar. What else is new?

Then again, the other people who want to sell a vision-based solution, Mobileye, who indeed claim to already have a full stack more capable than what Tesla showed yesterday, recommend adding Lidar as redundancy.

Basically Tesla was arguing a strawman on the Lidar question and that was the most disappointing part yesterday. Nobody denies these days that going vision first is the way to go. The point has been that sensor redundancy has other benefits.

Even Tesla yesterday spoke highly of their sensor redundancy when a single pedestrian is right in front of the car and visible to three cameras, the radar and the ultrasonics. Well, that’s nice, but those ultrasonics won’t work at speed or at distance, the radar cone is very narrow so it provides no redundancy towards the sides (and nothing around the car), and the same goes for the narrow front cameras...

Tesla has extremely little redundancy in most directions, and indeed some blind spots low in front, and unfortunately none of this was explained away yesterday.
 
I will give Elon this: while he did not alleviate any of the (in my view reasonable) concerns about this suite’s Level 5 worthiness that have existed since 2016, at least he did reset expectations back to where they were in October 2016.

Level 5 on AP2, no geofence, Level 5 meaning all or most weather: I look forward to that. I have no idea how they think that can be accomplished on this suite, but that’s the promise, now repeated, so I look forward to it.
 
I’m afraid our "understanding" didn’t really go anywhere yesterday on Lidar. One-sided claims were made by those who have a vested interest in not using Lidar (the already deployed fleet).



Then again, the other people who want to sell a vision-based solution, Mobileye, who indeed claim to already have a full stack more capable than what Tesla showed yesterday, recommend adding Lidar as redundancy.


Basically Tesla was arguing a strawman on the Lidar question and that was the most disappointing part yesterday. Nobody denies these days that going vision first is the way to go. The point has been that sensor redundancy has other benefits.

Even Tesla yesterday spoke highly of their sensor redundancy when a single pedestrian is right in front of the car and visible to three cameras, the radar and the ultrasonics. Well, that’s nice, but those ultrasonics won’t work at speed or at distance, the radar cone is very narrow so it provides no redundancy towards the sides (and nothing around the car), and the same goes for the narrow front cameras...

Tesla has extremely little redundancy in most directions, and indeed some blind spots low in front, and unfortunately none of this was explained away yesterday.

Pretty messed up mind imo. Sorry!

You are officially on my ignore list.
 
Doesn’t LIDAR have basically the same limitations as vision? From everything I’ve read LIDAR is terrible in rain, fog and water kick-up from leading cars. Its main benefit over traditional vision systems is evidently accuracy. But if you’re able to tune and process vision to within reasonable parity with LIDAR, then LIDAR becomes redundant. If that redundant system is also very expensive and hard to gracefully integrate into the car, then the argument for LIDAR is hard to make.

Not exactly. Lidar is an active light source, so it will work in a different range of lighting conditions than a camera; there is clear redundancy there. After all, we are talking about the potential for a 360-degree view around the car with multiple Lidars, and headlights will not cover everything (think 90-degree turns in darkness).

But when it comes to sensor redundancy we should remember that others are also using and advocating 360-degree radar, something Tesla is not doing. (Tesla is really alone in doubling down on ultrasonics for this...)

Tesla has terribly little in terms of sensor redundancy, not even redundant vision around the car, and this is one of the main arguments in my view for Lidar — it is a redundant sensor.

But Tesla has now doubled down on the AP2 suite with the FSD computer being enough for Level 5 with no geofence, so nothing less from them will now do, of course.
 
Doesn’t LIDAR have basically the same limitations as vision? From everything I’ve read LIDAR is terrible in rain, fog and water kick-up from leading cars.

Lidar has different limitations than vision. For one thing, the vertical resolution isn't great: Velodyne's Lidar has 64 vertical lasers. Since it's physically spinning, the sampling rate isn't great either; 10-15 Hz seems to be typical, so a car moving at 70 mph will cover something like 7-10 feet between sweeps. And since each laser return is instantaneous, rain is a problem: if the laser happens to hit a raindrop, that will read as an object. Coupled with the lower resolution, I could see that being quite confusing. Vision, on the other hand, tends to average the sample over the exposure time, and raindrops in mid-air move so fast through a frame that they don't show up. Have you ever tried to capture raindrops in a photo?
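As a back-of-the-envelope check (my own numbers, not anything from the presentation), here's the math in Python:

```python
# Rough back-of-the-envelope: distance a car covers during one full
# revolution of a spinning lidar, for a few plausible rotation rates.
MPH_TO_FTPS = 5280 / 3600  # 1 mph in feet per second

def feet_between_sweeps(speed_mph: float, sweep_hz: float) -> float:
    """Distance in feet travelled during one lidar revolution."""
    return speed_mph * MPH_TO_FTPS / sweep_hz

for hz in (10, 15, 20):
    print(f"{hz:>2} Hz: {feet_between_sweeps(70, hz):4.1f} ft per sweep at 70 mph")
# 10 Hz: 10.3 ft, 15 Hz: 6.8 ft, 20 Hz: 5.1 ft
```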
 
Lidar has different limitations than vision. For one thing, the vertical resolution isn't great: Velodyne's Lidar has 64 vertical lasers. Since it's physically spinning, the sampling rate isn't great either; 10-15 Hz seems to be typical, so a car moving at 70 mph will cover something like 7-10 feet between sweeps. And since each laser return is instantaneous, rain is a problem: if the laser happens to hit a raindrop, that will read as an object. Coupled with the lower resolution, I could see that being quite confusing. Vision, on the other hand, tends to average the sample over the exposure time, and raindrops in mid-air move so fast through a frame that they don't show up. Have you ever tried to capture raindrops in a photo?

The point about redundancy is to build an image of the world using different sensors with different strengths and weaknesses, running NNs and heuristics on each, and then deciding on the best overall interpretation and action, again using NNs and heuristics.

For example, a side-mounted Lidar or radar pinging an approaching car with its lights off, or a deer at a 90-degree turn where the camera sees only blackness, would be one scenario where Tesla has no redundancy in their AP2/3 robotaxi suite.
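To make that concrete, here is a toy late-fusion sketch (the names and thresholds are mine, purely illustrative, not anyone's actual stack): each sensor's own pipeline reports detections with a confidence, and the fusion step keeps anything that at least one sufficiently confident sensor saw, so one modality's blind spot is covered by another.

```python
# Toy late-fusion example (illustrative only): keep any object that at
# least one sensor reports with enough confidence, so one modality's
# blind spot is covered by another.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str         # "camera", "radar", "lidar"
    kind: str           # "car", "deer", "unknown"
    bearing_deg: float  # direction relative to the ego car
    confidence: float   # 0.0 .. 1.0 from that sensor's own NN/heuristic

def fuse(detections: list[Detection], threshold: float = 0.5) -> list[Detection]:
    """Keep every detection that some sensor reported above the threshold."""
    return [d for d in detections if d.confidence >= threshold]

# Dark 90-degree corner: the camera sees only blackness, but a side radar
# return still surfaces the approaching car.
frame = [
    Detection("camera", "unknown", bearing_deg=90.0, confidence=0.1),
    Detection("radar",  "car",     bearing_deg=90.0, confidence=0.8),
]
print(fuse(frame))  # only the radar detection survives, but the car is seen
```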
 
Several people seem to be of the view that Tesla is still miles away, because they're still using straightforward heuristics for the driving component, while working hard on the image recognition.

In my layman's terms, it takes a human child years to be able to understand the world around them and know what they are looking at. I see this every day with my own kid, shouting out "bus" every time he sees a car bigger than a Corolla. But once that same human understands the world around them, it's a relatively trivial exercise to learn how to drive (30 hours of lessons?).

Does this comparison make sense to the experts or am I completely wrong?
 
Several people seem to be of the view that Tesla is still miles away, because they're still using straightforward heuristics for the driving component, while working hard on the image recognition.

In my layman's terms, it takes a human child years to be able to understand the world around them and know what they are looking at. I see this every day with my own kid, shouting out "bus" every time he sees a car bigger than a Corolla. But once that same human understands the world around them, it's a relatively trivial exercise to learn how to drive (30 hours of lessons?).

Does this comparison make sense to the experts or am I completely wrong?

I guess one of the things that has puzzled people is that solving this quicker would basically mean handing driving over to a neural network, i.e. something akin to an end-to-end neural network solution. That is a big step to make and, as far as I know, something nobody has done yet outside of a simple demo.

Using NNs as part of the driving policy is of course nothing new, but that still means tons of manual coding to build the overall driving-policy system, so it is not the quick way of "learning" to drive if manual coding and combining of NN functionalities is needed... If you need a faster route, it would have to mean handing the driving itself over to an NN. That seems unprecedented in a production system so far.
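For illustration only (these names and numbers are made up, not anyone's real code), the distinction looks roughly like this:

```python
# Illustrative sketch of "NN perception + hand-coded policy" vs. an
# end-to-end network. All names and values are invented for the example.

def perception_nn(camera_frames):
    """Stand-in for a trained perception network: frames -> world state."""
    return {"lead_car_distance_m": 12.0, "safe_gap_m": 20.0, "lane_offset_m": 0.4}

def heuristic_policy(world):
    """Hand-written driving rules layered on top of the NN's output."""
    if world["lead_car_distance_m"] < world["safe_gap_m"]:
        return {"throttle": 0.0, "brake": 0.3, "steer": 0.0}
    return {"throttle": 0.2, "brake": 0.0, "steer": -0.05 * world["lane_offset_m"]}

def end_to_end_nn(camera_frames):
    """Hypothetical single network emitting controls directly from pixels.
    Handing driving over to something like this is the big, unprecedented step."""
    raise NotImplementedError

print(heuristic_policy(perception_nn(camera_frames=[])))  # the hybrid pipeline
```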
 
I think the biggest difficulty has been training the NN to understand all the variation in traffic; in my profession, getting sensitivity and specificity right.
So they label all the data manually. This is a huge task.
But what if you train the NN incrementally? The smart solution is exactly that: "shadow mode" and "fleet learning" are kind of real, even if I thought they were hype.
They upload some snapshots of disengagements (they were vague on how many) and let another NN categorize them ("fleet learning"). Then they take the biggest categories of problem scenarios, ask for more snapshots of those specific scenarios, and label them manually. Then they retrain the NN with the new, more precise data and verify performance in some (?) cars (gradually increasing numbers?) in "shadow mode". Then they deploy fleet-wide. So one day the car is suddenly good at detecting cyclists, another day pedestrians, and so on. Owners will only notice now and then, when the scenario applies.
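In pseudo-Python the loop they described looks roughly like this (every name below is my own placeholder, not Tesla's actual pipeline):

```python
# Sketch of the data-engine loop described above; every class and function
# name is a placeholder for illustration, not Tesla's real code.

class Fleet:
    """Stand-in for the deployed cars plus the backend that talks to them."""
    def collect(self, trigger):            # cars upload snapshots matching a trigger
        return ["snapshot_1", "snapshot_2"]
    def shadow_mode_eval(self, model):     # candidate predicts but never acts
        return {"disagreement_rate": 0.01}
    def deploy(self, model):
        print("deploying", model)

def human_label(snapshot):                 # manual labelling step
    return (snapshot, "cyclist")

def retrain(model, labeled_examples):      # fine-tune on the new examples
    return model + "+cyclists"

def data_engine_iteration(fleet, model, trigger):
    snapshots = fleet.collect(trigger)                # 1. targeted collection
    labeled = [human_label(s) for s in snapshots]     # 2. manual labels
    candidate = retrain(model, labeled)               # 3. retrain the NN
    report = fleet.shadow_mode_eval(candidate)        # 4. shadow-mode check
    if report["disagreement_rate"] < 0.02:            # 5. deploy only if it improves
        fleet.deploy(candidate)
    return candidate

data_engine_iteration(Fleet(), "model_v10", trigger="cyclist-like object")
```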

What I don't really get is how many cars are "surveillance cars" monitoring how the NN performs live, all over the world. When will they recognize the Norwegian summer roller-skier correctly? Areas where there are few Teslas will have poorer performance for longer in their local challenging scenarios.
 
These investors are idiots. If you develop a full self driving car where you have complete vertical integration you can make a ridiculous amount of money in so many different ways. Who cares about the logistics of robo taxis? The question is how you're going to make a full self driving car. I want more questions on that.
Yeah, it's like someone says "we're building a time machine, it will be ready next year."
Q: What colour is it?

Who the h*ll cares?! The question is whether they can build it or not.
 
Yeah, it's like someone says "we're building a time machine, it will be ready next year."
Q: What colour is it?

Who the h*ll cares?! The question is whether they can build it or not.

Yes, but overall the market must have been pondering that question yesterday. That is the $50 billion question here: is this true, or is this Theranos?

Tesla now has to deliver an awful lot in a very short time. The upside is, those of us with AP2 got some great news yesterday, assuming you believe in the upside.
 
Got that covered. I have 6 air cleaners in my house and a HEPA filter in my central HVAC system. My in house PM2.5 levels are outstanding.

I use a rinse-less wash and it works great.

I can leave my garage with a spotlessly clean car, encounter rain, and 50 miles into my 130-mile trip lose NoA. It has happened about a dozen times, and I make this trip often. I paid for FSD at the maximum price. I never complained about the price, but I'll keep complaining about the cameras.
Always or just in winter? I don’t have this problem, and it rains a lot here on the east coast.
 
Wow, what a great presentation. These guys are at the leading edge and to hear them confirm much of what has been deduced on here already by those with access to the system was extremely cool. Hearing what "shadow mode" really is was very enlightening.

Their whole approach to this is pretty damn impressive. It is clear that this programme has cost at least 1 Gigafactory, if not more.

The inner geek was particularly impressed with their simulation and what we saw of their debug tools. Plus x-rays of the NN chip.. awesome :)

As always, Elon could modify his style just a bit to make a massive difference to the quality of the presentation. Just let the experts talk. Be a bit more modest. Don't Osborne your business by talking about cool future stuff that's in the pipeline, and don't destroy credibility by making overblown statements - discussing L3 and delivering L5 is way better than discussing L5 and delivering L3... etc. Still, Elon's the billionaire, so I guess one of us is doing it right :D

Someone posted earlier that Tesla may stop selling mass-market cars in favour of building Robotaxis. I can totally see that. The new lease model for the 3 strikes me as getting an early start here - those cars will be cost-neutral when they are returned. A "free", instant taxi fleet that is just going to generate cash... what is not to like?

Can't wait to see this technology out in the wild.
 
Wow, what a great presentation. These guys are at the leading edge and to hear them confirm much of what has been deduced on here already by those with access to the system was extremely cool. Hearing what "shadow mode" really is was very enlightening.

Parody moment:

Andrej was channeling his inner @Bladerskb when he answered that question about triggers only affecting very small, specific cases creating small amounts of specific data. In the meanwhile Elon was doing his @strangecosmos about how every mile trains the fleet. :)

Let’s go with Andrej on this one. But the evolution of the triggers was the most interesting single bit of the technical presentations for me. I think it was very flattering to the likes of @verygreen and @lunitiks, who had given us insight into what "shadow mode" really is, compared to Elon’s story.

In fairness, Elon did correct one misconception later in the questions when he confirmed that not even HW3 will actually train any neural networks in the car. That was the question that brought out Dojo.
 
ok, where's the new pricing for FSD? Or does that have to wait until after the demo drives?
What was it, how much money a robotaxi generates in a year? $30k? So if we say the car costs $50k and FSD $100k, it would have a five-year payback time...
If FSD cost only $50k, the payback time would be a little over three years...
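Just restating that arithmetic (using the figures above, not any official numbers):

```python
# Payback time = upfront cost / yearly net revenue, using the figures above.
def payback_years(car_cost: float, fsd_cost: float, yearly_net_revenue: float) -> float:
    return (car_cost + fsd_cost) / yearly_net_revenue

print(payback_years(50_000, 100_000, 30_000))  # 5.0 years
print(payback_years(50_000,  50_000, 30_000))  # ~3.33 years
```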

As someone wrote earlier in this thread, Tesla should stop selling cars and keep them all to themselves.
 