Welcome to Tesla Motors Club

Lidar vs Camera revisited

I don't understand how the more accurate point clouds of LIDAR could possibly be a disadvantage.
I don't think it's a disadvantage - except when Tesla started, it was. Ugly lidar units couldn't be added to consumer cars.

You would still have sensor-integration issues ... but that's another story.

In spite of what Elon says - I believe Tesla would have gone with a richer suite of sensors if they were starting out today.
 
You have limited resources. You have more projects to work on than can be done. Projects must be cut. LIDAR out, radar out, etc...
Yes, but OP seems to be saying you're still going to generate a point cloud with cameras. If that problem had been solved then no one would use LIDAR at all. It's an extremely difficult problem and something that Tesla is devoting resources to.
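For context, "point cloud with cameras" usually means pseudo-lidar: estimate a per-pixel depth map with a neural net, then back-project it through the camera intrinsics. A minimal sketch of just the back-projection step (the intrinsics here are made-up numbers; the depth estimation itself is the unsolved hard part):

```python
import numpy as np

# Hypothetical pinhole-camera intrinsics (focal lengths and principal point);
# real values come from camera calibration.
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0

def depth_to_point_cloud(depth):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 5.0)   # toy depth map: everything 5 m away
cloud = depth_to_point_cloud(depth)
print(cloud.shape)  # (307200, 3)
```

The geometry is trivial; the accuracy of the resulting cloud is entirely limited by the depth estimate, which is where lidar still wins.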
 
You have limited resources. You have more projects to work on than can be done. Projects must be cut. LIDAR out, radar out, etc...
thing is, if tesla kept their once-stellar rep, they'd have zero problems finding people to work there.

they'd have to pay well. they'd have to stop the texas BS (turns a lot of progressives off, and we are the key demographic for your technology, DUH!), they'd have to admit when they were wrong, they'd have to start taking customer support/care seriously and they'd have to fire that madman (or just get rid of him) and find a sane leader who has realistic goals and proper use of corp funds.

none of that is going to happen any time soon.

tesla has peaked and while it's sad, it's a cycle and it's how the world works.

what IS sad is that this was entirely preventable. if they only had competent management and not an egomaniac.
 
I work in the field, on my 2nd self-driving company now (neither of which is/was tesla, fwiw) - and all that I'm seeing is that we are not really any closer than we were years ago.
That's good information. It also makes you an employee of a potential Tesla competitor. You should note that in your signature.

so much chaos in the real world, you can't just NN it away.
Depends on your idea of what the car error rate can be. If you think about it - there is so much chaos on the road that there should be a lot more accidents. And then you go and see real chaos on the roads of Cairo or Chennai. Then you wonder how come there aren't accidents every second on every road.

those in the field all know this, as well.
Those in the "field" also thought you couldn't make profitable EVs or make people actually want to buy EVs. 'nuff said.

You are just saying the same thing we have all been talking about for years here. Nothing new.

PS: working on projects that you are sure will fail is one way to make sure the project fails.
 
... Current Lidar approaches do the feature engineering online between the input and the neural network.
I don't understand. ML feature engineering is not done this way.
Tesla are doing their feature engineering when they generate the labels offline.
Feature engineering is done by their engineers.
This saves online computation and no information is lost from the input to the neural network.
You'll have to explain what you mean by feature engineering. Are you talking about labeling?
 
Of course. I don't expect Level 3 or 4 consumer cars in the US anytime soon.
Agree, but there are plenty that are advertising they will upgrade to L3 and L4 when the software is ready. Chinese cars, German luxury brands, Volvo, etc...
Example:
 
Agree, but there are plenty that are advertising they will upgrade to L3 and L4 when the software is ready. Chinese cars, German luxury brands, Volvo, etc...
Example:
I'll believe it when it's actually released. I think we'll see L3 highway systems that use LIDAR in other countries.
 
I don't understand. ML feature engineering is not done this way.

Feature engineering is done by their engineers.

You'll have to explain what you mean by feature engineering. Are you talking about labeling?
Feature engineering refers to the process of using domain knowledge to select and transform the most relevant variables from raw data when creating a predictive model using machine learning or statistical modeling. The goal of feature engineering and selection is to improve the performance of machine learning (ML) algorithms.

Examples of this could be transforming the raw sensor data into a voxel grid, doing clustering, finding the nearest neighbors of points and generating a graph of these, NARF features, etc.
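As a toy illustration of one such hand-engineered transform (a voxel occupancy count; this is generic, not anything Tesla- or lidar-vendor-specific):

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Hand-engineered feature: occupancy count per voxel cell."""
    # Map each 3D point to its integer voxel index, then count points per cell.
    idx = np.floor(points / voxel_size).astype(np.int64)
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(int(i) for i in c): int(n) for c, n in zip(cells, counts)}

pts = np.array([[0.1, 0.2, 0.0],
                [0.3, 0.1, 0.2],
                [1.1, 0.0, 0.0]])
print(voxelize(pts))  # {(0, 0, 0): 2, (2, 0, 0): 1}
```

The engineer chooses the representation (voxel size, what statistic to keep per cell); the network then consumes the result - which is exactly the "domain knowledge" part of feature engineering.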
 
Feature engineering refers to the process of using domain knowledge to select and transform the most relevant variables from raw data when creating a predictive model using machine learning or statistical modeling. The goal of feature engineering and selection is to improve the performance of machine learning (ML) algorithms.
Yeah, feature engineering is done by engineers / scientists, not in the automated pipeline as your original post suggests.
Here is a video if you are in the mood for that:
 
It is developed (done) by engineers and then run in systems either online or offline in an automated pipeline.

Maybe Karpathy can explain it better:
listen from 1:30 onward
"run in systems" means simple math. It is not feature engineering at that point. When Karpathy says training for those features, he means optimization for the goals of that neural network, which is based on features. Features can be simple features or engineered features. An engineered feature can be to lower the jerk which is the third derivative of ( distance / time ).
 
The thing that bugs me about relying on Vidar or Pseudo Lidar is that there is just a little too much magic involved. When you are heading directly at an unresolvable object - like a solid white wall - the technology is useless. Lidar or radar would still say "Object Ahead".

If the situation demands life-critical sensing, I would say use the best sensor, don't Pseudo it with fancy computerized assumption software.
 
There are many costs with adding a sensor besides just the hardware of the sensor. Power, network, compute, etc. Integrating it into the software stack will cost a lot of manpower…

Elon believes the cameras have enough information to solve FSD; it seems likely to be true given that humans can drive with vision alone. We will see if his intuition was correct once we have proof of it working.
Power? lol, the average power consumption of a lidar is about 10 watts.

Compute? These cars have anywhere from hundreds of TOPS to thousands depending on the company. They are not compute constrained; secondly, processing lidar input requires way less compute than cameras.

All I see is made-up points to say lidar is worthless. Just like Elon. Note: if anyone tells you something is worthless that everyone else uses and that they couldn't use until very recently... they are pushing an agenda.

Lastly it’s good to point out that no one is of the opinion that cameras won't eventually be enough for safe FSD - just that it will take until around 2030 as ML improves. This is another argument derived from exactly the opposite of what people actually believe.