HW2.5 capabilities

Incorrect. Lidar returns 3D point coordinates that require no additional processing. A picture is simply numbers that need cutting-edge machine learning with deep neural networks to understand.

Lidar and radar have existed and been used for hundreds of years.

Lidar has been used for dozens of years to detect and classify objects. Why? Because it returns 3D data points which you can then simply plot on a graph, using the xyz coordinates with a simple Python library, numpy for example, or any other library.

It's like plotting a graph back in elementary school. The same can't be said for a picture.
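To make that concrete, here's a minimal sketch of the kind of plot being described, assuming a generic N x 3 array of xyz returns; the points are random stand-ins, since no real sensor dump is at hand:

```python
# Minimal sketch: plotting lidar returns is just graphing xyz coordinates.
# The points below are random stand-ins for a real N x 3 point cloud.
import numpy as np
import matplotlib.pyplot as plt

points = np.random.uniform(-10, 10, size=(1000, 3))  # hypothetical xyz, meters

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=1)
ax.set_xlabel("x (m)"); ax.set_ylabel("y (m)"); ax.set_zlabel("z (m)")
plt.show()
```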


Picture data only started being reliably used to classify objects with the advent of deep learning in 2012.
Before that, not even the most powerful computer in the world could tell you whether a picture was of a cat or a dog.




Yes it does; everything needs corrections. That's the vast difference: data processed to remove known artifacts and noise vs. raw data. All hardware data are processed.



Then stop arguing for radar over lidar when it's useless in comparison!

Ford says they have their lidar working in blizzard conditions.
The video doesn't say it works in a blizzard. Lidar 3D point coordinates still need a NN (or other machine learning methods) to classify objects accurately, especially with lidar's low vertical resolution.
 
The video doesn't say it works in a blizzard. Lidar 3D point coordinates still need a NN (or other machine learning methods) to classify objects accurately, especially with lidar's low vertical resolution.

You can't even detect any objects at all with just a picture, so there's no comparison.

They are simply numbers.
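In the narrow sense that part is true: decoded, an image is just an array of intensities. A minimal sketch of what "simply numbers" means in practice ("photo.jpg" is a hypothetical file):

```python
# Minimal sketch: a decoded RGB image is an H x W x 3 array of raw intensities
# with no object labels attached. "photo.jpg" is a hypothetical file name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("RGB"))
print(img.shape, img.dtype)   # e.g. (480, 640, 3) uint8
print(img[0, 0])              # one pixel: three 0-255 numbers, nothing more
```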

How is it that you can't understand?

This shouldn't even be a question, but yet again I subjected myself to it and easily proved that the camera/picture isn't the best sensor. Case closed. It's actually the worst sensor of them all, with lidar being the best right out of the gate.

However, it's the second-best vision system (a system that allows for post-processing, i.e. using recognition and classification models), with lidar being the best vision system.
 
Lidar has been used for dozens of years to detect and classify objects.
You can't even detect any objects at all with just a picture, so there's no comparison.
You're right, image recognition and computer vision in general are a total sham. /sarcasm


You seem to keep confusing object detection with object recognition and classification. They are different. Lidar can detect objects, radar can detect objects, and a camera with motion can also detect objects. For recognition, you'd need a machine learning or computer vision method on the backend.

At the end of the day, all of them need at least some computation, even for basic object detection tasks.
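A minimal sketch of that last point: even naive lidar object detection is computation. Here, a crude height filter plus Euclidean clustering with scikit-learn's DBSCAN on a stand-in cloud; all thresholds are illustrative, not from any real system:

```python
# Even "simple" lidar object detection takes computation: ground removal,
# then clustering. The cloud and thresholds here are illustrative stand-ins.
import numpy as np
from sklearn.cluster import DBSCAN

cloud = np.random.uniform(-10, 10, size=(5000, 3))   # hypothetical N x 3 xyz
above_ground = cloud[cloud[:, 2] > 0.2]              # crude ground removal
labels = DBSCAN(eps=0.7, min_samples=10).fit_predict(above_ground)
n_objects = len(set(labels)) - (1 if -1 in labels else 0)  # -1 marks noise
print(f"{n_objects} candidate objects, none of them classified")
```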
 
Lidar and radar have existed and been used for hundreds of years.

Hundreds of years? Really? Right about everything, huh?

Try, and this is being generous, 140 years. In reality, radar, the term, was coined in 1939, and thus actual modern radar has been around for less than 80 years.

Lidar has only been around for the past 50 years. So your statement is entirely inaccurate. I will accept your apology forthwith.
 
Hundreds of years? Really? Right about everything, huh?

Try, and this is being generous, 140 years. In reality, radar, the term, was coined in 1939, and thus actual modern radar has been around for less than 80 years.

Lidar has only been around for the past 50 years. So your statement is entirely inaccurate. I will accept your apology forthwith.

@Bladerskb is actually from the future, sent back to warn us of Tesla and their folly. This is the reason behind the 100% track record of facts and predictions.

You will also notice that machine learning was invented in 2012! Maybe Bladerskb hasn't made it back to the '90s yet to change history.
 
You're right, image recognition and computer vision in general are a total sham. /sarcasm

Computer vision accuracy was so bad that most researchers wanted to quit. I know this because I was messing with OpenCV back then.

The video doesn't say it works in a blizzard. Lidar 3D point coordinates still need a NN (or other machine learning methods) to classify objects accurately, especially with lidar's low vertical resolution.

With lidar you don't need to classify objects to drive, since you have the actual data of the object: its dimensions, speed, and exact position in 3D.
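For the record, the speed part of that claim still presupposes some processing: you first have to associate the same object's points across sweeps. A rough sketch under that assumption, with made-up numbers:

```python
# Rough sketch: given the SAME object's points in two consecutive sweeps
# (association already solved), velocity falls out of the centroid shift.
import numpy as np

dt = 0.1                                             # seconds between 10 Hz sweeps
obj_t0 = np.random.randn(200, 3) + [5.0, 0.0, 1.0]   # object's points at time t
obj_t1 = obj_t0 + [0.8, 0.1, 0.0]                    # same points one sweep later

velocity = (obj_t1.mean(axis=0) - obj_t0.mean(axis=0)) / dt
print("speed: %.1f m/s" % np.linalg.norm(velocity))  # ~8.1 m/s
```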

How is it that you can't understand?

There are three categories, none of which the raw pixels of a picture can provide you:

#1 Object Recognition
#2 Object Classification
#3 Object-Scene Understanding

However, lidar can provide you #1 from its raw data, plus an okay #2 and #3, with very minimal, and I mean minimal, code; no machine learning involved.
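Taken at face value, the "minimal code" claim looks something like this sketch: a corridor crop, a height threshold, and a bounding box give an obstacle's rough dimensions with no learned model. The cloud and all thresholds are hypothetical:

```python
# Minimal-code obstacle detection on a point cloud: no learned model, just a
# corridor crop, a height threshold, and a bounding box. All values made up.
import numpy as np

cloud = np.random.uniform(-20, 20, size=(5000, 3))            # hypothetical xyz
ahead = cloud[(cloud[:, 0] > 0) & (np.abs(cloud[:, 1]) < 2)]  # driving corridor
obstacle = ahead[ahead[:, 2] > 0.3]                           # drop ground returns

if len(obstacle):
    mins, maxs = obstacle.min(axis=0), obstacle.max(axis=0)
    print("object ahead, dimensions (m):", maxs - mins, "nearest x:", mins[0])
```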
 
You seem to keep confusing object detection with object recognition and classification. They are different. Lidar can detect objects, radar can detect objects, and a camera with motion can also detect objects. For recognition, you'd need a machine learning or computer vision method on the backend.

At the end of the day, all of them need at least some computation, even for basic object detection tasks.

Stop comparing lidar with radar. First of all, radar only reliably detects MOVING objects; secondly, radar's resolution is so low it's pure garbage. Check the amount of data produced by radar per second.

Camera with motion is garbage. It's all about accuracy. Lidar can get you over 99.99% accuracy.

Lidar needs 0 computation for any object detection task. Zero, nada, none, zip.

bahahaha lol @ radar
 
Hundreds of years? Really? Right about everything, huh?

Try, and this is being generous, 140 years. In reality, radar, the term, was coined in 1939, and thus actual modern radar has been around for less than 80 years.

Lidar has only been around for the past 50 years. So your statement is entirely inaccurate. I will accept your apology forthwith.

Where did you see "modern radar" in my statement? I was referring to radar's existence (the tech, not the term).

Secondly, I lumped lidar into that one statement and then, right in the next statement, clarified that "Lidar has been used for dozens of years to detect and classify objects."

I have never been wrong in my life, and I'm definitely not starting now, especially for you Tesla elites.
If I were, I would gladly get on my knees and repent. Until then...

So your statement is entirely inaccurate. I will accept your apology forthwith.

 
Lidar can get you over 99.99% accuracy

Why is that level of accuracy needed in an autonomous vehicle? If I'm measuring whether or not I hit a pedestrian in tenths of a mm, then there are likely bigger problems with a self-driving car.

No one is saying LiDAR can't be useful; it's just that there's a chance it might be overkill for this application. Waymo suggests their lidar units are $7,500 each. The Pacifica they're testing on has three lidars. That's a roughly $22.5k penalty, in addition to the aero penalties, if the problem could have been solved with vision/radar alone.

Lidar doesn't need interpretation; you could manually plot out a frame of lidar by hand if you wanted.
They are xyz coordinates, not random numbers (RGB).
The same can be said of radar after DSP. Not sure what point you're trying to make. Again, detection != classification, which you argued could be done without processing.
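To illustrate what "after DSP" buys you, here's a sketch of the classic FMCW range measurement: the radar's raw output is a beat tone, and even basic range detection means a windowed FFT. All parameters are illustrative, not any specific automotive unit's:

```python
# Sketch of FMCW radar DSP: the raw output is a beat signal, and even basic
# range detection takes a Fourier transform. All parameters are made up.
import numpy as np

fs, n = 1e6, 1024                       # sample rate (Hz), samples per chirp
bandwidth, t_chirp = 150e6, n / fs      # chirp sweep (Hz) and duration (s)
c = 3e8                                 # speed of light (m/s)

beat_freq = 50e3                        # hypothetical beat tone from one target
t = np.arange(n) / fs
signal = np.cos(2 * np.pi * beat_freq * t) + 0.1 * np.random.randn(n)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
f_peak = np.fft.rfftfreq(n, 1 / fs)[spectrum.argmax()]
range_m = c * f_peak * t_chirp / (2 * bandwidth)    # FMCW range equation
print(f"target at ~{range_m:.1f} m")                # ~51 m for these numbers
```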
 
@Bladerskb is actually from the future, sent back to warn us of Tesla and their folly. This is the reason behind the 100% track record of facts and predictions.

You will also notice that machine learning was invented in 2012! Maybe Bladerskb hasn't made it back to the '90s yet to change history.

When did you ever hear me say that?

As one researcher put it "Since 2012 when the neural network trained by two of Geoffrey Hinton’s students, Alex Krizhevsky and Ilya Sutskever, won the ImageNet Challenge by a large margin, neural networks have quickly become mainstream and made probably the greatest comeback ever in the history of AI."

Chris Bishop: Even the most sophisticated computers can't tell a dog from a cat (2009)

How computers learn to recognize objects instantly

Ten years ago, researchers thought that getting a computer to tell the difference between a cat and a dog would be almost impossible. Today, computer vision systems do it with greater than 99 percent accuracy. (TED Talk, 2017)


In Artificial Intelligence Breakthrough, Google Computers Teach Themselves To Spot Cats on YouTube (2012)
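As a sense of where that post-2012 progress landed, here's a sketch of image classification with a pretrained network, using torchvision's published ResNet-50 weights; "cat.jpg" is a hypothetical input file:

```python
# Sketch: classifying an image with a pretrained convolutional network.
# Uses torchvision's published ResNet-50 weights; "cat.jpg" is hypothetical.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()               # matching input normalization

img = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)
print(weights.meta["categories"][probs.argmax().item()])   # e.g. "tabby"
```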



 
So I guess it's about time we start gathering whatever small bits we really know about HW2.5

- So far we know that it's going to be dual GPGPU
- It has a driver-facing cam used for driver monitoring, just confirmed here: Jason Hughes on Twitter

There's other stuff I heard elsewhere that I guess needs a bit more verification first, but it's good for a start, right?
I know a driver-monitoring cam was a big requirement claimed by some people here, and now we know the Model 3 has it.

For those who have already taken delivery of an APH3 car (Model S or X):
1) Is there a driver-facing camera?
2) Is there any way to tell if the car has AP 2.5 hardware?

I looked through this thread, but it's so long, I might have missed the answer.
 
Lidar needs 0 computation for any object detection task. Zero, nada, none, zip.

With lidar you don't need to classify objects to drive, since you have the actual data of the object: its dimensions, speed, and exact position in 3D.

How is it that you can't understand?
Here you go. Here's Google, NOT doing object detection with LiDAR. Because they read on a message board that knowing a blob's position in space and current speed is enough to know its future movement. Right?

Because a pedestrian moves the same as a deer or a motorcycle? So why classify them? LiDAR does everything. For free. With zero computation required.

 
Here you go. Here's Google, NOT doing object detection with LiDAR. Because they read on a message board that knowing a blob's position in space and current speed is enough to know its future movement. Right?

Because a pedestrian moves the same as a deer or a motorcycle? So why classify them? LiDAR does everything. For free. With zero computation required.

Don't be naive, I have posted that same video and timestamp multiple times. (Heck, you probably took that from one of my previous posts... I'm watching you!)

But we are comparing camera with lidar here, so let's stay on topic, people!

Lidar doesn't NEED to know whether the object in front of it is a pedestrian, bicyclist, or construction cone, because it knows its exact dimensions and position in space.

That doesn't mean you don't want to perform classification on the detected objects (raw data).

The point of this 5-page discussion is that:

1) A picture is a collection of meaningless numbers which, without cutting-edge deep learning, is useless to a self-driving car, or any vision system for that matter, therefore requiring object recognition and classification models.

2) A lidar returns the xyz of the world around it; even without any classification model, you know that there is an object of precise dimensions in front of you.

3) A radar returns data points of the world around it (although with extremely low/garbage resolution); even without any classification model, you know that there is an object of a certain size in front of you.

Secondly, a NN needs to recognize objects in a picture, and any object it can't recognize, the car CANNOT see.
This isn't the case for lidar, which returns 3D coordinates.

End result is, the camera is not the best sensor; lidar is.
 
Not sure how that would solve anything. My exposed AP1 Bosch radar does a great job of turning on the heat when the ambient temp drops below 5 or 4 Celsius (not sure if it was 5 or 4; I did some testing on this last year). And the radar gets hot!

Problem is that the heater only melts the first millimeter(s) of ice/snow that's touching the radar.

With the ice/snow buildups we regularly have in Norway (and probably Canada, the rest of North America, and other countries with lots of Teslas) for long parts of the year, the heating element is often practically useless. You literally have to go outside and scrape off the centimetres of snow/ice.

Which is one of my biggest concerns about autonomy with the kind of sensors and sensor placement we see in Teslas and other cars.

This is why a lot of other car companies are now going with radar behind the windshield, or a heating element on the radar, to avoid snow and ice buildup.

I think there are lots of loopholes in Tesla's AP2.x hardware stack that people are not talking about.
 