HW2.5 capabilities

It's amazing how every time I come to this forum, it reminds me of what Uber's engineer Levandowski said.

"We've got to start calling Elon on his bs... let's start giving physics lessons about stupid sh*t Elon says"

I'm not mad at the Tesla fanbase anymore, because I know where they get it from.

Nine months ago (Dec 1), when I predicted that no EAP feature would show up until Aug/Sep, I was ridiculed, laughed to scorn, and cursed at.
Now everything I said is a reality. When I said Tesla was only using one active camera initially, again I was ridiculed until that was proven, and then they added another camera.

When I said Tesla hadn't started any data collection, that the whole "2 billion" fleet-learning story was a sham, that Mobileye didn't allow recording of video data, and moreover that Tesla had no capability of recording Mobileye camera data, I was ridiculed and laughed at. But now it's been proven, as Tesla has begun its image collection.

And many more...

Now I'm yet again telling you how useless Tesla's radar is, and that will become evident in due time, although it's easily apparent even to a novice.
But I know people would rather believe whatever garbage Elon pushes than objective analysis.
 
There have been a lot of posts here about people's radar failing.

I think Tesla moved the radar from AP1 to AP2 to make it harder to gunk up, but yes, I think actual ice storms or very wet snow might create more of an issue than run-of-the-mill snow. Powder, especially, wouldn't stick. That being said, I found the AP2 radar to be surprisingly robust. I've only had the car from January until present, so it wasn't a full winter, but it was long enough to get a good feel (though AP2 was pretty much garbage at the time; it might protest more now that it's better).
 
Now I'm yet again telling you how useless Tesla's radar is, and that will become evident in due time.

Do you think the Continental radar is an improvement that will help AP2.5 outperform AP2?
 
Interesting paper from 2012 on radar-only pedestrian detection without using machine learning here.

Lots of pictures, examples and sample data, so worth reading. Some highlights:



Radially moving pedestrians in line of sight can be recognized in up to 95.3 % of the samples. The recognition rate drops to 39.5 % for laterally moving pedestrians and to 29.4 % for laterally moving pedestrians occluded in a gap between two parked cars.

The radar sensor’s main limitations in this use case turned out to be insufficient Doppler shift measurement resolution and antenna side lobe effects. Possibly, a finer spatial resolution would also improve the sensor’s performance in recognizing occluded pedestrians. Our findings indicate that pedestrian recognition using this low-level approach is limited. Only radar data with higher Doppler frequency measurement resolution (independent of the used technology, e.g. 24 GHz, 77 GHz) would make major recognition improvements possible.
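To put that Doppler-resolution limitation in rough numbers, here's a back-of-the-envelope sketch (my own illustration, not from the paper; the carrier frequencies and observation windows are just example values): radial velocity resolution scales roughly as wavelength / (2 · T_obs), so it improves with a longer observation window.

```python
# Back-of-the-envelope Doppler velocity resolution (illustrative values only;
# the carrier frequencies and observation windows are NOT Tesla/Bosch specs).
C = 3.0e8  # speed of light, m/s

def doppler_velocity_resolution(carrier_hz: float, observation_s: float) -> float:
    """Approximate radial velocity resolution: delta_v = wavelength / (2 * T_obs)."""
    wavelength = C / carrier_hz
    return wavelength / (2.0 * observation_s)

for f_c in (24e9, 77e9):                  # common automotive radar bands
    for t_obs in (0.005, 0.020, 0.050):   # 5, 20, 50 ms observation windows
        dv = doppler_velocity_resolution(f_c, t_obs)
        print(f"{f_c / 1e9:.0f} GHz, {t_obs * 1e3:.0f} ms window: ~{dv:.2f} m/s resolution")
```

A pedestrian crossing laterally has almost no radial velocity, so even a couple of tenths of a m/s of resolution isn't much to work with, which lines up with the paper's lateral-motion numbers.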
 
I don't understand the claims that the radar can't detect/respond to stationary objects in traffic.

My AP1-car certainly does in the following scenarios:

(1)
When queuing in slow traffic, and the car in front of me comes to a complete stop: My car stops. When the car ahead resumes speed, my car picks up and follows.

(2)
When driving at normal speed (40-60 km/h), and I meet a stopped car in front of a traffic light: My car slows down and stops. When the car resumes speed, my car picks up and follows. And no, I was not following the car beforehand in "homing" TACC mode.

So is the claim that none of this would work if, say, there was smoke or snow or sunlight obscuring the MobilEye camera?

(This is somewhat difficult to test, as TACC/AS obviously is designed to have radar and camera work in tandem: if one is completely blind, then TACC/AS normally disables. My question is whether this is just a precautionary measure or whether it's absolutely necessary. Is it the "stationaryness" of the car ahead that allegedly makes it impossible for the radar?)
 
For what it's worth, the scenario in which I most commonly encounter AP2 TACC not detecting stationary objects is when I pull off at freeway speeds and then pull up to a set of traffic lights, i.e. approaching a stationary vehicle at 100 km/h | 60 mph.

I find that AP2 is now comfortably able to detect non-tracked objects at < 50 mph. I am still a bit jittery and hover my foot over the brake, but it does a fairly smooth job consistently. I've never tried above 50 mph, but I doubt it would cope. Until recently it was only able to handle 35 mph or below.
 
Not true. A camera can't see an object; it can't tell the difference between a pedestrian, an animal, a rock, whatever.
A picture is a collection of RGB pixels (aka numbers). So no, a camera is nowhere near the best sensor for confirming what an object is.
Don't give me that BS. Yes, a picture is a collection of numbers; so is lidar data, and so is radar data. Attach a computer to a camera and you can tell the difference between all sorts of things. To say otherwise is just a lie, and you contradict yourself in previous posts.
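Just to make that concrete with a generic sketch (nothing to do with Tesla's actual stack; the model choice and file name are placeholders): point an off-the-shelf pretrained network at camera pixels and it will distinguish pedestrians, animals, rocks and so on with reasonable confidence.

```python
# Generic illustration: an off-the-shelf CNN classifying camera pixels.
# Pretrained ResNet-18 from torchvision; unrelated to Tesla's actual vision stack.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("frame.jpg").convert("RGB")   # any single camera frame
with torch.no_grad():
    probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)
top5 = probs.topk(5)
print(top5.indices.tolist(), top5.values.tolist())  # ImageNet class ids + confidences
```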

Everything is based on corrective algorithm. EVERYTHING.
Radar doesn't need the same corrections to measure distance through fog, snow, rain, etc. Lidar has to throw away data from laser reflections on nearby objects whereas radar can go through water.

AP1 radar fails all the time when it gets dirty or when it gets snow/ice buildup on it.
Stop praising Elon's and Tesla's terrible engineering job.
Ice or dust layers too thick will block radar, lidar, and cameras for that matter. All autonomous vehicles are affected. If those same things block human vision then humans are affected too. That has nothing to do with Elon or Tesla's engineering.

Again incorrect: lidar works in heavy rain and snow.
You keep mentioning radar. How is it you don't understand that radar is useless?
RADAR is NOT a primary sensor. How can you not understand that?
Seriously, that baffles me.
No one said radar is the primary sensor; all current autonomous vehicles use cameras. Waymo's implementation uses eight cameras and 360-degree radar, in addition to the lidar it likes to show in presentations.
As far as lidar working in heavy rain and snow goes, do you have a source that explicitly says heavy rain or heavy snow?
 
There's a reason radars are configured to only pick up moving objects... it's because they can't differentiate a static object from the environment.

Delphi-LiDAR_RADAR-compare.jpg
You realize that what you just posted says very clearly that radar can detect an object that could be a deer, thus proving @stopcrazypp correct, even though you claim it's incorrect. Read the picture you posted.

Radar is used to see different sizes of aircraft all the time...
 
That being said, I found the AP2 radar to be surprisingly robust.

Need I say more??
Bjørn Nyland on Twitter
 

For what it's worth, the scenario in which I most commonly encounter AP2 TACC not detecting stationary objects is when I pull off at freeway speeds and then pull up to a set of traffic lights, i.e. approaching a stationary vehicle at 100 km/h | 60 mph.

I have some data on this. With the latest update I received (.34 xyz), I twice pulled off an exit ramp at 70 mph with untracked cars at the stop sign at the end of the exit, and AP2 started slowing about 150-200' away from the car. It was very smooth, almost too conservative, but the fact that I was essentially flying and it picked up and started to slow for untracked stopped cars was really good. I also had the same experience with a few stop lights and untracked cars, with the same behavior. I've only had the build for 24 hrs or so and more testing is needed, but so far it's looking good.

However, it still struggles with the damn curves, and wants to drift into the outside lane, but I'll take what I can get.

Oh and it's now showing two cars in front.
 
Needs a heating element for sure.
Not sure how that would solve anything. My exposed AP1 Bosch radar does a great job of turning on the heat when the ambient temperature drops below 5 or 4 °C (not sure if it was 5 or 4; I did some testing on this last year). And the radar gets hot!

The problem is that the heater only melts the first millimeter(s) of ice/snow that's touching the radar.

With the ice/snow buildup we regularly have in Norway (and probably Canada, the rest of North America, and other countries with lots of Teslas) for long parts of the year, the heating element is often practically useless. You literally have to go outside and scrape off the centimetres of snow/ice.

This is one of my biggest concerns about autonomy with the kind of sensors and sensor placement we see in Tesla and other cars.
 
To be honest, I can understand the desire to create a vision only system... the argument that we drive just fine (well, mostly) with just vision isn't too wide of the mark, and in a lot of ways cameras can supersede human vision... surround vision, night vision, wide dynamic range etc. The argument that they can't see at night or into direct sunlight is true of some cameras today, but not all and certainly not all in the future.

Remember that Elon is a futurist... we tend to think of TeslaVision as being developed exclusively for us - or at least to drive our Tesla cars around. So, for us... we look at a car and go "add a Lidar! What's the deal! Why is this not released yet!" or whatever...

But, no doubt TeslaVision is being developed with a bigger end goal - much more than just cars in mind - probably as part of a scalable system that provides vision processing for all sorts of vehicles and scenarios. Even the Tesla semi would be a more complex case to add sensors beyond cameras to, especially where long, interchangeable trailers are involved. Moving beyond cars and trucks entirely, factory robots could be dramatically more efficient to deploy if they could drive themselves around the factory without requiring LIDAR and/or other sensors.

So, whilst it's easy to look at what other manufacturers, who are probably only focused on the car, are doing, it's worth bearing in mind that the TeslaVision team's ultimate ambition might be broader in scope than that. I suppose the ultimate vision stack would be scalable for systems with however many cameras, in whatever position, with various capabilities.

I suppose we also presume they're busy trying to replicate the functionality of a simple rain sensor for wipers. Of course, they may be looking beyond that and working out various different scenarios where the cameras can't see, so that the system can take action accordingly. Do you even need windscreen wipers when you're talking about small cameras? If a bug splats on the windscreen, it'll need to know to activate the washer fluid, and then the wipers. If it's truly autonomous and the windshield is iced up, it'll need to know to activate the heaters etc etc.

If it can't see, it should know how to clear the view, or whether it's simply not safe to proceed. All the talk about cameras being able to see in heavy snow and fog... sometimes, even if you have a sensor that could see in heavy snow, the car shouldn't proceed: it's still a risk to others (since they can't see) or to itself (the road isn't drivable due to ice or whatever).

Of course, I just want EAP and FSD on my Tesla... heck, I just want AP2 to be actually as good as AP1. But, I suppose if you're building this kind of system from scratch, you might as well build it with the big picture in mind.
 
The NN model has not changed since June (i.e., since that 17.26.xx release), yet there were driving-behavior changes: the silky-smooth feel, the showing/not showing of the second car, ...
There are other signs that basically tell us the NN is just for image recognition, while the actual driving logic sits outside the NN, in regular CPU code.

@verygreen - Have you looked into the latest 17.34.xx release files as well?
I've been thinking about this. We have received several updates, at least monthly, but the NN model is 2-3 months old now...
Did Andrej Karpathy decide, when he came on board in June, that it needed to be done properly, and start again from scratch? Or is it finished for EAP (I doubt it)? Or something else?
Anyone else care to speculate? ;-)
 
The NN in 17.34 is the same as before.
A new binary appeared named "clip_logger"; I guess I really need to install it to try it out now.

Also, somewhat unexpectedly, the changelog is completely empty for everybody, which I don't think I've seen before; usually there were always conditional changes listed.
 

@verygreen - You commented that the PID controller was written in Python, which, assuming you're not having fun with us, is a bit shocking. Do you know what framework the NN was developed on? TensorFlow, Theano, Torch, or something of Tesla's own creation?
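For anyone wondering what such a controller even looks like, here is a plain textbook PID loop in Python (purely illustrative; the gains and the speed-hold example are invented and say nothing about Tesla's actual control code):

```python
# Textbook PID controller; illustrative only, not Tesla's code.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: hold a set speed (TACC-style), output interpreted as acceleration.
pid = PID(kp=0.5, ki=0.05, kd=0.1, dt=0.1)
accel_cmd = pid.step(setpoint=25.0, measurement=22.0)  # 3 m/s below target -> positive output
print(accel_cmd)
```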
 
The FOV of the radar still misses over 95% of all driving tasks.



This is MAJORLY incorrect.

There's a reason radars are configured to only pick up moving objects... it's because they can't differentiate a static object from the environment.

Delphi-LiDAR_RADAR-compare.jpg
You are looking at a very simplistic marketing picture. Radar can be used to detect stationary objects. It's just that many systems filter them out, because there are typically many stationary objects in a scene and the application is only interested in moving ones (for example, a police radar is only concerned with speeding vehicles, and most ACC systems are only concerned with following a moving car).
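As a rough sketch of what that filtering amounts to (my own simplified illustration; the threshold and field names are invented for the example): a target whose range rate roughly cancels the ego speed has near-zero speed over ground, and a conventional ACC tracker simply drops it.

```python
# Simplified illustration of how an ACC-style system might drop stationary returns.
# Not vendor code; the threshold and field names are made up for the example.
from dataclasses import dataclass

@dataclass
class RadarTarget:
    range_m: float         # distance to the target
    range_rate_mps: float  # relative radial velocity (negative = closing)

def moving_targets(targets, ego_speed_mps, min_ground_speed=0.5):
    """Keep only targets whose speed over ground exceeds a small threshold.

    A stationary object ahead appears to close at roughly -ego_speed, so its
    estimated ground speed (range_rate + ego_speed) is near zero and it gets
    filtered out -- which is exactly why stopped cars, signs, and bridges are
    so easy for such a system to ignore.
    """
    return [t for t in targets
            if abs(t.range_rate_mps + ego_speed_mps) > min_ground_speed]

ego = 25.0  # ~90 km/h
detections = [
    RadarTarget(range_m=80.0, range_rate_mps=-25.0),  # stopped car / sign / overpass
    RadarTarget(range_m=60.0, range_rate_mps=-5.0),   # slower lead car
]
print(moving_targets(detections, ego))  # only the slower lead car survives
```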

@DamianXVI did an analysis of the 32 objects that the Bosch radar returns, and it is able to return a signature for a lamp post.
AP2.0 Cameras: Capabilities and Limitations?

Using the raw radar signature, even more can be done. A human that is moving their arms and legs when walking has a distinct signature in the doppler image and this can be used to classify them. Read the paper I linked.
https://www.adv-radio-sci.net/10/45/2012/ars-10-45-2012.pdf

And in your example, you are comparing a stationary object (like a rock) with a moving one (like a deer), which is even easier for the radar to differentiate.
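As a toy illustration of that micro-Doppler idea (a synthetic signal, not real radar data; all the numbers are invented): the torso produces a steady Doppler line while the swinging limbs add an oscillating component, and the combination shows up as the characteristic "walking" pattern in a spectrogram.

```python
# Toy micro-Doppler illustration with a synthetic baseband signal (not real radar data).
import numpy as np
from scipy.signal import spectrogram

fs = 2000.0                       # sample rate of the baseband Doppler signal, Hz
t = np.arange(0, 2.0, 1.0 / fs)   # 2 seconds of data

f_torso = 120.0                   # steady Doppler shift from the torso, Hz
f_limb_swing = 2.0                # limbs swing back and forth ~2 times per second
limb_depth = 60.0                 # peak extra Doppler from the swinging limbs, Hz

# Torso return plus a frequency-modulated limb return = a "walking" signature
torso = np.exp(2j * np.pi * f_torso * t)
limbs = 0.5 * np.exp(2j * np.pi * (f_torso * t +
        (limb_depth / (2 * np.pi * f_limb_swing)) * np.sin(2 * np.pi * f_limb_swing * t)))
signal = torso + limbs + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

f, tt, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192, return_onesided=False)
print(Sxx.shape)  # time-frequency map; the limb sidebands oscillate around the torso line
```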