
HW2.5 capabilities

The disparity in that image (and others) I put together may be a bit exaggerated because of a delay between camera captures (the images are not completely in sync with each other):
AP2.0 Cameras: Capabilities and Limitations?
(@verygreen can confirm?)

I did not realize that. By any chance, do you know which of the main and narrow was taken first? Any idea of the time difference?

Edit: which raises a point I hadn't considered, which is that if you want to do binocular you need to keep the time disparity between the frames to a minimum. Do we know if the cameras are synchronized?
 
Edit: which raises a point I hadn't considered, which is that if you want to do binocular you need to keep the time disparity between the frames to a minimum. Do we know if the cameras are synchronized?
The cameras are on the same deserializer, so I don't think you can take a picture from both at exactly the same time. While the images I chose are 0.05s apart, those are from the 1fps stream. The ones from the 30fps stream are 0.003s apart (nanosecond timestamps from a sample I have are 1463298269984 and 1463297959776; 1463 is the whole seconds in both cases).
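To make the sync question concrete, here is a minimal sketch of the kind of check you would want before attempting binocular matching: is the capture-time skew small enough that ego motion doesn't shift the scene between frames? The function name, threshold, and speed are illustrative assumptions, not anything from Tesla's code.

```python
# Minimal sketch: check whether two camera frames are close enough in time
# for stereo matching. Timestamps in nanoseconds; the tolerance and the
# motion-shift bound are illustrative assumptions, not Tesla values.

def frames_usable_for_stereo(ts_main_ns: int, ts_narrow_ns: int,
                             ego_speed_mps: float,
                             max_scene_shift_m: float = 0.01) -> bool:
    """Return True if the capture-time skew moves the car (and hence the
    apparent scene) by less than max_scene_shift_m at the current speed."""
    skew_s = abs(ts_main_ns - ts_narrow_ns) / 1e9
    return ego_speed_mps * skew_s < max_scene_shift_m

# Example: a 0.003 s skew at 30 m/s (~67 mph) shifts the car 9 cm between
# captures, which already exceeds a 1 cm tolerance.
print(frames_usable_for_stereo(0, 3_000_000, 30.0))  # False
```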
 
So it needs millions more stopped-car occurrences to understand them accurately? Until this is solved, FSD is a pipe dream.

It should have enough resolution to detect a vehicle directly ahead of you at 100 meters pretty easily. I have also noticed that it comes in pretty hot on stopped cars, and my interpretation has been that Autopilot wants to see radar confirmation if possible. The problem with stopped cars is that they are very hard to see on a Doppler radar.
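Rough numbers for the resolution claim, as a back-of-the-envelope check. The FOV and pixel count below are illustrative assumptions for a forward camera, not confirmed HW2.5 specs.

```python
import math

# Back-of-the-envelope: how many pixels wide is a car at 100 m?
# Assumed numbers (illustrative only): ~1.8 m car width, ~50 deg horizontal
# FOV, 1280 px across the image.
car_width_m = 1.8
distance_m = 100.0
hfov_deg = 50.0
image_width_px = 1280

angle_deg = math.degrees(2 * math.atan(car_width_m / (2 * distance_m)))
px_per_deg = image_width_px / hfov_deg
print(f"~{angle_deg * px_per_deg:.0f} px wide")  # roughly 26 px
```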

Just in case you haven't heard this stuff before: Doppler radar sorts things by relative speed. Most of the stuff around you (called the background) has the same relative speed (trees, road, buildings), so there's a lot of clutter and noise in that category, which makes it hard to get a usable signal for stuff that is stopped. This is probably why the Autopilot manual warns users to be wary of stopped vehicles on the road - because the radar has a hard time with stopped objects.
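To make the clutter problem concrete, here is a toy sketch of why a stopped car is hard to separate from the background. The data layout and the filter threshold are made up for illustration; this is the general Doppler-clutter issue, not Tesla's radar stack.

```python
# Toy illustration of the Doppler clutter problem: returns are sorted by
# range-rate relative to the car. Everything stationary in the world
# (road, signs, bridges, *and* a stopped car) closes at exactly -ego_speed,
# so a naive clutter filter that drops that bin drops the stopped car too.

ego_speed = 30.0  # m/s

returns = [
    {"id": "lead car (moving)",  "range_rate": -5.0},   # closing slowly
    {"id": "overpass",           "range_rate": -30.0},  # stationary world
    {"id": "guard rail",         "range_rate": -30.0},  # stationary world
    {"id": "stopped car ahead",  "range_rate": -30.0},  # stationary too!
]

moving_targets = [r for r in returns if abs(r["range_rate"] + ego_speed) > 1.0]
print([r["id"] for r in moving_targets])  # the stopped car is filtered out
```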
 
So it needs millions more stopped-car occurrences to understand them accurately? Until this is solved, FSD is a pipe dream.


Tesla already has two solutions to this planned. The first, which is already mostly working in all Autopilot cars today, is sensor fusion: run object recognition against the front camera image and link the cars it finds to radar returns.
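A hedged sketch of that general fusion idea: project a radar return into the image plane and accept it as a vehicle if it lands inside a camera detection box. The pinhole model and intrinsics are simplified assumptions, not Tesla's implementation.

```python
# Simplified sketch of camera/radar association: project a radar return into
# the image and accept it if it falls inside a detected vehicle bounding box.
# Camera intrinsics here are made up for illustration.

def project_to_image(x_m, y_m, fx=1000.0, cx=640.0):
    """Project a radar point (x forward, y left, metres) to an image column."""
    return cx - fx * (y_m / x_m)

def confirmed_by_camera(radar_target, camera_boxes):
    u = project_to_image(radar_target["x"], radar_target["y"])
    return any(box["u_min"] <= u <= box["u_max"] for box in camera_boxes)

radar_target = {"x": 80.0, "y": 0.5}           # 80 m ahead, slightly left
camera_boxes = [{"u_min": 600, "u_max": 660}]  # detected vehicle box
print(confirmed_by_camera(radar_target, camera_boxes))  # True
```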

The second plan is to map all of the ground clutter that's big enough to be mistaken for stopped cars and download tiles to the car with this whitelist for the area they are driving in. If there's a big stationary return that's not on the list, it's probably a stopped car.
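And a toy version of the whitelist idea: quantize the position of a stationary return into a map cell and check it against the downloaded list of known fixed clutter. Cell size, structure, and values are all invented for illustration.

```python
# Toy version of the "stationary clutter whitelist" idea: bucket the position
# of a stationary radar return into a grid cell and check it against a
# downloaded set of known fixed objects (bridges, signs, manhole covers).

CELL_M = 5.0

def cell(x_m: float, y_m: float) -> tuple:
    return (round(x_m / CELL_M), round(y_m / CELL_M))

# Whitelist tile for this stretch of road, as it might arrive from the server.
whitelist = {cell(1203.0, -4.0), cell(1450.0, 2.0)}   # known overpass, sign

def likely_stopped_car(x_m: float, y_m: float) -> bool:
    """A big stationary return not on the whitelist is treated as suspect."""
    return cell(x_m, y_m) not in whitelist

print(likely_stopped_car(1203.5, -4.2))  # False: matches known clutter
print(likely_stopped_car(1320.0, 0.0))   # True: unexplained stationary return
```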

We were told the cars started making this list last winter, but I'm not sure when they will start using it or whether there will be an indication to the driver of that.
 
We were told the cars started making this list last winter, but I'm not sure when they will start using it or whether there will be an indication to the driver of that.
I've seen about zero evidence anything like this was collected and uploaded to the mothership.
Granted, the sqlite database has "CREATE TABLE radar_targets", but it's not clear how that data is collected on their side, AND there's a comment that this is going away soon.

Edit: I wonder if, unlike what I am thinking, the car is supposed to populate the database with radar and other data first and then upload it to the mothership. I certainly see a bunch of INSERT statements in the code, but I did not really investigate what path they come from.
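For a picture of what logging into such a table could look like, here is a hedged sqlite sketch. Only the table name "radar_targets" comes from the post above; every column and value is an invented placeholder, not the real schema.

```python
# Hedged sketch of how a radar-target log table could be populated with
# Python's sqlite3. The columns are hypothetical placeholders.
import sqlite3, time

conn = sqlite3.connect("radar_log.db")
conn.execute("""CREATE TABLE IF NOT EXISTS radar_targets (
    ts_ns      INTEGER,   -- capture timestamp
    lat        REAL,      -- vehicle position when the return was seen
    lon        REAL,
    range_m    REAL,      -- distance to the return
    azimuth    REAL,      -- bearing of the return
    range_rate REAL       -- Doppler velocity of the return
)""")

conn.execute(
    "INSERT INTO radar_targets VALUES (?, ?, ?, ?, ?, ?)",
    (time.time_ns(), 37.4, -122.1, 83.5, 0.02, -29.8),
)
conn.commit()
```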
 
I've seen about zero evidence anything like this was collected and uploaded to the mothership.
Granted, the sqlite database has "CREATE TABLE radar_targets", but it's not clear how that data is collected on their side, AND there's a comment that this is going away soon.

Perhaps the only evidence we have is the new job posting at Tesla for a graphic artist to represent small objects, etc. I can't remember if it was @lunitiks who posted it. @verygreen, do you think the AEB increased to 90 mph will improve the recognition of stopped cars? Or is this likely a different system for emergency braking? This is an intriguing problem for sure... I'll be very interested in how they solve this issue, and how they avoid the false positives.
 
I've seen about zero evidence anything like this was collected and uploaded to the mothership.
Granted, the sqlite database has "CREATE TABLE radar_targets", but it's not clear how that data is collected on their side, AND there's a comment that this is going away soon.

Edit: I wonder if, unlike what I am thinking, the car is supposed to populate the database with radar and other data first and then upload it to the mothership. I certainly see a bunch of INSERT statements in the code, but I did not really investigate what path they come from.

What do you mean there's a comment it's going away soon? I've never heard anything like that... A comment written into the code? Or a comment you found somewhere else?
 
This is super interesting. I skimmed the paper but I couldn’t find how accurate it is. Do you know how accurate depth from context is for a self-driving car application?
I would argue it only needs to be as accurate as a set of human eyes at judging size and distance, which isn't really that accurate at all. As long as they can determine relative size and distance from successive images, as people have stated above, FSD is doable (still concerned about the field of view, though). It's pretty much how the human brain works. I certainly don't have a lidar in my head providing super accurate measurements.
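A hedged sketch of the simplest version of that monocular cue: if you assume a typical real-world width for the object, distance falls out of its apparent width in pixels, and tracking the change across successive frames gives closing speed. The focal length and widths are illustrative assumptions, not a claim about Tesla's approach.

```python
# Minimal monocular-depth sketch: with an assumed real width, distance follows
# from apparent width in pixels; the change across successive frames gives
# closing speed. All numbers below are illustrative.

FOCAL_PX = 1000.0     # assumed focal length in pixels
CAR_WIDTH_M = 1.8     # assumed real width of a typical car

def distance_m(apparent_width_px: float) -> float:
    return FOCAL_PX * CAR_WIDTH_M / apparent_width_px

# Two successive frames, 0.1 s apart: the car ahead grows from 20 px to 20.5 px.
d0, d1 = distance_m(20.0), distance_m(20.5)
closing_speed = (d0 - d1) / 0.1
print(f"{d1:.1f} m away, closing at {closing_speed:.1f} m/s")
```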
 
Indeed, they are shooting for twice as safe as human drivers. People suck at estimating distance at anything more than 20-30 feet away. I have no doubt pretty simple depth from context measurements would be an order of magnitude more accurate than a human’s ranking of “far away”, “getting closer fast”, “oh crap!”. Sure the engineer in me wants super accurate measurements from the car but as they say, perfect is the enemy of good enough...
 
Have a look at any unencrypted binary or library in a hex editor and you'll see lots of readable text. Even better if you have a debugger.

Yes, you'll see lots of readable characters. However, those readable characters are not comments. They are strings used by the binary to log messages, open files, request URLs, etc. You'll also see strings used to reference symbols in other modules, and to export symbols from the library you're looking at. If you are super lucky, you'll also have debug symbols (like function names for non-exported functions, variables, etc). I've never seen source code (e.g., comments) included in a binary.

I suspect @verygreen was talking about something interpreted, like shell, perl, or python, etc.
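For anyone who wants to see the difference themselves, here is a minimal version of what the classic `strings` utility does: it pulls out runs of printable ASCII (log messages, format strings, paths, symbol names), which is exactly the "readable text" described above and has nothing to do with source comments. This is a generic sketch, not anything specific to the Autopilot binaries.

```python
# Minimal version of the `strings` utility: scan a binary for runs of
# printable ASCII. What it finds are log messages, format strings, paths and
# symbol names baked into the binary -- not source comments.
import re, sys

def extract_strings(path: str, min_len: int = 4):
    data = open(path, "rb").read()
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

if __name__ == "__main__":
    for s in extract_strings(sys.argv[1]):
        print(s)
```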