
Seeing the world in autopilot, part deux

That's not how it works. Sergey's statement was based on actual progress and current status. It's not about being right or wrong;
it's about whether your statement was based on material information or whether you made it up.

All predictions are made up to some extent. Nobody actually knows the future. That's why seeing the future is treated as a supernatural power, relegated to mystical figures like the Oracle of Delphi and comic book characters.

It seems to me that Elon's wrong prediction was based as much on general progress in AI and internal R&D efforts as Sergey's wrong prediction was.

This is why the SEC sued and exposed Elon as a fraud.

This should be discussed in another thread, but the issue is more complicated than you're saying. Elon admitted no fault in the settlement and the accuracy of the SEC's claims was never determined in court. Opinion seems divided among former SEC people who have spoken publicly about the issue. For example, here's a former SEC senior counsel, Thomas Gorman, who said he didn't think the SEC's claims had merit:


The SEC acknowledges that Elon had an offer from the Saudi Arabian Public Investment Fund (planned to have $2 trillion in assets under management by 2030) to take Tesla private at any terms Elon named, as long as they were reasonable. The SEC said if Elon had clearly communicated that, there would be no grounds for an enforcement action. Because his statement was vague, it was misleading to some investors and the confusion created market disruption. The SEC didn't claim that Elon lied. SEC regulations have requirements about disclosure that go beyond simply making truthful statements.

Elon stood up at a conference and said EAP would be done in 3 months, and here we are 2 years later with still nothing.

Source? I don't remember that statement.

He made that statement while Tesla had zero software written for EAP.

Do you know this for a fact or are you just speculating? How do you know what software Tesla had written when? As far as I know, nobody outside of Tesla has access to that information.

Without consulting ANYONE.

Source? Again, do you actually know this or are you just speculating?

Furthermore, all his Level 5 statements have been made without consulting anyone and with zero software written.

Source?

In fact, his head of Autopilot, Sterling, was shocked when the blog post went out referring to AP2 as fully self-driving. This was another Elon call, made without even consulting his head of Autopilot.

Again, this sounds like speculation on your part. The Wall Street Journal quoted Sterling Anderson as saying "This was Elon's decision" to sell Full Self-Driving Capability in HW2 cars. This suggests that Sterling didn't agree with the decision. (Especially now that his company Aurora is using lidar.) However, there is no indication in the WSJ article, or anywhere else I have seen, that Sterling was not aware of Elon's decision ahead of time, or that he was not consulted.
 
I’m also curious about this!
There's certainly more map stuff in there. I'm not 100% sure what's going on, but it's pretty clear that there are three map sources: one is HD maps, one is the "adas map tiles" we've known about for a while, and then there's the TeslaMaps stuff (my car got a new batch on Thursday, updated to an early-September version it seems).
The ape talks to the cid to get the cid's idea of the maps, and then also talks to AWS to get the HD maps.

BTW, it's interesting that on the IC display, when you have nav on, it shows some roads and exits in a different color that seems to mean "this is where you could have used drive by nav". This is in addition to the directions on the map showing which turns drive by nav could take by itself and which it could not.
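To make the three-source picture a bit more concrete, here's a rough Python sketch of the lookup flow as I understand it. All function names, endpoints and return values are made up for illustration; none of this is pulled from the actual firmware.

Code:
# Hypothetical sketch of the three-source map lookup described above.
# Endpoint names, tile ids and return values are invented for illustration.

def fetch_adas_tiles_from_cid(tile_id):
    # ape asking the cid for its idea of the map (placeholder)
    return {"tile": tile_id, "source": "cid/adas-map-tiles"}

def fetch_hd_map_from_aws(tile_id):
    # ape asking an AWS endpoint for HD map data (placeholder)
    return {"tile": tile_id, "source": "aws/hd-maps"}

def load_tesla_maps_batch(tile_id):
    # TeslaMaps data already on the car, refreshed in periodic batches (placeholder)
    return {"tile": tile_id, "source": "teslamaps/2018-09"}

def build_map_view(tile_id):
    """Combine all three sources into one view of the road ahead."""
    return {
        "adas": fetch_adas_tiles_from_cid(tile_id),
        "hd":   fetch_hd_map_from_aws(tile_id),
        "tm":   load_tesla_maps_batch(tile_id),
    }

print(build_map_view("tile_1234"))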
 
« Shadow mode means that the car is not taking any action, but it is registering when it would take an action, and when it would not take an action. »

- Elon Musk (timestamp 26:18):


Interpretation A:
Every production car with AP2 hardware is running image detection, video and odometry logging all the time, regardless of whether or not autosteer is engaged. So even when AP is off, the NNs are running, detecting lanes, cars, trucks, pedestrians, their relative speeds, their lane positions, whether they're in your collision path, etc. On a daily, weekly or monthly basis (depending on various factors), the same Tesla cars upload vision detections, raw images/video frames, odometry, vehicle hardware info, and all CAN bus messages sent to/from the drive unit, brake booster, steering ECU, etc. to the Mothership, depending on what data the Mothership is interested in. For example, a request can be made for « send me pictures if you see strong shadows on the road », « send me video data if you see a highway fork situation », « send me logs of AP disengages by stalk », and so on.

Here are some examples of actual requester names:
ap-abort
ap-disengage
ap-diseng-stalk-cancel
autopilot-finish
autopilot-trip-log
calibration-histogram-json
img-accel-intervention
img-backup
img-brake-intervention
img-curve-biased-slow
img-hwy-fork
img-hwy-fork-flicker
img-hwy-fork-prim-flicker
img-hwy-motorcycle-lsplit
img-hwy-motorcycle-pass
img-hwy-motorcycle-rsplit
img-hwy-multi-fork
img-hwy-rain-night
img-hwy-rain-night-fork
img-hwy-re-unstable
img-hwy-s25-50-close-15-cutin
img-lateral-intervention
img-lb-curve-shadow-ray
img-lb-gore-lane
img-lb-hard-curve
img-lb-hard-lanes2
img-lb-hwy-ego-merge
img-lb-hwy-wide-split
img-lb-intersection-lanes
img-lb-int-hwy-fork
img-lb-sharp-curves
img-lb-strong-shadows
img-lb-traffic-light
img-main-narrow-depth
img-nprim-fork
img-sensor-blind
img-split-curve
img-wiper-soft-fp
inertiator-pselect-timeout
self-calibration-json
starfish-trajectory
telemetry-finish
tempmon-overheat
vehicle-driven
wifi-connected
(...)

The uploads are called « snapshots » and they're most likely requested for various reasons. Many of them are probably used for validation purposes, and we can't rule out that some are used for simulation and training purposes. More to the point, however, as far as I can see there's nothing stopping Tesla from using the data to analyze what the driver did in a particular situation (timestamp and location) and compare that to what the vision NN detected - or didn't detect. There's certainly enough data for it.
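To make Interpretation A a bit more concrete, here's a rough Python sketch of how a campaign-driven trigger could work on the car side. The requester names are real (taken from the list above), but the frame fields, trigger conditions and queueing are pure guesses on my part, not the actual client code.

Code:
# Hypothetical sketch of a campaign-driven snapshot trigger (Interpretation A).
# Requester names come from the list above; everything else is invented.

import time

# Campaigns the Mothership is currently interested in, keyed by requester name.
ACTIVE_CAMPAIGNS = {
    "img-lb-strong-shadows":  lambda f: f["shadow_score"] > 0.8,
    "img-hwy-fork":           lambda f: f["fork_detected"],
    "ap-diseng-stalk-cancel": lambda f: f["disengage_reason"] == "stalk",
}

snapshot_queue = []

def on_new_frame(frame):
    """Runs on every frame, whether or not autosteer is engaged (shadow mode)."""
    for requester, trigger in ACTIVE_CAMPAIGNS.items():
        if trigger(frame):
            # Queue detections + odometry + CAN context for a later upload.
            snapshot_queue.append({"requester": requester,
                                   "timestamp": frame["timestamp"],
                                   "payload": frame})

# Example: a frame where the NN saw strong shadows while AP was not engaged.
on_new_frame({
    "timestamp": time.time(),
    "shadow_score": 0.93,
    "fork_detected": False,
    "disengage_reason": None,
    "ap_engaged": False,
})

print([s["requester"] for s in snapshot_queue])   # ['img-lb-strong-shadows']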

Interpretation B:
A small, green elf - « Mr Autopilot » - whose hands and feet are tied and mouth gagged, is residing inside a mysterious autopilot machine in the vicinity of your glovebox. Mr Autopilot sees what you see (actually, he's seeing far more than you can see, because he's got radar and sh**) and, more importantly, he’s judging your every move. He's « registering » and taking notes whenever you're doing something that he wouldn't do, and when you're not doing something that he would do. Mr Autopilot is a super smart elf, make no mistake about it, but he's also got the ability to learn tricks from you. Yes, that’s right, if he makes the wrong call, he’ll look at your moves and « register » those. That way, he's got the maximum experience and confidence possible built up before you unleash him with the double-tap on the AP stalk.

So @buttershrimp, which one is it?
 
Interpretation B:
A small, green elf - « Mr Autopilot » - whose hands and feet are tied and mouth gagged, is residing inside a mysterious autopilot machine in the vicinity of your glovebox. Mr Autopilot sees what you see (actually, he's seeing far more than you can see, because he's got radar and sh**) and, more importantly, he’s judging your every move. He's « registering » and taking notes whenever you're doing something that he wouldn't do, and when you're not doing something that he would do. Mr Autopilot is a super smart elf, make no mistake about it, but he's also got the ability to learn tricks from you. Yes, that’s right, if he makes the wrong call, he’ll look at your moves and « register » those. That way, he's got the maximum experience and confidence possible built up before you unleash him with the double-tap on the AP stalk.

They actually put neural snapshots of people in there.

 
One would think Smart Summon is right around the corner with the new camera processing. Elon mentioned NNs for the B-pillar cameras as a further point of refinement -- is that only for redundancy? It seems most useful for urban areas and not highways.
 
One would think Smart Summon is right around the corner with the new camera processing. Elon mentioned NNs for the B-pillar cameras as a further point of refinement -- is that only for redundancy? It seems most useful for urban areas and not highways.
I think it would require geofencing to private property only. And even then it would be very limited, because there is no technology yet to handle parking structures with human attendants, or automated parking ticket vouchers for payment, validation, etc. Parking structures are just one example.

Like normal Summon, there would be a limited use case for Smart Summon right now. But it would be cool to use at your home or a private residence (a gated community), etc.
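For what it's worth, the geofencing part by itself is trivial: a point-in-polygon check against a boundary the owner draws around their own property. A minimal sketch with made-up coordinates and names:

Code:
# Minimal point-in-polygon geofence check (ray casting).
# Polygon coordinates and function names are hypothetical examples.

def inside_geofence(lat, lon, polygon):
    """Return True if (lat, lon) falls inside polygon [(lat, lon), ...]."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Does the ray from the point cross this edge?
        if (lon1 > lon) != (lon2 > lon):
            cross_lat = lat1 + (lon - lon1) / (lon2 - lon1) * (lat2 - lat1)
            if lat < cross_lat:
                inside = not inside
    return inside

# Hypothetical driveway boundary (four GPS corners) and a Smart Summon request.
driveway = [(37.3946, -122.1500), (37.3948, -122.1500),
            (37.3948, -122.1496), (37.3946, -122.1496)]
print(inside_geofence(37.3947, -122.1498, driveway))   # True  -> allow summon
print(inside_geofence(37.4000, -122.1600, driveway))   # False -> refuse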
 
Hm, looks like Tesla really hates us peeking under the hood in v9; they even created an extra special blacklist just for us. How nice of them.

Anyway, while I still work on useful v9 footage from all cams, at least I can publish some more of the footage I had previously but withheld for privacy reasons. And there's more to come, since they've been turning features on and off in v8.1.

But right now we are going to explore the depth of their visual detection set, which turns out to have some surprising corner cases. For example, they know about pedestrians, but what if a pedestrian is doing something? Say they are pushing a shopping cart? Well, if you are doing something like that, Tesla does not see you. They even get the radar return from the shopping cart and proceed to ignore it due to no corresponding visual match. See the video below (the other part of it shows a regular pedestrian late in the evening, to show it's not just lack of light playing tricks on us).
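For those curious, here's a toy sketch of the kind of vision-gated fusion this behavior suggests. The gating rule, field names and thresholds are my guesses for illustration, not Tesla's actual fusion logic.

Code:
# Hypothetical illustration of the behavior above: a radar return is kept only
# if a vision detection "confirms" it. Names and the gating rule are guesses.

def fuse(radar_targets, vision_detections, max_offset_m=2.0):
    """Keep radar targets that have a nearby vision detection; drop the rest."""
    confirmed = []
    for r in radar_targets:
        if any(abs(v["lateral_m"] - r["lateral_m"]) < max_offset_m
               and abs(v["range_m"] - r["range_m"]) < max_offset_m
               for v in vision_detections):
            confirmed.append(r)
    return confirmed

# Pedestrian pushing a cart: radar sees metal, but vision emits no pedestrian box.
radar = [{"range_m": 18.0, "lateral_m": 1.5, "rcs": 4.2}]
vision = []                 # no 'pedestrian' / 'personCart' detection this frame
print(fuse(radar, vision))  # [] -> the cart's radar return gets thrown away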

 
But right now we are going to explore the depth of their visual detection set, which turns out to have some surprising corner cases. For example, they know about pedestrians, but what if a pedestrian is doing something? Say they are pushing a shopping cart? Well, if you are doing something like that, Tesla does not see you. They even get the radar return from the shopping cart and proceed to ignore it due to no corresponding visual match. See the video below (the other part of it shows a regular pedestrian late in the evening, to show it's not just lack of light playing tricks on us).


Huh? It does recognize that there is an obstacle based on the green drivable area (they are clearly out of bounds); it just didn't put a characterization box (personCart) around it.
Admittedly, that could allow the car to get too close, depending on the keep-away distance...
 
The amount of false positives and false negatives in Tesla's NN at this stage of the game is alarming!

By the way, you should do a three-camera setup with visual overlays, similar to what Zoox did.

Left Repeater | Fisheye | Right Repeater
or
Left Pillar | Main | Right Pillar

If you really want to collab, shoot me a PM and I can maybe generate a 3D view using the vector info you have.
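For reference, tiling the three streams side by side is straightforward once the individual recordings exist. A rough OpenCV sketch, assuming the three feeds have already been dumped to files (filenames are placeholders; any overlays would be drawn on each frame before stacking):

Code:
# Rough sketch: tile three camera streams side by side, e.g.
# left repeater | fisheye | right repeater. Filenames are placeholders.

import cv2

caps = [cv2.VideoCapture(p) for p in
        ("left_repeater.mp4", "fisheye.mp4", "right_repeater.mp4")]

writer = None
while True:
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if not ok:
            frames = None
            break
        frames.append(cv2.resize(frame, (640, 480)))   # common size before stacking
    if frames is None:
        break
    row = cv2.hconcat(frames)                          # 1920x480 composite frame
    if writer is None:
        writer = cv2.VideoWriter("three_cam.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 30, (row.shape[1], row.shape[0]))
    writer.write(row)

for cap in caps:
    cap.release()
if writer is not None:
    writer.release()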



Huh? It does recognize that there is an obstacle based on the green drivable area (they are clearly out of bounds); it just didn't put a characterization box (personCart) around it.
Admittedly, that could allow the car to get too close, depending on the keep-away distance...


Not quite. It recognized it as the edge of the road and then as the edge of the lane; it doesn't, however, recognize that there is an obstacle. That's not good. What semantic free space does is color every pixel that looks like the road. I don't want a car resorting to using me as the boundary of a lane/road.
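To make the distinction concrete: semantic free space is a per-pixel "looks like road" mask, while detection produces labeled boxes the planner can reason about (class-specific keep-away distances, predicted motion, etc.). A toy sketch of the difference, with made-up numbers:

Code:
# Toy illustration: free space is a per-pixel "road" mask; detection gives
# labeled boxes like personCart. All values here are made up.

import numpy as np

H, W = 60, 80
free_space = np.ones((H, W), dtype=bool)   # per-pixel "this looks like road"
free_space[:, 55:] = False                 # person + cart show up only as "not road"

detections = []                            # the personCart box that was never emitted

# Mask-only view: the obstacle is just where drivable space ends on this row.
row = free_space[H // 2]
first_non_road = int(np.argmin(row[20:])) + 20   # first non-road column right of ego
print("free space ends at column", first_non_road)

# Detection view: a classified box would let the planner apply a larger,
# class-specific keep-away distance (a pedestrian gets more room than a cone).
if not detections:
    print("no personCart box -> only the free-space boundary protects the pedestrian")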
 
Not quite. It recognized it as the edge of the road and then as the edge of the lane; it doesn't, however, recognize that there is an obstacle. That's not good. What semantic free space does is color every pixel that looks like the road. I don't want a car resorting to using me as the boundary of a lane/road.

I agree with you: being properly classified is best, and it did do so at short range (annotated with no radar sig). But I'd still rather be judged a non-drivable surface than not be seen at all.
 
Hm, looks like Tesla really hates us peeking under the hood in v9; they even created an extra special blacklist just for us. How nice of them.

Per usual, I'm curious about this fun new blacklist that you all discovered. So you have the general "how dare you peek behind the curtains" blacklist, and now a "you spoiled v9" blacklist? What's the difference?
 