2016 FSD video: there was an accident with a roadside barrier on Tesla property

I don't understand why you would want to believe that CJ told the DMV that Elon's tweet matched engineering reality...
Wouldn't it be worse if he thought it did?
The DMV memo has CJ's redacted statement and then a longer spiel about how Tesla is currently at Level 2, what needs to happen to achieve L5 capability, how Elon is extrapolating on the rates of improvement, and then finishes with "Tesla couldn't say if the rate of improvement would make it to L5 by the end of the calendar year."

I'd say it's pretty obvious what happened there... CJ likely spoke off the cuff and then either he or someone else walked back that blunt statement.
 
I'd say it's pretty obvious what happened there... CJ likely spoke off the cuff and then either he or someone else walked back that blunt statement.
Seems like CJ said Elon is wrong (which of course turned out to be true). The DMV redacted that sentence when they released the document (to comply with a FOIA request) because it could be interpreted as contradicting the CEO, and they're not trying to get people fired. Clearly the fact that people here are so upset by it shows they were right to redact it (though they botched the redaction!). I wonder if this had anything to do with CJ later leaving the company...
 
  • Like
Reactions: Terminator857
There is no frontal Lidar so you're entirely reliant on the visual spectrum.
As far as I understand, lidar is no different from a camera in its ability to see: both operate in or near the visual spectrum. Radar penetrates weather better, but it is lower resolution even in theory, not to mention that actual automotive radars do not really build a picture at all.
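
A rough back-of-the-envelope comparison makes that resolution gap concrete (the radar aperture, camera FOV, and pixel count below are assumptions picked for illustration, not specs of any real sensor):

```python
# Back-of-the-envelope angular resolution: radar vs. camera.
# All numbers are illustrative assumptions, not specs of any real sensor.
import math

C = 3.0e8                                # speed of light, m/s
radar_freq_hz = 77e9                     # typical automotive radar band
wavelength_m = C / radar_freq_hz         # ~3.9 mm
aperture_m = 0.10                        # assumed 10 cm antenna array
radar_deg = math.degrees(wavelength_m / aperture_m)  # diffraction limit, ~2.2 deg

cam_fov_deg = 50.0                       # assumed camera field of view
cam_pixels = 1280                        # assumed horizontal resolution
cam_deg = cam_fov_deg / cam_pixels       # ~0.04 deg per pixel

print(f"radar ~{radar_deg:.1f} deg/beam vs camera ~{cam_deg:.3f} deg/pixel")
# At 100 m a 2.2 deg beam spans ~3.9 m, wider than a car: one radar cell
# can swallow a whole vehicle, which is why radar doesn't "build a picture".
```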

There is no stereo camera so you're faced with doing SW tricks to derive 3D data
Isn't all a stereo camera does SW tricks anyway? Besides, Teslas have three forward-facing cameras, so that already gives you a stereo picture, and on top of that you have additional SW tricks based on the differing focal lengths...

In addition to these, the car lacks HW that would make semi-autonomous driving easier: for example, there is no dedicated driver monitoring system, so the customer has to deal with the torque sensor.
FSD Beta already has camera-based driver monitoring, and you can be kicked out of Autopilot even if you keep applying torque.

The car even lacks a basic rain sensor and instead has to rely on its self-driving computer for that functionality, which to this day still doesn't compare with a cheap sensor.
Yea... This is the worst thing about the car... It virtually never works for me...
 
Humans can move their heads
Eyes are expensive, so we have to make do with only two of them...

Humans can close and open their eye lids
The forward-looking cameras have the wipers, and some of the others are partially protected by the forward motion, but some system to clean them would have helped. It might not be mandatory, though; we will have to see.

Humans have magnitudes more processing power available to them
Humans are currently much more efficient with the available processing power, but I'm not sure how much of the brain's processing power is actually involved in driving. I would not be surprised if it is less than what is available to the car. We know very little about the brain's processing.

Science also tells us that humans are wary of autonomous driving, and it needs to be significantly better than human drivers. ... The science so far says no.
If by "science" you mean "experience"...
 
FSD Beta already has camera-based driver monitoring, and you can be kicked out of Autopilot even if you keep applying torque.
It's not designed for the task of driver monitoring.

Instead it's a cabin camera repurposed for driver monitoring despite not being specifically designed for the task. That means it's going to have a lot of weaknesses.

Whether it works well is yet to be determined. We do know they're making changes in new cars, and the changes seem intended to help the camera with the driver monitoring task.
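
To make the weaknesses concrete, here's a toy sketch of what camera-based monitoring has to do. This is emphatically not Tesla's algorithm; the inputs and thresholds are invented, and it assumes an upstream vision model that already estimates gaze and eye state:

```python
# Toy driver-attention heuristic. NOT Tesla's algorithm: the inputs and
# thresholds are invented, and it assumes an upstream vision model that
# already outputs per-frame gaze angles, eye state, and a confidence score.
from dataclasses import dataclass

@dataclass
class Frame:
    gaze_yaw_deg: float     # 0 = looking straight down the road
    gaze_pitch_deg: float   # negative = looking down (phone in lap)
    eyes_open_prob: float   # 0..1 from an eye-state classifier
    confidence: float       # 0..1, low when occluded or poorly lit

def attentive(f: Frame) -> bool:
    return (abs(f.gaze_yaw_deg) < 25
            and f.gaze_pitch_deg > -15
            and f.eyes_open_prob > 0.5)

def monitor(frames, fps=10, max_away_s=2.0, min_conf=0.4):
    """Warn if the driver looks away for longer than max_away_s."""
    away = 0
    for f in frames:
        if f.confidence < min_conf:
            continue        # can't tell: this is where a torque sensor helps
        away = 0 if attentive(f) else away + 1
        if away / fps > max_away_s:
            return "WARN: eyes off road"
    return "ok"

road = [Frame(5.0, -2.0, 0.9, 0.8)] * 50     # watching the road
phone = [Frame(60.0, -40.0, 0.9, 0.8)] * 30  # looking down and away for 3 s
print(monitor(road + phone))                 # -> WARN: eyes off road
```

The low-confidence branch is the weak spot: a repurposed cabin camera will hit hat brims, sunglasses, and bad lighting constantly, and every frame it can't classify has to fall back on something else.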
 
It's not designed for the task of driver monitoring.

Instead it's a cabin camera repurposed for driver monitoring despite not being specifically designed for the task. That means it's going to have a lot of weaknesses.

Whether it works well is yet to be determined. We do know they're making changes in new cars, and the changes seem intended to help the camera with the driver monitoring task.
If you show me footage from the cabin camera I'm pretty sure I can tell if the driver is paying attention. So the "humans can do it"/ "first principles" argument popular around here applies to the cabin camera too. :p
 
  • Like
Reactions: apsen
Isn't all a stereo camera does SW tricks anyway? Besides, Teslas have three forward-facing cameras, so that already gives you a stereo picture, and on top of that you have additional SW tricks based on the differing focal lengths...

Stereo imaging with identical focal lengths is pretty straightforward to process and allows high-accuracy depth perception at pretty good ranges.

During the last Autonomy Day I was hoping to see what kind of accuracy Tesla would get at long-range detection with their pure vision system, but they left that key information out of their presentation. So I can't compare whatever SW tricks they're doing against what a standard stereo setup like Subaru's EyeSight can do.

What's key here is being able to accurately detect the distance to some object, or pile of objects, that it's never been trained on. That is something time-of-flight systems like lidar do really well.
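
For contrast, here is essentially the entire "trick" a time-of-flight sensor needs; a minimal sketch:

```python
# The entire "trick" behind time-of-flight ranging: distance falls straight
# out of the pulse's round-trip time, no training data involved.
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0   # halved: the pulse goes out and back

# A return after ~667 ns means an object ~100 m away, whether it's a car,
# a mattress, or a pile of debris the network has never seen.
print(f"{tof_range_m(667e-9):.1f} m")   # ~100.0 m
```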

I feel like Karpathy himself has been pretty honest about the challenges Tesla faces with their camera setup. He does feel they're solvable using approaches like Vidar and birds-eye-view networks. But it's unclear to customers like myself what's actually running in FSD Beta.

There is often excitement at what's showcased on Autonomy Day, followed by disappointment when the latest and greatest FSD Beta still ends up crashing into something like a garbage can at night.
 
If you show me footage from the cabin camera I'm pretty sure I can tell if the driver is paying attention. So the "humans can do it"/ "first principles" argument popular around here applies to the cabin camera too. :p

I sent the FSD Beta team an email requesting the ability to save footage from the interior camera.

There is no way for me as an FSD Beta tester to test this system and determine whether it's working.

What I saw from VeryGreen (who posted images from the interior camera) is mixed: it seemed to work well in some circumstances and poorly in others. From those images I don't have a lot of confidence that Tesla will move to cabin camera only for driver monitoring while it's L2. They'll likely always stick with the torque sensor. I hope not, as I dislike the torque sensor.
 
Is there a suggestion for where to put another cabin camera? If you put it high in front of the driver, a baseball cap brim will block seeing where the person is looking. I wonder if the current location will show the right eye even when the driver wears a baseball cap.
 
If you show me footage from the cabin camera I'm pretty sure I can tell if the driver is paying attention. So the "humans can do it"/ "first principles" argument popular around here applies to the cabin camera too. :p
Yes, eventually. In good lighting ;)

ps: You should have seen some of my classmates in college. They had perfected the art of looking attentive while sleeping in class.
 
  • Like
Reactions: pilotSteve
I sent the FSD Beta team an email requesting the ability to save footage from the interior camera.

There is no way for me as an FSD Beta tester to test this system and determine whether it's working.

What I saw from VeryGreen (who posted images from the interior camera) is mixed: it seemed to work well in some circumstances and poorly in others. From those images I don't have a lot of confidence that Tesla will move to cabin camera only for driver monitoring while it's L2. They'll likely always stick with the torque sensor. I hope not, as I dislike the torque sensor.
They could always have a human look at low-confidence clips. Other AV companies have attention monitoring systems in their prototype vehicles, but I'm sure they also look at recordings and fire people based on that.
Keeping your hands on the wheel is more than just attention monitoring when you have a system that can randomly swerve into oncoming traffic (though randomly swerving into oncoming traffic is probably the most effective way to maintain a driver's attention!).
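
Some rough arithmetic on why the hands matter, with made-up but plausible reaction times:

```python
# Rough arithmetic behind "hands on wheel" vs. "eyes on road only".
# Reaction times here are illustrative assumptions, not measured values.
MPH_TO_MS = 0.44704

def travel_m(speed_mph: float, delay_s: float) -> float:
    return speed_mph * MPH_TO_MS * delay_s

hands_on = travel_m(70, 0.75)    # assumed: see the swerve, steer immediately
hands_off = travel_m(70, 1.50)   # assumed: + reach for the wheel first

print(f"~{hands_on:.0f} m vs ~{hands_off:.0f} m travelled at 70 mph")
# ~23 m vs ~47 m: against a random swerve toward oncoming traffic, the
# extra second spent grabbing the wheel is the whole ballgame.
```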
 
Also, how does pointing the finger at their vendor help the customer?

I agree with this statement, but I'm confused by its source.

The only one pointing the finger at the vendor (Mobileye) is you.

You blamed Mobileye for a weakness in the Ford BlueCruise system. Whether or not that weakness is due to Mobileye is irrelevant to the customer. I only chimed in because I felt you didn't understand how ADAS systems are implemented in vehicles: there are often a lot of decisions left up to the manufacturer in how features work and what's supported. There can also be a lot of customization.
 
  • Disagree
Reactions: mark95476
Keeping your hands on the wheel is more than just attention monitoring when you have a system that can randomly swerve into oncoming traffic (though randomly swerving into oncoming traffic is probably the most effective way to maintain a driver's attention!).
Ha, that's definitely true.

I've never in my life been more attentive with both hands on the steering wheel than while FSD Beta testing.

The only attention risk is the stupidly small report button.
 
  • Funny
Reactions: EVNow
During the last Autonomy Day I was hoping to see what kind of accuracy Tesla would get at long-range detection with their pure vision system, but they left that key information out of their presentation. So I can't compare whatever SW tricks they're doing against what a standard stereo setup like Subaru's EyeSight can do.
You seem to be assuming that if you do not know, then it is bad... But anyway, it all comes down to simple triangle math: you have a base and two angles. To calculate the angles you do need the focal length, whether it is the same for both cameras or not... On top of that, you could use additional information derived from the focal length difference.
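
A minimal sketch of that triangle math (all numbers illustrative), including the focal length's only real job, converting a pixel offset into an angle:

```python
# The "simple triangle math": depth from a known baseline and two angles.
# The focal length's only job is converting a pixel offset into an angle,
# so it may differ per camera. All numbers below are illustrative.
import math

def pixel_to_angle(pixel_offset: float, focal_length_px: float) -> float:
    return math.atan2(pixel_offset, focal_length_px)

def depth_from_angles(baseline_m: float, a_left: float, a_right: float) -> float:
    # Parallel-axis cameras: tan(aL) - tan(aR) = baseline / depth.
    return baseline_m / (math.tan(a_left) - math.tan(a_right))

# Two cameras 0.3 m apart with DIFFERENT focal lengths (1400 px and 700 px):
a_left = pixel_to_angle(42.0, 1400.0)
a_right = pixel_to_angle(6.0, 700.0)
print(f"{depth_from_angles(0.3, a_left, a_right):.1f} m")   # ~14.0 m
```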
 
  • Funny
Reactions: Daniel in SD
At night the car has lights. What other low-visibility situations are you referring to? Fog? Lidar does not help with that.

Aren't you even a bit alarmed by the fact that pure vision forces autobrights on?

Sure, it still works if you force them off. But the fact that AP with pure vision turns on the autobrights means it's attempting to optimize its performance with something a lot of us simply can't use because there is too much traffic.

What I'm alarmed by is how much worse pure vision is in the rain than what I had with Vision+Radar.

It was so bad that the car even slowed down under TACC due to the weather alone. It wasn't even raining that much.

Sure, it might be a temporary thing as the AP/FSD team works out issues with pure vision.

But it's alarming.
 
  • Informative
Reactions: pilotSteve
You seem to be assuming that if you do not know, then it is bad... But anyway, it all comes down to simple triangle math: you have a base and two angles. To calculate the angles you do need the focal length, whether it is the same for both cameras or not... On top of that, you could use additional information derived from the focal length difference.

It comes down to how the accuracy of the vision system compares to Radar+Vision or Radar+Vision+Lidar.

The Autonomy Day video did an excellent job comparing vision-only with the old vision+radar system, but they left out medium to long distances.
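
For context on why the omitted ranges are the interesting ones: under the standard stereo error model, depth error grows with the square of distance. The baseline, focal length, and disparity error below are illustrative assumptions, not Tesla's numbers:

```python
# Standard stereo/triangulation error model: depth error grows with the
# SQUARE of distance. Baseline, focal length, and disparity error are
# illustrative assumptions, not Tesla's numbers.
def depth_error_m(z_m: float, baseline_m=0.3, focal_px=1400.0,
                  disparity_err_px=0.5) -> float:
    # dZ ~= Z^2 * d_disparity / (f * B)
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)

for z in (10, 50, 100, 150):
    print(f"{z:>4} m -> +/- {depth_error_m(z):.1f} m")
# 10 m -> +/- 0.1 m, 150 m -> +/- 26.8 m with these assumptions; exactly
# the medium-to-long range regime a time-of-flight sensor measures directly.
```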

If someone leaves something out of an otherwise excellent presentation, it does make one wonder why.

Maybe it was poor, and instead of reporting the numbers they felt they'd work on it some more before publishing them.

The fact that it's a work in progress does support that. Right now it's limited to 80 mph versus the 90 mph of the radar+vision implementation. The fact that it hasn't reached parity means any comparison is a bit premature.

So here we are, almost in 2022, and there still isn't a conclusion on how well pure vision will work even for TACC. At this point it's not even about lidar; it's the question of whether we're going backwards, whether we're giving up performance in the rain and other mild to moderate weather conditions for the fantasy of pure vision.