
UK - Transitioning to Tesla Vision - removal of Radar

Interesting presentation from the head of Tesla AI (Andrej Karpathy) regarding Tesla's move to vision-only FSD.


Bridges and phantom braking featured :)

Worth a watch if you have both the time and the patience. I'll let you draw your own conclusions, but it left me a little more optimistic.

I still struggle to see how it will manage the 4-lane (with barely visible lane markings) 'roundabout' of confusion known as the Basingstoke Black Dam roundabout - Christ, humans can't even navigate it, never mind computers.

Can see the AI is going to have some serious learning to do :p

No timescales mentioned of course.
What's interesting is that it looks like they might need to train their neural network for each country individually, and this is why the UK is still running the off-the-shelf software from ten years ago. The USA has different road surfaces, different infrastructure styles, different driving styles, narrower lanes, different lane markings, different roadworks, different vehicles etc etc, and what works there probably wouldn't work anywhere else.
 
Moderator comment - can we please keep to the topic of the video content and not opinions on FSD itself - there are dozens of other places to discuss why or why not you would buy EAP/FSD and whether current performance is indicative of future performance
 
What's interesting is that it looks like they might need to train their neural network for each country individually, and this is why the UK is still running the off-the-shelf software from ten years ago. The USA has different road surfaces, different infrastructure styles, different driving styles, narrower lanes, different lane markings, different roadworks, different vehicles etc etc, and what works there probably wouldn't work anywhere else.
I wonder if that rework for the UK will ever happen? I have my doubts. But even if it does happen, it's probably a 6-12 month project in itself. There's little real incentive to pay for FSD until that happens IMHO.
 
What's interesting is that it looks like they might need to train their neural network for each country individually

I did not see that from the presentation.

Whilst there will be a need for some localisation relating to signs and local rules/laws for each country, the fundamentals of the NN dealing with things like velocity, distance, rain etc. will remain the same.

The auto-labelling is the interesting bit; coupled with Dojo it could lead to an exponential increase in the rate of learning/advancement... I hope so!
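
Purely how I picture the auto-labelling loop - a hypothetical sketch with invented names, not anything shown in the talk:

# Hypothetical sketch: an offline model that can see the whole clip
# (including frames *after* time t) produces labels "for free", and those
# become training data for the online, in-car net.
def auto_label_clip(clip, offline_model):
    labelled = []
    for t in range(len(clip)):
        # Hindsight: the offline model looks at the entire clip to decide
        # what was really at time t (depth, velocity, occluded objects...).
        label = offline_model(clip, t)
        labelled.append((clip[t], label))
    return labelled  # (frame, label) pairs to train the online net on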
 
I watched it and it worried me more than not.

He acknowledged radar was brilliant at distance and they were trying to match it, and rather than fix the issues with radar (the odd spike - which they could do, but decided not to) they want to use vision only. It reminded me of the windscreen wipers: getting rid of a sensor because of the belief that with enough data a network can be trained to answer anything, which they've already demonstrated isn't true.
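
For what it's worth, rejecting "the odd spike" is a well-worn trick - a hypothetical sketch (entirely my own, nothing to do with Tesla's stack):

# Replace isolated radar range spikes with a local median (illustrative only).
def reject_spikes(ranges, window=5, max_jump=3.0):
    # ranges: one range reading (metres) per radar cycle
    # max_jump: a reading this far (m) off the local median counts as a spike
    cleaned = list(ranges)
    half = window // 2
    for i in range(len(ranges)):
        lo, hi = max(0, i - half), min(len(ranges), i + half + 1)
        neighbourhood = sorted(ranges[lo:hi])
        median = neighbourhood[len(neighbourhood) // 2]
        if abs(ranges[i] - median) > max_jump:
            cleaned[i] = median  # the odd spike gets smoothed away
    return cleaned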

The preamble was Lidar and HD maps vs vision, but the debate isn't about lidar, it's about radar for forward range detection. It just seemed to be a spurious argument suggesting that dropping radar was because it had similar weaknesses to lidar.

The wide-angle camera also seemed to be obscured at the edges, and the camera resolution of 1280 x 960 doesn't seem particularly high either.

I just have a horrible feeling we've seen it all before. Hopefully I'll be wrong, but I saw nothing to convince me, including the videos at the end. It might be better than their outgoing implementation using radar, but that doesn't mean it's as good as a good implementation of radar.

Anybody else find "meat computer" a bit offensive too?
 
I'm surprised the video suggests they only moved to vision for velocity etc. this late - and spent only 4 months on it. I assumed they'd been on this for years.

And the framerate seems slow - 36Hz for the cameras. Although they have lots of cameras to process, that's ~28ms between frames, and you'd need a few frames to build 'objects' and track them, which is quite slow.
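
Back-of-envelope (the frames-to-track count is my guess, not a number from the talk):

camera_hz = 36
frames_to_track = 4                        # assumption, purely illustrative
frame_gap_ms = 1000 / camera_hz            # ≈ 27.8 ms between frames at 36 Hz
track_ms = frames_to_track * frame_gap_ms  # ≈ 111 ms just to confirm a track
print(f"{frame_gap_ms:.1f} ms/frame, ~{track_ms:.0f} ms to build an object track")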

But agree - the core vision capturing objects and allowing them to be tracked, and recognising basic roads etc., should be fairly global, and then it's more about how the different country rules need to be interpreted with those moving objects.
 
What's interesting is that it looks like they might need to train their neural network for each country individually, and this is why the UK is still running the off-the-shelf software from ten years ago. The USA has different road surfaces, different infrastructure styles, different driving styles, narrower lanes, different lane markings, different roadworks, different vehicles etc etc, and what works there probably wouldn't work anywhere else.

I would say that the thrust of this presentation was exactly the opposite! The LIDAR approach would have the problems you describe and they are trying to come up with a system that is not dependent on detailed localised mapping.

[As I commented on the Youtube presentation: It appears that even having a brain the size of a planet does not necessarily enable you to spell "brake" correctly when referring to the process of slowing a car. It's amazing that people who are routinely using computer code, which has zero tolerance for spelling errors, don't use the same high standards in normal text communication!]
 
My reason for being a little more optimistic was all related to the reminder that Tesla has an ever-increasing number of cars globally contributing to the 'edge case' database with real-world data.

The auto-labelling is a big step, and the use of everybody's FSD computer (whether you have paid for FSD or not) in shadow mode to verify the builds is impressive.
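
Shadow mode as I understand it, in hypothetical pseudo-Python (all names invented):

def shadow_step(frame_id, frame, shipped_net, candidate_net, disagreements):
    shipped = shipped_net(frame)   # this output actually drives the car
    shadow = candidate_net(frame)  # the candidate build runs silently alongside
    if shipped != shadow:          # crude disagreement check, for illustration
        disagreements.append((frame_id, shipped, shadow))
    return shipped                 # the car only ever acts on the shipped build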

But the reason for only 'little' optimism is that all the blah blah is great but meaningless until:
  • It turns up
  • It works
  • UK laws are tweaked to allow its actual use.
 
I'm surprised the video suggests they only moved to vision for velocity etc. this late - and spent only 4 months on it. I assumed they'd been on this for years.

The way I heard this was that they had taken the decision to remove radar, and then a team spent 4 months concentrating on getting the accuracy of the NN up to an acceptable level so the radar hardware could be removed.
And the framerate seems slow - 36Hz for the cameras. Although they have lots of cameras to process, that's ~28ms between frames, and you'd need a few frames to build 'objects' and track them, which is quite slow.
The claimed human reaction latency in the presentation is 250ms, whereas the FSD latency, including this frame rate and processing, is <100ms - so significantly quicker.
 
The claimed human reaction latency in the presentation is 250ms, whereas the FSD latency, including this frame rate and processing, is <100ms - so significantly quicker.
He said <100ms was the goal. He then said the refresh interval was 33ms. They have also talked about 4D processing, where they take a time component, so you need multiple frames to work (meaning your detection isn't within one 33ms frame). And of course when the clock starts may differ, as a human may pick up on earlier cues.
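
For scale, my own arithmetic on what those latencies mean at motorway speed:

speed = 70 * 0.44704  # 70 mph in m/s ≈ 31.3
for label, latency_s in [("human ~250 ms", 0.250), ("FSD target <100 ms", 0.100)]:
    print(f"{label}: {speed * latency_s:.1f} m travelled before any reaction")
# ~7.8 m vs ~3.1 m - though, as above, when the clock starts isn't like-for-like.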

Time will tell whether it works, but take a look at this from 2 years ago - 2h 17min in, talking about depth from vision etc. It's pitched as dismissing lidar (presented as a binary choice between vision and lidar), but they were already thinking about this back then, and it gives some idea of the speed of progress.

 
Moderator comment - post moved from "Phantom braking getting bad. (TACC/AP) [Not AEB]"

Andrej Karpathy just did a presentation about Tesla Vision and how it's the radar that has been causing phantom breaking so removing it should fix this.

 
Andrej Karpathy just did a presentation about Tesla Vision and how it's the radar that has been causing phantom breaking so removing it should fix this.

I tried to post the YouTube link but there was an error message about it not being playable outside YouTube. I'm sure someone can figure out how to do it.

If you search YouTube for "Workshop on Autonomous Driving at CVPR'21" you'll find it.

Yes indeed (he couldn't spell braking either ;) ). The short version is that it's the radar that has been messing up... so this is why Tesla is moving to a completely visual way of operating. Then there was an explanation about all the data the neural networks have been sifting through to refine the model... so all is going to be well... in... the very near future!
 
I watched the vid through... all very clever but really says that conflicts between radar and vision caused problems and it would be lots of work to sort the radar out so let's not bother. Nothing about how other brand cars manage with radar without phantom braking.
Then lots of guff about how well we're doing and the resources and what's in the pipeline for the future - or in other words, 'it don't work right now and we don't know when but it will'
I liked his reference to Will Smith driving the robotic car - shame he missed the point that robots were trying to kill the driver...
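
A toy illustration of the conflict being described (entirely my own sketch, not how any real stack is written):

def brake_decision(radar_sees_stopped_object, vision_sees_stopped_object):
    # A bridge or gantry can give a radar return that looks like a stationary
    # vehicle. Err on the side of radar and you phantom-brake under bridges;
    # err on the side of vision and you risk missing a real stopped object.
    # That tension is the argument for picking a single source of truth.
    return radar_sees_stopped_object or vision_sees_stopped_object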
 
I watched the vid through... all very clever but really says that conflicts between radar and vision caused problems and it would be lots of work to sort the radar out so let's not bother. Nothing about how other brand cars manage with radar without phantom braking.
Then lots of guff about how well we're doing and the resources and what's in the pipeline for the future - or in other words, 'it don't work right now and we don't know when but it will'
I liked his reference to Will Smith driving the robotic car - shame he missed the point that robots were trying to kill the driver...
Which other cars? Most are using Mobileye, with a pure radar-based TACC and a camera-based Lane Keep. They aren't combining the two to visualise the scene like Autopilot, and have far simpler and less effective active safety features as a result. It works as a TACC and Lane Keep, but isn't on a path toward full self-driving.
 
Which other cars? Most are using Mobileye, with a pure radar-based TACC and a camera-based Lane Keep. They aren't combining the two to visualise the scene like Autopilot, and have far simpler and less effective active safety features as a result. It works as a TACC and Lane Keep, but isn't on a path toward full self-driving.
Being on a path and getting there aren't the same thing. Mobileye also has a clear path that's vision-based. The real trick is not to charge folk for a dysfunctional system until it works... back it up with proven tech.
 
Moderator comment - thread merged from "Tesla vision, a fascinating talk"

Apologies if this has been posted elsewhere, but I don't often leave the confines of the UK and Ireland section. I thought there might be others like me missing out on this fascinating talk by Dr. Andrej Karpathy about Tesla Vision, the neural net, and how and why your car reacts as it does. It explains a lot about the "Why is it doing that?" moments. I was watching almost open-mouthed at the amount of processing the car achieves. A really good, eye-opening 35-minute video. Enjoy
 