
Morgan Stanley analyst says Tesla's FSD is "undervalued"


diplomat33

Average guy who loves autonomous vehicles
"It appears that some of Tesla’s investors might not be seeing the big picture when it comes to the innate value of the company’s full self-driving technology. This was a point highlighted by Morgan Stanley analyst Adam Jonas in a note on Tuesday, where he argued that Tesla’s capabilities and progress in the autonomous vehicle market might very well be underappreciated.

Taking a stance that is notably different from his bearish note earlier this month, when he insisted that Tesla is no longer seen as a growth story, Jonas’ new note struck a more optimistic tone. “We believe investors underappreciate/undervalue Tesla’s Autonomy business. Many investors to whom we speak do not explicitly include Tesla’s Autonomy business in their valuation of the company,” he stated.

The analyst also listed other critical areas of Tesla’s business that are “underappreciated” by investors. Among these are Tesla’s vast infrastructure of charging stations, the company’s solar and energy storage products, a potential business of selling EV batteries to other carmakers, and the opportunities presented by the Tesla Semi. Jonas noted that each of these areas has “potential commercial value beyond the manufacturing of Tesla vehicles.”"
Tesla's full self-driving technology is 'undervalued,' says Morgan Stanley

I guess it really depends on what Tesla achieves with FSD. Obviously, if Tesla were to succeed in getting to FSD where every Model 3 becomes a robotaxi, that would be huge economically. On the other hand, if Tesla does not reach any kind of meaningful FSD, then it probably would not make any real difference economically.

But could Tesla's FSD still be undervalued even if Tesla does not achieve L5 autonomy but does achieve some form of good self-driving?
 
But could Tesla's FSD still be undervalued even if Tesla does not achieve L5 autonomy but does achieve some form of good self-driving?

1. Let's say FSD is indeed feature-complete and released by EOY; realistically it is going to spend at least a year as L2, being validated through fleet miles.

2. Then an application will be made to regulators, to which a quite probable answer [in Europe anyhow] will be, "We will approve it for L3 on the motorway when you demonstrate each redundant sensor modality can safely & reliably stop the vehicle from top AV speed for a stalled vehicle in its lane."

3. At which point there may be a need for a radar sensor upgrade, as so far as I can tell the current one cannot fulfil that requirement, so tack on another year for retrofitting. And that's presuming that Tesla's 3D depth mapping from vision will work well in real-time.

4. We're now at start of 2022, by which time there should be several competent competitors with L4/5 products in or near the field, e.g. MobilEye's Shashua recently announced they're “all in” [with Intel money] on RoboTaxis:

5. So depending on the speed of their FSD development, Tesla may still be overvalued at today's $216. Their progress with NoAP on HW2.5 is severely underwhelming, so let's hope it is not related to FSD on HW3.
 
1. Let's say FSD is indeed feature-complete and released by EOY; realistically it is going to spend at least a year as L2, being validated through fleet miles.

2. Then an application will be made to regulators, to which a quite probable answer [in Europe anyhow] will be, "We will approve it for L3 on the motorway when you demonstrate a redundant sensor modality can safely & reliably stop the vehicle from top AV speed for a stalled vehicle in its lane."

3. At which point there may be a need for a radar sensor upgrade, as so far as I can tell the current one cannot fulfil that requirement, so tack on another year for retrofitting.

4. We're now at start of 2022, by which time there should be several competent competitors with L4/5 products in or near the field, e.g. MobilEye's Shashua recently announced they're “all in” [with Intel money] on RoboTaxis:

5. Depending on the speed of their FSD development, Tesla may still be overvalued.

That is certainly one possibility.

But consider this:
1) If Tesla improves the camera vision on AP3, a radar retrofit may be unnecessary. Superior camera vision should be enough to stop the car from highway speeds.
2) With a much bigger fleet, it could take less than a year to validate FSD.
3) Tesla would be smart to plan ahead for retrofits: rather than wait until regulators demand a sensor upgrade, it could preemptively start retrofits on, say, the refreshed Model S. For example, Tesla could refresh the Model S later this year and include a sensor retrofit. If they did that, they would be ahead of your timeline.
 
That is certainly one possibility.

But consider this:
1) If Tesla improves the camera vision on AP3, a radar retrofit may be unnecessary. Superior camera vision should be enough to stop the car from highway speeds.
2) With a much bigger fleet, it could take less than a year to validate FSD.
3) Tesla would be smart to plan ahead for retrofits: rather than wait until regulators demand a sensor upgrade, it could preemptively start retrofits on, say, the refreshed Model S. For example, Tesla could refresh the Model S later this year and include a sensor retrofit. If they did that, they would be ahead of your timeline.

1. It should indeed but would still have no redundancy, which I foresee becoming a regulatory requirement.
2. Maybe.
3. 100% agree, that was what I was hoping to hear at the Model Y reveal, but nothing like that emerged.
 
Speaking of radar, does anyone know how Cadillac SuperCruise @75mph actually handles a leading vehicle cutting-out to reveal a stationary car 150m ahead in the same lane?

[For brevity, let's in future refer to this as the Firetruck Autopileup Test, (FAT)]

Nissan's ProPilot 2, coming out later this year in Japan, claims to use the latest in hi-def radar, so I am hoping it will handle this FAT hazard safely.
 
Speaking of radar, does anyone know how Cadillac SuperCruise @75mph actually handles a leading vehicle cutting-out to reveal a stationary car 150m ahead in the same lane?

[For brevity, let's in future refer to this as the Firetruck Autopileup Test, (FAT)]

Nissan's ProPilot 2, coming out later this year in Japan, claims to use the latest in hi-def radar, so I am hoping it will handle this FAT hazard safely.

I don't know but I imagine that a high def radar would be able to distinguish between the unique radar signatures of different stationary objects like say a stalled car, firetruck or overpass.
 
Elon has said Tesla's system is gradually reducing its reliance on radar and may one day even eliminate it. That's how confident he is in the vision + deep learning approach. The argument is the same as for why lidar is not needed in Tesla's system: human drivers manage with input from two eyes, without lidar or radar beaming from their heads.

From Karpathy's talk, Tesla's system uses radar data to verify the vision system's distance estimates, helping it learn and improve. That's how vision's role increases while radar's role decreases.
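As a toy illustration of that training signal, one can fit a vision-only distance estimator using radar range purely as the label; the synthetic data, the inverse-size "image feature", and the simple least-squares model here are all invented stand-ins, not Tesla's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a vision feature (roughly inverse to distance, like
# apparent size of a car) plus a radar range used only as the training label.
true_dist = rng.uniform(5.0, 150.0, size=1000)            # metres
vision_feature = 1.0 / true_dist + rng.normal(0, 1e-4, 1000)
radar_range = true_dist + rng.normal(0, 0.5, 1000)        # radar is accurate

# Fit dist ~ a * (1/feature) + b with radar as the target. This stands in
# for training a neural depth head on radar-derived labels.
X = np.column_stack([1.0 / vision_feature, np.ones_like(vision_feature)])
coef, *_ = np.linalg.lstsq(X, radar_range, rcond=None)

# At inference time only vision is used; radar was needed only to learn.
pred = X @ coef
mae = float(np.mean(np.abs(pred - true_dist)))
print(f"mean abs error of vision-only distance: {mae:.2f} m")
```

The point of the sketch is the asymmetry: radar appears only on the label side of the fit, so its role can shrink as the vision-side model improves.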
 
Elon has said Tesla's system is gradually reducing its reliance on radar and may one day even eliminate it. That's how confident he is in the vision + deep learning approach. The argument is the same as for why lidar is not needed in Tesla's system: human drivers manage with input from two eyes, without lidar or radar beaming from their heads.

From Karpathy's talk, Tesla's system uses radar data to verify the vision system's distance estimates, helping it learn and improve. That's how vision's role increases while radar's role decreases.

1. If his supreme confidence could cushion the blow of a firetruck @90mph, then I'm sure regulators would have no problem with that vision-only scheme. Given the record thus far though, I suspect they will err on the side of caution and insist that redundant sensor modalities must operate independently to safely stop the vehicle in the worst-case FAT scenario.

2. It is in a regulator's job description not to worry in the least if compliance with new minimal safety standards inconveniences manufacturers. Also, lots of competing establishment OEMs are surely advising/lobbying governments on this issue, and will reason that if they can slyly poke a stick in Tesla's spokes, then so much the better. That is why, in the EU at least, where the German auto cartel need only gang up to whisper their wishes in Mutti Merkel's ear, this has a distinct possibility of becoming a command.

3. Humans also do not grow wheels on their legs, so why do we have bicycles? I.e., the "appeal to nature" is absurd, a transparent cover for cheapness.
 
1. If his supreme confidence could cushion the blow of a firetruck @90mph, then I'm sure regulators would have no problem with that vision-only scheme. Given the record thus far though, I suspect they will err on the side of caution and insist that redundant sensor modalities must operate independently to safely stop the vehicle in the worst-case FAT scenario.

2. It is in a regulator's job description not to worry in the least if compliance with new minimal safety standards inconveniences manufacturers. Also, lots of competing establishment OEMs are surely advising/lobbying governments on this issue, and will reason that if they can slyly poke a stick in Tesla's spokes, then so much the better. That is why, in the EU at least, where the German auto cartel need only gang up to whisper their wishes in Mutti Merkel's ear, this has a distinct possibility of becoming a command.

3. Humans also do not grow wheels on their legs, so why do we have bicycles? I.e., the "appeal to nature" is absurd, a transparent cover for cheapness.

You're confusing FSD with the old AP1/AP2, either knowingly or unknowingly. By FSD I mean the one with HW3 and the latest Software 2.0 neural net. The combo was only released to test cars in recent months and was used for the Investor Day demo. AP2.x cars may be starting to share some subset of the neural net now, but that's probably the extent of it until Tesla releases the feature-complete system.

As for redundant sensors, it's the most illogical thing I've heard, and probably the reason why Tesla does not talk about it. Elon is a firm first-principles type of person. You can't have a primary sensor and a backup sensor working together without conflict. If the backup sensor can override the primary sensor, then why not just let it be the primary sensor? Having a committee drive a car is a pretty silly idea. The Tesla approach mentioned in the post above, using radar to train the vision system but still only using vision to drive, is the most logical one. A lot of people are talking about redundancy, which is kind of like the V2V approach. It might sound nice, but if you can't do it the right way and make your system work, the "cheat" will only present more problems for you. We can go into more details if you can't understand this.

Not sure what you meant by wheels or bicycles. A human driver drives a car by taking input from sensors (eyes), processing the info with the brain, and sending out only three outputs: to the steering wheel, accelerator and brake. A machine does it exactly the same way. The input and output parts are all pretty straightforward; the challenge is in the middle step. No one else has what Tesla has, the neural-net setup, at this point for sure. BTW, an AI neural net is exactly how we try to make a machine do things the way a human does. It's not a cover for cheapness.
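The pipeline described above (sensor input, a processing step in the middle, three control outputs) can be sketched as a trivial loop; the `Controls` type and the stubbed perception step are hypothetical placeholders, since the entire difficulty lives in the middle step:

```python
from dataclasses import dataclass

@dataclass
class Controls:
    steering: float      # steering-wheel angle, radians
    accelerator: float   # pedal position, 0..1
    brake: float         # pedal position, 0..1

def drive_step(camera_frames: list) -> Controls:
    """One tick of a toy sense -> process -> act loop."""
    # The "middle step" -- turning pixels into a world model -- is where
    # all the difficulty lives; here it is stubbed out entirely.
    obstacle_ahead = False  # a real system would infer this from the frames
    if obstacle_ahead:
        return Controls(steering=0.0, accelerator=0.0, brake=1.0)
    return Controls(steering=0.0, accelerator=0.2, brake=0.0)

out = drive_step(["front", "left_repeater", "right_repeater"])
```

The input and output interfaces really are that narrow; everything contested in this thread is about what replaces the `obstacle_ahead` stub.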
 
You're confusing FSD with the old AP1/AP2, either knowingly or unknowingly. By FSD I mean the one with HW3 and the latest Software 2.0 neural net. The combo was only released to test cars in recent months and was used for the Investor Day demo. AP2.x cars may be starting to share some subset of the neural net now, but that's probably the extent of it until Tesla releases the feature-complete system.

As for redundant sensors, it's the most illogical thing I've heard, and probably the reason why Tesla does not talk about it. Elon is a firm first-principles type of person. You can't have a primary sensor and a backup sensor working together without conflict. If the backup sensor can override the primary sensor, then why not just let it be the primary sensor? Having a committee drive a car is a pretty silly idea. The Tesla approach mentioned in the post above, using radar to train the vision system but still only using vision to drive, is the most logical one. A lot of people are talking about redundancy, which is kind of like the V2V approach. It might sound nice, but if you can't do it the right way and make your system work, the "cheat" will only present more problems for you. We can go into more details if you can't understand this.

Not sure what you meant by wheels or bicycles. A human driver drives a car by taking input from sensors (eyes), processing the info with the brain, and sending out only three outputs: to the steering wheel, accelerator and brake. A machine does it exactly the same way. The input and output parts are all pretty straightforward; the challenge is in the middle step. No one else has what Tesla has, the neural-net setup, at this point for sure.

1. Nope, there is no confusion here. The radar sensor on HW2.5 remains the same with HW3 and I doubt it was lack of processing grunt which allowed the Jeremy Banner or Walter Huang crashes. However, if I'm wrong about that and the current radar proves capable with HW3 of preventing those kinds of incidents, I will be happy to see it demonstrated live @90mph. Until then I think the more proximate reason Tesla does not talk about their radar is acute embarrassment, and that in giving out technical details they would essentially be conceding several ongoing lawsuits relating to wrongful deaths due to their system design/marketing.

2. Re. sensor redundancy: there is no necessary conflict, if, in case of FAT, the first sensor to flag the need for full emergency braking gets acted upon. Thus only when both sensor types agree there is no need to brake does one proceed unhindered. This will (at least initially) produce more false positives (annoying, but we are already somewhat inured) than relying on a single sensor type alone, in exchange for a lot less potentially fatal false negatives, a trade-off I for one would gladly accept.

3. This is the converse of the redundant FSD SoCs, which, as explained on Autonomy Day, must both agree on the driving plan for it to be executed. So, just by the way, Tesla very much believes in driving by committee.

4. I'm not interested in getting lost in the weeds of V2V. Just explain, if having a competent radar sensor which will pass the FAT proposed above is "cheating", as in Tesla FSD customers cheating death while the system matures at a glacial pace (judging by past AP performance), then where's the problem?

5. The Tesla FSDc compares poorly to the human brain, therefore atm it is wise to use several different types of sensors to compensate.
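Points 2-3 above can be sketched as two tiny fusion rules (a toy model of the argument, not Tesla's actual arbitration logic): OR across sensor modalities when deciding to brake, AND across the redundant SoCs when deciding to execute a plan:

```python
from typing import Optional

def should_emergency_brake(vision_hazard: bool, radar_hazard: bool) -> bool:
    """OR-fusion: act on the first modality to flag a hazard.
    More false positives, far fewer potentially fatal false negatives."""
    return vision_hazard or radar_hazard

def execute_plan(plan_a: str, plan_b: str) -> Optional[str]:
    """AND-fusion across the two FSD SoCs: a plan is executed only when
    both computers independently produce the same one."""
    return plan_a if plan_a == plan_b else None

# Cut-out scenario: radar flags the stalled vehicle before vision does.
assert should_emergency_brake(vision_hazard=False, radar_hazard=True)

# The SoCs disagree, so neither plan is executed this cycle.
assert execute_plan("keep_lane", "change_lane") is None
```

The asymmetry is deliberate: for braking, a false positive is an annoyance, so either sensor may trigger it; for plan execution, a wrong action is dangerous, so agreement is required.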
 
1. You still lack the concept of how these things work. You don't "see" things with your eyes; you see things with your brain. That's exactly how the machine sees things too: not with the camera or sensor, but with the computer.

2. It does not work that way. You not only need to recognize hazards, you also need to confirm objects that are not hazards. There is a conflict if one sensor says something is a hazard and the other says it's not. It is a complex problem, and the added complexity of two types of inputs could hurt you more than it helps. The Tesla approach of using radar to train vision but still using vision to drive the car is the smartest thing I've seen, like many other things we learned on Investor Day.

3. The redundant processor Tesla was talking about is a backup that takes over, doing exactly the same job, when the other processor fails. That probably happens very rarely, but you can't have even a second's lapse with no one driving the car. Nothing like adding a second sensor doing a very different thing.

4. Cheating means you do something only because your system does not work, kind of like the term "crutch" Elon used to describe lidar. You should get the idea (of course not).

Don't know if you're really this clueless or you just like to argue for argument's sake. This will be my last reply to you, and you will be added to my ignore list. Life is too short to waste on this (which does not mean I have not wasted enough here already). No hard feelings, but it's just what is best for the situation.
 
1. You still lack the concept of how these things work. You don't "see" things with your eyes; you see things with your brain. That's exactly how the machine sees things too: not with the camera or sensor, but with the computer.

2. It does not work that way. You not only need to recognize hazards, you also need to confirm objects that are not hazards. There is a conflict if one sensor says something is a hazard and the other says it's not. It is a complex problem, and the added complexity of two types of inputs could hurt you more than it helps. The Tesla approach of using radar to train vision but still using vision to drive the car is the smartest thing I've seen, like many other things we learned on Investor Day.

3. The redundant processor Tesla was talking about is a backup that takes over, doing exactly the same job, when the other processor fails. That probably happens very rarely, but you can't have even a second's lapse with no one driving the car. Nothing like adding a second sensor doing a very different thing.

4. Cheating means you do something only because your system does not work, kind of like the term "crutch" Elon used to describe lidar. You should get the idea (of course not).

Don't know if you're really this clueless or you just like to argue for argument's sake. This will be my last reply to you, and you will be added to my ignore list. Life is too short to waste on this (which does not mean I have not wasted enough here already). No hard feelings, but it's just what is best for the situation.


1. Are you suggesting that "seeing with the brain" of HW3 will radically improve the performance of the current radar? If so, aren't you taking this on blind faith, with no evidence in support?

2. "You not only need to recognize hazards, you also need to confirm objects that are not hazards. There is a conflict if one says it's a hazard and the other says it's not" -- this sure sounds like wilful nonsense to me. Surely anything not identified as a hazard is presumed to be non-hazardous driveable space? You seem to be making this up as you go along, to create a conflict where none need exist.

3. Here you are definitely wrong, as Bannon clearly explained that both SoCs operate continuously on all inputs and only when their output decisions agree are they acted upon. If one SoC fails completely, then presumably the output of the remaining one is seamlessly used to get to the nearest safe stopping position, though he did not get into this.

4. It is not cheating to have a robust backup sensor to ensure safety in the worst case scenario, it is prudent engineering and should in no measurable way slow the progress towards the perfect machine vision system to which Tesla aspires. Your argument OTOH equates to training high-trapeze artists without the use of a safety net, as culling a few losers is believed necessary to motivate the rest. A pretty inhumane approach which I suspect the courts/regulators shall frown upon, once they get around to examining it.

5. Certainly there are no hard feelings from me, mate, indeed I'm amused at the speed with which you've made your escape when some fragile opinions were challenged. I do enjoy a good argument, yes, and am sorry you failed to provide one, however, fare thee well without me.
 
1. You still lack the concept of how these things work. You don't "see" things with your eyes; you see things with your brain. That's exactly how the machine sees things too: not with the camera or sensor, but with the computer.

Actually both matter. Without eyes the brain "sees" nothing in the visual spectrum; similarly, with poor eyesight the brain sees less than with 20/20 vision, and so forth. So the types of cameras and radars in cars continue to matter and make a difference, alongside their NN processors of course.

2. It does not work that way. You not only need to recognize hazards, you also need to confirm objects that are not hazards. There is a conflict if one sensor says something is a hazard and the other says it's not. It is a complex problem, and the added complexity of two types of inputs could hurt you more than it helps. The Tesla approach of using radar to train vision but still using vision to drive the car is the smartest thing I've seen, like many other things we learned on Investor Day.

Again, it can work either way when you have sensor redundancy. If avoiding some risk is deemed important enough in a particular scenario, a single sensor can be trusted. Without redundancy you have no choice: if your sole sensor (or sole sensor type) missed the risk, it missed it and no action can be taken. Redundancy gives you more options and certainly has the potential for more safety.
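The extra margin from redundancy can be made concrete with a back-of-envelope calculation; the miss rates below are invented, and real sensor failures are often correlated, so treat the independence assumption as an idealisation:

```python
p_vision_miss = 0.01  # hypothetical chance vision misses a given hazard
p_radar_miss = 0.05   # hypothetical chance radar misses the same hazard

# With OR-fusion, the hazard is missed only if *both* modalities miss it;
# under independence the miss probabilities multiply.
p_fused_miss = p_vision_miss * p_radar_miss  # 0.0005, 20x better than vision alone

print(f"vision alone: {p_vision_miss:.2%}, fused: {p_fused_miss:.2%}")
```

Correlated failures (e.g. heavy rain degrading both sensors) erode that multiplier, which is why regulators tend to ask for *independent* modalities rather than merely two sensors.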

4. Cheating means you do something only because your system does not work, kind of like the term "crutch" Elon used to describe lidar. You should get the idea (of course not).

Tesla uses the human driver as a crutch, so either way, more than cameras and an NN computer are currently needed for driving. It is of course a legitimate debate what types of technology are eventually needed for truly autonomous driving, and which are best for it. I’d say nobody knows yet. And no, I don’t think even Elon Musk really knows yet. He may think he does, but I don’t think he knows.

Don't know if you're really this clueless or you just like to argue for argument's sake. This will be my last reply to you, and you will be added to my ignore list. Life is too short to waste on this (which does not mean I have not wasted enough here already). No hard feelings, but it's just what is best for the situation.

Most of this sub-forum is now on your ignore list. Could it be you instead of us?
 
Most of this sub-forum is now on your ignore list. Could it be you instead of us?

Methinks @CarlK is still somewhat smarting from another recent encounter, where his factual claims were proven wrong on just about every count:

Funny story, but totally untrue on every front. The acceleration shudder had been an issue only on the X, not the S. Raising the car was also only a suggested temporary solution for the S, for a couple of months until a permanent one, the titanium underbody shield, was implemented.
 
As for redundant sensors it's the most illogical thing i've heard and probably reason why Tesla does not talk about it. Elon is a firm first principle type of person.

Amazing. If only other industries, like the airline industry, had your level of expertise and insight, they would have been so much better off without the stupid redundant systems and sensors they have.
 