
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Damn, SVB going to zero in real time. I have never seen a crash like this from a corp that didn't even declare bankruptcy.

SVB (SIVB) Shares Trading Halted for Pending News After Sinking Another 69% | Bloomberg

[Screenshot attachment: SIVB.2022-03-10.08-37.Halted.png — trading halted]
The pre-Market Low was $33.4015 (08:04:28 AM).
 
Since I first heard about the goal of self-driving vehicles, I thought they'd be able to program the basic rules of the road, add a bunch of details, add sensors/hardware, and eventually be good to go. How is it that even relatively slow learners can usually learn to drive, yet autonomy requires such an incredibly difficult, complicated program, orders of magnitude smarter than the slowest-learning driver, that needs machine learning, which to me means having to go around the barn thousands of times (very indirect learning) to succeed? After all, it seems like there are relatively few rules and algorithms, judging by the thinness of any driver's training manual at the local motor vehicle office where you go to take your written driver's exam, at least in the US...
Because the human brain is an incredibly agile, plastic, massively parallel processing unit that can rewire itself in real time and make inferences without rules, I think that even a bird-brain (and I know some folks with those) is more capable than the best computer.
 

1.5 years after the start of production, GM is building 12, yes, just TWELVE, Hummers a day.

Elon could likely build more than that - himself personally, before lunch.

TL;DR: Mary continues to lead, and it continues to not matter.
Hand-made Rolls-Royce production is about 14 per day.
 
Since I first heard about the goal of self-driving vehicles, I thought they'd be able to program the basic rules of the road, add a bunch of details, add sensors/hardware, and eventually be good to go. How is it that even relatively slow learners can usually learn to drive, yet autonomy requires such an incredibly difficult, complicated program, orders of magnitude smarter than the slowest-learning driver, that needs machine learning, which to me means having to go around the barn thousands of times (very indirect learning) to succeed? After all, it seems like there are relatively few rules and algorithms, judging by the thinness of any driver's training manual at the local motor vehicle office where you go to take your written driver's exam, at least in the US...

It may be because automation also has to deal with those "slowest-learning" drivers. (Those drivers also had to "go around the barn thousands of times," and many still can't figure out how to drive safely.)

Then, there are all those pesky variables they call "edge cases" that are complex enough that they require a lot more "thought" to solve for. The slow-learning drivers developed solutions for dealing with plenty of those when they were learning to walk as children.

A.I. requires accumulation of those many, many "baby steps" to learn what it needs to manage these unique situations that inevitably reveal themselves at the most inopportune moments. 🤔

But I like to think that mostly, it's that people are a problem. 😏
 
brain is an incredibly agile, plastic, massively parallel processing unit that can rewire itself in real time and make inferences without rules
Hmm ... she can be talking about one thing, thinking about another, remembering that other time when it was my fault, and giving me the rolling-eyes look ... all at the same time.
 
Why is going slow and cautious FSD's fault, when in fact it's the drivers behind you who are the ones having the emotions? Cruise and Waymo piss off drivers everywhere because their set maximum speed in many places is 25 mph. They phantom brake all the time and are also cautious at times. In a world of robotaxis, safety and reliability are the only two metrics. No one cares if it takes 5 seconds to make a left turn or a right turn if they are in the back seat reading a book. The only metric you care about as a passenger is to not die, as long as the vehicle can arrive at the destination in a timely fashion.

FSD Beta not stopping in time, or driving toward another car/pedestrian: those are the only interventions I care about. FSD Beta not going fast enough at a roundabout, or adapting to a changed speed limit too slowly so the driver hits the accelerator... meh. The question I ask is: would the Beta have arrived at the destination without the accelerator press?
Some good points and perspective there.

One of the more dangerous things it will have to figure out is the constant wrong-lane and turn-signal selection. That has gotten only very marginally better. It's not just annoying but dangerous.

Cheers.
 
Article from the WSJ on the opening of the Tesla network. Written for a basic understanding of charging, with some good cost comparisons of charging at home versus on the road, and a comparison to ICE vehicles.

Paywalled. Factual, but it does not mention the reliability issues with the other chargers.

 
After all, it seems like there are relatively few rules and algorithms, judging by the thinness of any driver's training manual at the local motor vehicle office where you go to take your written driver's exam, at least in the US...
Humans have the benefit of millions of years of evolution learning how to navigate in and react to the three-dimensional world.
 
A guy could skate through his doctoral thesis if he studied the relationship between subjects' confidence that FSD will become fully autonomous and their driving history of accidents and, to a lesser degree, their tickets.
I mention this because the solvability of autonomous driving is continually discussed in relation to the equipment needed, etc.
Personal bias is never discussed in relation to one's driving history, even though on the surface the relationship is recognizable/conceivable. The only personal info people discuss is their education or work experience in relation to autonomy.
Wouldn't it be a hoot if people would start with: "I have had 4 accidents, one resulting in vehicular homicide and another in three cars being totaled (today I walk with a brace as the result of that one), and I believe full autonomy is impossible."
 
Some good points and perspective there.

One of the more dangerous things it will have to figure out is the constant wrong lane and turn signal selection. That has gotten very marginally better. That’s not just annoying but dangerous.

Cheers.
FSD Beta's erratic lane changes and random turn signaling seem to have gotten more frequent these last few months...
 
Since I first heard about the goal of self-driving vehicles, I thought they'd be able to program the basic rules of the road, add a bunch of details, add sensors/hardware, and eventually be good to go. How is it that even relatively slow learners can usually learn to drive, yet autonomy requires such an incredibly difficult, complicated program, orders of magnitude smarter than the slowest-learning driver, that needs machine learning, which to me means having to go around the barn thousands of times (very indirect learning) to succeed? After all, it seems like there are relatively few rules and algorithms, judging by the thinness of any driver's training manual at the local motor vehicle office where you go to take your written driver's exam, at least in the US...
This is a reasonable 'first-attempt' approach, and was (generally speaking) the approach to implementing self-driving that was used in developing Tesla's highway stack and other 'first-attempts'.

There are two key areas which make the Dynamic Driving Task (DDT) a far harder problem to solve than it appears, and which hence require a far different and much harder approach:

1. Human drivers have an understanding of the objects in the world around them, and of how to interact with them, that vehicles do not have. Procedurally programming a car to follow the rules of the road could work fairly well on very well-maintained roadways with no traffic (vehicular / bicycle / pedestrian) and no foreign objects. Without an understanding of the objects in the world around the car, self-driving likely cannot happen. LIDAR-based solutions attempt to minimize the need for understanding those objects in favor of a focus on object avoidance; this results in many limitations. Tesla's approach is to try to give the car the same understanding of the objects in the world that a human driver has, based on the same optical-field input that a human has (albeit in 360 degrees). This is an extremely hard problem that the human brain has already solved (and is surprisingly good at, in fact), but which the car needs to solve to be on the same footing as the human driver at the start of the "learning to drive" phase.

2. Human judgement of competing priorities is both A) challenging to replicate in such an approach, as the number of one-off edge cases is (for all intents and purposes) infinite, and B) frankly, quite variable from human to human. Even something as simple as "avoid the object ahead of you in the current lane" is often an easy judgement for the human only because of #1, but incredibly challenging otherwise. Is the object even real, or a phantom object based on some combination of lighting, roadway conditions, etc.? Is it a small cardboard box, a small piece of wood, or a piece of wood with nails sticking up from it? Better to run over it and risk object impact, brake hard and risk being hit from behind, or swerve and risk a side collision? Without #1 above, which the human driver already has but the computer driver does not yet have, making judgement calls other than "brake hard to avoid collision with any object" (the LIDAR-based approaches) is very challenging. This is where large training sets of how this judgement call was made by various drivers in similar situations are used to train the neural network to make similar judgement calls, much as being a passenger in the car for many years prior to learning to drive taught the human by their parents' example.
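To make #2 concrete, here is a minimal C++ sketch of that "avoid the object ahead" judgement call. Everything in it is invented for illustration (the object classes, the probabilities, the cost numbers; this is nobody's actual planner), but it shows the point: once perception has supplied object identity and confidence, the judgement itself is a few lines of expected-cost arithmetic. Producing the probability vector it consumes is the hard problem described in #1.

```cpp
#include <array>
#include <cstdio>

// Hypothetical object classes a perception stack might report.
enum class ObjectClass { Phantom, CardboardBox, WoodPlank, PlankWithNails };

enum class Action { Proceed, RunOver, BrakeHard, Swerve };

// Invented cost of each (object, action) pair: expected harm on a 0-10 scale.
// A real system would derive something like this from data, not hand-tuning.
double cost(ObjectClass obj, Action act) {
    switch (act) {
        case Action::Proceed:
        case Action::RunOver:
            switch (obj) {
                case ObjectClass::Phantom:        return 0.0;  // nothing there
                case ObjectClass::CardboardBox:   return 0.5;  // harmless
                case ObjectClass::WoodPlank:      return 3.0;  // possible damage
                case ObjectClass::PlankWithNails: return 8.0;  // tire blowout
            }
            break;
        case Action::BrakeHard: return 2.0;  // risk of being rear-ended
        case Action::Swerve:    return 4.0;  // risk of a side collision
    }
    return 10.0;
}

// Pick the action with the lowest expected cost, weighting by the perception
// stack's class probabilities. The "judgement" is a few lines of arithmetic;
// the hard part is the probability vector it consumes.
Action decide(const std::array<double, 4>& classProb) {
    const std::array<Action, 3> actions = {Action::RunOver, Action::BrakeHard,
                                           Action::Swerve};
    Action best = actions[0];
    double bestCost = 1e9;
    for (Action a : actions) {
        double expected = 0.0;
        for (int c = 0; c < 4; ++c)
            expected += classProb[c] * cost(static_cast<ObjectClass>(c), a);
        if (expected < bestCost) { bestCost = expected; best = a; }
    }
    return best;
}

int main() {
    // 70% phantom, 20% cardboard, 9% plank, 1% nails -> run it over.
    printf("%d\n", static_cast<int>(decide({0.70, 0.20, 0.09, 0.01})));
    // 10% phantom, 10% cardboard, 30% plank, 50% nails -> brake hard.
    printf("%d\n", static_cast<int>(decide({0.10, 0.10, 0.30, 0.50})));
}
```

Under these made-up numbers, a mostly-phantom reading rolls over the object while a likely-nails reading brakes hard. Tweaking the costs is trivial, which is exactly why the perception feeding this function is where the real difficulty lives.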
 
A guy could skate through his doctoral thesis if he studied the relationship between subjects' confidence that FSD will become fully autonomous and their driving history of accidents and, to a lesser degree, their tickets.
I mention this because the solvability of autonomous driving is continually discussed in relation to the equipment needed, etc.
Personal bias is never discussed in relation to one's driving history, even though on the surface the relationship is recognizable/conceivable. The only personal info people discuss is their education or work experience in relation to autonomy.
Wouldn't it be a hoot if people would start with: "I have had 4 accidents, one resulting in vehicular homicide and another in three cars being totaled (today I walk with a brace as the result of that one), and I believe full autonomy is impossible."
What seems less possible is a system that mitigates sufficient risk to allow transference of ownership of the dynamic driving task away from a human in the driver's seat and to a corporation like Tesla, which would then be liable for what the vehicle is doing while that system is operating.

I don't think we're currently even in the same solar system as a generalized Level 4-5 robotaxi, and I honestly don't think Tesla's team believes we are. I think the intent has long been to get FSD Beta out there on HW3 as a Level 2 ADAS, that it will remain a Level 2 ADAS with the driver always responsible, and that further iterations will then begin with the goal of continuing efforts toward Level 3+. Tesla views disengagements per mile as a risk indicator, but there is all sorts of nuance in those metrics that would need to be addressed before assuming liability for the driving task with no geofencing, etc. -- and I have little doubt Tesla knows that too.

More robust sensor suites, fusion, and redundancies are what will likely be required to mitigate sufficient risk and have backups in case some sensors fail for any number of reasons. We already see further steps towards this with the new Phoenix HD radar and what appears to be more redundancies in the HW4 internals, and who knows what further iterations will bring.

Vision-only is fine for a Level 2 ADAS, because the human is the backup and the redundancy.
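For the fusion/redundancy point above, here's a minimal sketch (invented numbers and struct names, not anything from Tesla's or anyone's actual stack) of what "the other sensor keeps the estimate alive" looks like in code: inverse-variance fusion when both sensors report, graceful fallback when one drops out.

```cpp
#include <cstdio>
#include <optional>

// One range estimate to the lead vehicle, with its variance (uncertainty).
struct Estimate { double meters; double variance; };

// Inverse-variance weighting: the standard way to fuse two independent
// estimates of the same quantity. Lower-variance (more trusted) sensors
// pull the fused value harder.
Estimate fuse(const Estimate& a, const Estimate& b) {
    double wa = 1.0 / a.variance, wb = 1.0 / b.variance;
    return {(wa * a.meters + wb * b.meters) / (wa + wb), 1.0 / (wa + wb)};
}

// Redundancy: if either sensor drops out (blinded camera, radar fault),
// fall back to the one still reporting instead of losing the measurement.
std::optional<Estimate> rangeToLead(std::optional<Estimate> camera,
                                    std::optional<Estimate> radar) {
    if (camera && radar) return fuse(*camera, *radar);
    if (camera) return camera;
    if (radar) return radar;
    return std::nullopt;  // both failed: degrade (slow down, alert the driver)
}

int main() {
    // Camera says 48 m (noisy in rain), radar says 50 m (tight).
    auto fused = rangeToLead(Estimate{48.0, 4.0}, Estimate{50.0, 1.0});
    if (fused) printf("fused range: %.1f m\n", fused->meters);  // ~49.6 m

    // Camera blinded by sun glare: radar alone keeps the estimate alive.
    auto fallback = rangeToLead(std::nullopt, Estimate{50.0, 1.0});
    if (fallback) printf("fallback range: %.1f m\n", fallback->meters);
}
```

With vision-only and no driver backup, the fallback branch has nothing left to return; that's the redundancy argument in one function.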
 
Since I first heard about the goal of self-driving vehicles, I thought they'd be able to program the basic rules of the road, add a bunch of details, add sensors/hardware, and eventually be good to go. How is it that even relatively slow learners can usually learn to drive, yet autonomy requires such an incredibly difficult, complicated program, orders of magnitude smarter than the slowest-learning driver, that needs machine learning, which to me means having to go around the barn thousands of times (very indirect learning) to succeed? After all, it seems like there are relatively few rules and algorithms, judging by the thinness of any driver's training manual at the local motor vehicle office where you go to take your written driver's exam, at least in the US...
Well, it would help if drivers followed the rules of the road; however, people are human. They speed. They don't use turn signals. They get drunk, etc., etc. That's just to start with. Roads are also constantly changing: detours, traffic jams, wrecks, to name a few. Years ago, in a discussion of Six Sigma, I recall being told that if a driver operated at Six Sigma, he would never get home from work. Even people of limited intelligence are better than the best computer at processing visual data, and the human eye is an incredible instrument.
 
I think it's worth breaking the FSD challenge into 3 bits:
  1. Getting data about the surroundings
  2. Parsing that data into a 4D representation of your surroundings
  3. Deciding how to navigate, given this 4D representation
As a programmer, I reckon 3) is WAY WAY easier than the other 2. This is stuff you can do in plain old-fashioned C++ (a toy sketch at the end of this post shows the flavor). You probably use a ton of fuzzy logic and weights, and frankly a neural net is likely very helpful here too, but it's really not that hard, given the insane clock speed of CPUs and the relative slowness of the surrounding world.

1) is a hardware issue. Still a big unknown. The difficulty of 2) will depend slightly on 1), but adding cameras gets complex and expensive very fast. Not just the cost of cameras and wiring, but the extra CPU processing of more data...

2) is the meat-and-potatoes. This is the hard stuff. Object recognition in 3D is hard. Object recognition in fog and rain, with light bouncing off nearby surfaces, water-droplet distortion on one lens, mud on another lens... is seriously hard.

TL;DR: FSD is mostly an object recognition challenge. Logic/Planning is flipping trivial. Humans are amazingly good at object recognition, hence it seems easy to us. We suck at maths though.
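To illustrate the claim that 3) is the comparatively easy bit, here's the promised toy C++ sketch: a trajectory scorer that assumes stage 2) has already handed us clean obstacle positions. The weights, numbers, and struct names are all made up; the point is that the planning step is just a weighted sum over a handful of candidates.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Assume stage 2 has already produced this: obstacles in car-relative
// coordinates. Filling this struct in accurately is the hard part.
struct Obstacle { double x, y; };  // meters ahead / meters left(+)/right(-)

struct Candidate { double lateralOffset; double speed; };  // a simple "plan"

// Made-up weights. In practice you'd tune these (or learn them), but the
// structure is just a weighted sum -- nothing exotic.
const double kClearanceWeight = 10.0;
const double kProgressWeight  = 1.0;
const double kComfortWeight   = 2.0;

double score(const Candidate& c, const std::vector<Obstacle>& obstacles) {
    // Penalize passing close to any obstacle at our chosen lateral offset.
    double minClearance = 1e9;
    for (const auto& o : obstacles)
        minClearance = std::fmin(minClearance, std::fabs(o.y - c.lateralOffset));
    // Reward forward progress, penalize big swerves, cap clearance reward.
    return kClearanceWeight * std::fmin(minClearance, 3.0)
         + kProgressWeight  * c.speed
         - kComfortWeight   * std::fabs(c.lateralOffset);
}

int main() {
    std::vector<Obstacle> obstacles = {{20.0, 0.2}};  // debris nearly dead ahead
    std::vector<Candidate> candidates = {
        {0.0, 25.0},   // hold lane at speed
        {-1.5, 22.0},  // nudge right, slow slightly
        {0.0, 5.0},    // crawl straight at it
    };
    Candidate best = candidates[0];
    double bestScore = -1e9;
    for (const auto& c : candidates) {
        double s = score(c, obstacles);
        printf("offset %+.1f m, %2.0f m/s -> score %.1f\n",
               c.lateralOffset, c.speed, s);
        if (s > bestScore) { bestScore = s; best = c; }
    }
    printf("chosen: offset %+.1f m at %.0f m/s\n", best.lateralOffset, best.speed);
}
```

With the debris nearly dead ahead, the "nudge right, slow slightly" candidate wins. All the difficulty was hidden inside filling in that Obstacle list, which is exactly the object-recognition challenge in the TL;DR above.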
 
I posted this yesterday and it was deleted by the mods!! Is humor banned on TMC?
I even put a smiley at the end just in case someone was confused...

[Image attachment: 1678460657894.png]


---

Moderator: you posted the link without any parody warning, which caused confusion (and a lot of unnecessary responses). A smiley isn't enough.
 
Since I first heard about the goal of self-driving vehicles, I thought they'd be able to program the basic rules of the road, add a bunch of details, add sensors/hardware, and eventually be good to go. How is it that even relatively slow learners can usually learn to drive, yet autonomy requires such an incredibly difficult, complicated program, orders of magnitude smarter than the slowest-learning driver, that needs machine learning, which to me means having to go around the barn thousands of times (very indirect learning) to succeed? After all, it seems like there are relatively few rules and algorithms, judging by the thinness of any driver's training manual at the local motor vehicle office where you go to take your written driver's exam, at least in the US...
I think it's fair to add that our brains have had 15-20 years of continuous (and relevant) training before we need to master driving at the level we do…