
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

The march of nines is the opposite of exponential progress (when measured in terms of time). The largest, most obvious progress happens at the beginning and then it takes longer and longer to get that next 9.
Correct on this one. Thus the increasingly massive Tesla fleet provides ever-larger opportunities to observe the increasingly rare failure cases behind each 'extra nine'. That much should be clear to anybody who thinks real observations have greater value than simulations.
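As a toy illustration of why each extra nine takes longer, and why fleet scale matters for finding the rare cases (the numbers below are made up; only the 10x-per-nine scaling is the point):

```python
# Toy illustration: the rarer the failure mode, the more fleet miles you need
# just to *observe* enough failures to learn from or validate against.
# BASE_FAILURE_RATE and FAILURES_NEEDED are made-up numbers for illustration.

BASE_FAILURE_RATE = 1e-3   # assumed failures per mile at the first "nine"
FAILURES_NEEDED = 100      # failures you want to observe for decent statistics

for nines in range(1, 7):
    failure_rate = BASE_FAILURE_RATE / 10 ** (nines - 1)
    miles_needed = FAILURES_NEEDED / failure_rate
    print(f"nine #{nines}: ~{miles_needed:,.0f} miles to observe {FAILURES_NEEDED} failures")
```

Each additional nine cuts the failure rate by ten, so the miles needed just to see the same number of failures go up tenfold; that is where a large real-world fleet has an edge over simulation.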
 
Later he or Musk clarified that it would work on any road within the US, the reason being that the NN learns what curves it can expect.

Roads are built with certain standards and rules.

Ha. Here, they're upgraded horse paths. (Direct quote from the county road supervisor.)

Eventually the NN learns what to expect based on visible clues. However, road-building rules vary from country to country, so path prediction would not work using another country's data.
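One way to read that is as a domain-shift problem: a path-prediction model validated only on one country's roads tells you little about another country's. A minimal sketch of how you might check that, with purely hypothetical model and dataset names:

```python
# Hypothetical sketch: evaluate a path-prediction model per country.
# A big gap between in-domain (training country) and out-of-domain error
# means the learned "road prior" does not transfer. All names are placeholders.

def mean_lateral_error(model, dataset):
    """Average absolute error (meters) between predicted and actual path offset."""
    errors = [abs(model.predict(sample.camera_frames) - sample.true_offset)
              for sample in dataset]
    return sum(errors) / len(errors)

def domain_shift_report(model, validation_sets_by_country):
    for country, dataset in validation_sets_by_country.items():
        print(f"{country}: mean lateral error {mean_lateral_error(model, dataset):.2f} m")

# Usage (hypothetical): a model trained only on US roads.
# domain_shift_report(us_only_model, {"US": us_val, "UK": uk_val, "Japan": jp_val})
```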
 
OT


Oh, it doesn't. I don't know why I try to explain things to people who don't want to learn; I think it's a compulsion.

Whether to tone it down or not is a tradeoff between the social value of "shut up because all you're doing is making other people angry at you, they don't want to hear it" and the value of having been the one who pointed out that the Emperor had no clothes, just in case someone was actually listening.

I'm always #2. Something in my brain makeup means I get zero mental reward for the first (and walk away angry at the stupidity and worthlessness of humanity) and some mental reward for the second (and I walk away feeling that at least I've done my duty, and humanity can go hang). Sense of responsibility probably.
You made an obviously wrong and bold statement about AI having no business in the medical field
(Chinese AI Beats Doctors in Diagnosing Brain Tumors)

Why do you insist everybody should be listening to your FSD predictions if you're no prophet? You don't know how Elon and the team will be using the time they said they need.

They missed prior deadlines due to concurrent priorities and previously unknown factors. We expect this to happen again, but not by a factor of 20.
 
To boldly anthropomorphize, the NN is going to think "gee, no idea what that is, but it looks solid and I can't fit between it and the human holding a sign anyway, I guess I will just go really slowly around them...".

Or the NN might decide: "gee, that is decidedly weird, I'm going to pull over and wake up my human, or call in to Tesla Central where a few superior biological AIs with VR headsets can help out with my small problem." :D

I.e., the FSD AI doesn't have to find a solution for every corner case; it has to safely detect weirdness and be able to "punt" safely and call for backup.

That requires far less complexity and is a lot safer than having to make the right decision in every weird/rare case.
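A minimal sketch of that "detect weirdness and punt" policy, assuming the perception stack exposes some scene-confidence score and the planner has slow-down and pull-over primitives (all names and thresholds hypothetical):

```python
# Hypothetical "punt" policy: the car only needs to recognize that a scene is
# out of distribution and degrade gracefully, not make the perfect decision
# in every rare case. Thresholds are illustrative.

CAUTION_THRESHOLD = 0.8   # below this: slow down but keep driving
PUNT_THRESHOLD = 0.5      # below this: stop safely and ask for help

def driving_policy(scene_confidence: float) -> str:
    if scene_confidence >= CAUTION_THRESHOLD:
        return "drive normally"
    if scene_confidence >= PUNT_THRESHOLD:
        return "slow down and increase following distance"
    # Weird scene: park safely and escalate rather than guess.
    return "pull over, alert the human / remote operator"

for confidence in (0.95, 0.7, 0.3):
    print(confidence, "->", driving_policy(confidence))
```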
 
Ark bought some more Tesla today. [attached: screenshot of the ARK trade]
 
You made an obviously wrong and bold statement about AI having no business in the medical field
(Chinese AI Beats Doctors in Diagnosing Brain Tumors)
You didn't read my statement. My statement was correct.

There are areas where computers are helpful in medicine (I'm not sure "AI" is a meaningful word), and that's one of them. A much bigger one, which should be implemented immediately, is a database where doctors can punch in all of a patient's symptoms and have every known diagnosis consistent with them spit out. Leave it to the doctor to filter through the possible diagnoses; a computer is much better at coming up with all the possibilities. (A computer might even be better at ranking them by likelihood based on what it knows... but then when an actual doctor looks at the list and talks to the patient, they may go "Yeah, we know the #1 likelihood is wrong because of this information the computer didn't have," and move on to #2.)
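A toy sketch of that symptom-to-diagnosis lookup (the tiny hand-written table below is purely illustrative; a real system would sit on a proper medical knowledge base):

```python
# Toy differential-diagnosis lookup: the computer exhaustively lists and
# roughly ranks candidate diagnoses; the doctor filters them afterwards.
# The table and scoring are illustrative only, not medical advice.

DISEASE_SYMPTOMS = {
    "influenza":     {"fever", "cough", "fatigue", "body aches"},
    "strep throat":  {"fever", "sore throat", "swollen lymph nodes"},
    "mononucleosis": {"fever", "fatigue", "sore throat", "swollen lymph nodes"},
}

def differential(patient_symptoms):
    """Rank every known diagnosis by the fraction of its symptoms the patient reports."""
    ranked = []
    for disease, symptoms in DISEASE_SYMPTOMS.items():
        overlap = len(symptoms & patient_symptoms) / len(symptoms)
        if overlap:
            ranked.append((overlap, disease))
    return sorted(ranked, reverse=True)

print(differential({"fever", "sore throat", "fatigue"}))
# ranks mononucleosis first, then strep throat, then influenza
```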

AIs aren't going to REPLACE DOCTORS, which is what the other person to whom I was responding was claiming. We flat out don't know enough about the human body yet. You can't train a neural net or an expert system when you don't really know what you're doing either; "computerized exploratory science" is in its infancy. Might replace specialists in some fairly cut-and-dried areas.

I am not going to go into the medical stuff I've dealt with in my life. None of it could have been handled by computers at this point. Much of it doesn't even have proper diagnostic names.

The cited "AI" example is specifically in one of those areas where the doctors have trouble coming up with a list of possible diagnoses, because recalling a long list is something humans are extra bad at. A competent combination of computer and doctor would do even better.

From the article:
"AI can reduce workloads for doctors and help them keep improving their skills. It would function like a GPS, while human physicians remain behind the wheels."

Yes, this is the way forward. Computer-*assisted* doctors.
 
You didn't read my statement. There are areas where computers are helpful in medicine (I'm not sure "AI" is a meaningful word), and that's one of them. A much bigger one, which should be implemented immediately, is a database where doctors can punch in all of a patient's symptoms and have every known diagnosis consistent with them spit out. Leave it to the doctor to filter through the possible diagnoses; a computer is much better at coming up with all the possibilities.

AIs aren't going to REPLACE DOCTORS, which is what the other person was claiming. We flat out don't know enough about the human body yet. Might replace specialists in some fairly cut-and-dried areas.
They're already well on their way to replacing radiologists, who are doctors last time I checked.
 
Horrifyingly, Tesla's most recent video shows that they're not doing that yet. As noted in my comment regarding said video.
That's not horrifying, it's exactly what human drivers do in the same situation. And the FSD is going to adapt a lot quicker to suddenly noticing a brick wall.
 
Good interpretation. I believe this is what Tesla's thinking. They're wrong. As they fix the top layer (I'll keep calling it the "driving policy" layer) they're going to discover additional things they need for FSD -- additional features. They are very far from feature complete; they don't know what features they need yet.

To put it bluntly, you don't have access to raw data and experiment results that they have, and you're no smarter than the people working there, so I don't see a good case for your opinion to be of much value on this.

I can understand though (I'm the same way) that if you're a perfectionist you'll always suspect (and likely be right) that there will be some oddball corner cases with problems. As long as the car doesn't present a safety hazard in most of those oddball cases and can be retrieved by a non-drunken human driver, it's all good. This only needs to be super rare if the steering wheel is removed. Just parking safely and indicating a human needs to resolve this is good enough.
 
Or the NN might decide: "gee, that is decidedly weird, I'm going to pull over and wake up my human, or call in to Tesla Central where a few superior biological AIs with VR headsets can help out with my small problem." :D


That's where a GAI NN is needed ... a general AI gives advice, tells the NN to slow down, speed up, get in that lane already...

Kind of like my wife: her phone's Waze audio lady telling me what to do, plus my wife repeating it long before and just before the turn. Overshare?

More importantly, why were the presentation and the ER moved so close together?
 
Our eyes also only have maximum resolution for a small area in the center of vision, whereas the car's are full resolution for the field of view. The consequence of this is that the NN in the car is processing a lot more input data.

Not sure that's true: the human eye has roughly 6 million cones (color) and ~100 million rods (grayscale). "Retina Display" resolution is not just marketing.

The optic nerve carries roughly 1-1.7 million signals.

That's a lot of input compared to Tesla's HW3 1.2-megapixel cameras, which produce 1280 x 964 frames according to @verygreen.

So the human eye, despite asymmetric allocation, processes a lot more information than a single Tesla camera.
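Putting those figures side by side as a quick sanity check (same numbers as cited above):

```python
# Back-of-the-envelope comparison using the figures quoted above.
cones = 6_000_000                 # approximate color photoreceptors per eye
rods = 100_000_000                # approximate grayscale photoreceptors per eye
optic_nerve_fibers = 1_200_000    # roughly 1-1.7 million axons leave each eye
camera_pixels = 1280 * 964        # one HW3 camera frame, per @verygreen

print(f"HW3 camera pixels per frame: {camera_pixels:,}")        # ~1.23 million
print(f"optic nerve fibers per eye:  {optic_nerve_fibers:,}")
print(f"photoreceptors per eye:      {cones + rods:,}")
```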
 
Horrifyingly, Tesla's most recent video shows that they're not doing that yet. As noted in my comment regarding said video.

You are making assumptions about safe stopping distance: at ~45 mph driving speed the stopping distance would be within 20 meters - at no point in the video was less than 20 meters of the road visible, so the car was probably always travelling at safe speeds.
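A quick sanity check on that figure, assuming hard braking at about 0.9 g and ignoring reaction time:

```python
# Rough braking-distance check for the ~45 mph claim above: d = v^2 / (2a).
# The 0.9 g deceleration is an assumption (dry road, good brakes), and
# reaction time is deliberately left out.
speed_mph = 45
speed_ms = speed_mph * 0.44704        # ~20.1 m/s
decel = 0.9 * 9.81                    # assumed deceleration, m/s^2

braking_distance = speed_ms ** 2 / (2 * decel)
print(f"braking distance: {braking_distance:.1f} m")   # comes out around 23 m
```

That lands in the low-20s-of-meters ballpark, so the ~20 meter figure is roughly right for pure braking distance, before adding reaction distance.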
 
You are assuming that they sped up the video uniformly, and you are also making assumptions about stopping distance: at ~45 mph driving speed the stopping distance would be within 20 meters - at no point in the video was less than 20 meters of the road visible, so the car was always travelling at safe speeds.
Less than 20 meters of the side roads were visible.
 
To put it bluntly, you don't have access to raw data and experiment results that they have, and you're no smarter than the people working there, so I don't see a good case for your opinion to be of much value on this.
To put it bluntly, they're specialists and I'm a generalist.

I can tell you what would have changed my opinion. A presentation on the driving policy part of the stack which demonstrated development of some more difficult part of driving -- something humans frequently get wrong. Before the presentation, I actually thought they were further along than they are, believe it or not...

I can understand though (I'm the same way) that if you're a perfectionist you'll always suspect (and likely be right) that there will be some oddball corner cases with problems. As long as the car doesn't present a safety hazard in most of those oddball cases and can be retrieved by a non-drunken human driver, it's all good. This only needs to be super rare if the steering wheel is removed. Just parking safely and indicating a human needs to resolve this is good enough.
Agreed with this.

Now can they please shut up about removing the steering wheel and moving around with no driver?

Frankly the stock deserved to drop after this event. I hope it recovers to the $270s soon.
 