Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

What will happen within the next 6 1/2 weeks?

Which new FSD features will be released by end of year and to whom?

  • None - on Jan 1 'later this year' will simply become end of 2020!

    Votes: 106 55.5%
  • One or more major features (stop lights and/or turns) to small number of EAP HW 3.0 vehicles.

    Votes: 55 28.8%
  • One or more major features (stop lights and/or turns) to small number of EAP HW 2.x/3.0 vehicles.

    Votes: 7 3.7%
  • One or more major features (stop lights and/or turns) to all HW 3.0 FSD owners!

    Votes: 8 4.2%
  • One or more major features (stop lights and/or turns) to all FSD owners!

    Votes: 15 7.9%

  • Total voters
    191
The other thing to consider is that large-scale autonomous driving is itself a lie. It's going to require a significant amount of momentum to force it. To this date no one is doing large-scale full self-driving or even Level 3 driving. They're not because they can't break through all the barriers (regulatory, liability, etc.).

I disagree with the above. No one is doing large-scale FSD because nobody is there yet. It has turned out to be more difficult than people thought. That does not mean that the idea is a lie or is impossible. It just means it will take longer than expected.

As for regulatory and liability issues, these are non-issues. Once the hardware and software exist and demonstrate that the system is safer than a human driver by some reasonable amount, the insurance companies will be all on board and the regulators will follow. As for liability, the cost of insurance will be priced into the car. Easy peasy.

How many driverless cars will you be able to buy? Will you be able to own a Waymo? Tesla is the only one in this field at present.

Waymo's business model is not to sell cars to consumers. It is to license the technology to car makers. If Waymo cracks the FSD nut before Tesla (and I think they're ahead, partly because they're not limiting themselves to the sensors built into 2018 cars), any car company that wants to enter the FSD market will be able to license their technology, and Tesla will have to do the same or be late to the party.
 
Can we get this thread back to the topic at hand - what FSD features will Tesla release by 12/31?

It's unlikely, but if the rumors are true that a small subset of the 3.0 EAP vehicles running 2019.40.50 are responding to stop signs and lights, this option may be the winner:

"One or more major features (stop lights and/or turns) to small number of EAP HW 3.0 vehicles."
 
I guess what I'm saying is that there is very strong evidence that they are specifically not doing that. Several people inspecting the firmware agree that there is no ongoing task to collect things the computer doesn't properly categorize, nor is the computer apparently flagging instances where you and AP disagree. This may have changed very recently, but that seems to be the consensus as of this summer.
This is what I've heard as well, though initial interpretations of the variation in the timing and dataset size of the uploads would seem to imply that some data is being collected, and that the triggering event is something other than a set interval of time. But I could certainly be wrong! ¯\_(ツ)_/¯
 
  • Like
Reactions: tyson
This is what I've heard as well, though initial interpretations of the variation in the timing and dataset size of the uploads would seem to imply that some data is being collected, and that the triggering event is something other than a set interval of time. But I could certainly be wrong! ¯\_(ツ)_/¯

More than likely is that the truth is somewhere in the middle. :)
 
I disagree with the above. No one is doing large-scale FSD because nobody is there yet. It has turned out to be more difficult than people thought. That does not mean that the idea is a lie or is impossible. It just means it will take longer than expected.

As for regulatory and liability issues, these are non-issues. Once the hardware and software exist and demonstrate that the system is safer than a human driver by some reasonable amount, the insurance companies will be all on board and the regulators will follow. As for liability, the cost of insurance will be priced into the car. Easy peasy.

It's not that it's impossible given enough time, but the FSD myth rides on top of a more general myth about AI: that general AI is just around the corner. In recent years there has been so much hype around AI and neural networks that most lay people probably don't realize how limited what's been developed really is.

To really do large-scale, coast-to-coast FSD as demonstrated by Tesla, WITHOUT resolving issues with infrastructure, is going to require a generalized AI that can deal with issues that arise on the trip.

The regulatory issue is really a chicken-and-egg problem. As an example, one of the reasons Audi elected not to introduce L3 driving with the Audi A8 into the North American market was the lack of any unified regulatory rules regarding it. But then the regulators seem to be waiting for the technology to actually exist.

I think this is really the importance of Tesla's approach. It's not that Tesla's approach will work, but that it forces the regulators' hands. It's like the bad boy in a class without rules, because all the other boys are sheep and rules weren't needed before.

As to liability, we already know there is a huge liability problem with self-driving cars. Governments will have to impose limitations on how liable self-driving cars are for mixed-fault accidents. If I jaywalk in the middle of the night wearing all black, isn't it partially my fault if I get hit? As things exist right now, if I did that I'd be pretty much guaranteed lots of money, or my family would be, since I'd be dead.

Consumers are extremely wary of self-driving cars, and I think it's going to take a while to get them to warm up to them. It's especially going to take a while for them to warm up to the idea that a few deaths as a result of robocars are worth it if it saves the lives of many more. We're not at that point, since most people are not like Spock.

FSD is going to be a slow process, and I don't expect to see generalized self-driving countrywide anytime in the next 20 years.
 
  • Like
Reactions: DanCar
It's weird that they hid them behind a user checkbox in the UI considering they're just visuals.

Maybe a little, yeah. The only explanation I can think of is that the new visualizations might not be final and might be seen as cluttering up the display a bit for some drivers, so Tesla decided to make them something the driver can toggle off if they don't like them. Again, it is entirely possible that the visualizations are not final but Tesla decided to let us see them as part of the "FSD sneak preview".
 

Attachments

  • upload_2019-12-24_20-50-56.png (48.3 KB)
  • Like
Reactions: DanCar
Maybe a little, yeah. The only explanation I can think of is that the new visualizations might not be final and might be seen as cluttering up the display a bit for some drivers, so Tesla decided to make them something the driver can toggle off if they don't like them. Again, it is entirely possible that the visualizations are not final but Tesla decided to let us see them as part of the "FSD sneak preview".

It would be awesome if various things could have checkboxes so those who find visuals distracting could turn them off, especially things like the oncoming vehicles.
 
  • Like
Reactions: diplomat33
It's not that it's impossible given enough time, but the FSD myth rides on top of a more general myth about AI: that general AI is just around the corner. In recent years there has been so much hype around AI and neural networks that most lay people probably don't realize how limited what's been developed really is.

To really do large-scale, coast-to-coast FSD as demonstrated by Tesla, WITHOUT resolving issues with infrastructure, is going to require a generalized AI that can deal with issues that arise on the trip.

The regulatory issue is really a chicken-and-egg problem. As an example, one of the reasons Audi elected not to introduce L3 driving with the Audi A8 into the North American market was the lack of any unified regulatory rules regarding it. But then the regulators seem to be waiting for the technology to actually exist.

I think this is really the importance of Tesla's approach. It's not that Tesla's approach will work, but that it forces the regulators' hands. It's like the bad boy in a class without rules, because all the other boys are sheep and rules weren't needed before.

As to liability, we already know there is a huge liability problem with self-driving cars. Governments will have to impose limitations on how liable self-driving cars are for mixed-fault accidents. If I jaywalk in the middle of the night wearing all black, isn't it partially my fault if I get hit? As things exist right now, if I did that I'd be pretty much guaranteed lots of money, or my family would be, since I'd be dead.

Consumers are extremely wary of self-driving cars, and I think it's going to take a while to get them to warm up to them. It's especially going to take a while for them to warm up to the idea that a few deaths as a result of robocars are worth it if it saves the lives of many more. We're not at that point, since most people are not like Spock.

FSD is going to be a slow process, and I don't expect to see generalized self-driving countrywide anytime in the next 20 years.

General AI is a pipe dream. Intelligence is an emergent phenomenon arising in biological systems that are so completely unlike any computer that a neural network the size of the galaxy could not replicate or simulate it. In a computer, a circuit is either open or closed, depending on concrete deterministic inputs. In a brain, there are multiple inputs and multiple outputs at every synapse, and firing depends on the analog summation of tens, if not hundreds of chemicals released into the synapse, including exciters, inhibitors, re-uptake promoters and inhibitors, etc., and acted upon as well by hormones related to emotion. In a computer, a series of instructions is executed. In a brain, a constant chain of firings is happening simultaneously among eighty-six billion massively-interconnected neurons.

We're never going to have AGI (artificial general intelligence). But sometime around HW 5 to 8 they'll have enough computing power to deal with enough of the situations that arise in driving to have a driverless car that's safer than a human driver. It will kill people. But it will kill far fewer people than human drivers.

And with fewer deaths, insurance companies will have fewer payouts. The car maker will be responsible, but will buy insurance just as we do now, and price it into the car. Or perhaps legislators will pass laws that car owners need to pay for the insurance even though they are not controlling the car.

Sadly, I'm no longer convinced that I'll live to see it. I'm an old man and could check out at any time.
 
General AI is a pipe dream. Intelligence is an emergent phenomenon arising in biological systems that are so completely unlike any computer that a neural network the size of the galaxy could not replicate or simulate it. In a computer, a circuit is either open or closed, depending on concrete deterministic inputs. In a brain, there are multiple inputs and multiple outputs at every synapse, and firing depends on the analog summation of tens, if not hundreds of chemicals released into the synapse, including exciters, inhibitors, re-uptake promoters and inhibitors, etc., and acted upon as well by hormones related to emotion. In a computer, a series of instructions is executed. In a brain, a constant chain of firings is happening simultaneously among eighty-six billion massively-interconnected neurons.

We're never going to have AGI (artificial general intelligence). But sometime around HW 5 to 8 they'll have enough computing power to deal with enough of the situations that arise in driving to have a driverless car that's safer than a human driver. It will kill people. But it will kill far fewer people than human drivers.

And with fewer deaths, insurance companies will have fewer payouts. The car maker will be responsible, but will buy insurance just as we do now, and price it into the car. Or perhaps legislators will pass laws that car owners need to pay for the insurance even though they are not controlling the car.

Sadly, I'm no longer convinced that I'll live to see it. I'm an old man and could check out at any time.
The assumption that general intelligence is needed for FSD/Level 5 is fundamentally flawed, in my opinion. But to counter your point: in a computer, multiple circuits can be open and/or closed simultaneously... Though again, the idea that general intelligence will require 100% emulation of the size and complexity of the human brain is also an assumption not supported by any current cognitive science that I am aware of.

You could be absolutely correct in your assertion that Level 5 autonomy needs another 10-30 years of just hardware development. But let's be honest, this crystal-ball forecasting includes little substance.
 
The assumption that general intelligence is needed for FSD/Level 5 is fundamentally flawed, in my opinion. But to counter your point: in a computer, multiple circuits can be open and/or closed simultaneously... Though again, the idea that general intelligence will require 100% emulation of the size and complexity of the human brain is also an assumption not supported by any current cognitive science that I am aware of.

You could be absolutely correct in your assertion that Level 5 autonomy needs another 10-30 years of just hardware development. But let's be honest, this crystal-ball forecasting includes little substance.

I think that AGI requires self-awareness, which machines will never have. But I also think that with the possible exceptions of the arts, which deal with emotion, and biological activities, any task that humans perform will eventually be within the purview of machines to perform with better results.

I'm not even convinced that AGI is a desirable goal, though if it can be achieved somebody will do it. Self-aware machines would have a strong incentive to get rid of people. With so-called "deep learning" we don't know what will emerge, but humans want to remain the masters of the machines, and even without self-awareness the machines could reason that we are no longer needed.

I do think that a driverless car that can operate anywhere a human could, and do so more safely than a human, causing fewer deaths and injuries and less damage to property, is ten to thirty years away. Of course I could be wrong, and I hope I am wrong. I'm just not seeing the kinds of improvements that would be needed. Even recognizing and reacting to stop signs does not seem significant, considering that AP1 was reading some road signs. And I think that Tesla is making the job much harder for itself by insisting on the present suite of sensors.
 
FSD is a pipe dream. It requires much more intelligence than will exist in the next 5 years, likely 10+. Partial self-driving is reachable with current technology. It won't be that hard to handle enough corner cases on a limited-access freeway. PSD will start with bumper-to-bumper traffic. This will be a landmark debut: when you don't have to monitor the car, and the car drives itself. Elon will make it happen, hopefully within two years.

Pipe dream - from the fantasies experienced when smoking an opium pipe.
... the idea that general intelligence will require 100% emulation of the size and complexity of the human brain is also an assumption not supported by any current cognitive science that I am aware of.
And there is no science that says the contrary. Nature's billions of years of evolution isn't so easily replicated.
You could be absolutely correct in your assertions that level 5 autonomy needs another, 10-30 years of just hardware development. But let’s be honest, this crystal ball forecasting includes little substance.
Let's be honest: we don't need hard science to state what is obvious: too many corner cases for a dumb computer to handle.
By the way, futurist tech expert Ray Kurzweil predicts we will have computer chips with the computing power of our brain before the end of the next decade. We already have computers more powerful, measured by FLOPS and other metrics. Those computers are often called the cloud, or data centers.
 
By the way, futurist tech expert Ray Kurzweil predicts we will have computer chips with the computing power of our brain before the end of the next decade. We already have computers more powerful, measured by FLOPS and other metrics. Those computers are often called the cloud, or data centers.

Futurists have the worst record of any profession. A futurist is a person who reads science fiction and says, "Yes. That's what will happen." It never does. Futurists in the 1960s predicted that we'd have household robots doing all our housework by the 1980s. But oddly, they never predicted the miniaturization of computers, which logically would be a precursor to such robots. It's the end of 2019 and the closest thing we have to a home robot is a Roomba that dumbly follows a random walk to vacuum your floors. Or a RealDoll (tm) that says "I love you" when you pull the string in the back of its head.

Computers are nothing whatsoever like biological brains. Computers can do math a billion billion times faster than a human, can beat 99.99% of human chess players (Deep Blue's team at IBM has been credibly accused of cheating and refused to offer evidence to the contrary), and can manage mountains of data. But they don't think, and what they do is nothing whatsoever like what a human does, no matter how big and complex a "neural network" they run. Calling it a "neural network" does not make it a brain emulation.

But a computer does not need to think to tell you the logarithm of the cube root of 254684351656876.35468435168465151, or to calculate a strong chess move. And with a few more improvements in size, speed, and programming sophistication, it won't need to think to solve enough edge cases that it kills fewer people than a human would when driving a car.
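To make the point concrete: that computation really is trivial for a machine. A minimal Python sketch (base 10 is assumed here, since no base is named, and a 64-bit float truncates the long decimal tail on input anyway):

```python
import math

# The number from the post; a float keeps only ~15-16 significant digits.
x = 254684351656876.35468435168465151

cube_root = x ** (1.0 / 3.0)
print(math.log10(cube_root))  # ≈ 4.802
```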
 
I gotta say, with current polling:
None - on Jan 1 'later this year' will simply become end of 2020! 89 votes and 52% voting for this
Elon Musk's credibility is at an all-time low... roughly in line with "Funding secured." I wish it were otherwise. I suppose we can never hope for the leopard to change his spots. I really don't want this company to be a lawsuit magnet.
 
Futurists have the worst record of any profession. A futurist is a person who reads science fiction and says, "Yes. That's what will happen." It never does.
You should read up on Ray Kurzweil. He has been correct on many predictions when many said no way.
Ray Kurzweil - Wikipedia His methods are simple and math-based.
Yes, anyone can call themselves a futurist and spout garbage.
In reference to when computer chips will have the computing power of the human brain, it is not very complicated to make a prediction. It mostly involves math. Just look at the trend of improvements in computer chips over the years, read scientific estimates of what the brain does, look at when the two intersect, and voilà, you have yourself a science-based estimate. That is what Ray did and anyone can do. This does not mean a computer will be as smart as the brain, just that it will have similar compute capability, based on scientific estimates. I believe the exact year Ray is predicting this will occur is 2028.
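The "mostly math" extrapolation described above can be sketched in a few lines. Every number below is an illustrative assumption, not a measurement: the starting chip throughput, the brain-compute estimate (published figures span many orders of magnitude), and the Moore's-law-style doubling period are all placeholders, so the resulting year only demonstrates the method:

```python
import math

chip_flops = 1e14      # assumed throughput of a high-end 2019 accelerator
brain_flops = 1e16     # one (contested) estimate of brain-equivalent compute
doubling_years = 2.0   # assumed Moore's-law-style doubling period

# Solve chip_flops * 2**(t / doubling_years) >= brain_flops for t.
years_to_parity = doubling_years * math.log2(brain_flops / chip_flops)

print(2019 + math.ceil(years_to_parity))  # → 2033 under these assumptions
```

Shifting any assumption shifts the answer: a brain estimate of 1e18 FLOPS pushes parity out by roughly another 13 years, which is why such forecasts vary so widely.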
... Computers are nothing whatsoever like biological brains....
Wikipedia seems to disagree with you. Artificial neural network - Wikipedia
Wikipedia said:
Artificial neural networks (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. ... The original goal of the ANN approach was to solve problems in the same way that a human brain would. ...
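For what it's worth, "vaguely inspired" is doing a lot of work in that quote: a single ANN "neuron" is just a weighted sum pushed through a squashing function. A minimal sketch, with made-up illustrative inputs and weights:

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum plus bias, squashed by a sigmoid: this is the whole
    # "neuron". There is no analog neurotransmitter summation here, just
    # deterministic arithmetic.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Made-up illustrative values.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, 0.1], bias=0.1))  # ≈ 0.599
```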
 