
What will happen within the next 6 1/2 weeks?

Which new FSD features will be released by end of year and to whom?

  • None - on Jan 1 'later this year' will simply become end of 2020!

    Votes: 106 55.5%
  • One or more major features (stop lights and/or turns) to small number of EAP HW 3.0 vehicles.

    Votes: 55 28.8%
  • One or more major features (stop lights and/or turns) to small number of EAP HW 2.x/3.0 vehicles.

    Votes: 7 3.7%
  • One or more major features (stop lights and/or turns) to all HW 3.0 FSD owners!

    Votes: 8 4.2%
  • One or more major features (stop lights and/or turns) to all FSD owners!

    Votes: 15 7.9%

  • Total voters
    191
There are a couple of things wrong here. First, I am suggesting that true full Level 5 autonomy may never be possible, especially with neural networks.

Second, Noam Chomsky is an interesting character, but he holds zero sway in this area of study. Simply put, this isn't his area of expertise.

Finally, and by far most importantly, these things aren't learning. Not in the biological sense, not in the classical sense, not in any sense. These networks have no intelligence either. At all, or in any sense. What they are is multi-layered, multi-variable probabilistic calculations. It's not that they don't learn the way a human brain does, it's that there's no learning involved at all. A training process twiddles weights and biases until the network outputs something desirable. That's not learning.
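For anyone who hasn't seen what that "twiddling" actually looks like, here's a minimal toy sketch: a single sigmoid neuron fitted by gradient descent. All the numbers and data are made up purely for illustration.

```python
import numpy as np

# Toy single-neuron "network": output = sigmoid(w * x + b).
# Training just nudges w and b to reduce a loss; no agency involved.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up data: learn to map x -> y.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

w, b = 0.1, 0.0          # initial weight and bias
lr = 0.5                 # learning rate

for step in range(1000):
    out = sigmoid(w * x + b)          # forward pass
    err = out - y                     # difference from desired output
    grad = err * out * (1 - out)      # gradient of squared error through the sigmoid
    w -= lr * np.mean(grad * x)       # "twiddle" the weight
    b -= lr * np.mean(grad)           # "twiddle" the bias

print(w, b, sigmoid(w * x + b))       # outputs now close to y
```

Run it and the outputs drift toward the targets; whether you want to call that loop "learning" is exactly the semantic argument above.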

People really need to stop attributing agency to these machines, because there is none. It's a computer algorithm, and that's it. It's the same as it was in the 1950s.




Accurately? Probably not. But the odds of solving Level 5 driving in 5 years are as close to zero as makes no difference. So maybe someone gets lucky, or maybe someone throws an unbelievable number of programmers at the problem, but those are about the only ways this is happening in the next 5 years. Basically it's the odds of infinite monkeys with typewriters coming up with Shakespeare. An infinitesimally probable event.



A million times, THIS. These things aren't magic. It may appear like magic to the uninformed, but it's just software running on a (specialized) processor doing matrix math: that part of algebra class most people slept through.
It’ll never be possible! Okay, it’s possible, but highly unlikely, though not insurmountable that a group of monkeys couldn’t randomly stumble on it... or rather given enough effort. Get your story straight!

Honestly, I'm having a hard time following your reasoning. Are you suggesting Level 5 requires 'general intelligence' and as such is never possible? Or maybe possible, but not likely or soon? Again, it's that shifting goal-posts of non-zero absolutes, maybe.

Neural nets aren’t magic and even if you want to debate if their training is true ‘learning’, it’s really irrelevant; can a probabilistic algorithm determine drivable space within X bounds, 99.9% of the time? So far that seems to be a very reasonable outcome. Can that process not be extended to the different components/logical rules of driving in our societies? This seems obviously true to me, even if it took 200 different NN’s. There’s no reason each one couldn’t end up being as skilled as say AlphaGo - why would this be any different?

And really, humans aren’t prophetic, we do not intuit how to drive. We learn very much clearly defined rules through teaching and put that together with an ability to recognize patterns - to join the masses doing the exact same thing, with varying levels of success by practicing and reinforcement learning. These things are called Neural Nets for a reason.
 
It’ll never be possible! Okay, it’s possible, but highly unlikely, though not insurmountable that a group of monkeys couldn’t randomly stumble on it... or rather given enough effort. Get your story straight!

I can't tell if you're being serious or not. You know there's an expression that goes something like given infinite time and infinite monkeys with typewriters, they'd come up with Shakespeare, right? I didn't just come up with it. It's the acceptance that given a timeline that ends with the heat death of the universe, nearly anything is possible. Even things that are effectively never ever going to happen.

Honestly, I'm having a hard time following your reasoning. Are you suggesting Level 5 requires 'general intelligence' and as such is never possible? Or maybe possible, but not likely or soon? Again, it's that shifting goal-posts of non-zero absolutes, maybe.

My reasoning has been pretty easy to follow so far, I think. Level 5 autonomy is likely not at all achievable given the complexity of the problem. And is not at all achievable by just rubbing some neural networks on it. The goalposts haven't moved at all, I've said this same thing many other times here on this forum and elsewhere. A minimum of some general intelligence is necessary to solve this problem.

Neural nets aren’t magic and even if you want to debate if their training is true ‘learning’,

I don't. It's not.

it’s really irrelevant;

Not really. It's literally the technology that most of these companies are pinning their hopes on.

can a probabilistic algorithm determine drivable space within X bounds, 99.9% of the time?

That's not nearly enough nines, and Level 5 is a robot driving itself. As in a car that has no means for you to take over, and likely has no means for you to even give input or suggestions except destination.
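To put rough numbers on "not nearly enough nines": the decision rate and fleet mileage below are assumptions chosen only for scale, not real Tesla figures.

```python
# Rough illustration (made-up but plausible numbers) of why 99.9%
# per-decision reliability is not nearly enough nines.
decisions_per_mile = 100          # assumed perception/planning decisions per mile
fleet_miles_per_day = 1_000_000   # assumed daily miles for a modest robotaxi fleet

failure_rate = 1 - 0.999          # 99.9% reliable -> 1 failure per 1,000 decisions
failures_per_day = decisions_per_mile * fleet_miles_per_day * failure_rate
print(round(failures_per_day))    # ~100,000 failed decisions per day
```

Even if only a tiny fraction of those failures matter, the reliability target has to be many orders of magnitude tighter.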

So far that seems to be a very reasonable outcome. Can that process not be extended to the different components/logical rules of driving in our societies? This seems obviously true to me, even if it took 200 different NN’s. There’s no reason each one couldn’t end up being as skilled as say AlphaGo - why would this be any different?

See, this is the problem. People read sensationalist articles about a neural network that can do a simplistic task moderately well, and they think we've cracked any level of intelligence. That's not what happened. The size of the network alone isn't even comparable here, the size of the networks required to solve Level 5 is entirely unknown, and the increase of complexity doesn't simply mean a network grows slightly. It grows dramatically with every input given, and all of the layers grow in response. Each of them needs an activation function to inform the next layer, and so on. Before you know it, you've got trillions of logic gates just so you can detect the path of debris blowing in the wind.
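As a toy illustration of how quickly fully-connected parameter counts blow up as inputs and width grow, here is a back-of-the-envelope counter. The layer sizes are arbitrary and not anything any real driving stack uses.

```python
# Parameter count of a fully connected (dense) stack: each layer needs
# (inputs x outputs) weights plus biases, so adding inputs or widening
# layers grows the total quickly. Sizes below are arbitrary.
def dense_params(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out   # weights + biases
    return total

print(dense_params([1_000, 512, 512, 10]))        # ~0.8M parameters
print(dense_params([100_000, 4_096, 4_096, 10]))  # ~430M parameters
```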

AlphaGo has zero relevance here. At all. It's not worth discussing further, to be honest.

And really, humans aren’t prophetic,

Compared to computers, we might as well be.

we do not intuit how to drive.

You don't seem to grasp everything that goes into driving. The mechanical operation of a vehicle perhaps isn't entirely intuition, but actually taking in all of the parameters you witness and outputting an automobile driving safely down a road is very much intuition. If you drive up to a puddle, you can probably guess how deep the water is without measuring it, or having any knowledge of it whatsoever. Small details give small hints to completely hidden functions of your brain that let you solve these problems given almost no data at all.

We learn very much clearly defined rules through teaching and put that together with an ability to recognize patterns - to join the masses doing the exact same thing, with varying levels of success by practicing and reinforcement learning. These things are called Neural Nets for a reason.

Rules in this case being almost entirely not written rules, but rather strange social norms that are communicated silently and without formal training. Which is why nearly every driver on the road is violating almost the entirety of the local driving manual at any given point, but we still seem to handle the function of driving.
 
To get us all back on track, and off this ridiculous topic. Robotaxi fleet in 2.5 weeks isn't happening. Feature complete FSD in 2.5 weeks isn't happening. These cars aren't even properly centering themselves within their lanes on the highway with the latest updates. So what will happen seems to be " None - on Jan 1 'later this year' will simply become end of 2020!". On the other hand, "One or more major features (stop lights and/or turns) to small number of EAP HW 3.0 vehicles." appears to be a miss, since HW3 vehicles aren't consistently detecting signs, and aren't stopping by design.
 
I agree with almost everything DrDabbles says in the few posts above: Computers don't "think" and they don't "learn." Programs are being written that self-adjust according to strictly-defined parameters. This has served to develop the ability to play certain board games in which a very small number of precise rules lead to mind-boggling complexity of play, and do it better than the vast majority of humans. But it is noteworthy that computers play these games very differently than humans play them. What this demonstrates is that at certain tasks computers are better-suited than humans. When it comes to a task such as distinguishing a child from a large dog, computers have failed so completely up to now that it's fair to say they have not taken the first baby step.

A human brain is nothing at all like a computer, even one running a neural net. What happens inside a computer: there are switches, and each one is either open or closed. What happens inside a biological brain: every neuron has inputs from many others, and outputs to many others. Every neuron, when triggered, emits a chemical that stimulates or suppresses the next one, or that inhibits the re-uptake of one or another of those chemicals, and it is the ANALOG sum of all these chemicals that determines whether the next neuron fires, while all the neurons in the entire massively interconnected brain are simultaneously firing strings of pulses. This is so entirely unlike a computer that you'd need a computer the size of a galaxy to simulate it and get similar results.
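For what it's worth, here is a crude leaky integrate-and-fire style toy of that "analog sum decides whether it fires" idea. It is not a faithful biological model, just an illustration of the difference from an open/closed switch; all constants are invented.

```python
import numpy as np

# Crude leaky integrate-and-fire toy: a "neuron" sums analog inputs
# (excitatory positive, inhibitory negative), leaks charge over time,
# and emits a spike when the running sum crosses a threshold.
rng = np.random.default_rng(0)
potential, threshold, leak = 0.0, 1.0, 0.9
spikes = []

for t in range(100):
    inputs = rng.normal(0.0, 0.2, size=50)    # 50 synapses, mixed sign
    potential = leak * potential + inputs.sum()
    if potential >= threshold:
        spikes.append(t)     # fire
        potential = 0.0      # reset after spiking

print(spikes)                # timesteps at which this toy neuron fired
```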

But where I disagree with DrDabbles is that we don't need to simulate a human brain in order to achieve a computer that can drive more safely than a human. Today's technology is decades away, for all the reasons that have been discussed, but HW4 or HW5 will have sufficient computing power, combined with an adequate suite of sensors (cameras, radar, lidar, maybe sonar, maybe others also) to simply track every obstacle and vehicle within line-of-sight and drive a path that avoids hitting them. The key will not be "understanding" or "predicting" anything, but just tracking and plotting, which computers are eminently capable of. We're just a couple of decades away, not a couple of years away.
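A minimal sketch of what "just tracking and plotting" could look like, assuming constant-velocity tracks and a simple clearance check. The classes, units, and thresholds here are invented for illustration, not anyone's actual planner.

```python
# Toy version of "track and plot": assume each tracked obstacle keeps its
# current velocity, project everything forward, and reject any planned
# path point that comes too close.
from dataclasses import dataclass

@dataclass
class Track:
    x: float
    y: float
    vx: float
    vy: float

def path_is_clear(path, tracks, dt=0.1, margin=2.0):
    # path = list of (x, y) waypoints spaced dt seconds apart
    for i, (px, py) in enumerate(path):
        t = i * dt
        for obj in tracks:
            ox, oy = obj.x + obj.vx * t, obj.y + obj.vy * t
            if (px - ox) ** 2 + (py - oy) ** 2 < margin ** 2:
                return False                    # predicted conflict
    return True

tracks = [Track(10.0, 1.0, -5.0, 0.0)]          # oncoming object just off our line
straight = [(i * 0.5, 0.0) for i in range(40)]  # our candidate path
print(path_is_clear(straight, tracks))          # False: this path would be rejected
```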

There will be accidents and there will be deaths. This is unavoidable. And those accidents will be entirely unlike the kinds of accidents a human would have. The goal cannot be zero deaths, and the goal should not be to drive the same way a human would drive. The goal is just to achieve a system that has fewer accidents and kills fewer people than a human driver would. And eventually we will get there. Neural nets might be one aspect of that system, or some new programming system we have not thought of yet. The problem can be solved. It will just take a lot longer than the optimists among us imagine.

And BTW, an infinite number of monkeys would produce every possible combination of keystrokes, including all the works of Shakespeare, and would do it an infinite number of times even though an infinite number of them would leave their typewriters without ever hitting a single key. And an infinite number of them would build an infinite number of driverless cars. But monkeys are notoriously ill-mannered so we're probably better off not having an infinite number of them.

Meanwhile, I really like EAP.
 
This is so entirely unlike a computer that you'd need a computer the size of a galaxy to simulate it and get similar results.

I don't think that is correct. In fact, at the University of Manchester, scientists completed a special type of computer called a neuromorphic computer. It is dubbed SpiNNaker, for Spiking Neural Network Architecture. The neuromorphic computer has 1 million processing cores and 1,200 interconnected circuit boards that together mimic the firing of neurons in a human brain. It can perform 200 quadrillion actions simultaneously. Now, granted, it is far from equaling the entire human brain. But the fact that we can simulate significant parts of the brain in a computer that fits inside a room should tell you that it is conceivable that some day we will have a computer that mimics the entire human brain, and it won't need to be as big as a galaxy.

Source: A New Supercomputer Is the World’s Fastest Brain-Mimicking Machine
 

Yeah, I was definitely being sarcastic/poking fun at things you've said. Which was, to paraphrase: it's not possible; it's highly unlikely; monkeys might randomly stumble upon it (I do know the analogy); or it could happen if a lot of resources were thrown at the problem. Generally, I take immediate issue when someone speaks in absolutes. It makes me think less of their opinions, especially on something that's so far removed from black and white, and all I can hear is arrogance.

You do not wish to define what 'learning' is, yet you can say neural nets are not simulating learning? Please... and yes, it is irrelevant: if the end result of, say, AlphaGo's creation was a system capable of playing and winning a game for which each move is not programmatically planned, the end result is still the same. The task was completed successfully, and to such a degree that it's now superior to any human counterpart. Whether you want to call that something other than 'learning' is functionally irrelevant; it's semantics. Neural nets, deep neural nets, are approximating biological neural networks; this isn't some new idea or concept. I'm sorry, it just isn't. Also, Level 5 doesn't require X number of nines to be called Level 5, and you know and understand this.

Wait a minute, are you trying to describe exponential growth!? What a unique thought and concept! This is common sense, friend, and a given. Just because the complexity of the system will need to grow by some unknown amount, you've declared it functionally impossible? Please, that's not an argument, that's an ego. Right, so why discuss a neural net like AlphaGo here; it's supposedly 'completely irrelevant' to the theory and foundations of 'deep learning', 'reinforcement learning', 'unsupervised learning' and so on.

The topic may have been derailed, but only by your arrogance. We arguably have not seen any deeper-complexity DNNs on Teslas yet; they're the same from HW2/2.5 to HW3. So these strange statements (we can't do it yet, we don't know how to do it, we don't know how large our technical resource needs are, and with all these personal unknowns it's therefore impossible) amount to childish grandstanding.
 
But it's not the cameras (or the eyes) that are analyzing the surroundings and making decisions. In a human, it's the brain. In the putative driverless car it's the computer. And computers, while blindingly fast, are also unbelievably stupid.
Saying "computers are stupid" makes zero sense to me. Computers are made to do what's in the software - which is written by humans. Computers don't have their own agency.

What a human can do with visual information alone, a computer (in the near term) will need much more input to do. Radar, lidar, and sonar can complement visual imagery to give the (stupid) computer more to work with
This is an assumption on your part stated as fact.

Can you distinguish between what is a fact and what is your assumption ?
 
There's a couple things wrong here. First, I am suggesting that true full Level 5 autonomy may never be possible. But especially with neural networks.
And you are ?

Second, Noam Chomsky is an interesting character, but he holds zero sway in this area of study. Simply put, this isn't his area of expertise.
LOL - "interesting character". I think Chomsky is eminently qualified to talk about how AI handles language (which is what he was talking about) or intelligence in general.

Avram Noam Chomsky (born December 7, 1928) is an American linguist, philosopher, cognitive scientist, historian, social critic, and political activist. Sometimes called "the father of modern linguistics", Chomsky is also a major figure in analytic philosophy and one of the founders of the field of cognitive science.
BTW, the inventor of CNNs, Yann LeCun, didn't say FSD was not possible either. In fact, I don't think any of the experts on the AI Podcast said FSD wasn't possible.
 
But where I disagree with DrDabbles is that we don't need to simulate a human brain in order to achieve a computer that can drive more safely than a human.

I don't actually think we need anything nearing a human brain. I do think we need some level of general "intelligence", which in this case is really just a codeword for "complex software".

The key will not be "understanding" or "predicting" anything, but just tracking and plotting, which computers are eminently capable of.

Prediction scenario. You're driving through a residential neighborhood. There's a motor home parked in a driveway, it's not moving, and it takes up the entire length of the driveway so you can't see anything from the side of the house right to the road. In the yard there are toys- play houses, balls, a swing, etc. but nobody's playing with them. There are children playing across the street at another house, but they're far back from the road.

As a human, there should be a tiny little alarm that goes off in your head. Something that makes you predict there is a likelihood of a child suddenly appearing at the end of that motor home, running toward the kids on the other side of the street. If this is an autonomous vehicle, it also needs to make this same probabilistic calculation. There is no evidence of a child about to appear until it does, there's no evidence of the presence of a child currently until it appears in our path.
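If you wanted to encode that "tiny little alarm" as a heuristic, it might look roughly like this. Every threshold, name, and probability below is made up; the point is only to show the kind of probabilistic hedge a planner would need even when nothing is currently visible.

```python
# Hand-wavy sketch of the occluded-child hedge: if something tall enough to
# hide a child sits close to the road, and there are attractors (toys, other
# kids) nearby, cap speed so we could stop within the visible gap.
def speed_cap_mph(occlusion_near_road, attractors_nearby, visible_gap_m,
                  max_decel=6.0):
    if not occlusion_near_road:
        return 30.0                                       # normal residential speed
    prior = 0.02 + (0.1 if attractors_nearby else 0.0)    # chance a child darts out
    # v^2 = 2 * a * d  ->  fastest speed from which we can stop within the gap
    v_ms = (2 * max_decel * visible_gap_m) ** 0.5
    v_mph = v_ms * 2.237
    return min(30.0, v_mph) if prior > 0.05 else 30.0

print(speed_cap_mph(True, True, visible_gap_m=8.0))   # ~22 mph past the motor home
```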

If we would like to see what happens when we simply rely on a computer reacting faster than a human, the NCAP does that exact type of test, and the child is hit 100% of the time at 20 MPH (most residential speed limits near me are 30 MPH or higher), though at a reduced speed.

Skip to 2:20 for the exact clip I'm talking about.


There will be accidents and there will be deaths.

The question is whether these accidents would have conceivably been produced by a human operator. Right now, we've seen some collisions by AP that an attentive human would not have been involved in. A prime example is the two cars that have run under a trailer crossing a road. To any human it would be obvious that there's a massive obstacle, but to the computer, its sensors gave the all clear and it wasn't informed enough to detect that obstacle from that approach.

These are the types of issues that dumb machines fail poorly at, but humans naturally know "don't attempt to drive under that thing".

and the goal should not be to drive the same way a human would drive.

The problem here is that while autonomous and human drivers need to coexist, we need them to operate in a way that humans can predict and be comfortable with. The MobilEye folks have given hundreds of talks about this, and those talks are probably the best thing they've produced IMO. "A vehicle that doesn't follow local norms is a road hazard" would be a decent TL;DW, but I do really strongly suggest everybody go look up MobilEye conference talks to see what the 20+ year veterans think.

when someone speaks in absolutes,

Which I was trying not to do. That was the point of bringing up the monkeys and typewriters.

I think Chomsky is eminently qualified to talk about how AI handles language

Oh, I forgot we were teaching our cars how to talk and not drive. My mistake.

inventor of CNNs, Yann LeCun

Well, we know a CNN alone can't solve FSD, because there are currently multiple types of NN being used just to achieve the results they have today.
 
Yes! These are more examples of how ambiguous and confusing the new definitions are. A car could qualify for a level designation that would lead a buyer to believe the car can do far more than it can.

I don't see this as being ambiguous. The ratings address the functionality of the vehicle, not how it is implemented.


So what I'm left with is that everything is clear except that, as Trent Eady suggests, allowing the automaker to specify whatever ODD they like allows them to fudge the categories almost indefinitely. A car that I would think of as Level 3 might be capable of Level 4 in a very narrow geographic range and be sold as L4, leading a buyer to believe he could go to sleep in the back and the car would never ask him to take over immediately, when in fact that would be true only in one city on the other side of the country.

You don't think this will be used by the car companies to "expose" limitations in the competition's cars? I'm sure we will be kept fully aware.


Yes, Elon is, as I am fond of noting, an extreme chrono-optimist.

There's no way that my car will have the capability of L5 operation by the end of 2020 if I paid for FSD today. And I don't believe that the present suite of sensors in my car will ever be capable of robotaxi operation regardless of what computer upgrade it gets. And on top of that, they have still not figured out how to upgrade HW2.5 to HW3 in a simple plug-and-play manner. (FWIW, the sensor question, and my assessment of the time it will take to develop the software, are the reasons I did not pay for FSD. I'll buy the FSD-capable car when it actually becomes available for purchase.) There are people out there who paid for FSD whose cars will never be robotaxi-capable. And we are years away from any car that's ready to apply for regulatory approval for L5 operation by the general public.

In the meantime, drivers with FSD get to enjoy a number of functions that are not available otherwise, the main one being Navigate on Autopilot.

And all this is especially sad because the only thing Musk is doing wrong here is promising insane timelines.

Yes, he is well known for that.
 
Full autonomy is certainly a difficult problem but the idea that full autonomy is unsolvable is pretty silly IMO.

It's not a question of whether it's solvable, but a question of whether we as a nation have the resolve to get it done. I don't question that places like China will have full autonomy (in parts of the country) within a decade or two. All the building blocks for autonomous cars are in place in that country.

Self-driving cars require strong support at the federal level to make sure everyone is on the same page.

Some examples:

Centralized Map Database:
The organizations that work in the road transportation arena need to agree on some mapping standard where a central map database is kept accurate, so everyone from road maintenance crews repairing potholes to Tesla is updating the same maps. For example, if your car detects a missing stop sign, it sets an alert (on the map database) for the county to go and fix it. The only way for your car to know a stop sign was supposed to be there is through maps. That's how humans do it: those who are familiar with the area know a stop sign is supposed to be there. Those from outside the area blow through it, not even realizing a stop sign was supposed to be there.
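Hypothetically, the alert a car files when the map expects a stop sign it can't see might look something like this. The field names, schema, and endpoint behavior are invented for illustration; no such shared standard exists today.

```python
# Hypothetical map-discrepancy report: the map says "stop sign here" but the
# camera sees none, so the car files a report for the county to review.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class MapDiscrepancy:
    map_version: str
    feature_id: str          # ID of the stop sign in the shared map
    feature_type: str        # e.g. "stop_sign"
    observed: bool           # False = expected but not seen
    lat: float
    lon: float
    reported_at: str

report = MapDiscrepancy(
    map_version="2019.11",
    feature_id="sign:123456",
    feature_type="stop_sign",
    observed=False,
    lat=47.6097, lon=-122.3331,
    reported_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(report), indent=2))   # would be submitted to the shared database
```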

Accept that humans don't have telepathy, but robots do.
There is absolutely no point in self-driving cars without addressing the traffic problem that will happen as a result of self-driving cars. The only way to do this is through connectivity that allows for more efficient road utilization. That same connectivity can be used to improve detection/localization in visually obstructed areas (bad weather, or stopped traffic around the next bend in the road). Aside from safety redundancy, this won't be utilized much in the beginning, but it will grow to be a very important aspect of autonomous cars as they grow in popularity. People will ride in them not because they are autonomous, but because they get priority for green lights (when they get batched up based on destination). This means we have to give up the privacy and autonomy that we take for granted on roads.

Accept that it's a trolley car problem, and make the logical choice:
We as people need to accept that some deaths are inevitable. Our current approach is too cautious and we're far more accepting of death as a result of human behavior than we are of autonomous driving. I think that's always going to be the case to some degree, but I think we have to give a bit more allowance for self-driving cars to make mistakes in order to make progress at solving a far greater amount of deaths.
 
In the meantime, drivers with FSD get to enjoy a number of functions that are not available otherwise, the main one being Navigate on Autopilot.

EAP, which I have in my Model 3, has Navigate on Autopilot. There are no freeways on Maui, so there's no place for me to use NoA, but on the mainland people with EAP have NoA. So far, people who have paid for FSD have nothing that's not also in EAP. (Though Tesla is promising some "real soon now.")

If we refuse to allow a self-driving car that ever has an accident a human would not, we'll never have self-driving cars. Computers are not people and they do not have intelligence or cognition or agency of their own. There will always be situations where a computer fails in which a human would not. The task is not, or at least should not be, a car that never makes a mistake a human would not. The task is, or should be, to make a car that in the aggregate, has fewer accidents than a human would. And I believe this will be achieved and far exceeded. But I believe it will require more hardware than our cars have today. And I apologize to EVNow for not adding "I believe" or "in my opinion" to statements earlier that I thought would be obvious were just my opinions.
 
If we would like to see what happens when we simply rely on a computer reacting faster than a human, the NCAP does that exact type of test, and the child is hit 100% of the time at 20 MPH (most residential speed limits near me are 30 MPH or higher), but at reduced speed.

The NCAP test was an AEB test, and AEB is designed to react only when a crash is deemed inevitable.

This means the car isn't slowing down as soon as it detects the child and makes a path prediction. If any self-driving car hit the kid in the video, it would be an automatic failure.
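A rough sketch of why AEB fires so late: it typically only intervenes when time-to-collision drops below a last-moment threshold. The threshold and numbers below are invented for illustration, not any manufacturer's actual tuning.

```python
# Toy AEB trigger: brake only when time-to-collision (TTC) falls below a
# last-moment threshold, i.e. when a crash otherwise looks unavoidable.
def aeb_should_brake(gap_m, closing_speed_ms, ttc_threshold_s=0.9):
    if closing_speed_ms <= 0:
        return False                      # not closing on the obstacle
    ttc = gap_m / closing_speed_ms        # seconds until impact at current speeds
    return ttc < ttc_threshold_s

# Child steps out 12 m ahead while we travel 20 mph (~8.9 m/s):
print(aeb_should_brake(gap_m=12.0, closing_speed_ms=8.9))   # False -> no braking yet
print(aeb_should_brake(gap_m=6.0,  closing_speed_ms=8.9))   # True  -> brake, but too late
```

A self-driving stack, by contrast, would be expected to slow preemptively the moment the child (or the risk of one) is detected.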

Where I live the residential speed limits are 25mph, and I usually drive 20mph. I do because there are kids around, and there are a lot of unpredictable events. Plus the drivable area is often narrow, with people parking on both sides of the street. There just isn't enough visibility to drive faster than 20. The only places that have 35mph are the various roads that connect up all the residential areas. On some of these they've even lowered it to 25mph, which is absolutely maddening because they are clearly 35mph zones. They did that to get people to drive 35mph through them, and not 45mph.
 
It's not a question of whether it's solvable, but a question of whether we as a nation have the resolve to get it done. I don't question that places like China will have full autonomy (in parts of the country) within a decade or two. All the building blocks for autonomous cars are in place in that country.

Self-driving cars require strong support at the federal level to make sure everyone is on the same page.

Some examples:

Centralized Map Database:
The organizations that work in the road transportation arena need to agree on some mapping standard where a central map database is kept accurate, so everyone from road maintenance crews repairing potholes to Tesla is updating the same maps. For example, if your car detects a missing stop sign, it sets an alert (on the map database) for the county to go and fix it. The only way for your car to know a stop sign was supposed to be there is through maps. That's how humans do it: those who are familiar with the area know a stop sign is supposed to be there. Those from outside the area blow through it, not even realizing a stop sign was supposed to be there.

This is no small undertaking. There are also a lot of responsibility issues to deal with. Once you mandate that everyone use the same system, they are no longer responsible for the accuracy and safety of the data. Who is?


Accept that humans don't have telepathy, but robots do.
There is absolutely no point in self-driving cars without addressing the traffic problem that will happen as a result of self-driving cars. The only way to do this is through connectivity that allows for more efficient road utilization. That same connectivity can be used to improve detection/localization in visually obstructed areas (bad weather, or stopped traffic around the next bend in the road). Aside from safety redundancy, this won't be utilized much in the beginning, but it will grow to be a very important aspect of autonomous cars as they grow in popularity. People will ride in them not because they are autonomous, but because they get priority for green lights (when they get batched up based on destination). This means we have to give up the privacy and autonomy that we take for granted on roads.

Not clear why you think autonomous cars will increase traffic congestion. One of the biggest safety advantages of autonomous cars will be the lack of ridiculous driving habits. That alone is the reason I much prefer the more rural highways to the main highways: not as many crazies. Once you have everyone using the autonomous system, that problem goes away and the accident rates will drop significantly. But the cars aren't there yet and are still far from it.

There will be a lot of infrastructure changes needed to provide the sort of advantages you list. Changing lights and intersections is a big deal. It will have to happen very slowly as funds will not be rapidly forthcoming.


Accept that it's a trolley car problem, and make the logical choice:
We as people need to accept that some deaths are inevitable. Our current approach is too cautious and we're far more accepting of death as a result of human behavior than we are of autonomous driving. I think that's always going to be the case to some degree, but I think we have to give a bit more allowance for self-driving cars to make mistakes in order to make progress at solving a far greater amount of deaths.

I agree with that. But it will also get a lot better as fewer humans are on the roads to cause accidents. Once autonomous vehicles are the norm, the traffic patterns will change providing increased safety for everyone possibly at the expense of slower transport.
 
To get us all back on track, and off this ridiculous topic. Robotaxi fleet in 2.5 weeks isn't happening. Feature complete FSD in 2.5 weeks isn't happening. These cars aren't even properly centering themselves within their lanes on the highway with the latest updates. So what will happen seems to be " None - on Jan 1 'later this year' will simply become end of 2020!". On the other hand, "One or more major features (stop lights and/or turns) to small number of EAP HW 3.0 vehicles." appears to be a miss, since HW3 vehicles aren't consistently detecting signs, and aren't stopping by design.


I don’t think anyone here is arguing that robotaxi will be here in 2.5 weeks.
 
I don’t think anyone here is arguing that robotaxi will be here in 2.5 weeks.

I know you didn't miss the "3 months maybe, 6 months definitely" trope. I'm just surprised you missed the same jab in my comment.

Especially since not even Elon promised robotaxis by the end of this year. He just promised "feature complete" which is different from robotaxis.

He did. But it's Elon, so we pretend the only predictions he makes are the ones that turn out to be correct. Either way, feature complete FSD isn't happening in the next 2.5 weeks either.