Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

What will happen within the next 6 1/2 weeks?

Which new FSD features will be released by end of year and to whom?

  • None - on Jan 1 'later this year' will simply become end of 2020!
    Votes: 106 (55.5%)
  • One or more major features (stop lights and/or turns) to a small number of EAP HW 3.0 vehicles.
    Votes: 55 (28.8%)
  • One or more major features (stop lights and/or turns) to a small number of EAP HW 2.x/3.0 vehicles.
    Votes: 7 (3.7%)
  • One or more major features (stop lights and/or turns) to all HW 3.0 FSD owners!
    Votes: 8 (4.2%)
  • One or more major features (stop lights and/or turns) to all FSD owners!
    Votes: 15 (7.9%)
  • Total voters: 191
Fortunately we don't need to rely on Kurzweil for anything. AGI in a self-driving car would be counterproductive: I don't need my driving robot having an existential crisis or composing poetry while driving; it just needs to stay in its lane and not hit stuff. The only ML you need is for vision, plus maybe a few other models for inferring parking-lot and intersection topologies and predicting vehicle/pedestrian dynamics, like the cut-in detector. That counts as "AI" I guess, but it's really narrow. Once everything is in 3D, the rest can just be rule-based.
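As a toy illustration of the split being described (ML for perception, then hand-written rules on the 3D output), here's a hedged sketch. The detection schema and the rules themselves are invented for illustration, not anyone's real planner:

```python
# Toy rule-based planner operating on 3D perception output.
# The detection fields and the rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str            # e.g. "vehicle", "pedestrian" (from the vision model)
    distance_m: float    # longitudinal distance ahead in our lane, metres

def plan(detections: list) -> str:
    """Simple hand-written rules once everything is in 3D."""
    for d in detections:
        if d.kind == "pedestrian" and d.distance_m < 30:
            return "brake"
        if d.kind == "vehicle" and d.distance_m < 10:
            return "slow"
    return "keep_lane"

print(plan([Detection("vehicle", 50.0)]))
print(plan([Detection("pedestrian", 12.0)]))
```

The point is only that once perception hands over clean 3D objects, the downstream decisions can be ordinary deterministic code.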

Anyway, Tesla shipping the new FSD visualizations is exactly what I wanted to see before the end of the year, because it's a big milestone and I was scared they were going to take the same approach with city driving as with highway NoA and have all the turn restrictions and intersections geocoded into the map database. But now at least the vision system is there in some form, so maybe even highway NoA will improve. I know folks were hoping to see actual intersection traversal, but it's still good stuff.
 
As a hodgepodge gathering of random information, what Wikipedia "disagrees with" is absolutely immaterial anyway.
Wikipedia which is backed up by science versus random person on the internet, hmm, I know which I'll believe.
Wikipedia said:
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. ... An artificial neural network consists of a collection of simulated neurons. Each neuron is a node which is connected to other nodes via links that correspond to biological axon-synapse-dendrite connections. ... ANNs are composed of artificial neurons which retained the biological concept of neurons, ...
 
You should read up on Ray Kurzweil. He has been correct on many predictions when many said no way.
Ray Kurzweil - Wikipedia His methods are simple and math-based.
Yes, anyone can call themselves a futurist and spout garbage.
In reference to when computer chips will have the computing power of the human brain, it is not very complicated to make a prediction; it involves mostly math. Just look at the trend of improvement in computer chips over the years, read scientific estimates of what the brain does, look at when the two intersect, and voilà, you have yourself a science-based estimate. That is what Ray did, and anyone can do it. This does not mean a computer will be as smart as the brain, just that they have similar compute capability, based on scientific estimates. I believe the exact year Ray predicts this will occur is 2028.
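The trend-intersection estimate described above is simple enough to sketch. All numbers below (the starting compute, the brain-compute estimate, the doubling period) are illustrative placeholders, not Kurzweil's actual inputs:

```python
# Toy trend-intersection estimate: find the year when chip compute,
# growing exponentially, crosses an assumed estimate of brain compute.
# Every number here is an illustrative placeholder, not real data.
import math

start_year = 2020
chip_flops = 1e15        # assumed compute of a high-end chip in start_year (FLOPS)
brain_flops = 1e18       # one commonly cited rough order-of-magnitude for the brain
doubling_years = 2.0     # assumed doubling period for chip performance

# Solve chip_flops * 2**(t / doubling_years) >= brain_flops for t.
years_needed = doubling_years * math.log2(brain_flops / chip_flops)
crossover_year = start_year + math.ceil(years_needed)
print(crossover_year)
```

Change any input and the crossover year moves, which is exactly why such forecasts are only as good as the brain-compute estimate they assume.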
Wikipedia seems to disagree with you. Artificial neural network - Wikipedia

Kurzweil may be good at predicting how many megaflops computers will have in a given year, but that's very different from predicting something like when cars will drive themselves.

Comparing the "compute capability" of a computer and a brain is utterly meaningless. What a human does is not "compute," unless we're doing math, in which case we are so much slower we don't have a word for it. And "computing" can give you a really good game-playing machine but not much else.

You quote Wikipedia as saying:

Artificial neural networks (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. ... The original goal of the ANN approach was to solve problems in the same way that a human brain would. ...

Note the words "vaguely inspired by," and that solving problems "the same way a human brain would" is merely "The original goal" of neural networks.

Neural networks are even better at game-playing than earlier systems, and they may indeed finally give us driverless cars. They are an extremely powerful and useful tool. But they don't function like brains and they don't "think" and they won't produce AGI.
 
And with fewer deaths, insurance companies will have fewer payouts. The car maker will be responsible, but will buy insurance just as we do now, and price it into the car. Or perhaps legislators will pass laws that car owners need to pay for the insurance even though they are not controlling the car.

The concern I have is that the payouts will be fewer, but they'll be substantially larger, at least in the beginning. Larger because the cost of vehicle repair will be greater due to all the sensors/computers/etc., and larger because when big corporations are held liable for death or injury, the payouts are quite a bit higher than for a human-caused accident.

The other thing that's inevitable is that the pool of insured vehicles will be smaller, because it will make more economic sense for people not to own a car. So insurance prices will go up as a result of the smaller pool.

Insurance might play a substantial role in who actually gets to own a self-driving car versus who simply has to use a robo-taxi service. My other concern is that self-driving cars might have to be sold as a service rather than owned, since everything has to be kept modern as technology evolves. You certainly wouldn't want the self-driving-car equivalent of Windows XP still on the road.

When it's all said and done, I see a lot of potential for robo-taxis to evolve beyond what Tesla's current approach offers, unless the Tesla approach switches to a service model, which I could see happening.

Tesla is already getting ahead of the game by offering Tesla insurance. But this is mostly about the weird in-between area Tesla is in now, where they offer advanced L2 while still putting all the responsibility on the human driver. That isn't really fair liability-wise, because the computer can, and does, screw up. If the human fails to have quick enough reflexes to save the day, then all the liability for the crash (assuming a single-vehicle crash) is on them.
 
Anyway, Tesla shipping the new FSD visualizations is exactly what I wanted to see before the end of the year, because it's a big milestone and I was scared they were going to take the same approach with city driving as with highway NoA and have all the turn restrictions and intersections geocoded into the map database. But now at least the vision system is there in some form, so maybe even highway NoA will improve. I know folks were hoping to see actual intersection traversal, but it's still good stuff.

I think a lot of people are underappreciating the visualizations.

What they fail to understand is that the car is actually showing us object recognition and localization, so we can tell how good the system is at those without actually having to use NoA.

I don't have a HW3 vehicle, so I can't see for myself how well it does. I'm hoping someone will post a long video showing the front/side camera views alongside the visualizations, so I can see how often it mis-detects things.

I know my HW2.5 vehicle has a hard time with localization and will show a car in the incorrect lane from time to time. So I'm curious to see whether a HW3 vehicle does the same thing.

From the videos so far all I've seen is a lot of false cone detection.
 
Fortunately we don't need to rely on Kurzweil for anything. AGI in a self-driving car would be counterproductive: I don't need my driving robot having an existential crisis or composing poetry while driving; it just needs to stay in its lane and not hit stuff. The only ML you need is for vision, plus maybe a few other models for inferring parking-lot and intersection topologies and predicting vehicle/pedestrian dynamics, like the cut-in detector. That counts as "AI" I guess, but it's really narrow. Once everything is in 3D, the rest can just be rule-based.

I have to kind of laugh, not in disagreement, but because I've been wondering whether we should really be looking to general intelligence to solve the problem of self-driving cars. As it currently stands we have total control over our autonomous machines, and we don't have to worry about them having an existential crisis.

I brought up AGI because it's required if we don't make significant efforts to harmonize the rules/regulations of the road, infrastructure (like V2X communication), etc., to simplify the requirements for self-driving, at least if we expect these cars to match or exceed the ODDs (Operational Design Domains) that professional human drivers are capable of.

I personally like the idea of growing whitelisted areas so that you don't need AGI. You simply use offline human intelligence to create rules for each area, things like "Don't bother turning left here because it will take forever." Part of growing whitelisted areas is creating a national map database for all autonomous cars, where construction companies and/or road planners have the responsibility to keep the data up to date. That way an autonomous car has really good maps for visual redundancy and route planning, especially for things like reporting the exact positions of potholes, so the autonomous cars can help the road-maintenance people.
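A hedged sketch of what one entry in such a rule-annotated map database might look like; the schema and every field name below are invented purely for illustration:

```python
# Toy schema for a human-authored rule attached to a whitelisted map area.
# The class, fields, and example values are all invented for illustration.
from dataclasses import dataclass

@dataclass
class AreaRule:
    area_id: str          # which whitelisted region the rule applies to
    location: tuple       # (lat, lon) anchor point for the rule
    rule: str             # machine-readable restriction, e.g. "avoid_left_turn"
    reason: str           # human-readable note for auditing
    last_verified: str    # date a road planner last confirmed the rule

rules = [
    AreaRule(
        area_id="downtown-17",
        location=(37.7749, -122.4194),
        rule="avoid_left_turn",
        reason="Unprotected left; cross traffic rarely yields at peak hours",
        last_verified="2019-12-01",
    ),
]

# A route planner would filter rules by area before scoring candidate routes.
active = [r for r in rules if r.area_id == "downtown-17"]
print(len(active))
```

The `last_verified` field is the interesting design point: it makes the keep-it-up-to-date responsibility mentioned above auditable.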

Growing the whitelist also includes making sure the approved area has connectivity. The number one thing an autonomous car needs to be able to do is call home for help, because there will be situations it needs to be bailed out of. Like "Help, it's Dec 24th, I'm in a parking lot, and cars won't let me out because they keep coming." In that specific example it doesn't necessarily have to call home, because with V2X it could ask other vehicles to help out.

There also has to be some ability to break rules. Humans have an allowance for breaking rules, and that allowance is essential for driving in areas with imperfect infrastructure. For example, when a left-turn light doesn't change on consecutive cycles, most human drivers like me will simply run it when it's safe to do so, without knowing what the rule is for that specific area. Breaking rules also comes in handy from time to time when dealing with rude human drivers.
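The stuck-light case can be phrased as an explicit exception policy rather than open-ended rule-breaking. This is a toy decision function; the threshold and the safety check are invented for illustration and are nothing like a real autonomy policy:

```python
# Toy exception policy: proceed through a red left-turn arrow only after it
# has failed to cycle several times AND the intersection is verified clear.
# The threshold is an illustrative assumption.
MAX_MISSED_CYCLES = 2

def may_run_stuck_red(missed_cycles: int, intersection_clear: bool) -> bool:
    """True only when the light appears stuck and it is safe to proceed."""
    return missed_cycles > MAX_MISSED_CYCLES and intersection_clear

print(may_run_stuck_red(3, True))   # stuck for three cycles, road clear
print(may_run_stuck_red(1, True))   # not yet considered stuck
```

Encoding the exception explicitly keeps the system rules-based while still giving it the escape hatch human drivers use.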

I think we can all agree that our road systems are a bit like the Wild West: there are rules, but so often they're not followed. Sure, it's better than places like India, but worse than places like Germany, where drivers are a lot more rules-based.

Non-AGI systems are very much rules-based, so there is this incompatibility problem.

I see AGI in our lifetime as a myth. Hopefully that's not true, because I strongly believe humans have an obligation either to survive or to create life. Having AGI either ensures our survival (by helping us seed the universe with human life) or our destruction (by becoming self-aware and seeing humans as a threat). Either way, the obligation of our existence is met.

In any case, I agree that AGI isn't needed for FSD if we accept a very gradual, step-by-step evolution in achieving it.
 
The concern I have is that the payouts will be fewer, but they'll be substantially larger, at least in the beginning. Larger because the cost of vehicle repair will be greater due to all the sensors/computers/etc., and larger because when big corporations are held liable for death or injury, the payouts are quite a bit higher than for a human-caused accident.

The other thing that's inevitable is that the pool of insured vehicles will be smaller, because it will make more economic sense for people not to own a car. So insurance prices will go up as a result of the smaller pool.

Insurance might play a substantial role in who actually gets to own a self-driving car versus who simply has to use a robo-taxi service. My other concern is that self-driving cars might have to be sold as a service rather than owned, since everything has to be kept modern as technology evolves. You certainly wouldn't want the self-driving-car equivalent of Windows XP still on the road.

When it's all said and done, I see a lot of potential for robo-taxis to evolve beyond what Tesla's current approach offers, unless the Tesla approach switches to a service model, which I could see happening.

Tesla is already getting ahead of the game by offering Tesla insurance. But this is mostly about the weird in-between area Tesla is in now, where they offer advanced L2 while still putting all the responsibility on the human driver. That isn't really fair liability-wise, because the computer can, and does, screw up. If the human fails to have quick enough reflexes to save the day, then all the liability for the crash (assuming a single-vehicle crash) is on them.

I think the above speculations are misplaced. Right now, middle-class people have cars and poor people walk. I see no reason to think that insurance pay-outs to the victims of accidents will increase, though it might be a good thing if they did.

As for L2 and putting the responsibility on the driver, that's pretty much the definition of L2. And the driver chooses when and where to engage it.

... I strongly believe humans have an obligation to either survive or to create life.

Not an opinion I share. Fortunately, there's a built-in control mechanism: We shouldn't be allowed to spread unless we can learn to leave our world in better shape than we found it. We seem to be unable to do that, and this will end up causing the collapse of our civilization and any hope we might have had of spreading.
 
@verygreen found something interesting:

"Hm, looking at hw3 image more closely, we can see that redundancy stuff is now apparently live in 19.40.50.1, or at least the B node starts the full copy of the autopilot software instead of getting stuck in the "do nothing" loop."
green on Twitter

Looks like Tesla is now using the redundancy part of HW3 as of 2019.40.50.1.
 
I have to kind of laugh not in disagreement, but I've been wondering if we should really be looking at General Intelligence to solve the problem of self-driving cars. As it currently stands we have total control over our autonomous machines, and we don't have to worry about them having an existential crisis.
AGI itself is a vague notion. Humans don't have "GI" - they have human level intelligence. We can't figure out the environment like bats or dolphins do.
 
Wikipedia which is backed up by science versus random person on the internet, hmm, I know which I'll believe.
Wikipedia is only as good as the sources upon which it relies (and the conclusion at which its information gatherers/compilers arrive). Being a "random person on the Internet" has no bearing on the fact that Wikipedia is frowned upon as a research tool in most college assignments/research. ;) As little as you think of me, I think the ignore function will do us both good collectively. Good day.
 
It's always funny to look at what was actually promised.

Feb 19 2019: "I think we will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year"

On the Road to Full Autonomy With Elon Musk — FYI Podcast

 
I think a lot of people are underappreciating the visualizations.

What they fail to understand is that the car is actually showing us object recognition and localization, so we can tell how good the system is at those without actually having to use NoA.

I don't have a HW3 vehicle, so I can't see for myself how well it does. I'm hoping someone will post a long video showing the front/side camera views alongside the visualizations, so I can see how often it mis-detects things.

I know my HW2.5 vehicle has a hard time with localization and will show a car in the incorrect lane from time to time. So I'm curious to see whether a HW3 vehicle does the same thing.

From the videos so far all I've seen is a lot of false cone detection.

The only things I've seen so far are red traffic lights not switching to green but instead turning entirely grey (as seen in the video below), and street lamps being read as traffic lights, which I will show when I'm back from vacation.

 
Feb 19 2019: "I think we will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year"
There is a contradiction even within this paragraph. You highlighted the optimism, I highlighted the caution.
 
Wikipedia is only as good as the sources upon which it relies (and the conclusion at which its information gatherers/compilers arrive). Being a "random person on the Internet" has no bearing on the fact that Wikipedia is frowned upon as a research tool in most college assignments/research.
A good research idea would be to figure out which kinds of wiki articles are more accurate and rate them with AI.

From what I've observed, common topics with a lot of edits and a lively conversation are better than single-person articles with hardly any activity.
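That observation can be turned into a crude scoring heuristic. This is a toy sketch: the weights and the inputs are made up, and a real rating system would need far more signals than edit counts:

```python
# Toy heuristic: score a wiki article's likely reliability from edit activity
# and talk-page discussion. All weights are invented for illustration.
def article_score(num_editors: int, num_edits: int, talk_posts: int) -> float:
    """Higher score = more community scrutiny, per the observation above."""
    if num_editors <= 1:
        return 0.0  # single-author articles get no community-review credit
    return 0.5 * num_editors + 0.3 * num_edits / 10 + 0.2 * talk_posts

# A heavily edited article should outscore a dormant single-author one.
busy = article_score(num_editors=120, num_edits=900, talk_posts=300)
dormant = article_score(num_editors=1, num_edits=3, talk_posts=0)
print(busy > dormant)
```

An ML version would learn those weights from articles with known quality ratings instead of hand-picking them.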
 
Wikipedia is only as good as the sources upon which it relies (and the conclusion at which its information gatherers/compilers arrive). Being a "random person on the Internet" has no bearing on the fact that Wikipedia is frowned upon as a research tool in most college assignments/research. ;) As little as you think of me, I think the ignore function will do us both good collectively. Good day.
Wikipedia is used extensively as a research tool for college assignments and research. It is discouraged to cite Wikipedia directly in college papers and research papers, but the references Wikipedia uses are used extensively, and I still find Wikipedia used in research papers as an introduction to a subject. Saying Wikipedia is absolutely immaterial is completely false. The article quoted has many references to check, if you care about the validity of the argument. Good day.
 
It's always funny to look at what was actually promised.

Feb 19 2019: "I think we will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year"

On the Road to Full Autonomy With Elon Musk — FYI Podcast


Yes, Elon missed another deadline. In other breaking news, water is wet.

Traffic light response is in EA testing now and "city NOA" will follow. And "city NOA" is the last piece of "feature complete". So Tesla is one step closer to "feature complete". Now, how long it will take to go from "feature complete" to "sleep in your car while it self-drives" is anybody's guess. Of course, Elon is being overly optimistic again. I think he is crazy to think that it will only take another year to get to "sleep in your car while it self-drives". Personally, I think it will take longer than that.

But Elon is not entirely wrong in his last statement. While Tesla did not achieve the coast-to-coast FSD demo this year, he is right about the second part: Tesla is not planning to do the coast-to-coast FSD demo with software that is not available to the public. Who knows when Tesla will have that capability, but whenever they do, it will be with software that the public who purchased FSD will also have in their cars at the same time.
 
The only things I've seen so far are red traffic lights not switching to green but instead turning entirely grey (as seen in the video below), and street lamps being read as traffic lights, which I will show when I'm back from vacation.
This is an interesting video. The lights turning all grey probably means they moved out of the cameras' viewing angles. This will be a problem for FSD when the lights are mounted high, and particularly in Europe, where they typically don't have lights on the opposite side of the intersection (and human drivers stopped in the first row often have to crane their necks to see them).

I also agree that it's disappointing that the car doesn't recognize crosswalks. You'd expect that to be one of the highest priority items.
 
A good research idea would be to figure out which kinds of wiki articles are more accurate and rate them with AI.

From what I've observed, common topics with a lot of edits and a lively conversation are better than single-person articles with hardly any activity.
When the topics are simply fact-based and not clouded with opinion or an obvious agenda (which clouds my own perception of Wikipedia), I confess to agreeing it's not entirely useless (and making references available at the bottom does make it possible for the reader to investigate the claims made).