I'm willing to take a bet on your myopicness.
Sure. Although I’m not sure you understand myopic. But sure.
I’m not going to claim that true level 5, fully autonomous driving is right around the corner, but whenever I hear someone say that it may not happen for 50 years, I can’t help but think how myopic they are. I’d be shocked if it’s not here within 10 years. Really and truly.
"I’d be shocked if it’s not here within 10 years. Really and truly."
That's what people said ten years ago.
I'm curious as to what is stopping Elon from getting closer and closer to L4/L5 while keeping the designation at L2, for no other reason than keeping the accountability on the driver. Based on the SAE levels, there's what the car can actually do the vast majority of the time, and there's who gets the blame, so the functional level could be higher than the rated level, give or take a small margin of error, while it stays L2.
Also, pardon my lack of SAE research, but what is the acceptable failure rate for L4 or L5? It can't be 0% in this universe, just as it can't be 0% for planes. Is there some criterion, such as being demonstrably as good as or better than a human on some test? And what about gray areas where it's ambiguous whether a human could have done better in crash scenario X? I'm asking all of this because it seems plausible FSD could exceed L2 in some form yet stay at L2 for responsibility reasons.
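As far as I know, SAE J3016 does not set a numeric acceptable failure rate, so any "demonstrably as good as or better than a human" criterion would have to be statistical. A rough sketch of one possible framing (my own, not from SAE, and with entirely hypothetical numbers): compute an upper confidence bound on the observed failure rate and compare it to an assumed human baseline.

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam)."""
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def failure_rate_upper_bound(failures: int, miles: float, confidence: float = 0.95) -> float:
    """Largest failures-per-mile rate still consistent with the observed
    count at the given confidence (exact Poisson bound, via bisection)."""
    lo, hi = 0.0, 10.0 * (failures + 10)
    for _ in range(200):
        mid = (lo + hi) / 2
        if poisson_cdf(failures, mid) > 1 - confidence:
            lo = mid  # this rate is still plausible; the bound lies higher
        else:
            hi = mid
    return hi / miles

# Hypothetical fleet numbers, for illustration only:
bound = failure_rate_upper_bound(failures=2, miles=10_000_000)
human_rate = 1 / 500_000  # assumed human-driver failure rate; not a real statistic
print(bound < human_rate)  # True: even the pessimistic bound beats this baseline
```

With zero failures this reproduces the classic "rule of three": the 95% bound is about 3/n for n miles. The gray areas in the post are exactly what this kind of test can't settle; it only says whether the aggregate rate clears a chosen bar.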
From a practical standpoint, Tesla needs their cars to be legal on public roads. It seems likely that if such an L2 system were abused enough to result in a decrease in road safety, it would be banned.
SAE J3016 said:
"The level of a driving automation system feature corresponds to the feature’s production design intent. This applies regardless of whether the vehicle on which it is equipped is a production vehicle already deployed in commerce, or a test vehicle that has yet to be deployed. As such, it is incorrect to classify a level 4 design-intended ADS feature equipped on a test vehicle as level 2 simply because on-road testing requires a test driver to supervise the feature while engaged, and to intervene if necessary to maintain safe operation."
Karpathy joined Tesla (from OpenAI) in June 2017, shortly before he wrote this Medium post:
Software 2.0
Fascinating. Watching this year's videos about how the rewrite is going, it all has his signature.
I think Elon got the AGI (Artificial General Intelligence) stuff from Karpathy (not the other way around)!
AGI doesn't exist, except in science fiction books, people's dreams, and what-if discussions. It is the target of significant research, and there has been some discussion of a U.S. government Manhattan-style project to develop it.
Trump version: Artificial Intelligence for the American People | The White House
Schumer said:
"The top Democrat in the U.S. Senate wants the government to create a new agency that would invest an additional $100 billion over 5 years on basic research in artificial intelligence (AI)."
green said: "it [FSD beta release] does not really change much wrt [with regard to] what we have seen before"
We have not seen any of the functionality the FSD Beta has shown a Tesla can do.
"AGI doesn't exist, except in science fiction books, people's dreams, and what-if discussions. It is the target of significant research."
Did you read the article?
"I’ve heard and read from multiple beta users that the software is not able to do a maneuver one day and is able to do it another day. Some kind of secret sauce is going on."
That's the beauty of neural nets. You never know what you're going to get!
I tried to find some list of what we have seen FSD Beta do in the short week (and stuff that we have not seen before, contrary to what green said)!
- Right turns
- Right turns on RED
- Left turns
  - Unprotected
  - Protected
- 4-way stops
  - with other cars
  - without other cars
- Slowing down for speed bumps
- Passing bikes
  - going over the double yellow to do so
- Passing pedestrians
- Stopping at crosswalks with people walking
- Navigating roundabouts without a lead car
- Respecting traffic lights
This is just what I could dump off the top of my head (guaranteed there is more than that)!
"That's the beauty of neural nets. You never know what you're going to get!"
Would be cool if Tesla implemented some kind of micro updates, where only some of the machine learning weights and biases were updated nightly. Similarly, there is incremental / continual learning.
"We saw a lot of this during autonomy day. What we haven't seen is hands-off, which is what Green appears to be focusing on."
Oh, you are going there.
"Would be cool if Tesla implemented some kind of micro updates, where only some of the machine learning weights and biases were updated nightly."
This is what I think is happening (but I have no inside knowledge).
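To make the micro-update idea concrete, here is a minimal sketch (my own illustration, not anything Tesla has described): instead of shipping a full set of weights, ship only the entries whose values changed beyond a threshold, and patch them in on the car.

```python
import numpy as np

def make_patch(old: np.ndarray, new: np.ndarray, threshold: float = 1e-3):
    """Indices and new values of weights that moved by more than `threshold`."""
    idx = np.flatnonzero(np.abs(new - old) > threshold)
    return idx, new.flat[idx]

def apply_patch(weights: np.ndarray, idx: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Return a copy of `weights` with the patched entries written in."""
    patched = weights.copy()
    patched.flat[idx] = values
    return patched

old = np.zeros(1000)
new = old.copy()
new[::100] += 0.5            # pretend nightly training nudged 10 of 1000 weights
idx, vals = make_patch(old, new)
print(len(idx))              # 10: only the changed weights would be shipped
assert np.allclose(apply_patch(old, idx, vals), new)
```

The appeal is bandwidth: a nightly delta of a few changed weights is tiny compared to a full model download. Whether per-weight deltas are how any real over-the-air update works is pure speculation here.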