
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Machine learning kind of screwed up people's assessments.

It was and continues to be such a giant leap in what computers can do that people constantly underestimate what is possible at this point.

Sort of? Depends on your background.

I mean, we only fairly recently got computing power big enough to do this sort of so-called "neural net" stuff at scale, but the math is, as the hardware guy (Bannon) said, linear algebra -- the concepts are old, and those of us who took sufficient math classes don't find anything magic in this. The most ultra-basic mechanism underlying it was a math/CS toy for a long time (Markov chains).
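
To make that concrete, here's the kind of "old math" I mean: a minimal sketch of a Markov chain next to a single neural-net layer, which is just a matrix multiply plus a nonlinearity (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Markov chain toy: the next state depends only on the current one.
states = ["sunny", "rainy"]
P = np.array([[0.9, 0.1],   # transition probabilities from "sunny"
              [0.5, 0.5]])  # transition probabilities from "rainy"
s = 0  # start in "sunny"
for _ in range(5):
    s = rng.choice(2, p=P[s])
    print(states[s])

# A neural-net layer is linear algebra plus a nonlinearity: relu(W @ x + b).
x = np.array([1.0, 2.0, 3.0])       # input features
W = rng.normal(size=(2, 3))         # learned weights
b = np.zeros(2)                     # learned biases
print(np.maximum(0.0, W @ x + b))   # ReLU activation
```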
 
I believe this was by copying drivers.

I think we have different ideas of what Tesla is doing in terms of mimicry. Karpathy mentioned at one point, almost as a throwaway remark, that they also take into account when the human does the wrong thing (this was during the discussion of path planning for left turns at intersections).

That suggests to me that they weight each human action by some utility function(s) of the end result, including negative weighting for bad results. Bad results are mostly easy to define, so long as you have good classification (car exited lane, crashed, came within x distance of an object). Following this technique, you wouldn't end up mimicking the performance of the average driver, but rather the best one.
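
A sketch of what that weighting could look like in practice; the utility terms and thresholds here are my guesses, not anything Tesla has described:

```python
# Hypothetical utility weighting for imitation learning: score each
# recorded human maneuver by its outcome, and down-weight (or exclude)
# the bad ones so the model mimics the best drivers, not the average.
def utility(outcome):
    score = 1.0
    if outcome["crashed"]:
        score -= 10.0              # strongly negative result
    if outcome["exited_lane"]:
        score -= 2.0
    if outcome["min_clearance_m"] < 0.5:
        score -= 1.0               # near miss
    return score

def sample_weight(outcome):
    # Map utility to a non-negative training weight; bad outcomes could
    # instead be kept as explicit negative examples.
    return max(0.0, utility(outcome))

good = {"crashed": False, "exited_lane": False, "min_clearance_m": 2.0}
bad = {"crashed": False, "exited_lane": True, "min_clearance_m": 0.3}
print(sample_weight(good), sample_weight(bad))  # 1.0 0.0

# Training then minimizes a weighted imitation loss, conceptually:
#   loss = sum(sample_weight(o) * error(model(obs), human_action))
```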
 
It would be except that in the presentation it was said they could filter out the items which already had sufficient data. I believe it was filtered out automatically, but I could have misremembered that part.

Right, but for each 9 you add, the odds of collecting an instance are 1/10 what they were before. So you need a fleet experiencing 10x as many events to collect sufficient images (a constant) at the same rate.
Another reason for TN, more miles of data...
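
Spelled out with made-up numbers:

```python
# Each added "9" of reliability targets failures that are 10x rarer, so
# at a fixed fleet size you collect training examples 10x more slowly.
# All figures below are illustrative, not Tesla's actual numbers.
fleet_miles_per_day = 10_000_000
examples_needed = 1_000  # assume a constant number per failure mode

for nines, failures_per_mile in [(3, 1e-3), (4, 1e-4), (5, 1e-5), (6, 1e-6)]:
    events_per_day = fleet_miles_per_day * failures_per_mile
    days = examples_needed / events_per_day
    print(f"{nines} nines: ~{days:,.1f} days to collect {examples_needed} examples")
```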
 
A thought occurs to me: Tesla knows where disengagement occurs. They could release their "feature complete" features with human supervision still required, collect data on where disengagements happen and where they don't, and just geofence the areas where they don't. When they get to the point of releasing robotaxis, they initially roll them out only in those areas with no disengagements.
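
A hypothetical sketch of that geofencing logic; the grid size and thresholds are made up:

```python
# Bucket the map into cells, count supervised-fleet disengagements per
# cell, and allow driverless operation only where the fleet has plenty
# of miles and zero disengagements. Illustrative only.
from collections import defaultdict

CELL_DEG = 0.01  # roughly 1 km grid cells

def cell(lat, lon):
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))

miles = defaultdict(float)
disengagements = defaultdict(int)

def record_segment(lat, lon, seg_miles, disengaged):
    c = cell(lat, lon)
    miles[c] += seg_miles
    if disengaged:
        disengagements[c] += 1

def robotaxi_allowed(lat, lon, min_miles=50_000):
    c = cell(lat, lon)
    return miles[c] >= min_miles and disengagements[c] == 0

record_segment(37.70, -121.72, 60_000, disengaged=False)  # "Livermore"
record_segment(42.44, -76.50, 60_000, disengaged=True)    # "Ithaca"
print(robotaxi_allowed(37.70, -121.72))  # True
print(robotaxi_allowed(42.44, -76.50))   # False
```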

Exactly. I went to school in Ithaca and recall how construction workers in that area (and most of upstate NY) seemed to have little regard for keeping the public roads safe when they're working. The consequence -- there will be parts of the country in 2021+ where robo-taxis are willing to drive, and areas where they won't, based on Tesla's disengagement data. You'll request a ride on your phone and get messages like "Sure, I can take you to Livermore from your current location. Be there in 6 minutes." or "Sorry, Ithaca is off limits to me for now. Would you like to go to Cleveland instead? A wonderful town filled with such friendly people."
 
So partner with a high-volume manufacturer and start gathering. Otherwise how will they ever catch up? Just playing devil's advocate here. Are there seriously no options at all? Hard to believe, but if true, I'm for sure going to be rich!
They would have to talk a manufacturer into outfitting all their cars with the appropriate sensors. I think that would be a really hard sell because the basic pitch is "Please install these sensors in all your cars (or at least all the volume selling cars) so I can begin to program my FSD computer that may or may not work". A car manufacturer could do this in-house if they so desired, and I expect the Korean and Chinese auto manufacturers to do so at some point. I think the North American and European auto manufacturers will have to do it eventually, but it may be too late for them.
 
Don't know about sales because it's been so long, but what is wrong with the service communications? They always answer promptly (and I'm in a State where I'm not allowed to phone the service centre directly, but have to go through corporate). If it's an SC issue, I get text messages showing the progress, if it's mobile service, they call before they arrive to make sure I'm there. I really don't see how it could be better. Because this has taken place for over six years, I don't believe I'm an anomaly.

Using the app for service works well for me.

For sales, I know someone really interested in buying an X. The SA said they would call back in a couple of hours and haven't called back in days. SAs definitely seem to be hit or miss. I had good luck, but too many people do not.
 
We actually don't know what kind of scenarios they are working on now.
The presentation gave us some very, very strong indications of what they're working on now. They have a long way to go. They're still working on relatively "easy" stuff.

But the main premise of the NN approach is that there are no "difficult" problems. Every situation calls for the same solution: get the data and train the NN. As long as they can do this quickly, no problem that human drivers can easily solve stays difficult for long.

Well, given that caveat, yes! But how about the problems which human drivers have trouble with? I've named several, Karen has named others. There are solutions to these, and if we want robotaxis to be conclusively better than human drivers, we have to solve them too.

You still get the data and train the NN, but you have to know what you're trying to train it to do.

I could train an NN to be a stereotypical teenage driver. Or a stereotypical Boston driver. Or a stereotypical New Jersey driver. Or a stereotypical Florida driver. Or a really bad driver like the guy who chopped his own head off. I don't think any of this would be desirable. It would be relatively easy to do, and I personally suspect this is what Uber is doing, given that they're Uber ;-)
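
To make that "get the data and train" loop concrete, here's a toy version: a one-parameter "driver" repeatedly retrained on the cases it handles worst. Nothing Tesla-specific; everything here is invented for illustration:

```python
import random

random.seed(0)
target = lambda x: 2.0 * x  # the "correct" driving response
w = 0.0                     # our one-parameter "neural net"

for round_num in range(5):
    # Fleet encounters situations; keep the ones the model gets most wrong.
    situations = [random.uniform(-1, 1) for _ in range(1000)]
    hard = sorted(situations, key=lambda x: abs(target(x) - w * x),
                  reverse=True)[:100]
    # "Retrain": nudge the model toward the least-squares fit on hard cases.
    fit = sum(target(x) * x for x in hard) / sum(x * x for x in hard)
    w += 0.5 * (fit - w)
    err = sum(abs(target(x) - w * x) for x in situations) / len(situations)
    print(f"round {round_num}: w={w:.3f}, mean error={err:.3f}")
```

The point of the toy: the loop is identical every round; what changes is which data you feed it. Which is exactly why you have to decide what "good driving" means before you train.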
 
A simple answer might be: there are well over 400 different types of vehicles sold in the US listed on Kelley Blue Book. They all drive differently, and the sensors would be randomly placed: a great way to get total gibberish.

Tesla has around 450,000 fairly identical vehicles that behave fairly identically, with a few dozen sensors each, all placed almost identically and reporting with not a lot of +/- in placement.
(When target shooting, you were very precise, all shots close together, but the target was _over there_.)
This removes a lot of variables.

Tesla apparently has a camera-agnostic NN, so if Google developed such a network, the placement of the cameras would be less important. Placement and orientation could likely be deduced from the resulting camera footage. That being said, it's far easier to validate your data when it's more consistent ...
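
For what it's worth, here's a minimal sketch of why known (or estimated) placement can be normalized away: for a pure rotation offset, footage from an off-axis camera can be warped to a canonical viewpoint with a single homography. The matrices below are illustrative:

```python
import numpy as np

def rotation_homography(K_src, R_src_to_dst, K_dst):
    # Pixel map from a rotated camera to the canonical one (pure rotation).
    return K_dst @ R_src_to_dst @ np.linalg.inv(K_src)

# Shared intrinsics for a 1280x720 camera (illustrative values).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0,    0.0,   1.0]])

theta = np.radians(5.0)  # camera yawed 5 degrees off canonical
R = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
              [0.0, 1.0, 0.0],
              [np.sin(theta), 0.0, np.cos(theta)]])

H = rotation_homography(K, R, K)
px = np.array([800.0, 400.0, 1.0])   # a pixel in the yawed camera
mapped = H @ px
print(mapped / mapped[2])            # where it lands in the canonical view
```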
 
Right, but for each 9 you add, the odds of collecting an instance are 1/10 what they were before. So you need a fleet experiencing 10x as many events to collect sufficient images (a constant) at the same rate.
Another reason for TN, more miles of data...
True, but you are not processing the redundant data, and every week somewhere between 5K and 10K data-gathering cars are being added, so you're getting more input all the time. So it may not be quite exponential, but it's certainly a good deal better than linear.
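
Roughly, with a fleet that grows linearly, cumulative miles grow quadratically. A quick back-of-envelope with assumed numbers:

```python
# Illustrative only: ~450K cars today, 5K-10K added weekly, and an
# assumed 250 miles per car per week.
cars = 450_000
adds_per_week = 7_500        # midpoint of the 5K-10K range above
miles_per_car_week = 250     # assumption

total_miles = 0.0
for week in range(1, 53):
    total_miles += cars * miles_per_car_week
    cars += adds_per_week
    if week % 13 == 0:
        print(f"week {week}: fleet={cars:,}, cumulative miles={total_miles/1e9:.2f}B")
```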
 
They dropped MobilEye and started from scratch in mid 2016. That's less than 3 years to get here.

I'm not 100% positive they will be done in 2 years, but 3 years seems like a possibility.
I will be shocked if they are able to attain FSD in 2020, extremely happy if they achieve it in 2021, and still happy if it comes in 2022.
 
Yeah, that just shows that you didn't understand things as well as me. Those were never that hard and I never thought they were that hard.

People who don't have the background are often inaccurate in their assessments of what's hard and what's easy. It's a thing.
Gotta agree with the other post that this is bordering on arrogant.

Landing rockets on barges is indeed hard. From a number of different aspects:

- The physics dealing with recovering from orbital velocities

- The mechanical engineering challenges associated with engine design/throttling for descent

- The system design issues associated with rapid reusability

- Materials science challenges associated with several of the above

- The challenges associated with managing descent trajectories in 3D space to hit a floating platform

- Doing all of the above on a comparatively small budget, and at rapid pace


In the big picture, none of those were obviously insurmountable. But to suggest it "wasn't hard" is insulting to a group of incredibly talented people who generated a lot of sweat and sleepless nights building that "How not to land a rocket" blooper reel.
 
So I just skimmed 5 analyst reports on yesterday's presentation. Every single one was skeptical of Elon's insistence that LIDAR was not useful. None of them provided justification for their belief, other than "everyone else is using LIDAR". I swear, financial analysts wouldn't recognize a game changing technology if it was handed to them on a silver platter.

No one thought FSD was going to happen next year. The test drives had issues:

"Throughout the ride, the car performed relatively well but experienced a few rough maneuvers and had one disengagement where it failed to recognize cones blocking off some parked vehicles on the side of the road."

"Tesla demonstrated true Level 4/5 capable autonomous vehicles which, in our experience, traveled for more than 20 minutes over suburban and highway roads with absolutely no human engagement."

"While the vehicle was hesitant at times (RVs parked on the side of the road), took a turn or two tight, and was tentative in a collector merge, we equate the experience to the APTV rides in Las Vegas at CES 2017 (which was also a mix of on-and-off highway) —though a key differentiation is a lack of LIDAR or V2X installed/being deployed. Further, the vehicle was slightly more aggressive than those of OEM peers, but still did show some signs that improvement is needed (like all other test rides we have experienced). Altogether, we thought this was a positive showcase for where the company’s technology is currently, particularly the ability to navigate off-highway and recent internal push to enhance this development."

"Our ride handled both highway and off-highway roads well, though it was very cautious and at times hesitant to change lanes (even with no other vehicles nearby). This led to a somewhat jerkier ride."

"The biggest differentiator, in our view, is that Tesla conducted a complete, fully autonomous 20-minute test drive including on-highway and off-highway suburban streets without Lidar. Was it perfect? No. Did the driver have to manually intervene/disengage autopilot? On our drive, yes - one time when the car was about to miss a right-hand turn on a ramp. Did I feel safe? Yes. Would I want to fall asleep behind the wheel while in autopilot? Not yet."

These five analysts were bears, with low TSLA price targets. Basically, if Tesla was trying to attract valuations that other Autonomy companies are getting, it doesn't seem to have convinced these guys.

The thing these guys are missing (although some did pay lip service to this observation, but then didn't follow up on the implications) is that all other autonomy systems rely on very expensive and ugly LIDAR. For the past four years, Tesla has been the leader in autonomous driving, and based on yesterday's presentation, it's only going to get better for Tesla relative to the competition. They have a 7-year lead (and counting) on EV tech, and a growing lead in autonomous driving for the average Joe who wants to make long, boring drives or commuter traffic jams better. Analysts are completely missing the point that EV+AP makes any Tesla a much more valuable car, regardless of when FSD will arrive.

If I see anything from the more bullish analysts, I'll summarize them.
 
Using the app for service works well for me.

For sales, I know someone really interested in buying an X. The SA said they would call back in a couple of hours and haven't called back in days. SAs definitely seem to be hit or miss. I had good luck, but too many people do not.
I didn't use an SA. Just went to the website and ordered. They called when the car was ready. I got the financing and picked up the car.
 
They would have to talk a manufacturer into outfitting all their cars with the appropriate sensors. I think that would be a really hard sell because the basic pitch is "Please install these sensors in all your cars (or at least all the volume selling cars) so I can begin to program my FSD computer that may or may not work". A car manufacturer could do this in-house if they so desired, and I expect the Korean and Chinese auto manufacturers to do so at some point. I think the North American and European auto manufacturers will have to do it eventually, but it may be too late for them.
Google - $
Car Mfg - slumping sales, competition, horses.

Seems possible. Look at the Pacifica with all that Lidar hardware on it.
 
It seems to me like some people here don’t get what’s at stake with FSD, even though Elon showed it pretty clearly with the presentation at the end.

The company that reaches level 5 autonomy at scale will own the market for decades, just like Microsoft with operating systems, Google with search, or Amazon with ecommerce. And because it is the biggest market in the world, it will surely make the company that wins it the most valuable company in the world by a big margin.

Think about it: if just 10% of all miles driven in the US happen on the Tesla platform, that's $200B in revenue, on the order of the revenue of Apple or Amazon, just from the US!
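
The back-of-envelope behind that figure (US vehicle-miles traveled is roughly 3 trillion per year; the per-mile price is my assumption):

```python
us_vmt = 3.2e12          # US vehicle-miles traveled per year (roughly)
tesla_share = 0.10       # 10% of miles on the Tesla platform
price_per_mile = 0.625   # assumed robotaxi revenue per mile, in dollars

revenue = us_vmt * tesla_share * price_per_mile
print(f"${revenue / 1e9:.0f}B per year")  # -> $200B
```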

So I believe Tesla should put all their effort into it. Not just because of the scale of the market, but also because if they don't win it, someone else will. And it doesn't matter if it takes 2, 5 or even 10 years: once it happens, nobody will be buying electric cars that don't drive themselves...
 
The short of it is that you have to train neural nets as though they're idiots just looking for any excuse to mess up. They're not going to understand something that's not like things they've seen before. They can't "reason it out". They're pattern detectors.

This deserves some further exposition. Our brains have several different types of abilities: one is pattern detection (like the neural nets; "neural nets" is a bizarre term of art). Another is, well, reasoning -- logical reasoning. In computing, the attempts to do reasoning were called "expert systems" (another bizarre term of art). It has been suggested that for good AI we need to actually link neural nets and expert systems together, perhaps with other specialized units as well.
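
A toy illustration of that linkage: a stand-in "pattern detector" produces confidences, and a rule layer applies hard, auditable logic on top. All names and thresholds here are invented:

```python
def nn_detect(image):
    # Stand-in for a trained pattern detector: label confidences.
    # (A real net would compute these from the image.)
    return {"red_light": 0.97, "pedestrian": 0.12}

def expert_rules(detections, speed_mph):
    # Hard logic layered on top of the detector -- the "expert system".
    if detections.get("red_light", 0.0) > 0.9:
        return "stop"
    if detections.get("pedestrian", 0.0) > 0.5 and speed_mph > 10:
        return "brake"
    return "proceed"

print(expert_rules(nn_detect(image=None), speed_mph=35))  # -> "stop"
```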