Welcome to Tesla Motors Club

2017 Investor Roundtable: General Discussion

Status
Not open for further replies.
I think the fact that he did reiterate at TED this close to the deadline is telling.

The closer we get to the deadline without a revision, the lower (exponentially so) the probability of missing it becomes.
That wasn't the case with the Model X release. We weren't told anything about the production delays but rather learned about it in hindsight during CCs. That was a frustrating time for investors. I'm guessing at this point, if Tesla is way behind for the FSD event in December, as the evidence from AP2 performance suggests, they would not reveal that. The event would still take place to some extent, possibly highly coordinated with software to achieve the appearance of FSD technology, but there would be no imminent release of FSD software, or even level 3 software. Hope I'm totally wrong on this.
 
Thanks! Might we assume about 200 passengers per a typical flight? Or is there a better number?

I'm trying to get a feel for what impact such a hyperlood could have on jet fuel demand. I thoroughly expect jet to be the last fuel that is needed in quantity after EVs clean up ground transport, so anything that cuts into jet fuel demand is pretty important.
That is a heavily used route, so load factors are typically around 90%. The most common aircraft on the route is the Embraer 190, a 104-seat type for AA and a very efficient airplane. There are roughly 16 flights daily in each direction LGA-DCA, with a typical fuel burn of, say, 700 gallons and about 90 passengers per flight. That works out to about 2,880 passengers daily at roughly 7¾ gallons each, or about 22,400 gallons per day.

That is very rough but probably good enough for this purpose.

Amtrak could become more efficient too, but if the Hyperloop works it will demolish any other choice, assuming the price is comparable. Could it actually happen?
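For anyone who wants to check the arithmetic above, here is a quick back-of-the-envelope calculation. The flight count, fuel burn and load figures are the rough assumptions from the post, not measured data:

```python
# Back-of-the-envelope jet fuel demand for the LGA-DCA route,
# using the rough figures quoted above.
flights_per_day = 16 * 2           # ~16 daily flights in each direction
fuel_per_flight_gal = 700          # typical burn for an E190 on this short hop
passengers_per_flight = 90         # ~90% load factor on 104 seats

daily_fuel_gal = flights_per_day * fuel_per_flight_gal
daily_passengers = flights_per_day * passengers_per_flight
gal_per_passenger = daily_fuel_gal / daily_passengers

print(daily_fuel_gal)                 # 22400
print(daily_passengers)               # 2880
print(round(gal_per_passenger, 2))    # 7.78
```

So a Hyperloop that captured the whole route would displace on the order of 22,000 gallons of jet fuel per day, at just under 8 gallons per passenger.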
 
They could certainly train the software specifically for the route of the SF-NY demonstration and show it to the world. That is a much simpler task than showcasing a "generic" FSD that can be used anywhere.
 

My guess, and this is completely a guess based on what he said at TED, is that they may incorporate a demo of dynamic routing during the trip IF they are comfortable enough with the system.
 
That is the theory but the relative weakness of AP2 performance at this point, already after 8+ months since release, and missing Musk's target for parity by several months, lowers confidence in the execution of FSD, unless Tesla somehow wants to keep the perception that they are struggling with this. If it was moving along smoothly, wouldn't AP2 be performing better than AP1 by now?

The way I understand it, the AP1-to-AP2 transition was required because Mobileye and Tesla had a falling out. It started with the accident where the car went under a tractor trailer: Tesla blamed Mobileye, and Mobileye was angry that Tesla was building its own solution. Tesla wanted to keep using the Mobileye solution for AP features while developing its own, but that became impossible, so Tesla had no choice but to reinvent it from scratch. They had some capabilities from nVidia, but those were only demo-like features, so Tesla started from almost nothing to get to where they are today. If you just think about that for a moment, you understand how much the fallout jammed up Tesla. I can't blame either party: Tesla was Mobileye's most impressive showcase, and Mobileye probably felt betrayed by Tesla wanting to go it alone with FSD, which would eventually have displaced Mobileye completely in the Tesla fleet.

Now that we have the history behind us: I believe the AP2 and FSD solutions are completely different. AP2 is basically just TACC with the added capability of staying within the lanes. To do this, Tesla uses the vision system to see the lane markings and the edge of the road; that is the entirety of the solution. They have since added parking and lane change, but at its core it's a single-purpose solution that had to be rushed into production, which probably meant all hands on deck. My contention is that FSD has been worked on for more than a year. They didn't just select nVidia the day Mobileye backed out, and they showed a demo of FSD in Oct. 2016 that was made some time before, which means they had already done enough work to complete that demo. They could have leveraged some of nVidia's existing solution, but nVidia doesn't have a fully functional FSD solution either, so they must have done some work of their own already: a proof of concept that probably focused on localized high-def mapping and machine learning trained on the demo area, meaning they took images from the path they were going to travel to teach the machine. That is a very basic proof of concept, but it shows they can build map tiles and paths and teach the machine what the signs and stop lights in the area look like. The car then just follows the path in the map and stops for signs, lights and obstacles. It's a bit of smoke and mirrors, but most proofs of concept are. Their solution could have changed radically between then and now as they brought in more and more talent and improved the systems and processes for crunching the image and video data. They may be no further along today in terms of repeating the Oct. 2016 demo, but they could be way ahead in terms of the infrastructure and systems to process billions of images and create HD maps.
There was just an article on Electrek and other sites about a couple of ex-Tesla guys who have been working on a high-def mapping solution that uses only a dash-mounted cell phone camera, and they claim 10 cm accuracy. They went to Uber drivers, paid them between 1c and 5c per mile for data, and have mapped something like 80% of the country. That is without any kind of radar, which should help get down closer to 5 cm, which should be good enough. You cannot tell me that Tesla does not have high-def mapping closer to 5 cm accuracy at this point, plus a system to keep the maps up to date as changes occur. Maybe the brain drain has hurt them, but this stuff is getting easier to do by the day, not harder: processing power is cheaper and cheaper, talent is more and more common, and tools are more readily available. Not to dismiss how talented the folks are that Tesla needs to pull this off, but not everyone is going to go start their own self-driving company.

I am no expert, but I believe there are two distinct paths, with some overlap where the vision system recognizes lane markings and the edge of the road. I truly believe that, for the most part, AP2 is damage control rushed into production, while FSD is a much more complex problem that requires several orders of magnitude more preparation before the real work can even start. I think the first thing we will see is HD maps, and the second will be stop sign and stop light recognition; both give you the most bang for the effort. HD maps make seeing the lane markings only a backup to the maps, and seeing stop signs and stop lights could save lives when combined with AEB. The maps would also help AP2 work in heavy snow where the car cannot see the lanes, though the system would need to be able to see other landmarks like signs, K-rails and other stationary/permanent objects (there is a good video on the Electrek article that shows how this works). None of this would require any regulatory approval, just validation that the vehicle is following the paths and not emergency braking at stop lights/signs. Validation is not an easy task at all either, but I think it is something they are already doing with AP2 today.
 

This would tell me that they have pretty good 3D maps for most of the US, though maybe not for every single road in the country. It would also show that they can see red lights and stop signs and know how to navigate them, not just react in an emergency braking situation.
 
Wow! Thanks. I'll take this discussion back to the Shorting Oil thread.

Shorting Oil, Hedging Tesla
 
I don't think parity is of much importance to Tesla right now, in the sense that they are really more interested in much bigger things. Why worry about it if you are going to drop the mother of all updates soon? Either that or they really are struggling, but if that were the case I don't think you'd see Musk reiterating the coast-to-coast trip, and I think you'd see the head of Autopilot getting fired rather than leaving to start his own thing.
While I'm all for a positive attitude and hope they have significant improvements up their sleeve, please all, stop with the delusional references to the 6B miles.

It isn't the 6B fleet miles coming up,
nor is it the total autopilot miles,
nor is it the AP2 miles.

It's the 6B miles driven by level-5-capable vehicles while still being supervised by humans, needed to prove their implementation, and that count only starts when this capability is deployed sometime in the future!
I think maybe there is a misunderstanding here. The 6B isn't delusional; it's the game plan. My understanding is that the mileage has been adding up for a while, whether AP is being used or not, and it started some time ago. The plan is about level 4 first, and I'm not sure how soon after that level 5 will arrive. My understanding is they already pretty much have the capability for level 4; what they need is the accumulated mileage to prove safety beyond a doubt before it can be released to the cars. Tesla has now 1.3 billion miles of Autopilot data going into its new self-driving program
 
I expected both Rives to depart after the acquisition. I agree with the video that maybe Elon wasn't pleased with how SolarCity's business was being run. Speculation on the solar roof being late, we won't know until Aug. 2.

Another thing we may learn more about on August 2 is whether the Rive brothers cashed out early on their $17.5 million each in Solar Bonds.

On April 13, 2017, Tesla redeemed, 10 months early, SCTY's 6.5%, 18-month Solar Bond Series 2016/13-18M, paying full interest to holders. The early repayment was to avoid violating the unencumbered liquidity covenant of SCTY's Secured Revolving Credit Facility. Electrek later reported that Tesla had advised them that Elon had exchanged his $65 million of that Series for a promissory note. IIRC there was no mention of whether the Rive brothers also exchanged their bonds for notes.
 
Thanks for the response. Perhaps they really do have a lot more going on beyond the AP2 system. Any thoughts on why something as simple as lane keeping on curving roads is too challenging at this point for AP2? My AP1 does fine on curving roads.
 

And it might not be 6B miles. That particular number seems to have started with Elon in the Secret Master Plan Part Deux.

We expect that worldwide regulatory approval will require something on the order of 6 billion miles (10 billion km). Current fleet learning is happening at just over 3 million miles (5 million km) per day.

I'm taking the fleet learning to be a reference to the Tesla vehicles with AP hardware installed (EAP enabled or not), that are collecting driving data and watching over our shoulders while we're driving.


The thing is, he provides no evidence or support beyond his own statement that 6B is the right number to achieve worldwide regulatory approval. Nor do I take what he said as the standard that will actually achieve the outcome; rather, I take it to be directionally accurate, his view of the order of magnitude of the effort as of the time of writing (July 2016).


The reality is that nobody, including Elon, knows today what level of activity and fleet learning will be needed to achieve worldwide regulatory approval and initial implementation. We can guess, as Elon has done.

My guess is he's short by 1 or 2 orders of magnitude (60 or 600B miles).

One reason I'm guessing that, is that if you study vehicle accident statistics and other driving metrics, when units are needed, the standard unit of measure today (US) is units of 100 million miles driven. So deaths for 100 million miles driven, as an example. If 100 million miles is a standard unit of measure, then 6B miles represents 60 units of work; thus my guess that Elon is 1 or 2 orders of magnitude short of the fleet learning activity needed for FSD.

(Sidebar - some of the numbers per 100 million miles driven are getting small enough, that I'm expecting the unit of work to change to billion miles driven. If that happens, then 6B miles represents 6 units of work. Oh - and if we're thinking the autonomous drivers need to be an order of magnitude safer, then that further sounds to me like 6 units of work just isn't enough to prove it out. It's enough to say the results are encouraging, but that doesn't sound like enough to me).
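To make the unit-of-work guess concrete, here is the same arithmetic spelled out. Both inputs are Elon's figures from the Master Plan quote above, not mine:

```python
# How many 100M-mile "units of work" does 6B miles represent,
# and how long would it take at the quoted fleet-learning rate?
target_miles = 6e9        # Elon's worldwide-regulatory-approval estimate
miles_per_day = 3e6       # fleet-learning rate quoted in July 2016
unit_of_work = 100e6      # US standard unit: per 100 million miles driven

units = target_miles / unit_of_work
days_needed = target_miles / miles_per_day

print(units)              # 60.0 units of work
print(days_needed)        # 2000.0 days, i.e. roughly 5.5 years at that rate
```

Of course the daily rate rises as the fleet grows, so the calendar time shrinks, but the 60-units figure is what drives my "1 or 2 orders of magnitude short" guess.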

What I love about Tesla's approach is that I don't need to wait 2-4 years, or 2-4 decades, to start gaining some of the benefits today. I can get driver assistance (adaptive cruise, lane keeping) in a usable and improving form now. The work that gets us closer to FSD will also improve my fancy cruise control along the way (and my use of the fancy cruise control along the way will get us all to FSD sooner).

In fact, it's the approach that looks to me like it makes FSD economically achievable. The competitors working on this with car fleets measured in the tens or hundreds look like approaches that won't, economically, be able to get to the end result. They are all going to have to find a way to do what Tesla is doing today: putting cars on the road with some of the functionality and getting humans to use it, so the data can be collected and fleet learning can occur. For free (or better, the customers pay the company instead of being paid by the company).

I don't know of anybody taking a rules based approach to FSD - I believe everybody is doing some form of supervised learning, and that means training data to train and validate the models.
 

The people programming it have never designed a Bridgeport milling machine, or taken classes about how you do it.

With a milling machine there are two cylindrical rails that the table travels on as you advance the piece through the cutter. There is no way to hold a tolerance between two rails over the entire travel of the table. The only way to get a precision path is to use a circular hole riding closely on one rail, and a slot riding the other. The slot keeps the table level but does not require the distance between the two rails to be precise.

Autopilot should work the same way. Use the dashed center lines to construct a smooth path line, and register the vehicle at a fixed distance off that line. No ping-pong.

Adjust the target distance off the center line only with "out of control limit" changes in lane width.

The software does not demonstrate as much cross training and breadth of experience as I would expect.
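The "follow one rail" idea above can be sketched in a few lines: fit a smooth curve through the detected centerline points, then hold the car at a fixed lateral offset from that curve, rather than re-centering between two independently noisy lane edges. This is purely illustrative, my own toy construction, and not Tesla's actual implementation:

```python
# Toy sketch of the milling-machine idea: follow one smooth reference line
# (the dashed centerline) at a fixed offset, instead of averaging two
# noisy lane edges. Illustrative only; not Tesla's code.
import numpy as np

def smooth_centerline(points, degree=3):
    """Fit a low-order polynomial y = f(x) through noisy dash detections."""
    x, y = points[:, 0], points[:, 1]
    return np.poly1d(np.polyfit(x, y, degree))

def target_lateral_position(path, x, offset=1.8):
    """Hold a fixed lateral offset (metres) from the centerline at x."""
    return path(x) + offset

# Simulated noisy detections of a gently curving centerline
rng = np.random.default_rng(0)
xs = np.linspace(0, 50, 20)
ys = 0.01 * xs**2 + rng.normal(0, 0.05, xs.size)

path = smooth_centerline(np.column_stack([xs, ys]))

# Desired lateral position 30 m ahead: smooth, no ping-pong
print(round(target_lateral_position(path, 30.0), 2))
```

Because the offset is constant, small jitter in the detected dashes is absorbed by the curve fit instead of being steered against, which is exactly the "no ping-pong" behavior described above.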
 
This is how a neural network behaves. It's a black box full of weighted values representing variables whose combined output leads to a decision. Because the output is a "fuzzy" value, you get imperfect behavior that (hopefully) improves with more training. It's also why the driving behavior changes every release, getting better or worse in different places: those weighted values change as the system is fed more data (i.e. learns). They may also be changing the shape of the neural network, which means it needs to be entirely retrained with whatever data they've stored or acquired. And now we don't know what changes Karpathy has made.

It's much easier to hand-code an algorithm to stay centered between two lines, but that doesn't scale to more complex behaviors. I don't believe Mobileye's solution used an NN for the driving portion; it was more of a hand-coded solution. Their chip was a power-sipping 2.5W...
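The "weighted values" point can be shown with a toy single neuron: a small nudge to the weights, as happens during training, can flip the decision for the exact same input. All numbers here are made up purely to show the mechanism:

```python
# Toy illustration of "a black box of weighted values": one sigmoid neuron
# whose decision flips after a small training update. Numbers are invented.
import math

def neuron(inputs, weights, bias):
    """Weighted sum squashed to (0, 1); output > 0.5 means 'act'."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

features = [0.9, 0.2, 0.4]   # e.g. lane-mark confidence, curvature, ...

before = neuron(features, [0.3, -1.0, 0.3], -0.2)   # weights before an update
after  = neuron(features, [0.5, -1.0, 0.3], -0.2)   # one weight slightly changed

print(before > 0.5, after > 0.5)   # False True: the decision flipped
```

Scale that up to millions of weights being adjusted every training run, and you get exactly the release-to-release behavior changes described above.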
 