Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

No one wants to confront the reality of what roughly $600 million worth of Model 3s in transit will do to the balance sheet and cash flow.

Plus half of S/X sales missing...

I warned about it:

What is hurting Q1 most is half of S/X sales missing, and over 10k units in transit. That's a hit of over $2b in missing revenue, and at least a $500m hit to GAAP income.

A lot of good things need to happen in Q1 for those two strong forces to be countered.
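For a rough sense of the arithmetic behind those numbers, here's a back-of-envelope sketch; the unit counts, ASPs, and margin are my own placeholder assumptions, not Tesla's reported figures:

```python
# Rough back-of-envelope for the Q1 headwinds described above.
# Unit counts, ASPs, and the margin are placeholder assumptions, not reported figures.

M3_IN_TRANSIT_UNITS = 10_000   # "over 10k units in transit"
M3_ASP = 60_000                # assumed ASP, consistent with the ~$600M in-transit figure
SX_UNITS_MISSING = 13_000      # assumed: roughly half of a typical ~26k S/X quarter
SX_ASP = 100_000               # assumed S/X average selling price, USD
GROSS_MARGIN = 0.25            # assumed blended gross margin on these vehicles

missing_revenue = M3_IN_TRANSIT_UNITS * M3_ASP + SX_UNITS_MISSING * SX_ASP
gross_profit_hit = missing_revenue * GROSS_MARGIN

print(f"Revenue pushed out / missing: ${missing_revenue / 1e9:.2f}B")
print(f"Approx. gross profit impact:  ${gross_profit_hit / 1e6:.0f}M")
# With these assumptions the revenue gap lands near the ~$2B figure above,
# and the profit hit near the ~$500M GAAP estimate.
```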

Not advice! :D
 
A simple answer might be:
There are well over 400 different types of vehicles sold in the US listed on Kelley Blue Book.
They all drive differently, and their sensors would be placed differently from model to model -- a great way to get total gibberish.

Tesla has around 450,000 fairly identical vehicles that behave fairly identically, with a few dozen sensors each, all placed in nearly the same spots, so there is not a lot of +/- in where the readings come from.
(When target shooting, you were very precise -- all shots close together -- but the target was _over there_.)
This removes a lot of variables.

Also bear in mind that Tesla is capturing throttle position, braking, steering angle, vehicle speed, the speed of each wheel, and how all of these change and interact as a human driver navigates a tricky bend, together with synchronized video from 8 cameras. This cannot be retrofitted.
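As a toy illustration of what one synchronized sample of that kind of fleet telemetry might look like (the field names and structure here are invented for illustration, not Tesla's actual logging schema):

```python
# Toy sketch of a synchronized fleet telemetry sample. The field names and
# structure are invented for illustration; Tesla's actual logging format is not public.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TelemetrySample:
    timestamp_ms: int            # common clock shared with the camera frames
    throttle_pct: float          # accelerator pedal position
    brake_pct: float             # brake pedal position
    steering_angle_deg: float    # steering wheel angle
    speed_kph: float             # vehicle speed
    wheel_speeds_kph: List[float] = field(default_factory=list)  # one per wheel
    camera_frame_ids: List[int] = field(default_factory=list)    # 8 synced frames

# One sample as a human driver eases through a tricky bend:
sample = TelemetrySample(
    timestamp_ms=1_554_000_000_000,
    throttle_pct=12.0,
    brake_pct=0.0,
    steering_angle_deg=-8.5,
    speed_kph=43.0,
    wheel_speeds_kph=[42.8, 43.1, 42.9, 43.2],
    camera_frame_ids=[101, 102, 103, 104, 105, 106, 107, 108],
)
print(sample)
```

Because the sensors sit in the same place on every car, samples like this are directly comparable across the whole fleet, which is the precision point made above.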
 
A word on the FSD demo ride video that Tesla released...

Tesla claims that this is a recording of their FSD capabilities. So just for argument's sake, let's assume Tesla is lying here. Let's assume this is nothing but a geofenced, pre-planned route that they mapped out ahead of time. OK, so that means that in the worst case scenario Tesla is at least on par with what Waymo and the others are demonstrating. OK, that said, who is in the better place to move forward and bring this technology to fruition on a mass market scale?

Tesla controls every aspect of their technology. They make the chips. They make the computer the chips go in. They make the software that runs on those chips. They make the cars that those computers go in. They manage the neural net that learns from their fleet. Their system is far, far cheaper than all the others relying on LIDAR as their primary source of data input. (Let's not forget that SpaceX uses LIDAR to help do the simultaneous landings of their boosters. It's not like Elon doesn't understand LIDAR. He knows it very well. He also knows its limitations.) They already have hundreds of thousands of cars in the wild right now providing data to their neural net. Who is better positioned?

Game...
Set...
Match ...

...and this is the WORST case scenario. Oh wait... Elon time. Tesla never delivers on time. OK, I'll give you that too. So add a year to his timelines. That still puts FSD in the hands of owners in 2021. Where will Waymo be then? Still trying to figure out how to get LIDAR to work in all situations. Still buying their hardware from third parties. Still buying their cars from third parties. Still trying to figure out how to mass-market their technology, let alone at a profit.

Yeah, Game...
Set...
Match...

Dan



No offense, but I don't want to assume, even for argument's sake, that Tesla is lying about their FSD. The very reason Tesla brought Bannon and Karpathy into the limelight is that people like that are not capable of lying about their life's main work.

I have worked (in other parts of computing) with similarly world-class people, and I can imagine how, say, financial analysts with a different background would be completely unconvinced by reasoning such as mine.

But I don't think that is a reason to consider the (hypothetical) scenario where Tesla is lying to its investors about its progress.

Because they are not.
 
I think they will start in a couple of cities but quickly expand.

I agree that a boil-the-ocean approach won't work. They won't wait for 'real' FSD.

BTW, I won't be surprised if Waymo beats them to market, like the Bolt did.
Well, about that... Waymo already went live. Of course, that deafening thunder you are not hearing is the sound of all those people who need a cab service that can only pick them up from fixed stops (you know, like a bus), follow certain routes (you know, like a bus) and drop them off at fixed stops (... you get the point). The sole advantage to the bus is you don't have to share space with the hoi polloi.

The Bolt beat the M3 to market, sure. And it had no impact on the sale or profitability of the M3.

Waymo has beaten Tesla to the taxi market (well, to the not-a-bus market, anyway). I don't expect it to have any impact on Tesla's "ride sharing" network (if that ever comes to pass) because its very design means that it cannot scale. And, in any case, it is only for those who will pay a premium to avoid a bus, but won't use a taxi or similar service. Meh.

[As @Nocturnal pointed out, Waymo's service isn't actually autonomous. I can't believe I left that out. But apart from that implication, what I wrote was accurate: I was contrasting "bus service" with "taxi".]
 
I can tell you what would have changed my opinion. A presentation on the driving policy part of the stack which demonstrated development of some more difficult part of driving -- something humans frequently get wrong. Before the presentation, I actually thought they were further along than they are, believe it or not...
I think the problem they have with that is twofold:
One. They would have to set something up to replicate that kind of condition, in which case all the detractors would yell "staged" and "faked".

Two. If they had done an obvious thing like stopping before entering the intersection from a parking lot -- which most drivers don't do, at least around here -- there would have been cries of "it's pokey" and "it drives like grandma".

Those who went for a ride were impressed. The video wasn't as impressive, but it's very hard to make an exciting video of driving. After about ten seconds, you skip to the end (unless you are analyzing it frame by frame).
 
Less than 20 meters of the side roads were visible.

The car was driving on a main road, had the right of way at the first intersection, and was not in a residential area.
Even traffic law doesn't require drivers to anticipate every possible high-speed object arriving from behind cover.

So what's your point? The Tesla in the video was driving both safely and practically in every possible way.
 
Well, about that... Waymo already went live. [...] Waymo has beaten Tesla to the taxi market (well, to the not-a-bus market, anyway). I don't expect it to have any impact on Tesla's "ride sharing" network because its very design means that it cannot scale.
And it still has a driver sitting in the seat, right? And as I've said multiple times, for Waymo to beat Tesla they need to get FSD working first AND get that tech into a sizable number of vehicles. That's a multi-year process. Build their own cars? Good luck. Buy them? Better hope GM can make enough cars for you and is willing to sell them -- good luck. License the tech to GM? It will take them a year or so just to get the hardware designed into their models.
 
You didn't read my statement. My statement was correct.

There are areas where computers are helpful in medicine (I'm not sure "AI" is a meaningful word), and that's one of them. A much bigger one, which should be implemented immediately, is a database where doctors can punch in all of a patient's symptoms and have every matching diagnosis spit out. Leave it to the doctor to filter through the possible diagnoses; a computer is much better at coming up with all the possibilities. (A computer might even be better at ranking them by likelihood based on what it knows... but then when an actual doctor looks at the list and talks to the patient, they may go "Yeah, we know the #1 candidate is wrong because of this information the computer didn't have" and move on to #2.)

AIs aren't going to REPLACE DOCTORS, which is what the other person I was responding to was claiming. We flat out don't know enough about the human body yet. You can't train a neural net or an expert system when you don't really know what you're doing either; "computerized exploratory science" is in its infancy. They might replace specialists in some fairly cut-and-dried areas.

I am not going to go into the medical stuff I've dealt with in my life. None of it could have been handled by computers at this point. Much of it doesn't even have proper diagnostic names.

The cited "AI" example is specifically in one of those areas where the doctors have trouble coming up with a list of possible diagnoses, because recalling a long list is something humans are extra bad at. A competent combination of computer and doctor would do even better.

From the article:
"AI can reduce workloads for doctors and help them keep improving their skills. It would function like a GPS, while human physicians remain behind the wheels."

Yes, this is the way forward. Computer-*assisted* doctors.
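A minimal sketch of the kind of lookup being described, assuming a small hand-written symptom-to-diagnosis table (the conditions and scoring are invented for illustration, not a real clinical knowledge base):

```python
# Minimal sketch of a symptom -> ranked-diagnosis lookup of the kind described
# above. The tiny knowledge base here is invented for illustration; a real
# system would be built and validated by clinicians.

KNOWLEDGE_BASE = {
    "influenza":     {"fever", "cough", "muscle aches", "fatigue"},
    "strep throat":  {"fever", "sore throat", "swollen lymph nodes"},
    "mononucleosis": {"fever", "sore throat", "fatigue", "swollen lymph nodes"},
    "common cold":   {"cough", "sore throat", "runny nose"},
}

def rank_diagnoses(symptoms: set[str]) -> list[tuple[str, float]]:
    """Return every diagnosis matching at least one symptom, ranked by the
    fraction of that diagnosis's known symptoms which are present."""
    scored = []
    for diagnosis, known in KNOWLEDGE_BASE.items():
        overlap = symptoms & known
        if overlap:
            scored.append((diagnosis, len(overlap) / len(known)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# The doctor punches in the patient's symptoms and filters the ranked list,
# discarding candidates ruled out by information the computer doesn't have.
print(rank_diagnoses({"fever", "sore throat", "fatigue"}))
```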

They say that because they do not want to scare doctors. Many general practitioners' actions and diagnoses can be defined by hard-coded if/then statements and have already been replaced by computers (e.g. pharmacy drug-interaction checks, symptom checkers, etc.). X-ray/CT scan reading is next.
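For example, a pharmacy-style interaction check really can be just hard-coded rules; here's a toy sketch (the interaction list is a placeholder, not medical advice):

```python
# Hard-coded if/then style drug-interaction check, of the kind already deployed
# in pharmacy software. The interaction list is a placeholder, not medical advice.

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "dangerous drop in blood pressure",
}

def check_interactions(prescriptions: list[str]) -> list[str]:
    """Flag every known pairwise interaction among the prescribed drugs."""
    warnings = []
    drugs = [d.lower() for d in prescriptions]
    for i in range(len(drugs)):
        for j in range(i + 1, len(drugs)):
            pair = frozenset({drugs[i], drugs[j]})
            if pair in KNOWN_INTERACTIONS:
                warnings.append(f"{drugs[i]} + {drugs[j]}: {KNOWN_INTERACTIONS[pair]}")
    return warnings

print(check_interactions(["Warfarin", "Aspirin", "Metformin"]))
```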
 
There are areas where computers are helpful in medicine... a computer is much better at coming up with all the possibilities. [...] Yes, this is the way forward. Computer-*assisted* doctors.
I'm not up for digging up a citation, but if memory serves, what I'm talking about went by the then-popular term "expert system", and it was better at ranking diagnoses. This was done years (decades? I lose track) ago, but the medical profession is a bit too... defensive... to accept the use of diagnostic aids.

Well, I'm not that close to health care anymore, so as a caveat, I wouldn't be surprised if this has filtered in through some route less direct than the doctor-omniscience-threatening computer diagnosis, simply because of how long ago the work I'm referring to was done. But I have had doctors who plainly just googled the symptoms. Oh, I'm sorry, looked it up on WebMD. Sigh. (How do I know? Because they said so. And I wondered what I was paying them for...)

In other words, this is exactly the right approach. Funny how it parallels driver assistance :)
 
You clearly don't understand what on-die SRAM brings to this solution.

To expand on this for the layman: this was general knowledge for ASIC designers back in my day. Data written to on-die SRAM usually takes about 4 clock cycles to settle. A gate at the RTL level acts like a capacitor, and the charging of such a capacitor from 0 to 1, or discharging from 1 to 0, is not a smooth curve; if you look at it on an oscilloscope, it actually oscillates between 0 and 1 quite a bit. The solution is to wait a safe minimum of 4 clock cycles for it to settle, which in theory covers 99.99999999% of cases. There's always that 0.00000000001% of the time when it is still oscillating. I experienced this once when porting the same IP to a smaller process node: the settling wait time actually increased. So this is a problem that gets bigger as new lithography processes are introduced.

Whereas numbers in a register inside the CPU can generally be trusted and used on the next clock cycle. And if we have to bring data in from off-chip, the wait is so long and inefficient that off-chip access is usually relegated to loading preset data only, like ROM contents, an image file, or a song.
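To put rough numbers on that hierarchy, here's a toy sketch; the cycle counts are illustrative assumptions about a generic design, not HW3's actual figures:

```python
# Rough, illustrative sketch of why on-die SRAM matters for an NN accelerator.
# The cycle counts below are assumptions for illustration, not Tesla's actual
# HW3 numbers (which haven't been disclosed at this level of detail).

ACCESS_LATENCY_CYCLES = {
    "cpu_register": 1,      # usable on the next clock cycle
    "on_die_sram": 4,       # the ~4-cycle settling margin described above
    "off_chip_dram": 200,   # order-of-magnitude guess for a full off-chip round trip
}

def cycles_for_macs(num_macs: int, operands_per_mac: int, source: str) -> int:
    """Total cycles to feed `num_macs` multiply-accumulates from a given memory
    level, assuming every operand pays the full access latency (worst case, no reuse)."""
    return num_macs * operands_per_mac * ACCESS_LATENCY_CYCLES[source]

if __name__ == "__main__":
    # A small convolution tile: 1M multiply-accumulates, 2 operands each.
    for source in ACCESS_LATENCY_CYCLES:
        total = cycles_for_macs(1_000_000, 2, source)
        print(f"{source:>13}: {total:>12,} cycles")
    # The gap between on-die SRAM and off-chip DRAM is what pushes accelerator
    # designs to keep weights and activations on-die whenever possible.
```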
 
Not sure that's true: the human eye has ~3 million cones (color) and ~100 million rods (grayscale). "Retina Display" resolution is not just marketing.

The optic nerve carries roughly 1-1.7 million signals.

That's a lot of input compared to Tesla's HW3 1.2-megapixel cameras, which produce 1280 x 964 frames according to @verygreen.

So the human eye, despite its asymmetric allocation, processes a lot more information than a single Tesla camera.

I suspect they meant the car has full resolution over a much wider field of view than the human. We have a fairly tight field of view, period (a little less than 180 degrees), compared to the car, and much of that isn't what we could call "full resolution"; it is useful mostly for high-speed movement detection. Almost all of our usable vision is focused into a very small field of view.

That said, while I don’t remember the specifics, I believe the cones/rods don’t work in such a straightforward way that those would equate directly to pixels.
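Putting the numbers from these two posts side by side (the retina, optic nerve, and 1280 x 964 figures come from the posts above; everything else is just arithmetic):

```python
# Side-by-side of the figures quoted in the two posts above; everything
# beyond those quoted numbers is simple arithmetic.

RODS = 100_000_000               # ~100 million rods (grayscale)
CONES = 3_000_000                # ~3 million cones (color)
OPTIC_NERVE_FIBERS = 1_200_000   # low end of the ~1-1.7 million range per eye

CAMERA_PIXELS = 1280 * 964       # ~1.23 MP per camera (per @verygreen)
NUM_CAMERAS = 8

print(f"Photoreceptors per eye:    {RODS + CONES:>12,}")
print(f"Optic nerve fibers (eye):  {OPTIC_NERVE_FIBERS:>12,}")
print(f"Pixels per camera:         {CAMERA_PIXELS:>12,}")
print(f"Pixels across 8 cameras:   {CAMERA_PIXELS * NUM_CAMERAS:>12,}")

# The retina dwarfs a single camera in raw receptors, but by the optic nerve
# the signal is compressed to the same order of magnitude as one camera's
# pixel count. The suite's edge is uniform resolution over 360 degrees,
# versus human acuity concentrated in the few degrees of the fovea.
```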
 
And it still has a driver sitting in the seat, right? [...] for Waymo to beat Tesla they need to get FSD working first AND get that tech into a sizable number of vehicles. That's a multi-year process.
Yeah, updated to make it clearer. But surely you didn't miss my dismissively relegating them to a bus service? Even if Waymo succeeds in eliminating the safety driver, that is all they are offering. In other words, not competitive. Kinda like the Bolt: it had no impact on the success of the M3, good or bad, IMO. Same with Waymo.