
Investor Engineering Discussions

So presumably this increases parts cost for the camera modules? Offsetting what's saved by eliminating the USS?
If it were a regular semiconductor package, it might even be cheaper with the newer tech: even with more transistors, if the newer process is mature enough, the die can shrink enough relative to the older one that you fit so many more dies per wafer that the total cost comes down.

However, a camera sensor has some minimum viable size, so I don't imagine the same logic applies here. The new modules might end up roughly equivalent in cost if the process node they're built on isn't particularly more expensive (the old cameras likely used an older node, if only because they had existed for some time and that was what was current when they were designed). If they're on the same or a similar node and the dies ended up larger, then they'd definitely cost more. (There's some small possibility they're on a similar node with roughly the same die size, but that seems an unlikely combination.)
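For anyone who hasn't seen the wafer math behind that first paragraph, here's a rough sketch using the classic first-order dies-per-wafer estimate. All numbers (wafer cost, die areas) are made up purely for illustration, and as noted above the effect largely goes away if the sensor's optical area sets a floor on die size.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: wafer area / die area, minus an edge-loss term."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

wafer_cost_usd = 3000.0  # hypothetical processed-wafer cost
for die_area in (40.0, 25.0):  # hypothetical "old node" vs "new node" die sizes
    n = dies_per_wafer(300, die_area)
    print(f"{die_area:.0f} mm^2 die: ~{n} dies/wafer, ~${wafer_cost_usd / n:.2f} per die")
```

With these toy numbers the smaller die yields roughly 60% more dies per wafer and a correspondingly lower cost per die, which is the effect described above.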
 
Just had another thought. I've been reading that upgrading to 5 MP cameras might require Tesla to move from CAN bus to automotive Ethernet to handle the bandwidth. What if the USS were the last remaining sensors using CAN bus, and removing them lets Tesla switch everything over to automotive Ethernet, simplifying their cabling significantly?
 
Switching to newer camera modules is something that Tesla does, and other car OEMs don't, to their detriment. The supply chain disasters that have befallen other OEMs are mostly due to their reliance on old semiconductor process nodes, because they didn't bother updating their designs. No one builds new semiconductor plants for old process nodes; today's advanced process nodes just become tomorrow's legacy nodes, so there is a more or less fixed amount of legacy capacity at any one time.

Tesla, uniquely among high-volume manufacturers, is constantly making running design changes on the line. These changes are often meant to reduce costs, but they can also be used to bypass things like chip shortages. At any rate, Tesla's multi-year camera module deal with Samsung was made to ensure supply well into the future, and thus at current or future process nodes. It also positions Tesla for the more advanced Autopilot chips they will of course introduce (again, not staying with old or existing tech). And considering a camera sensor costs something like $3, I doubt the cost difference mattered much, even if there was much of one.
 
The cameras are individually connected to the AP computer via high speed serial links.
How is your rear camera after MCU1 to MCU2 upgrade (dark?)
CAN could never support video (at useful frame rate/ resolution).
Using dedicated pairs for video adds wiring, but reduces latency and boosts total bandwidth.
1.2 MP × 4 subpixels × 10 bits × 30 fps ≈ 1.4 Gbps per camera; ~6 Gbps for a 5 MP sensor.
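For reference, that back-of-the-envelope math as a quick sketch (raw, uncompressed readout; the subpixel count and bit depth are taken from the post above, not from any published Tesla spec):

```python
def raw_camera_gbps(megapixels: float, subpixels: int = 4,
                    bits_per_sample: int = 10, fps: int = 30) -> float:
    """Raw, uncompressed sensor readout rate in Gbps."""
    return megapixels * 1e6 * subpixels * bits_per_sample * fps / 1e9

print(raw_camera_gbps(1.2))  # ~1.44 Gbps for a 1.2 MP sensor
print(raw_camera_gbps(5.0))  # ~6.0 Gbps for a 5 MP sensor
```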
 
Correct, video is fed over those connections.

However, CAN signals are also sent to these older modules to control aspects of the capture process. The assembly has a micro that pre-processes the pixels before sending them to the AP ECU. This pre-processing can be toggled on or off, and from AI Day it sounds like they are taking in the raw image sensor data now. One less reason to have a CAN signal.

There are other CAN signals that come back from the sensor, mostly diagnostic in nature, but these could be replaced if Tesla designed the micro and implemented a lightweight comms layer that embeds/encodes those signals into the video stream, eliminating another wire.

And then you start to wonder if they are thinking of eliminating the power wire and pushing power over the video signal. That might also be in the cards. It will be interesting to see the assembly breakdown in the service manual.
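To make the "embed diagnostics in the video stream" idea concrete, here's a minimal sketch. Everything in it is hypothetical (the field names, sizes, and framing are invented for illustration; nothing here reflects Tesla's actual camera link protocol): pack the diagnostic values into a small fixed-size record, ride it along with each frame, and peel it off on the AP ECU side.

```python
import struct

# Hypothetical per-frame diagnostic record: frame counter, sensor temperature
# (centi-degrees C), exposure time (microseconds), error flags.
DIAG_FORMAT = "<IHHB"
DIAG_SIZE = struct.calcsize(DIAG_FORMAT)

def append_diagnostics(frame: bytes, counter: int, temp_cc: int,
                       exposure_us: int, flags: int) -> bytes:
    """Camera side: tack a fixed-size diagnostic record onto the pixel data."""
    return frame + struct.pack(DIAG_FORMAT, counter, temp_cc, exposure_us, flags)

def split_diagnostics(packet: bytes):
    """Receiver side: recover the pixel data and the diagnostic tuple."""
    return packet[:-DIAG_SIZE], struct.unpack(DIAG_FORMAT, packet[-DIAG_SIZE:])

# Tiny fake "frame" to show the round trip.
pkt = append_diagnostics(b"\x00" * 16, counter=42, temp_cc=4150, exposure_us=1200, flags=0)
pixels, diag = split_diagnostics(pkt)
print(len(pixels), diag)  # 16 (42, 4150, 1200, 0)
```

The point is just that once you have a high-bandwidth video link, a few dozen bytes of diagnostics per frame is essentially free, so a dedicated CAN wire back from the camera buys very little.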
 
Ah yeah, a communication side channel. They could hang that and power off the new flex-PCB-style bus whenever that gets implemented. (Though Ethernet seems overkill if it only needs CAN rates.)

Side question on the bold part: I thought the raw-to-tone-map conversion was occurring in the ISP on the AP computer?
 
Is there a downside to using ethernet?
Part commonality for certain, and probably implementation cost. Higher speed usually means higher cost, and Ethernet is point-to-point, vs. CAN with multiple nodes on one bus. Technically, two-wire CAN has fault tolerance, but that shouldn't be needed.

Info on automotive Ethernet (100BASE-T1), which is two-wire, full duplex: https://www.ti.com/lit/SZZY009

There is a newer, lower-speed, multidrop, deterministic variant, 10BASE-T1S, but again the cost might be higher.
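To put those options side by side, here's a rough comparison of nominal link rates against the raw 1.2 MP camera figure worked out earlier in the thread (nominal rates only; protocol overhead and compression are ignored, and the CAN FD figure is a commonly cited data-phase rate rather than a hard limit):

```python
# Nominal link rates in Mbps (overhead ignored).
links = {
    "Classic CAN": 1,
    "CAN FD": 8,
    "100BASE-T1": 100,
    "1000BASE-T1": 1000,
}

camera_raw_mbps = 1.2e6 * 4 * 10 * 30 / 1e6  # ~1440 Mbps, from the earlier estimate

for name, rate in links.items():
    verdict = "could carry" if rate >= camera_raw_mbps else "cannot carry"
    print(f"{name:12s} {rate:5d} Mbps -> {verdict} one raw 1.2 MP stream")
```

Which is consistent with the point above: raw video needs dedicated high-speed serial links, while CAN (or low-speed Ethernet) is only ever going to carry control and diagnostic traffic.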
 
Weekend 2.3 update TL;DR: better, with several non-intervention drives through plenty of city-street complexity, but still a few safety issues with non-90-degree UPLs (unprotected lefts) and multi-lane roundabouts.

It drives much better than even 2.2: it makes mistakes in roundabouts less often and is less jerky with traffic in intersections. I've had several complete end-to-end no-intervention drives, much like Whole Mars Catalog, so I echo his confidence and conclusions.

Where it is not good: non-90-degree UPLs, as it is not using the creep wall, and I believe the lane connectivity graph is incorrectly perceiving how these lanes intersect. To be clear, the car will stop at the intersection, creep forward for visibility, correctly see vehicles coming from both sides, and visualize the lanes as intersecting, but it won't act on those vehicles as targets with intersecting paths (I have crude handheld phone video to demonstrate this).

The fix: train on these types of intersections so that the creep wall appears and is effective.

And multi-lane roundabouts are still a huge issue in my area. Nearly unusable in traffic as it attempts to change lanes TWICE inside the roundabout while coming to a complete stop. This could also be a lane connectivity graph issue.
Regarding roundabouts, Google directions picks the wrong lane for some multilane situations. If Tesla uses that...

[Google Maps screenshots of the roundabout directions]

Inner lane is left and straight. Outer is straight and right.
 
Side question on the bold part: I thought the raw-to-tone-map conversion was occurring in the ISP on the AP computer?
Sony has a proprietary tone map on the local micro, but maybe Tesla opted to license it, modify it, and tell the sensor to send unconverted tones (i.e., no modification to the sensor data). If you know for sure, that would be cool to know.
 
The TRIP chip has a goodly chunk of die area dedicated to the ISP, so I expect that's what they were using.
HW4 gets to use that space for something else, unless they keep it for user/sentry streams.
 
Wrote part of this earlier in an FSD-specific thread, but it's probably worth repeating here for discussion, as I haven't seen much commentary connecting Optimus to FSD.

Has anyone considered that Optimus might be a part of a first-principles approach to solving machine vision in a way that assists FSD on vehicles? Especially in light of the fact that the same FSD computer and cameras were installed in Optimus.

I was driving by a driveway the other day where the path was blocked by a single thin chain hanging across two posts. I don't think FSD Beta registered the chain as an obstacle, and it occurred to me that we as drivers achieve our understanding of the world at a human scale. A thin chain doesn't seem like a major obstacle to a heavy vehicle, but it's something you come across while walking that signals "Do not cross." Regardless of the sensor type (camera, LIDAR, radar, ultrasonic), I think any of them would have trouble picking up the chain, but camera vision over multiple successive moving frames probably has the best odds.

So the robot might start with a rudimentary understanding of the physical world via vehicle training data, but then it may be able to augment and improve that training dataset with observations taken from a human viewpoint at a human scale. You have an extension of the vehicle's vision system that can reach out and touch a chain, feel that it's solid, and learn that it is, in fact, an obstacle.
 
Unless they're planning to just unleash a fleet of robots randomly walking around all over the country.... no.

If they do though they need to train them to randomly stop every X person they see and ask if they know where to find Sarah Connor.
 
First sentence, hey that’s a pretty good idea.

Second sentence, that’s even better!
 
I think even just the objects they encounter in homes and businesses would help FSD gain a deeper understanding of the physical world. What better way to ground-truth whether something you see is an obstacle than to touch it? You can touch it with light, like LIDAR, or touch it with radio waves, like radar, but those have their pitfalls. You could touch it with bumpers, but you'd end up with a lot of angry car owners. So a safe way to touch things is with a robot arm on a mobile pair of legs.
 
That is how it *was* designed: the data coming off of them was interpreted/processed by licensed proprietary embedded code from the manufacturer. Remember, gotta pay the license fee! Then the data comes into the AP/FSD ECU, where it can be further processed, have heuristics applied, be actioned, and be visualized. Data was super noisy back in the day (2014).

Again, I hope Tesla says how much per car this saves them.
Too bad there's no version of this question in SAY yet
 
How well managed is the HUGE data challenge? The number of vehicles times the huge amount of data each collects, versus limited upload capacity and limited mothership storage capacity? I understand the model presented at this point is that the cars only upload data if it matches a mothership query. Just curious whether the known/presented solution seems well thought out and whether it maximizes the available data. Lots of collected data goes unused, IMO.
 
Does Tesla still use USS ECUs? I tried finding such a beast on the parts site and failed. Wondered if it was integrated.

So, I just saw a screenshot on Twitter of new service screens, and they showed both front and rear USS connecting to the VCRight ECU.

 
How well managed is the HUGE data challenge? The number of vehicles times the huge amount of data each collects, versus limited upload capacity and limited mothership storage capacity? I understand the model presented at this point is that the cars only upload data if it matches a mothership query. Just curious whether the known/presented solution seems well thought out and whether it maximizes the available data. Lots of collected data goes unused, IMO.
Autolabeling helps with filtering the usable information out of the superset of data available to collect.

Not all data is collected from the fleet at all times. The verbosity and specificity can be turned up, turned down, or turned off for just about everything.

Are you familiar with sparse vs. dense data? Essentially, they collect lots of dense data which, if looked at by a human, would be 99.99% useless. They turn it into sparse data which is, hopefully, closer to 99.999% useful for the ML model at only 0.0001% of the size. Then, as they iterate, the model trains away most of this data and ends up with weightings on what it thinks are the best options for each of the possible variables of travel in each scenario.

Example from AI day is here:


I can dive deeper into this if this level of detail is useful.
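A rough sketch of that dense-to-sparse filtering idea, purely illustrative (the trigger structure and field names below are my own assumptions, not Tesla's actual campaign format): the car evaluates a fleet query locally and uploads only the clips that match it, so only a tiny, targeted slice of what it saw ever leaves the vehicle.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """A candidate snippet of dense sensor data plus a few summary labels."""
    clip_id: int
    scenario: str          # e.g. "roundabout", "unprotected_left"
    had_intervention: bool
    min_obstacle_range_m: float

# Hypothetical fleet query ("mothership campaign"): only upload roundabout clips
# where the driver intervened or the car got uncomfortably close to something.
def matches_campaign(clip: Clip) -> bool:
    return clip.scenario == "roundabout" and (
        clip.had_intervention or clip.min_obstacle_range_m < 0.5
    )

dense_log = [
    Clip(1, "highway", False, 12.0),
    Clip(2, "roundabout", True, 3.0),
    Clip(3, "roundabout", False, 0.3),
    Clip(4, "unprotected_left", True, 1.0),
]

sparse_upload = [c for c in dense_log if matches_campaign(c)]
print([c.clip_id for c in sparse_upload])  # [2, 3] -> only the interesting clips leave the car
```

The real system presumably keys on much richer signals than these toy labels, but the shape of the pipeline, dense local logging plus a server-pushed trigger that selects a sparse upload set, is the part described above.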