Todd Burch
14-Year Member
My theory:
1. Using vision instead of a dedicated sensor reduces the amount of hardware required, which means lower cost (slightly higher margins) and less hardware to break. It also means simpler manufacturing and assembly. It might save only $10 in hardware per car. That doesn't sound like much, but with Tesla making over 1M cars a year, it adds up to $10M in additional profit per year, roughly enough to cover the salaries of 100 very well-paid employees at zero additional cost, which put in those terms is a lot. I doubt this is a primary reason, but it's a benefit. A penny saved is a penny earned.
2. The neural net likely needs to know when it's raining or when the roadways are wet anyway (for autonomous driving, knowing this is useful for adjusting maximum speed in curves, etc.).
Since (2) is a bit of a necessity, (1) follows.
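The back-of-envelope math in (1) can be sketched out explicitly. Note that the $10-per-car figure, the 1M-cars-per-year volume, and the $100k salary used for comparison are all assumptions from the argument above, not actual Tesla numbers:

```python
# Back-of-envelope check of the hardware-savings claim in point (1).
# All inputs are the post's rough assumptions, not real Tesla figures.
savings_per_unit = 10          # dollars saved per car by dropping the rain sensor
units_per_year = 1_000_000     # assumed annual production volume
salary = 100_000               # assumed "very well-paid" annual salary

annual_savings = savings_per_unit * units_per_year
equivalent_headcount = annual_savings // salary

print(f"Annual savings: ${annual_savings:,}")            # Annual savings: $10,000,000
print(f"Equivalent headcount: {equivalent_headcount}")   # Equivalent headcount: 100
```

So under those assumptions the savings do work out to about 100 well-paid salaries per year, though the conclusion is only as good as the $10-per-unit estimate.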