Any and all carryover peeping through to trading hour activity will be run over. Multiple times.
Using vision only.
I would rather not see another vehicle after the pickup. They should not show any others until the Y and at least one other is in production. Hopefully the semi.

So the LA Auto Show starts 22 Nov. The optimal date to release the pickup would therefore be ~20 Nov. Two weeks' notice for the launch party means invites go out ~6 Nov. Therefore Elon will be back on Twitter in 72 hours and the world will get back to normal.
I have plenty of "one more thing" ideas. My favourite being a Model 2 rolling off the back. <1% chance. I will spare you from the other ideas as they mostly involve multitudinous LIDAR up the wazoo.
I kind of feel the same way. But it would be super sweet if they showed off an early-stage concept for an electric excavator, bulldozer and cement mixer, with a view to dominating construction and mining support vehicles, together with a mobile power pack station. And agricultural tractors. Churn those babies out and kill red diesel.
If you solve vision why do you need radar?
Yes, but they started out with LIDAR to race in the DARPA Grand Challenge, because ~18 years ago the only way to get a high resolution, high FPS 3D map of the car's surroundings was LIDAR.
That "path dependent" LIDAR accident of history turned into a design and process failure they haven't been able to get rid of yet.
Tesla's FSD efforts didn't have this historical baggage - they started from a clean slate in 2016 when Elon & his team realized that they could probably do FSD with 7 cameras, a bunch of ultrasonic sensors, accelerometers, GPS and two radar channels hooked up to an in-car power efficient supercomputer they designed for the purpose.
What amazes me is that despite Elon explaining this early on, none of the other major FSD projects is following Tesla's lead; they are stubbornly clinging to their LIDAR approaches, and by now it's probably already too late.
Yes. Which makes it questionable if Elon is right.
Here is Mobileye's EyeQ5, for example: 24 trillion operations per second.
Not sure how much experience you have with lidars and snow, but lidar sends out light and snow reflects light, so lidar can see snow.
It is not a very different problem from deciding what is road and what is grass.
There is plenty of testing being done on snow:
LIDAR cannot distinguish between when it's reflecting off snow, reflecting off a tree branch, or reflecting off a person lying on the ground; there is no colour or subtle pattern contrast.
Have you done any lidar pointcloud filtering or what are you basing these statements on? A person looks very different to a tree branch.
White snow and a tree branch have very different reflectivity and very different surfaces.
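The kind of intensity-based point cloud filtering being alluded to can be sketched in a few lines. This is a minimal illustration, not any real sensor's pipeline: the intensity threshold, the 905 nm remark, and the sample points are all made-up assumptions for the example.

```python
import numpy as np

# Hypothetical point cloud: columns are x, y, z, intensity (0-1).
# The threshold is illustrative, not from any real sensor datasheet;
# fresh snow is assumed to return strongly at typical 905 nm lidar wavelengths.
SNOW_MIN_INTENSITY = 0.7

points = np.array([
    [1.0, 0.2, -1.4, 0.85],   # bright, near ground level: likely snow
    [4.5, 0.0,  0.3, 0.35],   # dark, above ground: likely bark/branch
    [6.2, 1.1, -1.3, 0.30],   # dark, near ground: could be a person lying down
])

# Drop returns that are both very bright and near the ground plane (snow);
# keep everything else as potential obstacles.
is_snow = (points[:, 3] >= SNOW_MIN_INTENSITY) & (points[:, 2] < -1.0)
obstacles = points[~is_snow]

print(obstacles.shape)  # (2, 4): the snow return was filtered out
```

Even this toy filter separates the bright snow return from the dark branch and person returns, which is the reflectivity argument in a nutshell.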
What is the reason Tesla didn't include inexpensive infrared in their camera suite? I'd have thought it was a useful extra sense beyond "eyes" and radar, given it would see pedestrians obscured from vision and engine signatures.

Because:
If LIDAR units cost $10 each and had a power draw of 10 watts, it might well make sense to add them like ultrasonic sensors, for redundancy. But at $50,000+ (high-end LIDARs), or even at $5,000, they'd be crowding out real safety measures.
- Radar sensors provide valuable, life saving physical information that cameras don't: they can sense through ~200 meters of fog, dust, rain and snow, at night. They can often "see through" the next car in front and detect a suddenly slowing car two cars ahead. LIDAR on the other hand is using single frequency photons that don't sense more than cameras and radars already do.
- Radar sensors are also an order of magnitude less expensive than LIDAR.
FSD sensors for volume manufacturing of passenger cars must be selected based on cost/benefit analysis, not theoretical utility.
For example there's no doubt that a second, rear facing radar, or a secondary forward facing radar with a different frequency would improve overall safety - but radar sensor units are not that inexpensive yet.
Not only is your argument a logical fallacy, there actually is one FSD competitor who is following Tesla's lead - Intel:
Intel's very latest chip might have the computing capacity - but they don't have Tesla's fleet size, nor the training data feedback loop.
All of these are essential to success if the FSD problem is "very complex" (as @ReflexFunds pointed out), requiring tens of millions of miles of training on hundreds of thousands of cars per neural network and driving software iteration, and billions of miles of training on over a million cars to reach "superhuman" levels of reliability - which I think it is.
One argument I’m missing in the Lidar vs. camera discussion: Do you want your car to look like a police car? This (Byton) is the cleanest Lidar setup I found so far and it still looks weird:
That is, of course, not how LIDAR is used; if it were used that way, you'd have LIDAR picking up lane lines and the like (ever see that in a Waymo LIDAR demo?). It's a research topic, but AFAIK nobody is actually using it. And more to the point, the problem that you face is that a beam reflecting off of a 90% reflectivity surface lying at an angle of 10° relative to the beam path yields the same intensity value (~0.156 * attenuated strength) as a beam reflecting off of a 20% reflectivity surface lying at an angle of ~51.3°.
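The ambiguity in those numbers checks out arithmetically. A minimal sketch, assuming a simple Lambertian-style return model where intensity scales with reflectivity times the sine of the grazing angle (the exact model is an assumption for illustration):

```python
import math

def return_intensity(reflectivity: float, grazing_angle_deg: float) -> float:
    """Simplified lidar return: reflectivity times the sine of the angle
    between the beam and the surface (i.e. cosine of the incidence angle)."""
    return reflectivity * math.sin(math.radians(grazing_angle_deg))

bright_shallow = return_intensity(0.90, 10.0)   # bright surface, grazing hit
dark_square = return_intensity(0.20, 51.3)      # dark surface, squarer hit

print(f"{bright_shallow:.3f} vs {dark_square:.3f}")  # both ~0.156
```

From intensity alone the receiver cannot tell these two cases apart; you'd need the geometry (and hence already-solved perception) to disambiguate them.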
If you are going to question it, you have to find fault in the simple logic:
To solve FSD, you have to solve for vision.
If you solve for vision, lidar is redundant.
Posts about how good lidar is are irrelevant. To make a case for lidar you have to find something that lidar is *required* for that vision/radar/ultrasonics cannot do.
The lidars that will be used in production vehicles are not in final form yet. No one knows if lidar will be necessary, including Tesla and everyone in this thread.
Using Lidar was not an option for Tesla. It was too expensive and too big.
It won't be too expensive and too big by the time FSD vehicles are ready for mass production.

Umm, aren't you contradicting your opening statement? If lidar is not final, how can you claim lidar's final form will arrive before a vision FSD system is completed? Given Tesla has mass production and is waiting on SW (maybeeee a retrofit HW upgrade will be needed, but I doubt it), any Tesla solution within (final lidar dev time + vehicle development time) would still hit mass production first.
The lidars that will be used in production vehicles are not in final form yet. No one knows if lidar will be necessary, including Tesla and everyone in this thread.
Using Lidar was not an option for Tesla. It was too expensive and too big. It won't be too expensive and too big by the time FSD vehicles are ready for mass production.
The AP computers used on FSD cars are not in final form right now either (pick whether SW, or SW + HW, will change). Every current human driver demonstrates that lidar is not necessary given vision and sufficient computing power (that's how drivers drive today). So we know lidar is not necessary; the only question is the timeline/cost for a solution that does not require it.