Autonomous Car Progress

Bad, evil companies! Except the one eliminating auto dealer middleman, of course :)
Except the one fighting on the right side of the climate battle ;)

I couldn't care less if some company makes more money and uses it for business-as-usual purposes.

Besides, I don't think uber/lyft drivers are hated as much as the dealerships.

So much for Robofantasy, then. I found it funny when an analyst asked Musk who would be liable in case of a wreck and got a few seconds of deer-in-headlights followed by a mumbled "Tesla, I suppose." Nothing like a fully fleshed-out business plan.
Well, for that they have to first get the robotaxi ... rolling. I'm sure they will come up with a plan by then ;)
 
LOL, BMW is ducking behind nonexistent government regulation to say Level 3 won't be available in the U.S. but will be in Europe. I doubt we will see much Level 3 in Europe either.
Obviously carbuzz doesn't know much about the U.S., going along with the excuse. Perhaps a better title would be: "BMW doesn't want to assume liability in the U.S. for Level 3."

... BMW's director of development, Frank Weber: "Level 3 you will see from us in the 7 Series next year," he said. "It's a function you can buy. It will be ready to go at the launch of the 7 Series."

Related articles:
  1. BMW R&D Boss: "Level 3 Autonomous Driving Will First Come On Highways"
  2. BMW 7 Series To Reach Level 3 Autonomy Next Year
 
Obviously carbuzz doesn't know much about the U.S., going along with the excuse.
None of these auto magazines will say anything negative about the car companies (except Tesla), because if they do, the reporter will lose their job or the magazine will lose the ad money. The myth of "independent media": without financial independence/freedom there is no freedom. True for people and for media/companies.
 
Recently, Musk tweeted that “he was able to do several zero takeover drives around Austin last night using random map pin drops (no Tesla has ever done these routes).”

The vehicle was driving under the control of the FSD function. FSD showed extremely strong stability and did not require a takeover during the whole process.

In addition, Musk also emphasized that high-precision maps do not cover these routes; even Tesla's vehicles do not have them yet.

www.google.nl/amp/s/www.gizchina.com/...
 
My guess is that it's better than LIDAR because it's much cheaper. It's probably a lot worse than LIDAR at producing a consistently accurate depth map and obviously this implementation is much lower resolution.
Are you talking in general or 10.5 in particular ?

Also, what do you do when your voxel and perception NNs disagree??? ;)
Hmmm ... aren't voxels created using perception NN ?
 
Are you talking in general or 10.5 in particular ?


Hmmm ... aren't voxels created using perception NN ?
Yeah, that's my understanding. There is no "sync" issue as with using different types of sensors (as previously with the radar), given it's all based on the same pixel input. You still have to decide how to treat the various inputs, however, as the voxel one apparently ignores moving objects like cars (while the general object recognition obviously identifies cars, as does the depth map).
 
Are you talking in general or 10.5 in particular ?
I'm talking about Vidar which is an area of ongoing research. My understanding is that no one has managed to achieve the performance of LIDAR yet.
Hmmm ... aren't voxels created using perception NN ?
I'm pretty sure it's a separate NN (and even if it's another output of the perception neural net is that really any different?). Anyway, it was a joke about sensor fusion concerns. Voxel data is the same type of data that you get from LIDAR so any supposed issues with sensor fusion would also apply. If the perception NN says there's drivable space in front of you but the Voxel NN says there's an alien space ship sized object which do you believe?
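Purely as a toy illustration of that dilemma (nothing from Tesla, and the names are made up): one simple policy is to be conservative and only treat a cell as drivable when both outputs agree, at the cost of more phantom braking when either net is wrong.

# Hypothetical conservative rule for combining the two heads.
def cell_is_drivable(semantic_says_free: bool, voxel_says_occupied: bool) -> bool:
    # Trust whichever output is more cautious: drive only where the semantic
    # head sees free space AND the voxel head sees no obstacle.
    return semantic_says_free and not voxel_says_occupied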
 
I'm talking about Vidar which is an area of ongoing research. My understanding is that no one has managed to achieve the performance of LIDAR yet.
So, nothing to do with 10.5 released two days back ?

I'm pretty sure it's a separate NN (and even if it's another output of the perception neural net is that really any different?). Anyway, it was a joke about sensor fusion concerns. Voxel data is the same type of data that you get from LIDAR so any supposed issues with sensor fusion would also apply. If the perception NN says there's drivable space in front of you but the Voxel NN says there's an alien space ship sized object which do you believe?
Then I believe NN has been smoking ....
 
I'm talking about Vidar which is an area of ongoing research. My understanding is that no one has managed to achieve the performance of LIDAR yet.

I'm pretty sure it's a separate NN (and even if it's another output of the perception neural net is that really any different?). Anyway, it was a joke about sensor fusion concerns. Voxel data is the same type of data that you get from LIDAR so any supposed issues with sensor fusion would also apply. If the perception NN says there's drivable space in front of you but the Voxel NN says there's an alien space ship sized object which do you believe?
Not exactly. Because they come from the same sensor, there are per-pixel associations that can be made between them (in the case of the depth maps, it's a direct association). You can't do the same with two different sensors (like cameras + Radar or Lidar); those must be "fused," where you have to deal with alignment.
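A minimal sketch of what that direct association looks like, assuming a plain pinhole camera model (generic computer-vision code, not Tesla's pipeline; the intrinsics and the DRIVABLE label constant are made up):

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # depth: HxW array of per-pixel metric depth; fx, fy, cx, cy: pinhole intrinsics.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    # Result is HxWx3 on the same grid as the image, so point [v, u] is
    # automatically associated with pixel [v, u] of the perception output.
    return np.stack([x, y, depth], axis=-1)

# e.g. drivable_points = depth_to_points(depth, fx, fy, cx, cy)[seg_labels == DRIVABLE]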

Also the voxel output discussed here is already filtered/processed data (moving objects like cars are already filtered out):
It's different from the raw data you get from Lidar, which is point clouds that would require further processing to generate such voxels. The Twitter link above actually has a browser tool with links to point clouds generated by the depth map. You can also see the voxel representations. It's quite interesting to play with.
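For reference, that further processing is basically just binning points into a fixed grid. A rough sketch (grid size, resolution, and origin are illustrative values, not Tesla's):

import numpy as np

def voxelize(points, voxel_size=0.5, grid_shape=(200, 200, 16), origin=(-50.0, -50.0, -2.0)):
    # points: Nx3 array in the ego frame -> boolean occupancy grid.
    idx = np.floor((points - np.asarray(origin)) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[tuple(idx[keep].T)] = True
    return grid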

Another difference is because lidar/radar works on different wavelengths than cameras, there will be things that show up there that won't show up in the cameras (or vice versa).

Sensor fusion is more similar to what Tesla does to treat the input from multiple cameras (when the same object is captured by different cameras).
 
Not exactly. Because they come from the same sensor, there are per-pixel associations that can be made between them (in the case of the depth maps, it's a direct association). You can't do the same with two different sensors (like cameras + Radar or Lidar); those must be "fused," where you have to deal with alignment.

Also the voxel output discussed here is already filtered/processed data (moving objects like cars are already filtered out):
You already have a bunch of cameras with different alignments so you've got to solve that problem anyway.
I think the moving objects are filtered out in the way they train the neural net. They talked about moving through the world and finding the point cloud that is consistent across all frames. That would naturally filter out moving objects.
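If that's right, a crude sketch of the idea would be: transform each frame's points into a shared static frame, voxelize, and keep only voxels occupied in nearly every frame. The threshold and the voxelize helper (reused from the sketch above) are assumptions, not anything Tesla has described in detail:

import numpy as np

def static_occupancy(point_clouds, ego_poses, voxelize, keep_ratio=0.9):
    # point_clouds: list of Nx3 arrays; ego_poses: matching list of 4x4 ego-to-world transforms.
    counts = None
    for pts, pose in zip(point_clouds, ego_poses):
        world = pts @ pose[:3, :3].T + pose[:3, 3]   # move points into the shared frame
        grid = voxelize(world).astype(int)
        counts = grid if counts is None else counts + grid
    # A parked car is occupied in every frame; a moving car smears across voxels
    # and drops below the threshold, so it naturally gets filtered out.
    return counts >= keep_ratio * len(point_clouds)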
It's different from the raw data you get from Lidar, which is point clouds that would require further processing to generate such voxels. The Twitter link above actually has a browser tool with links to point clouds generated by the depth map.
What is the advantage of voxels over point clouds?