
Is Tesla using machine learning for AP?

It's not just SLAM. The outputs of SLAM are used as inputs for machine learning. How would an autonomous car know how to drive if it doesn't know where it is or what's around it?

You can read about MobilEye's approach here: Road Experience Management™ (REM™) - Mobileye

Fleet learning is really fleet mapping and always has been.

You can't just say mapping data is only used for actuating. Consider reinforcement learning with respect to self-driving: if you have no actuation or localization, then you have no learning. Mapping data is vital to the learning process.
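To make that concrete, here's a tiny sketch (my own illustration, not anyone's actual code) of why a learned driving policy depends on localization and map output: the state the policy consumes is built directly from the estimated pose plus nearby map geometry. Take those away and there is nothing meaningful for the policy to learn from. The lane representation and feature choice here are assumptions.
Code:
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # metres, map frame
    y: float
    heading: float  # radians

def build_policy_state(pose, lane_centerline, speed):
    """Assemble the feature vector a (hypothetical) learned driving policy
    would consume: lateral offset and heading error relative to the mapped
    lane, plus current speed. lane_centerline is a list of (x, y, heading)."""
    px, py, ph = min(lane_centerline,
                     key=lambda p: (p[0] - pose.x) ** 2 + (p[1] - pose.y) ** 2)
    lateral_offset = -(pose.x - px) * math.sin(ph) + (pose.y - py) * math.cos(ph)
    heading_error = math.atan2(math.sin(pose.heading - ph),
                               math.cos(pose.heading - ph))
    return [lateral_offset, heading_error, speed]

print(build_policy_state(Pose(1.0, 0.5, 0.1),
                         [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)], 12.0))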

SLAM isn't being fed into machine learning. And when you say "machine learning" I assume you are talking about a model, so even in that case it is static and DOESN'T change, so there is no learning.

SLAM has been around for decades; it's really, really old. There haven't been any modifications made other than having it run faster and us having more powerful phones.

You don't need machine learning to do SLAM, and you certainly don't need machine learning to build maps of interiors with SLAM.
You need to look up SLAM (Simultaneous Localization and Mapping).
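For what it's worth, here's a minimal sketch of classical occupancy-grid mapping from known poses (the mapping half of SLAM). It's nothing but log-odds bookkeeping, with no machine learning anywhere; the increments, resolution and grid size are illustrative values I picked, not anything from a real system.
Code:
import numpy as np

L_OCC, L_FREE = 0.85, -0.4      # log-odds increments (tuning constants)
RES = 0.05                      # 5 cm per grid cell
grid = np.zeros((200, 200))     # log-odds of "occupied" for a 10 m x 10 m area

def integrate_beam(grid, robot_xy, hit_xy):
    """Mark cells along a range beam as free and the end cell as occupied."""
    r = np.asarray(robot_xy) / RES
    h = np.asarray(hit_xy) / RES
    n = int(np.max(np.abs(h - r))) + 1
    for t in np.linspace(0.0, 1.0, n):                # walk cells along the beam
        c = np.round(r + t * (h - r)).astype(int)
        grid[c[0], c[1]] += L_FREE
    hit = np.round(h).astype(int)
    grid[hit[0], hit[1]] += L_OCC - L_FREE            # net effect: more occupied

integrate_beam(grid, robot_xy=(5.0, 5.0), hit_xy=(6.5, 5.2))
occupancy = 1.0 / (1.0 + np.exp(-grid))               # back to probabilities
print(occupancy[130, 104])                            # the hit cell, now > 0.5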

Mapping data IS used for actuating. It's a big part of the Mercedes Drive Pilot improvements in their new software coming later this year (they're using HERE maps), and of GM Super Cruise.

"The combination of real-time data with precision mapping also improves vehicle control through curves and hills."

It's literally what the map is there for.
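As a back-of-the-envelope sketch of what that means in practice (my numbers, not Mercedes' or GM's): the map supplies the road curvature ahead, and the controller caps speed so that lateral acceleration v²·κ stays under a comfort limit.
Code:
import math

def curve_speed_limit(curvature_1pm, posted_limit_mps, a_lat_max=2.0):
    """curvature in 1/m (from the map); returns a target speed in m/s,
    capped so lateral acceleration stays below a_lat_max."""
    if abs(curvature_1pm) < 1e-6:            # essentially straight road
        return posted_limit_mps
    return min(posted_limit_mps, math.sqrt(a_lat_max / abs(curvature_1pm)))

print(curve_speed_limit(1 / 150.0, 33.0))    # 150 m radius curve -> ~17.3 m/s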

I know all about Mobileye, and there is no fleet learning in REM.
Now, if you call it fleet mapping then I can roll with you on that, but there is NO fleet learning. Your car doesn't learn; your car maps. And in Tesla's AP1 case it only mapped GPS location, making it useless. I don't understand which part of that you can't comprehend.

This is why I love Mobileye: they will tell it to you straight. They don't have time to scratch itching ears desperately waiting to hear the next hype.


The machine learning part of Google's VPS is most likely doing object recognition. For example, it will recognize that this aisle has toothpaste, etc. They are not going in and manually plugging in "over here is lotion and over here is cereal." This way it can constantly be up to date, since stores change their layouts all the time.
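A toy sketch of that idea (the classifier here is a stand-in stub, not Google's model, and the voting scheme is my own assumption): recognize what's on a shelf from camera frames tied to an indoor position, and let repeated observations keep the map current as layouts change.
Code:
from collections import Counter, defaultdict

def classify_shelf_image(image):
    """Hypothetical product-category classifier (stub for illustration);
    a real system would run a trained vision model here."""
    return "toothpaste"

store_map = defaultdict(Counter)   # (aisle, bay) -> votes for what is there

def update_store_map(aisle_bay, image):
    store_map[aisle_bay][classify_shelf_image(image)] += 1

def current_label(aisle_bay):
    """Most frequently observed category wins, so layout changes self-correct."""
    return store_map[aisle_bay].most_common(1)[0][0]

update_store_map(("aisle 7", "bay 3"), image=None)
print(current_label(("aisle 7", "bay 3")))   # "toothpaste"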
 
@JeffK

Check out the new Drive Pilot and what the addition of maps lets it do.


Also, for guys like stopcrazy who keep mentioning regulation as some type of barrier, only because they want to use it as a crutch when Elon fails to deliver:

[Image: 2018 Mercedes-Benz S-Class]


They already have turns at any type of T intersection, roundabout handling, etc.

AKA there is no regulatory issue: you simply release your self-driving features as driver-assistance software.
 
Here's the list of convolution kernels (I think that's what they're called) from 17.17.4 that run on the NVIDIA CUDA side:
Code:
b2f.cubin
bbias.cubin
bgemm_128x128x8_TN_vec.cubin
bgemm_32x32x32_TN_vec.cubin
bpool.cubin
copy_transpose_f4_27_4_4_26_40_0_3_1_4_2.cubin
copy_transpose_f4_5_16_16_26_40_0_3_1_4_2.cubin
f2b.cubin
i8conv_3x3_20x4_16_N1_q8.cubin
i8conv_3x3d2_20x4_16_N1_q8.cubin
i8conv_5x5_20x4_8_N1_q8.cubin
i8conv_5x5d2_20x4_8_N1_q8.cubin
sbias.cubin
sconv_direct_fprop_64x32_N1_bias_relu.cubin
sconv_winograd_2x2_3x3_32x32_K256_W40_Q40_N1_bias_relu.cubin
sgemm_32x32x32_TN.cubin
sgemm_32x32x32_TN_vec.cubin
sgemm_tn_128x128_vec.cubin

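For anyone curious, those .cubin files are precompiled GPU binaries that get loaded at runtime through the CUDA driver API. A minimal sketch of poking at one, assuming PyCUDA is available and that the entry-point name inside the file matches the kernel_name in the mapping below (the argument layout isn't known from the file names alone, so nothing is launched):
Code:
import pycuda.autoinit            # creates a CUDA context on the default GPU
import pycuda.driver as cuda

module = cuda.module_from_file("b2f.cubin")   # load the precompiled binary
kernel = module.get_function("b2f")           # look up the kernel entry point
# Launching would look like kernel(args..., block=(bx, by, bz), grid=(gx, gy)),
# but the argument layout would have to be reverse engineered first.
print(kernel)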
Here's mapping to more readable names:
Code:
  name: "conv1"
  class_name: "neon_convolution_kernel"
  kernel_name: "sconv_direct_fprop_64x32_N1_bias_relu"
  module_name: "sconv_direct_fprop_64x32_N1_bias_relu"
  name: "convert_to_int8"
  class_name: "tesla_conversion_kernel"
  kernel_name: "f2b"
  module_name: "f2b"
  name: "pool1"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3"
  module_name: "bpool"
  name: "conv2_1x1_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "conv2_1x1"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_128x128x8_TN_vec"
  module_name: "bgemm_128x128x8_TN_vec"
  next_kernel_spec_name: "conv2_1x1_bias"
  name: "conv2_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3_20x4_16_N1_q8"
  module_name: "i8conv_3x3_20x4_16_N1_q8"
  name: "pool2"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3"
  module_name: "bpool"
  name: "inception_3a_merge_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_3a_merge"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_128x128x8_TN_vec"
  module_name: "bgemm_128x128x8_TN_vec"
  next_kernel_spec_name: "inception_3a_merge_bias"
  name: "inception_3a_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3_20x4_16_N1_q8"
  module_name: "i8conv_3x3_20x4_16_N1_q8"
  name: "inception_3a_5x5"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_5x5_20x4_8_N1_q8"
  module_name: "i8conv_5x5_20x4_8_N1_q8"
  name: "inception_3a_pool"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_3a_pool_proj_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_3a_pool_proj"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_3a_pool_proj_bias"
  name: "inception_3b_merge_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_3b_merge"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_128x128x8_TN_vec"
  module_name: "bgemm_128x128x8_TN_vec"
  next_kernel_spec_name: "inception_3b_merge_bias"
  name: "inception_3b_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3_20x4_16_N1_q8"
  module_name: "i8conv_3x3_20x4_16_N1_q8"
  name: "inception_3b_5x5"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_5x5_20x4_8_N1_q8"
  module_name: "i8conv_5x5_20x4_8_N1_q8"
  name: "inception_3b_pool"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_3b_pool_proj_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_3b_pool_proj"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_3b_pool_proj_bias"
  name: "pool3"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3"
  module_name: "bpool"
  name: "inception_4a_merge_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4a_merge"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_128x128x8_TN_vec"
  module_name: "bgemm_128x128x8_TN_vec"
  next_kernel_spec_name: "inception_4a_merge_bias"
  name: "inception_4a_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3_20x4_16_N1_q8"
  module_name: "i8conv_3x3_20x4_16_N1_q8"
  name: "inception_4a_5x5"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_5x5_20x4_8_N1_q8"
  module_name: "i8conv_5x5_20x4_8_N1_q8"
  name: "inception_4a_pool"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_4a_pool_proj_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4a_pool_proj"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_4a_pool_proj_bias"
  name: "inception_4b_merge_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4b_merge"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_128x128x8_TN_vec"
  module_name: "bgemm_128x128x8_TN_vec"
  next_kernel_spec_name: "inception_4b_merge_bias"
  name: "inception_4b_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3_20x4_16_N1_q8"
  module_name: "i8conv_3x3_20x4_16_N1_q8"
  name: "inception_4b_5x5"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_5x5_20x4_8_N1_q8"
  module_name: "i8conv_5x5_20x4_8_N1_q8"
  name: "inception_4b_pool"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_4b_pool_proj_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4b_pool_proj"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_4b_pool_proj_bias"
  name: "inception_4c_merge_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4c_merge"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_4c_merge_bias"
  name: "inception_4c_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3_20x4_16_N1_q8"
  module_name: "i8conv_3x3_20x4_16_N1_q8"
  name: "inception_4c_5x5"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_5x5_20x4_8_N1_q8"
  module_name: "i8conv_5x5_20x4_8_N1_q8"
  name: "inception_4c_pool"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_4c_pool_proj_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4c_pool_proj"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_4c_pool_proj_bias"
  name: "inception_4d_merge_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4d_merge"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_4d_merge_bias"
  name: "inception_4d_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3_20x4_16_N1_q8"
  module_name: "i8conv_3x3_20x4_16_N1_q8"
  name: "inception_4d_5x5"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_5x5_20x4_8_N1_q8"
  module_name: "i8conv_5x5_20x4_8_N1_q8"
  name: "inception_4d_pool"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_4d_pool_proj_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4d_pool_proj"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_4d_pool_proj_bias"
  name: "inception_4e_merge_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4e_merge"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_4e_merge_bias"
  name: "inception_4e_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3_20x4_16_N1_q8"
  module_name: "i8conv_3x3_20x4_16_N1_q8"
  name: "inception_4e_5x5"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_5x5_20x4_8_N1_q8"
  module_name: "i8conv_5x5_20x4_8_N1_q8"
  name: "inception_4e_pool"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_4e_pool_proj_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_4e_pool_proj"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_4e_pool_proj_bias"
  name: "pool4"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_5a_merge_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_5a_merge"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_5a_merge_bias"
  name: "inception_5a_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3d2_20x4_16_N1_q8"
  module_name: "i8conv_3x3d2_20x4_16_N1_q8"
  name: "inception_5a_5x5"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_5x5d2_20x4_8_N1_q8"
  module_name: "i8conv_5x5d2_20x4_8_N1_q8"
  name: "inception_5a_pool"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_5a_pool_proj_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_5a_pool_proj"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_5a_pool_proj_bias"
  name: "inception_5b_merge_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_5b_merge"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_128x128x8_TN_vec"
  module_name: "bgemm_128x128x8_TN_vec"
  next_kernel_spec_name: "inception_5b_merge_bias"
  name: "inception_5b_3x3"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_3x3d2_20x4_16_N1_q8"
  module_name: "i8conv_3x3d2_20x4_16_N1_q8"
  name: "inception_5b_5x5"
  class_name: "device_q8_convolution_kernel"
  kernel_name: "i8conv_5x5d2_20x4_8_N1_q8"
  module_name: "i8conv_5x5d2_20x4_8_N1_q8"
  name: "inception_5b_pool"
  class_name: "tesla_pooling_kernel"
  kernel_name: "bmaxpool_3x3_s1x1"
  module_name: "bpool"
  name: "inception_5b_pool_proj_bias"
  class_name: "tesla_bias_scale_kernel"
  kernel_name: "bbias"
  module_name: "bbias"
  name: "inception_5b_pool_proj"
  class_name: "openai_gemm_kernel"
  kernel_name: "bgemm_32x32x32_TN_vec"
  module_name: "bgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "inception_5b_pool_proj_bias"
  name: "convert_to_fp32"
  class_name: "tesla_conversion_kernel"
  kernel_name: "b2f"
  module_name: "b2f"
  name: "conv_pool_26x40_bias"
  class_name: "tesla_bias_kernel"
  kernel_name: "sbias_128x1_relu"
  module_name: "sbias"
  name: "conv_pool_26x40"
  class_name: "openai_gemm_kernel"
  kernel_name: "sgemm_32x32x32_TN"
  module_name: "sgemm_32x32x32_TN"
  next_kernel_spec_name: "conv_pool_26x40_bias"
  name: "conv_pool_13x20_bias"
  class_name: "tesla_bias_kernel"
  kernel_name: "sbias_128x1_vec_relu"
  module_name: "sbias"
  name: "conv_pool_13x20"
  class_name: "openai_gemm_kernel"
  kernel_name: "sgemm_32x32x32_TN_vec"
  module_name: "sgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "conv_pool_13x20_bias"
  name: "conv_pool_9x14_bias"
  class_name: "tesla_bias_kernel"
  kernel_name: "sbias_128x1_relu"
  module_name: "sbias"
  name: "conv_pool_9x14"
  class_name: "openai_gemm_kernel"
  kernel_name: "sgemm_32x32x32_TN"
  module_name: "sgemm_32x32x32_TN"
  next_kernel_spec_name: "conv_pool_9x14_bias"
  name: "conv_pool_5x7_bias"
  class_name: "tesla_bias_kernel"
  kernel_name: "sbias_128x1_vec_relu"
  module_name: "sbias"
  name: "conv_pool_5x7"
  class_name: "openai_gemm_kernel"
  kernel_name: "sgemm_32x32x32_TN_vec"
  module_name: "sgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "conv_pool_5x7_bias"
  name: "conv_PSP"
  class_name: "neon_convolution_kernel"
  kernel_name: "sconv_winograd_2x2_3x3_32x32_K256_W40_Q40_N1_bias_relu"
  module_name: "sconv_winograd_2x2_3x3_32x32_K256_W40_Q40_N1_bias_relu"
  name: "deconv_16x16"
  class_name: "neon_compound_deconvolution_kernel"
  kernel_name: "sgemm_tn_128x128_vec"
  module_name: "sgemm_tn_128x128_vec"
  next_kernel_spec_name: "deconv_16x16_shuffle"
  name: "deconv_16x16_shuffle"
  class_name: "neon_transpose_kernel"
  kernel_name: "copy_transpose_f4_5_16_16_26_40_0_3_1_4_2"
  module_name: "copy_transpose_f4_5_16_16_26_40_0_3_1_4_2"
  next_kernel_spec_name: "deconv_16x16_bias"
  name: "deconv_16x16_bias"
  class_name: "tesla_bias_kernel"
  kernel_name: "sbias_128x1_vec"
  module_name: "sbias"
  name: "reshape_to_input_27x104x160"
  class_name: "openai_compound_deconvolution_kernel"
  kernel_name: "sgemm_32x32x32_TN_vec"
  module_name: "sgemm_32x32x32_TN_vec"
  next_kernel_spec_name: "reshape_to_input_27x104x160_shuffle"
  name: "reshape_to_input_27x104x160_shuffle"
  class_name: "neon_transpose_kernel"
  kernel_name: "copy_transpose_f4_27_4_4_26_40_0_3_1_4_2"
  module_name: "copy_transpose_f4_27_4_4_26_40_0_3_1_4_2"
  next_kernel_spec_name: "reshape_to_input_27x104x160_bias"
  name: "reshape_to_input_27x104x160_bias"
  class_name: "tesla_bias_kernel"
  kernel_name: "sbias_128x1_vec"
  module_name: "sbias"

They are used with some sort of neural net, if this message in the code is to be believed: "constructured Tesla conv kernels for network".
I don't think they train the net in the car, though I don't know much about the subject.
Instead, there's a ~30 MB binary blob of trained network data that is supplied.
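The layer names in that mapping (conv1, pool1, conv2, then inception_3a through inception_5b, followed by deconvolution/transpose stages that reshape back to a 27x104x160 output) look a lot like a GoogLeNet/Inception-style network with int8-quantized convolutions and an upsampling head on the end. Purely as illustration, here's a minimal PyTorch sketch of one inception-like block using the branch names seen above (3x3, 5x5, and a pooled 1x1 projection, merged together); the channel counts are invented, not taken from the firmware.
Code:
import torch
import torch.nn as nn

class InceptionLikeBlock(nn.Module):
    """Toy block loosely mirroring the branch names in the kernel dump:
    a 3x3 conv, a 5x5 conv, and a 3x3 max-pool followed by a 1x1
    projection, merged by channel concatenation."""
    def __init__(self, in_ch, c3x3=64, c5x5=32, cproj=32):
        super().__init__()
        self.branch_3x3 = nn.Conv2d(in_ch, c3x3, kernel_size=3, padding=1)
        self.branch_5x5 = nn.Conv2d(in_ch, c5x5, kernel_size=5, padding=2)
        self.branch_pool = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
        self.pool_proj = nn.Conv2d(in_ch, cproj, kernel_size=1)

    def forward(self, x):
        b1 = torch.relu(self.branch_3x3(x))
        b2 = torch.relu(self.branch_5x5(x))
        b3 = torch.relu(self.pool_proj(self.branch_pool(x)))
        return torch.cat([b1, b2, b3], dim=1)    # the "merge" step

x = torch.randn(1, 64, 104, 160)                 # 104x160, like the layer names suggest
print(InceptionLikeBlock(64)(x).shape)           # torch.Size([1, 128, 104, 160])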

Is this code really used for AP?
Wow, I did not think it would be this complicated.