Ok good point, I'm leaning more towards your interpretation. This is a convention center loop: three stops at different convention center buildings. No luggage.
Where would you put comma.ai's openpilot in this race?
I have it in my pre-AP Model S, and it's surprisingly good. Beats AP1 hands down.
They plan on pushing this down to 1500 today. I don't know why, but it appears to be the plan. I have cash standing by.
Nasdaq pooping, TSLA pooping.
It doesn't matter why they buy and hold, only that they buy and hold. It's unearned. They are not buying Tesla specifically because of anything Tesla is doing; rather, they are simply saving for retirement. Tesla is becoming a generic stock, held by many for no better reason than that it is included in a major index.
Ok good point, I'm leaning more towards your interpretation.
The old neural network architecture labels one frame (2D) from a single camera and trains a neural network on the last two frames (2.5D: 2D (x, y) + ~0.5D of time) to predict where objects are in the image (2D), then does some neural network magic to get that into a bird's-eye view (~2.5D).
The new neural network architecture will take a video feed, generate a point cloud of all the static objects (3D) and of the moving objects (4D), and train a neural network to predict where static objects are (3D) and where dynamic objects will be (4D), based on the current image frames and recurrent information from the neural network at the previous timestep.
I think he is referring to how the neural network will represent the information internally. If it needs to think in 4D, it will start to think in 4D. In order to predict where a moving car will be in 4D space (which is needed, for example, to predict the depth of the next frame), the neural network will find an internal vector representation for this. See this video at 22:47.
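The idea of fusing the current frame with recurrent memory of previous frames can be sketched in a few lines of numpy. Everything here is a hypothetical stand-in (feature sizes, weight matrices, the update rule itself), not Tesla's actual network; it just shows how a hidden state carried across timesteps lets a network predict object positions in (x, y, z, t):

```python
import numpy as np

# Hypothetical sizes -- not from any real Tesla network.
FEAT = 8      # per-frame camera feature size (assumed)
HIDDEN = 16   # recurrent state size (assumed)
OUT = 4       # (x, y, z, t): 3 spatial dims + time = the "4D" prediction

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for trained parameters.
W_in = rng.normal(size=(HIDDEN, FEAT)) * 0.1
W_rec = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
W_out = rng.normal(size=(OUT, HIDDEN)) * 0.1

def step(frame_features, hidden):
    """One recurrent update: fuse the current frame with memory of past frames."""
    hidden = np.tanh(W_in @ frame_features + W_rec @ hidden)
    prediction = W_out @ hidden  # predicted (x, y, z, t) for a dynamic object
    return prediction, hidden

# Feed a short "video": the hidden state carries information across frames,
# which is what lets the network reason about motion over time.
hidden = np.zeros(HIDDEN)
for t in range(5):
    frame = rng.normal(size=FEAT)  # stand-in for camera-derived features
    pred, hidden = step(frame, hidden)

print(pred.shape)  # (4,)
```

The key point is the `hidden` vector: it is the "internal representation in vector form" the post describes, updated each timestep and reused at the next one, so motion never has to be re-derived from scratch per frame.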
Here is a video about a similar technique:
OK?
I found that part interesting. Why would they operate multiple types of vehicles? The only answer I have is that they believe they are truly production constrained, which would explain the absence of the Model Y, which would be a better choice overall. I'm guessing they plan to just fill in the fleet whenever they have excess inventory of 3s, Ss, Xs, etc.
Or ARK Invest??
Watching the SP go down (sure, just a tiny bit... but still) is funny. Right before the biggest potential catalyst in a long time.
I mean who would be selling?!
What happened to the MPVs?
@Curt Renz
Reddit user __TSLA__, who I believe is @Fact Checking, would like more context about your recent quote from Todd Rosenbluth.
It's not entirely clear from the quote whether Todd is suggesting that index funds usually get ~7 days' notice ahead of inclusion, separate from the official public announcement, or whether he is referring to the public announcement itself.