General Discussion: 2018 Investor Roundtable

Pulling from market:
There's a benefit of selling to non-owners.

Current owners have already spread the gospel to their friends/relatives. A non-owner brings a new community of friends/relatives who will then learn about the benefits of EVs.

In the process, one disgruntled voice will be silenced...

That's the best-case scenario.
Worst case: the purchaser doesn't understand how charging, cold, wind, and rain affect EV range, and complains to all their friends and family. And a new disgruntled voice is created.
 
  • Like
Reactions: Tenable
I’d like to put an end to the nonsense of Tesla being valued similarly to companies that produce several times more units.

Not only does the “competition” produce vehicles that live in the past, Tesla’s ENTERPRISE value is a fraction of that of the functionally antiquated road transporter (FART) makers.

More than half of the ENTERPRISE VALUE of many traditional FART makers is held by bondholders, so market cap alone is not an informative comparison.

Be informed against FUD.

Edit: Of course, if you are simply trying to counter the FUD that uses a ratio of market cap to production units, then enterprise value would be a much smarter numerator. All capital is ultimately used for production.
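
For anyone who wants to see the arithmetic, here is a rough sketch of the comparison in Python. All numbers are made-up placeholders for illustration, not actual figures for Tesla or any other maker.

# Enterprise value = market cap + total debt - cash: roughly what it would cost
# to buy the whole company, which is why it is the fairer numerator.
def enterprise_value(market_cap, total_debt, cash):
    return market_cap + total_debt - cash

def ev_per_unit(market_cap, total_debt, cash, annual_units):
    return enterprise_value(market_cap, total_debt, cash) / annual_units

# Hypothetical growth EV maker: modest debt, low volume.
print(ev_per_unit(60e9, 10e9, 3e9, 100_000))       # ~$670k of capital per unit
# Hypothetical legacy maker: heavy debt held by bondholders, huge volume.
print(ev_per_unit(50e9, 120e9, 20e9, 10_000_000))  # ~$15k of capital per unit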

This.

Sorry, typing at 5:30 am, in bed, with my eyes half open, is clearly no bueno... The feedback I got on Twitter:

[Screenshot of the Twitter feedback]


No dude. I was half asleep.
 
  • Funny
Reactions: gigglehertz and jhm
@Starno
Well, that is one interpretation.
My thoughts are that they are not using DRL (deep reinforcement learning) for the entire driving problem. Rather, they are using image recognition/detection + a physics model (path prediction) + rules of the road.
If you already have a codified set of laws/rules for driving, it seems highly inefficient to then train a DNN to follow them via DRL, especially since doing so would require implementing the entire law/rule set to produce the reward function.
Example: stop at a red light.
Examine the scene: is there a red light in our lane? If so, stop behind the stop line. "Here is what you do."
Vs. a negative reward for going through a red light, with no additional data regarding what a red light is or what a stop line is. "Oh, you know what you did wrong, figure it out yourself."
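
Roughly what that contrast looks like in code, as a minimal sketch; the scene fields, function names, and reward value are made up for illustration, not anything Tesla actually uses.

# "Here is what you do": a rule coded directly against the perception outputs.
def red_light_rule(scene):
    # scene.red_light_in_lane and scene.stop_line_distance_m are hypothetical
    # outputs of the image recognition + path prediction layers.
    if scene.red_light_in_lane:
        return {"action": "brake_to_stop", "target_m": scene.stop_line_distance_m}
    return {"action": "proceed"}

# "Figure it out yourself": a DRL reward that only punishes the violation.
# The network gets no notion of what a red light or a stop line is; it has to
# discover both from a huge number of penalized episodes.
def red_light_reward(ran_red_light: bool) -> float:
    return -100.0 if ran_red_light else 0.0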

I wish I could give this two reactions: Funny and Helpful.

Could you please expand on your example some more? Which method is Tesla using? At what point is Tesla using DNN vs. rules?

I understand a full explanation requires several courses/books, but I'm hoping you could maybe expand your example with a few more sentences?
 
@Starno
Well, that is one interpretation.
My thoughts are that they are not using DRL (deep reinforcement learning) for the entire driving problem. Rather, they are using image recognition/detection + a physics model (path prediction) + rules of the road.
If you already have a codified set of laws/rules for driving, it seems highly inefficient to then train a DNN to follow them via DRL, especially since doing so would require implementing the entire law/rule set to produce the reward function.
Example: stop at a red light.
Examine the scene: is there a red light in our lane? If so, stop behind the stop line. "Here is what you do."
Vs. a negative reward for going through a red light, with no additional data regarding what a red light is or what a stop line is. "Oh, you know what you did wrong, figure it out yourself."

Much of the complexity will also be coded into the maps. What I mean by that is that the car does not need to be able to drive like a human, because there will be data from the high-def maps required to make all this stuff work. The maps will include the normal paths and lanes and the constraints on where the car can drive. They will include speed limits, curvatures of roads as defined by the paths, where to stop or where to look for a light, and exactly where the stop line is, to within a few centimeters. There are limits, and road conditions can change, so the system still needs to be able to do human-like things, like object recognition and knowing whether an object is something to stop for.

With no other cars on the road and no changes to the roads, the HD maps and the ability to read stop lights would be enough to drive anywhere. Obviously that is not good enough, but it limits the scope a bit, from everything in the world to everything along a set path within a map tile. These tiles would be updated fairly frequently by sending data back from the cars to identify new objects, many of which will not be permanent, though some could be actual changes to the road and environment that matter.

My understanding is that vision + radar and GPS are good enough to get that 5 cm minimum accuracy, and you do not even need GPS the entire time you are driving. Once you know your location down to 5 cm, you continue to know it and just periodically verify that you are still where you think you are, which may or may not require GPS.

At the end of the day, this stuff is hard and it's going to take the best of the best working harder than they have ever worked before. It sounds right up Tesla's alley.
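
To make the map-tile idea concrete, here is a rough sketch of the kind of data such a tile could carry. The field names and the 5 cm figure are my illustrative assumptions, not Tesla's actual format.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LaneSegment:
    centerline: List[Tuple[float, float]]  # the normal path the car is expected to follow
    speed_limit_kph: float
    curvature: float                       # as defined by the path geometry

@dataclass
class StopConstraint:
    stop_line: Tuple[float, float]  # where to stop, to within a few centimeters, in tile coordinates
    watch_for_signal: bool          # whether a light governs this stop

@dataclass
class MapTile:
    tile_id: str
    lanes: List[LaneSegment] = field(default_factory=list)
    stops: List[StopConstraint] = field(default_factory=list)
    version: int = 0                # bumped when fleet data reports a real change to the road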
 
  • Like
Reactions: Duffer and mongo
I would hope someone purchasing a $50k car would do at least a little homework beforehand.

As would I. Yet I know people who did not know the oil (ICE car, of course) needed to be changed/checked until their engine stopped running (OK, not a $50k car).

Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.
-Albert Einstein
 
  • Like
Reactions: Duffer
I wish I could give this two reactions: Funny and Helpful.

Could you please expand on your example some more? Which method is Tesla using? At what point is Tesla using DNN vs. rules?

I understand a full explanation requires several courses/books, but I'm hoping you could maybe expand your example with a few more sentences?

I'd say it helps to think about problem types/classifications.
Things that cannot be fully enumerated, like Go positions, need a DNN because the system cannot be fully modeled.
Same with image recognition: how can one fully define what a cat picture looks like?
On the other end of the spectrum are things like Pong, which are fully physics-defined. From a small video sample, you can determine with high accuracy where the paddle needs to be.

For driving, you have a blend of the two. On the abstract side: what is a lane marker, what is a pedestrian, what is a vehicle (truck, bike, car, bus)? On the deterministic side: what do you do at a red light, a four-way intersection, a roundabout? How do you handle speed and lane position? These things can be programmed as an interaction of the abstract items identified by the DNN.
There is nothing to be gained from an NN trying to learn how to deal with a 4-way stop through trial and error. It would also require millions of examples of a 4-way stop for the training set. Once the SW can identify a 4-way stop, other vehicles, pedestrians, and arrival order, the code to handle the correct interaction is simple. (Like training monkeys to write Shakespeare: it's already written, so use it.) Another example: for the self-taught Go-playing AI, AlphaZero, they programmed in the rules of Go rather than adding a large negative reward for violations to force the NN to discover them itself.
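
A minimal sketch of that split: a hypothetical perception DNN supplies the detections, and plain code applies the rule of the road. Nothing here is Tesla's actual software.

def my_turn_at_four_way_stop(my_arrival_time: float, detected_vehicles: list) -> bool:
    # detected_vehicles is the hypothetical output of the perception network, e.g.
    # [{"id": 7, "arrival_time": 3.2, "yielding": False}, ...]
    earlier = [v for v in detected_vehicles
               if v["arrival_time"] < my_arrival_time and not v["yielding"]]
    # The rule of the road, written directly: first to arrive proceeds first.
    return len(earlier) == 0

# We arrived at t = 4.0 s; one car arrived earlier and is not yielding, so we wait.
print(my_turn_at_four_way_stop(4.0, [{"id": 7, "arrival_time": 3.2, "yielding": False}]))  # False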

Life can be simpler for the SW if the map tile identifies that it is a four-way stop, but I feel that is a crutch that comes with its own problems. What if there is a detour? What if it changes to a 2-way stop? How do all the 4-way stops get identified initially?

The other day, they were repainting the lines in a 2-lane roundabout, so a hard-coded path system would fail badly there. There needs to be an additional catch-all layer of maneuvering that disregards normal traffic flow in the correct circumstances: need to go, normal lane not available, move cautiously into the open lane.


P.S. Why was my post funny?
 
@Starno
Well, that is one interpretation.
My thoughts are that they are not using DRL (deep reinforcement learning) for the entire driving problem. Rather, they are using image recognition/detection + a physics model (path prediction) + rules of the road.
If you already have a codified set of laws/rules for driving, it seems highly inefficient to then train a DNN to follow them via DRL, especially since doing so would require implementing the entire law/rule set to produce the reward function.
Example: stop at a red light.
Examine the scene: is there a red light in our lane? If so, stop behind the stop line. "Here is what you do."
Vs. a negative reward for going through a red light, with no additional data regarding what a red light is or what a stop line is. "Oh, you know what you did wrong, figure it out yourself."

One thing about the blog post that struck me was that he approved of using local maxima, but Elon chided the Lidar solution as just that. I’m assuming the local max Elon was referencing was the use of extensively detailed pre-made maps so that Lidar can accurately position the car in good weather.
 
One thing about the blog post that struck me was that he approved of using local maxima, but Elon chided the Lidar solution as just that. I’m assuming the local max Elon was referencing was the use of extensively detailed pre-made maps so that Lidar can accurately position the car in good weather.

Are you referring to the part "Local optima are good enough"?

Not speaking for Elon, but my feeling on Lidar is that it does not remove the need for cameras (sign/signal recognition and such), so the main advantage of Lidar is that it lets you skip the image-to-object/distance conversion algorithm. If you can solve that problem, then there is little to be gained from the additional sensor cost.
Lidar plus cameras provides a quick local optimum, but cameras with the proper software (and optionally radar) provide a better optimum (lower sensor cost/complexity). (Not going to start a firestorm over whether it is a global optimum.)
 
  • Like
Reactions: kbM3
One thing about the blog post that struck me was that he approved of using local maxima, but Elon chided the Lidar solution as just that. I’m assuming the local max Elon was referencing was the use of extensively detailed pre-made maps so that Lidar can accurately position the car in good weather.

Local in the time domain. You have to fully solve it with cameras.

Err, if you put cameras at the top of the A-pillars, there should be enough stereoscopic leverage to easily (in a computationally light way) train the cameras to return the range of an object.
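
The geometry behind that "stereoscopic leverage" is just the rectified-stereo range equation Z = f * B / d. A quick sketch with illustrative numbers (not the car's actual camera specs):

def stereo_range_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    # Pinhole stereo: range = focal length (px) * baseline (m) / disparity (px).
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the object is effectively at infinity
    return focal_px * baseline_m / disparity_px

# Roughly A-pillar-width baseline of 1.5 m and a 1000 px focal length:
print(stereo_range_m(1000.0, 1.5, 30.0))  # 30 px of disparity puts the object at ~50 m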
 
  • Like
Reactions: kbM3 and DurandalAI
Local in the time domain. You have to fully solve it with cameras.

Err, if you put cameras at the top of the A-pillars, there should be enough stereoscopic leverage to easily (in a computationally light way) train the cameras to return the range of an object.

I have a lazy eye or some such thing, so stereo vision is not my friend. One camera, or even better one camera + movement, can also give distance, especially in a vehicle system where objects are expected to follow perspective/vanishing-point geometry and you have the roadway as a ruler/baseline reference plane.
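
The single-camera version of that roadway-as-ruler trick, under a flat-road assumption, is Z = f * h / y: a camera at height h with a level optical axis sees the ground contact point of an object y pixels below the image center. Numbers here are illustrative only.

def ground_plane_range_m(focal_px: float, camera_height_m: float, pixels_below_center: float) -> float:
    # Flat-ground, level-camera approximation; breaks down on hills or for objects off the ground.
    if pixels_below_center <= 0:
        return float("inf")  # at or above the horizon: not on the local ground plane
    return focal_px * camera_height_m / pixels_below_center

print(ground_plane_range_m(1000.0, 1.3, 26.0))  # ~50 m for a camera mounted 1.3 m up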
 
I saw people ragging on TT007 in the Market Action thread. Some even said he's a contrarian indicator. Yes, he has had some badly timed enthusiastic posts. I even cautioned him before earnings that when I get as enthusiastic as he was, it doesn't end well.

Well here's his tweet from yesterday:
[Screenshot of the tweet]

007 on Twitter

So if you sold based on him being a contrarian indicator, you missed out on the $11.75 move up today. With that said, we probably aren't going over $600 by September 2018. He should probably tone down words like "likely" to "possibly," but other than that I think people are giving him too much flak.

A lot of people probably don't know or remember his background/trading style. Don't forget he was buying big sub-$200 in 2016, so he's already made a ton of money. He is also mostly in stock and needs a huge drop to get a margin call, so making short-term predictions that are wrong doesn't hurt him; just don't use his tweets for short-term trading. Eventually he will be right and we will go over $400. Whether we go over $400 next month or next year, he and everyone long the stock will do well. Most of the people upset at him are doing short-term options. Reminds me of people upset at Elon for not meeting deadlines and losing money on options. I had a bad trading year last year because of the double pushback of the 5k/week goal, which killed my calls and call spreads, but I don't blame TT007 or Elon for my losses. It was my own hubris for thinking I was outsmarting the market.

Remember, options should be priced by the market so that over time the sellers of the options make money. This has to be true, or call sellers would never sell them, because they are taking a LOT more risk than the buyer. No one in their right mind takes more risk for less reward over the long term. For call buyers to repeatedly make money buying calls, either the market is consistently mispricing them or the buyer has remarkable timing for getting in and out before time decay works in the seller's favor. Before you ever buy calls or puts, ask yourself this first: "Why do I think I have an edge over the seller here? Why is the seller willing to sell this option to me at this price?"
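
To put a rough number on that time decay, here is a back-of-the-envelope sketch using the textbook Black-Scholes call formula. The spot, strike, rate, and volatility below are illustrative placeholders, not a real quote.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # European call price: S = spot, K = strike, T = years to expiry,
    # r = risk-free rate, sigma = implied volatility.
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Same out-of-the-money strike while the stock goes nowhere: the call bleeds
# value as expiry approaches, which is the edge the seller is collecting.
for months in (6, 3, 1):
    print(months, round(bs_call(S=330, K=400, T=months / 12, r=0.02, sigma=0.5), 2))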
 