2017 Investor Roundtable: General Discussion

I'm not sure that "self-discharge" and "vampire drain" are the same thing. Some kinds of batteries really do go down even when not connected to anything at all, and that's usually referred to as self-discharge. But I could be wrong here; maybe vampire drain is what they meant.
Vampire drain happens only in sunlight.
 
I think bringing seat production in house is a great idea. Producing seats automatically should be part of the Dreadnought plan. Model 3 is a high-volume product, and all seats are identical except for color. Automating seat production will greatly reduce the cost. Over time this approach will create a strong moat that other companies can't do much about.

I looked at the inside of a car seat (leather, heated, 8-way power); on the mechanical and electrical side there is nothing that can't be produced automatically. The cover is tricky, but I am 100% sure it can be automated. I can think of ways to do it, and I am sure Tesla can do it with an even better approach. The initial cost is high, but then you produce millions upon millions at low cost and with consistent quality.

We should not view Dreadnought as a future plan; pieces of it are probably being built and put into use one at a time.

There is no way shorts can cover with my shares. Instead, I am adding more.
 
 
In the spirit of a village idiot like Nasrudin, all my knowledge is based on the old paper, "Dave," by NVDA and a recent series of articles in the New York Times on AI. The latter is mostly a survey, but one idea stood out: AI researchers are concerned about the machines' resistance to the off switch.

In my simpleton understanding of the "Dave" approach, it appears above that you are doing the work of the machine. I think the problem is easier to understand. What programmers must do, it seems to me, is far simpler than you sketch out. The machine is told: here is a data set. What you are receiving are inputs from an environment; we call it a car being navigated. Learn what "the car" is doing. They are teaching the machine how to drive; they are not specifying what the machine is supposed to learn. The drivers of the car are doing, in many subtle ways, the thinking you want to formalize. With the variety of drivers and situations encountered, the machine will learn from us how to drive. No?

To protect ourselves from malicious AI, we have to show the machines good behavior and why bad behavior is to be avoided. Since, in a first-principles sense, all intelligence is a hybrid of machine and human, what we must do is lead by good example and teach that morality to the machines. Bearing in mind, of course, George Bernard Shaw's observation: "if you must provide yourself as an object lesson for your children, do so as a threat and not an example." That is the dilemma of the off switch.

I refrain from elaborating on the example, "you're fired," or observing that we have a Potemkin White House. That would be OT for investors.

Please let me know where I am wrong. Obviously, I don't know **sugar** about programming, although I was exposed to HTML in preparation for a stand-alone online course at just about the time HTML5 was being introduced.

I'd say that in the 2nd paragraph you have a very good sense of the problem/solution approach and how it can work. If you have enough data, it's even possible for the network to figure out the rules of the road (including the unwritten ones). Heck, with enough data and training time, I'd expect such a network to have a reasonable chance at identifying right-turn-on-red and left-turn-on-red situations and acting on them appropriately.
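For anyone curious what "learn from the humans" looks like in practice, here's a toy sketch of the idea (behavioral cloning, roughly in the spirit of the "Dave" paper mentioned above): show a network camera frames and the steering angle the human chose, and train it to imitate. The layer sizes, shapes, and random data below are all made up for illustration; this is not Tesla's actual stack.

```python
# Toy behavioral-cloning sketch (PyTorch): map camera frames to the steering
# angle a human driver chose. Sizes and data are invented for illustration.
import torch
import torch.nn as nn

frames = torch.randn(256, 3, 66, 200)   # pretend RGB dashcam frames
steering = torch.randn(256, 1)          # what the human driver actually did

model = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(48 * 5 * 22, 100), nn.ReLU(),   # 48x5x22 is the conv output size
    nn.Linear(100, 1),                        # predicted steering angle
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# "Drive like the humans drove": shrink the gap between the network's
# steering and the logged human steering.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), steering)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

The point isn't the architecture; it's that nowhere in there does anyone write down "stop at red lights." Whatever rules get learned have to emerge from the data.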


The idea I've been trying to articulate is different, though. There's been a recurring stated fear/concern that AI is going to take over. For me that concern is overblown, because even if AI is better at solving defined problems, there's no evidence yet that it can decide what problem to solve. In more closed environments (such as driving a car), AI / numerical analysis techniques are starting to be able to discern, from the data and patterns of behavior, what the objective in the environment is, and therefore the "rules of the road" that create success in that environment.

That's still different from deciding that autonomous driving is a problem to be solved.

And it's different from establishing the initial parameters of the learning environment. In Tesla-specific terms: designing the cars to capture driving data from us humans while we drive and ship it back to the mother ship, and then feeding that data into a learning environment from which a neural network can start learning recurring patterns and, hopefully, eventually the rules of the road and what safe driving looks like. And yes - you can "pollute" a neural network with examples of bad driving behavior, but it's difficult: you need enough of them that they look like the norm instead of the exception. Billions of miles of good-to-OK driving swamp dozens of miles of bad driving.
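To put a rough number on that swamping effect (the figures below are invented for illustration, not anything Tesla has reported):

```python
# Back-of-the-envelope: how much does a little bad driving matter against the
# fleet's total? Numbers are made up purely to illustrate the ratio.
good_miles = 1_000_000_000   # hypothetical miles of normal-to-good driving
bad_miles = 50               # hypothetical "dozens of miles" of bad driving

bad_share = bad_miles / (good_miles + bad_miles)
print(f"bad driving is {bad_share:.8%} of the data")   # roughly 0.000005%
```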

And it'll be humans that translate whatever models are learned into a program that gets downloaded back into our Teslas, and that set boundary conditions with each release around what driving decisions that program will and will not make. Or it'll be humans that design the direct translation of learned and validated models into updates that are downloaded to our cars. I figure we're at least years, and probably decades, away from revisions to learned models going through automated testing and validation and then being automatically downloaded, so that the round trip between driving experience, learned model, and updated driving model/logic has no humans directly involved.
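As a rough sketch of what those human-set boundary conditions might look like (the limits, names, and units below are hypothetical, not anything Tesla has published): whatever the learned model proposes, a hand-written wrapper decides what the shipped program will and will not do.

```python
# Sketch of human-specified "boundary conditions" around a learned model's
# output. All limits and names are hypothetical, chosen for illustration.
def bounded_control(model_steering_deg: float, model_accel_mps2: float,
                    speed_limit_mps: float, current_speed_mps: float):
    """Clamp a learned model's proposals to human-chosen safety envelopes."""
    # Hard caps set by engineers for this release, not learned from data.
    MAX_STEERING_DEG = 25.0
    MAX_ACCEL_MPS2 = 3.0
    MAX_DECEL_MPS2 = -8.0

    steering = max(-MAX_STEERING_DEG, min(MAX_STEERING_DEG, model_steering_deg))
    accel = max(MAX_DECEL_MPS2, min(MAX_ACCEL_MPS2, model_accel_mps2))

    # Refuse to accelerate past the posted limit, whatever the model proposes.
    if current_speed_mps >= speed_limit_mps and accel > 0:
        accel = 0.0
    return steering, accel

# Example: the model asks for an aggressive 40-degree swerve at highway speed.
print(bounded_control(40.0, 1.0, speed_limit_mps=29.0, current_speed_mps=31.0))
# -> (25.0, 0.0): the release's boundary conditions override the raw model output
```

The learned model can get smarter with every data round trip, but for a long while the envelope around it stays a human decision made release by release.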
 