Autonomy Investor Day - April 22 at 2pm ET

Tesla Raises the Bar for Self-Driving Carmakers | The Official NVIDIA Blog

Interesting that Nvidia feels that they and Tesla are the only companies offering the best AI computing power.
Well, that is not really surprising. They are less concerned about Tesla at this moment (since Tesla is not competing with them, as long as it doesn't sell its SoC to other companies) than about Intel/MobilEye (which is a direct competitor in the automotive space).

Contrary to what Nvidia claims in this blog, they are not "the only other company framing this problem in terms of trillions of operations per second, or TOPS". MobilEye has touted the scalability and efficiency of their SoCs in TOPS/watt for years (see e.g. here). It's also interesting that Nvidia didn't comment at all on the power efficiency, which was rightly emphasized by Musk yesterday. And, of course, they couldn't resist throwing some shade on Tesla: "Third, Tesla is working on a next-generation chip, which says 144 TOPS isn’t enough."
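For a rough sense of why TOPS alone is misleading: efficiency is just throughput divided by power. Taking the figures floated publicly (and treating them as approximate), Tesla's 144 TOPS at the roughly 72 W quoted at Autonomy Day works out to about 2 TOPS/W, while Nvidia's DRIVE AGX Pegasus at its advertised 320 TOPS and 500 W comes to roughly 0.64 TOPS/W. The comparison is crude since the numbers aren't necessarily measured the same way, but it shows why Nvidia might prefer not to frame the problem per watt.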
 
That is a good question. My guess is that they used this clause from the CA DMV regulation on autonomous vehicle testing:

"An autonomous test vehicle does not include vehicles equipped with one or more systems that provide driver assistance and/or enhance safety benefits but are not capable of, singularly or in combination, performing the dynamic driving task on a sustained basis without the constant control or active monitoring of a natural person."

So as long as the system is considered just a driver assistance system, they don't need to report. Also, the reports are due only once a year (in January). If they cross the threshold from driver assistance to autonomous driving this year, we'd only see a report in 1/2020.

That was Uber's argument and it failed about as well as any attorney would've surmised.

If it was an employee car and they did this, then they must have an FSD testing permit in hand and must report disengagements (or the lack of them) by January 2020.
 
Comparisons between human eyes and the car's cameras are not that meaningful. It's the processing behind them that's truly important. Humans possess higher-order thinking compared to an NN. An NN is based on probability and approximation; at some level I'd say it's equivalent to a human's first-order processing. Human intuition is a lot less straightforward to model. Basically, when you're driving you 'know' not to drive into the car in front of you or drive off the road, through a stop sign, into a deer, or into a barrier. The NN 'learns' that humans don't do this, so it doesn't do it either. But if all of a sudden you hit a puddle, your entire car gets covered in mud, and you can't see anything, as a human you can use all your prior knowledge of driving, the immediate past, etc. to decide on your best course of action. It's likely a situation you've never encountered, but your brain's higher-order thinking can figure out what to do. The NN will be completely lost and (at least in current implementations) it will just force the driver to take over.

I'd like to see these 'unknown' situations applied to the FSD and see how it reacts.
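For what it's worth, today's systems mostly don't reason their way out of that situation; they gate on confidence and hand control back. Here's a minimal sketch of that idea in Python (the perceive() call, thresholds, and frame rate are made up for illustration; this is not Tesla's actual stack):

# Minimal sketch of a confidence-gated takeover; not Tesla's actual logic.
# Assumes a hypothetical perceive() returning detections with softmax-style
# confidence scores in [0, 1].

CONFIDENCE_FLOOR = 0.5   # below this, treat the frame as unreliable
MAX_BAD_FRAMES = 12      # roughly a third of a second at 36 fps

def monitor(frames, perceive):
    bad_streak = 0
    for frame in frames:
        detections = perceive(frame)
        # An out-of-distribution input (e.g. a mud-covered lens) tends to
        # produce few detections and low confidences across the board.
        if not detections or max(d.confidence for d in detections) < CONFIDENCE_FLOOR:
            bad_streak += 1
        else:
            bad_streak = 0
        if bad_streak >= MAX_BAD_FRAMES:
            return "REQUEST_DRIVER_TAKEOVER"
    return "OK"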
 
What the average person saw?
Lots of boring techno babble
A new computer
A cell phone app

(No video? Is that it? Why did I watch this...)

And a surly, snippy, short-tempered Elon
looking like he was barely holding his temper
snapping at employees
*
Financial Times has an article about Skabooshka

Over-promise

Points the finger at journalists, Big Oil, short sellers,
and irrelevant jerks on twitter

Rookie errors, repeated.
The once great media support
Long Gone
*
If there is anybody who can talk to the man, do it
Don't be polite about it
Maybe that is what it will take
 
I'd like to see these 'unknown' situations applied to the FSD and see how it reacts.

The part where they mentioned driving on a snow-covered road was pretty interesting. They said it actually did pretty well, even without seeing the lane lines, but that they hadn't actually trained it to drive in snow yet. So that might be an "unknown" situation that it handled adequately.
 
Always or just in winter? I don’t have this problem, and it rains a lot here on the east coast.
It's not always. It was raining hard two weeks ago and it went out due to bad weather, which is a different use case. Normally it rains, and then when it stops raining it goes out due to road grime from the interstate.
My repetitive route is I-71 between Cincinnati and Columbus, OH. It's not like it's a gravel or dirt road. I got my Tesla in September, so I'll see how it goes from here on out.
 
That is a good question. My guess is that they used this clause from the CA DMV regulation on autonomous vehicle testing:

"An autonomous test vehicle does not include vehicles equipped with one or more systems that provide driver assistance and/or enhance safety benefits but are not capable of, singularly or in combination, performing the dynamic driving task on a sustained basis without the constant control or active monitoring of a natural person."

So as long as the system is considered just a driver assistance system, they don't need to report. Also, the reports are due only once a year (in January). If they cross the threshold from driver assistance to autonomous driving this year, we'd only see a report in 1/2020.

I wonder if Tesla will just not do the DMV disengagement reports at all. Their methodology does not seem to require it, since they don't need to drive test cars around CA roads to validate or gather real-world data. Tesla can gather a ton of real-world data directly from the fleet. They can develop software, run it in shadow mode in the fleet of existing cars on the road, fine-tune it, then release features as L2 to the fleet again for more validation, rinse and repeat until the software is ready (a rough sketch of that loop is below). Tesla seems to aim to do this for all of FSD. So I wouldn't be surprised if Tesla does this: eventually releases FSD as L2 to the public, fine-tunes it, and then eventually validates it for L4 without ever doing any public testing that requires disengagement reporting.

Although, just as Tesla reports Autopilot accidents, I would love it if Tesla released the number of miles between disengagements for the entire fleet once FSD is released.
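On the shadow-mode loop mentioned above, here's a rough sketch of what that comparison could look like. The function names, thresholds, and the "disagreements per mile" metric are my own assumptions for illustration, not anything Tesla has published:

# Illustrative shadow-mode comparison; not Tesla's implementation.
# The candidate planner runs silently alongside the human driver; only
# snapshots where the two disagree badly get flagged for review.

STEER_TOL = 0.05   # radians; invented disagreement threshold
BRAKE_TOL = 0.2    # fraction of max braking; also invented

def shadow_compare(snapshots, candidate_planner, miles_driven):
    flagged = []
    for snap in snapshots:                    # recorded sensor + control data
        planned = candidate_planner(snap.sensors)
        human = snap.driver_controls
        if (abs(planned.steer - human.steer) > STEER_TOL
                or abs(planned.brake - human.brake) > BRAKE_TOL):
            flagged.append(snap)              # candidate would have acted differently
    # Disagreements per mile is the kind of fleet-level metric you could track
    # release over release, much like the miles-between-disengagements idea.
    rate = len(flagged) / max(miles_driven, 1e-6)
    return flagged, rate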
 
As far as redundancy goes, all we need is enough to get the car safely off the road in the case of a failure. Dual computers seem required for that, but a broken B-pillar camera would not be a serious problem. The car would still be able to pull over and stop, and could continue forward until it was safe to pull over. L5 doesn't require the car to continue a cross-country trip even with a broken camera, does it?

I'm fine with no LIDAR as well. What kind of innovation would we have if Tesla had to use it just because everyone else was using it? It's clearly not required for FSD, but it might make it "easier" or better in certain conditions. Tesla has chosen not to use it. If it turns out LIDAR enables FSD much earlier, maybe I'll be buying a car from some other company. That's the bet Tesla is making. We'll see the answer in the future.

I think feature complete just means the car will be able to drive without driver intervention on some routes, with the last two capabilities Tesla has listed for FSD. I don't think any reliability is implied. Thus, it will stop at simple stoplights and signs (maybe just 99% of the time), make turns at intersections, and maybe read signs, at least. Enough to make some easy drives without intervention. Very much like AP is currently on limited-access roads, with nags. Sure, lots of specific cases will eventually need to be handled, but navigating through a construction zone is not a "feature" I would list on a sales brochure. Tesla will then work to improve it by reducing interventions to the point where nags are not required and broadening the range of conditions it can handle. Then we'll see if they can convince a regulator that it's safe for use with an inattentive driver.
 
As far as redundancy goes, all we need is enough to get the car safely off the road in the case of a failure. Dual computers seem required for that, but a broken B-pillar camera would not be a serious problem. The car would still be able to pull over and stop, and could continue forward until it was safe to pull over. L5 doesn't require the car to continue a cross-country trip even with a broken camera, does it?

Usually sensor redundancy does not mean just covering a broken sensor; it also means an independent ability to confirm what is sensed. If the B-pillar camera misses a fast-approaching object from the side for whatever reason, there is no secondary sensor to catch it in the 90-degree intersection scenario. There is literally just that single B-pillar camera.
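To make the distinction concrete, here is a small illustration (my own sketch, not Tesla's architecture) of the difference between "the sensor still works" and "the detection is independently confirmed". The detect() calls and the secondary sensor are hypothetical:

# Illustration only: redundancy means a second, independent source can
# confirm or contradict a detection. With a single B-pillar camera there is
# no second source, so the planner can only compensate behaviourally.

def assess_cross_traffic(b_pillar_cam, secondary_sensor=None):
    cam_sees_object = b_pillar_cam.detect()          # hypothetical camera API
    if secondary_sensor is None:
        # Single-sensor case: anything the camera misses is simply invisible,
        # so the car has to creep/yield rather than trust a "clear" reading.
        return cam_sees_object, False                # (detected, confirmed)
    other_sees_object = secondary_sensor.detect()    # e.g. a side radar, if one existed
    confirmed = cam_sees_object and other_sees_object
    return (cam_sees_object or other_sees_object), confirmed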
 
Musk says they don't do premapping, so probably just thoroughly tested. Obviously it is a route they have used in simulations, so it's been tested a lot and was good for their demo.

What Elon says versus what's actually going on is night and day. He will say whatever he feels makes him look superior.

But Tesla actually uses HD lane maps, exactly what he says they don't do. In fact, the HD map development has advanced and is now in version 3.

Using NOA in an area with insufficient maps leads to disastrous results.

But like I said again, the demo was a complete non-event and a PR spin job.
 
Parody moment:

Andrej was channeling his inner @Bladerskb when he answered that question about triggers only affecting very small, specific cases and creating small amounts of specific data. Meanwhile, Elon was doing his @strangecosmos impression about how every mile trains the fleet. :)
I don't think Elon's words about the fleet being trained by billions of miles of driving are wrong at all.

Look at it this way: training the network is not about feeding some Matrix-style computer thousands of terabytes of plain, well-marked roads. That data is useless. Even if you had the ability and capacity to actually feed everything into the network, the results wouldn't be good, because basically any uncommon scenario would be drowned out by all that common data.

What you are interested in is the edge scenarios: every time the driver did something not predicted, every time you see something not recognized, generally every time you disengaged the system. Edge scenarios happen frequently in the beginning, but with every iteration of the NN they happen more rarely. At some point maybe you have just one abnormal situation every 1000 km, but to get that abnormal situation you still have to drive those 999 km first. Those 1000 km still count as fleet experience improving/training the system; you just send only the useful data needed for the next iteration, not the flawless data or data you won't need until a future iteration.

It's like a carpenter with 10 years of work experience. If he mounts drywall for 30 days, he's not learning anything new until a difficult problem shows up. He finds a solution, and next time maybe it takes 60 days for the next problem to appear because he has more experience. In the end, despite the low learning rate (transferred data), he still has 10 years of carpentry experience. What you'd do, of course, is give the carpenter a more challenging project, which is why Tesla also gradually increases the difficulty of the rare events it asks the fleet for detailed info about.
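A toy version of what such a "trigger" could look like, just to show why the uploaded data ends up being a tiny, targeted slice of those billions of miles (the trigger conditions and field names here are invented for illustration, not taken from Tesla):

# Toy trigger filter: the fleet drives everything, but only clips matching a
# campaign's trigger conditions get uploaded. Conditions here are invented.

def should_upload(clip):
    triggers = [
        clip.driver_disengaged,                  # driver did something unpredicted
        clip.unrecognized_object_score > 0.8,    # detector unsure what it saw
        clip.hard_brake and not clip.lead_vehicle_predicted,
    ]
    return any(triggers)

def collect_campaign(drive_log):
    # Out of, say, 1000 km of driving, this typically keeps a handful of clips.
    return [clip for clip in drive_log if should_upload(clip)]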
 
Not sure why it would in Tesla's case. The system that we currently have is specifically not designed to allow autonomous driving "on a sustained basis" without "active monitoring of a natural person". Maybe once FSD is "feature complete" that will change.

California Halts Self-Driving Cars in San Francisco

The issue is that regulatory agencies don't play fast and loose with semantics. Tesla's FSD software, if it exists now, is not intended to require active monitoring. The demo was hands-free, unlike AP.

Even if you add in red lights and intersections, a system that nags you every 30 seconds or lacks active driver monitoring might still trigger regulatory oversight if it's intended to be FSD (and Tesla is unambiguous that certain branches will fall into that category by the end of this year, thus necessitating the same kind of investigation as what happened to Uber).

Tesla should be leaning on California legislators to green-light laws permitting end-user use of FSD features, to bypass the current regulatory scheme, which is aimed at testing rather than end-use scenarios (presumably because FSD was assumed to always be a fleeting reality).
 
California Halts Self-Driving Cars in San Francisco

The issue is that regulatory agencies don't play fast and loose with semantics. Tesla's FSD software, if it exists now, is not intended to require active monitoring. The demo was hands-free, unlike AP.

Even if you add in red lights and intersections, a system that nags you every 30 seconds or lacks active driver monitoring might still trigger regulatory oversight if it's intended to be FSD (and Tesla is unambiguous that certain branches will fall into that category by the end of this year, thus necessitating the same kind of investigation as what happened to Uber).

Tesla should be leaning on California legislators to green-light laws permitting end-user use of FSD features, to bypass the current regulatory scheme, which is aimed at testing rather than end-use scenarios (presumably because FSD was assumed to always be a fleeting reality).
To quote the article:

It added that all of Uber’s “self-driving” cars have a driver sitting in the passenger seat to take over if needed.

Assuming the article is correct (usually a safety driver sits in the driver's seat), this is very different from Tesla Autopilot. If the driver is not in the correct seat, the car is by definition self-driving, since they cannot take over like a normal driver. They should have a permit.
 
But like I said again, the demo was a complete non-event and a PR spin job.

 
To quote the article:

It added that all of Uber’s “self-driving” cars have a driver sitting in the passenger seat to take over if needed.

Assuming the article is correct (usually a safety driver sits in the driver's seat), this is very different from Tesla Autopilot. If the driver is not in the correct seat, the car is by definition self-driving, since they cannot take over like a normal driver. They should have a permit.

I'm pretty sure they sat in the driver's seat, but it was a while ago. They literally argued they were the same as AP and required constant human intervention. It's the same argument that failed.

The driver was in the driver's seat in the Arizona fatality, and AZ has basically no laws and welcomed that unfortunate, doomed project with open arms.