
Almost ready with FSD Beta V9

Even for humans, stereo vision (depth perception) is really only useful for near/close vision. At distance, we use monocular cues like relative image size, shadows, etc., not stereo vision. That isn't to say having two eyes isn't helpful; things like binocular summation do improve overall vision. But that's different from depth perception. We would need eyes spaced very, very wide apart for stereo to have any effect on depth perception at distance.

Teslas have their cameras pretty close together, so I don't see much use for stereo depth perception there; more likely it's just redundancy. I did read a paper a while back where they placed cameras at the far ends of the vehicle to gain some stereo depth perception.
Your points are well-taken and I generally agree, but I'd say you're a little pessimistic about the separation effect. Rangefinder cameras have a triangulation base length of one to three inches or so, and they can reliably separate 25 feet from 30 feet from infinity. Optical fire-control rangefinders with base lengths of three to six feet could accurately direct artillery at targets a mile or more distant (sorry for the non-metric examples).

So I think the width across a Tesla should be quite sufficient for triangulating objects within the range that matters for reaction safety and accurate braking calculations. The effectively continuous video-stream updating helps to average (filter) and refine the estimate at the frame rate, not to mention all the other supporting depth cues from the rolling perspective shifts etc. that you discussed.
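To put rough numbers on the baseline question, here's a minimal sketch of the standard pinhole-stereo relationships (depth = f·B/d). The focal length, matching error, and the two baselines below are my own assumptions for illustration, not Tesla camera specs:

```python
# Illustrative sketch of pinhole stereo geometry, not anything from Tesla's stack.
# Assumed values: ~1000 px focal length, 0.5 px disparity matching error,
# and two baselines: ~10 cm (cameras close together) vs. ~1.8 m (across the car).

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_uncertainty(focal_px, baseline_m, depth_m, disparity_err_px=0.5):
    """Approximate depth error for a given matching error: dZ ~= Z^2 * d_err / (f * B)."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

focal_px = 1000.0
for baseline_m in (0.1, 1.8):
    for depth_m in (10.0, 50.0, 100.0):
        err = depth_uncertainty(focal_px, baseline_m, depth_m)
        print(f"baseline {baseline_m:>4} m, target {depth_m:>5} m -> +/- {err:5.1f} m")
```

With these assumed numbers, the car-width baseline keeps the error at 100 m to a couple of meters, while the close-spaced cameras are off by tens of meters, which is roughly the distinction being argued about here.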
 
Traffic laws are pretty simple.

So much so even 16 year old kids can pass a written exam on them in most places.

It's trivial to program a computer to know all of em.

Not sure why you think this is hard- other than you appear to have not read some of them yourself and seem to imagine they're more complex than they actually are.

I am not arguing they are "complex" or that a computer would run out of memory trying to know all of them (although they do tend to be some of the longer sets of laws in many states, and traffic laws go all the way down to the city level). I am arguing that many of them are routinely ignored by the general driving public, that they are generally not absolute and cannot be directly measured, and that they can change dramatically between jurisdictions. For instance, this is the Oregon yellow-light law:

Steady circular yellow signal. A driver facing a steady circular yellow signal light is thereby warned that the related right of way is being terminated and that a red or flashing red light will be shown immediately. A driver facing the light shall stop at a clearly marked stop line, but if none, shall stop before entering the marked crosswalk on the near side of the intersection, or if there is no marked crosswalk, then before entering the intersection. If a driver cannot stop in safety, the driver may drive cautiously through the intersection.

So, you cross the border from CA into Oregon and suddenly the car is braking HARD for every light that goes yellow. Sometimes it stops, sometimes it drives slowly through.
And what lawyer codes the "stop in safety" logic? Is that a full ABS stop? Half a g? Do you have to consider the cars behind you and their chance of stopping? If you have stickier tires, do you have to brake harder?
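To make the "who codes it" point concrete, here's a minimal sketch of what such a rule tends to become in practice. The 0.35 g comfort limit and 1 s of latency are numbers I made up, which is exactly the kind of value the statute never supplies:

```python
# Hypothetical sketch only: deciding "can we stop in safety?" for a yellow light.
# Every constant here is an engineering judgment call, not something in the law.
G = 9.81

def should_stop_for_yellow(speed_mps, dist_to_stop_line_m,
                           max_comfort_decel_mps2=0.35 * G,
                           reaction_latency_s=1.0):
    """True if the car can stop before the line without exceeding the chosen
    deceleration limit; otherwise it would 'drive cautiously through'."""
    stopping_dist = (speed_mps * reaction_latency_s
                     + speed_mps ** 2 / (2 * max_comfort_decel_mps2))
    return stopping_dist <= dist_to_stop_line_m

# ~45 mph approach: too close to the line -> proceed; far enough back -> stop.
print(should_stop_for_yellow(20.0, 30.0))  # False
print(should_stop_for_yellow(20.0, 90.0))  # True
```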

Here's the WA law on following distance:
(1) The driver of a motor vehicle shall not follow another vehicle more closely than is reasonable and prudent, having due regard for the speed of such vehicles and the traffic upon and the condition of the highway.

How do you code "reasonable and prudent"?

Here's a good one from one city in Colorado. Would you expect your car to refuse to take you somewhere?:

"It shall be unlawful for any person to operate a motor vehicle, or as owner of a motor vehicle to permit its operation, past a traffic control point three times in the same direction within any three-hour period between the hours of 9:00 p.m. and 4:00 a.m"

I guess one question: Should a self driving car be allowed to drive in a city where it has not been fully vetted to be in complete compliance with every law on the books for that location? There are 19,500 incorporated places in the USA- every one could have a unique restriction. If it does break a law, who pays?
 
@gearchruncher, your worries are too theoretical.

... It will need to obey every law. Speed, waiting at stop signs for X seconds, stopping for yellow lights, only moving left to pass, turn signals, etc. It's going to drive quite a bit differently than human drivers around it, which will cause some interesting integration issues ...

Not so much in reality. That would be senseless. Teslas can be set up to exceed speed limits, and by how much. And in the Street NOA ("FSD beta") you can see the default "California stop" behavior at stop signs, never actually coming to a complete stop.
 
Not so much in reality. That would be senseless. Teslas can be set up to exceed speed limits, and by how much. And in the Street NOA ("FSD beta") you can see the default "California stop" behavior at stop signs, never actually coming to a complete stop.
All of this is in a car with a driver that is ultimately responsible. This is still L2. It has nothing at all to do with a self driving car. Tesla keeps warning us that actual FSD (L4/5) is up to regulatory approval. Will regulators approve a car that speeds and rolls through stop signs? Who gets the ticket when nobody is in the car?

But it is interesting. Tesla already, in a very beta program, seems to acknowledge that cars which obey all laws are kind of annoying and their customers won't like them. I wonder if they are working with all 19,500 regulatory bodies to get exemptions for their cars.
 
I've said a number of times that the FSD core driving behavior was apparently learned from alpha testers / Tesla employees who really aren't very good drivers. I mean not very smart, smooth, safe or realistically predictive/defensive.
Currently the path planning and vehicle control logic is hand-coded in C++. That portion is not using a Neural Net. So the “driving behavior” was not learned from anyone but coded with logic.
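For readers who haven't followed the architecture discussion: the split being described is neural networks for perception feeding hand-written planning/control logic. A toy sketch of that split follows; the names, thresholds, and the stop-sign rule are invented for illustration, and the real code is reportedly C++:

```python
# Toy illustration of the reported split: NN perception -> hand-coded planning.
# Nothing here resembles Tesla's actual interfaces; it only shows the idea
# that the "driving behavior" lives in explicit rules, not learned weights.

def perceive(camera_frame):
    """Stand-in for the neural-net stack: returns structured detections."""
    return [{"type": "stop_sign", "distance_m": 18.0}]

def plan(detections, speed_mps, comfort_decel_mps2=3.0):
    """Hand-written rule: start braking once a stop sign is within stopping range."""
    for obj in detections:
        if obj["type"] == "stop_sign":
            stopping_range = speed_mps ** 2 / (2 * comfort_decel_mps2) + 5.0
            if obj["distance_m"] <= stopping_range:
                return "brake"
    return "cruise"

print(plan(perceive(camera_frame=None), speed_mps=12.0))  # "brake"
```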
 
Traffic laws are pretty simple.

So much so even 16 year old kids can pass a written exam on them in most places.

It's trivial to program a computer to know all of em.

Not sure why you think this is hard- other than you appear to have not read some of them yourself and seem to imagine they're more complex than they actually are.




Why would I care what other drivers think when I'm not even driving?

I don't care what they think now.

Once the car drives itself I'll be too busy enjoying all that reclaimed free time.




More likely they'll either be reassigned to actually useful work, or police departments will get smaller, and again they'll move on to a job that's likely more useful to everyone than handing out traffic tickets.

But people end up having to change jobs every time there's a major technology improvement in society.

The folks who made horse whips had to find new work- that doesn't mean we shouldn't have switched to cars at all, does it?





I can't speak for everyone on the road.

Then again- neither can you.

Realistically though- for those who can afford one- they will think it's awesome when they see someone napping while the car drives- and will likely go buy one as soon as they can.

Which means it'll help accelerate the transition to sustainable energy.

If that sounds familiar, that is literally Tesla's mission statement.




There are interesting, unintended, consequences to nearly everything.

Again, if all you do is waste time worrying about 'em, you never get anything done.


We DO know there'll be lots of interesting intended consequences though.

Far lower rates of accidents and deaths for example (Also lower car insurance!)

Less gridlock since you don't have idiots blocking the box, refusing to properly zipper merge, and so on.

LOTS more free time for folks who own such vehicles.

Lower rates of drunk driving.

Greater mobility for the elderly and disabled.

The list of benefits is pretty long.


The list of downsides so far appears to be.... less traffic ticket revenue.

Boo hoo.

I read some of these posts and think I really want to reply and articulate how wrong they are, but as clearly and concisely as possible, so they can understand and maybe see the light.
Then you reply to those posts, exactly how I wanted to reply, but much better than I would have in every way.
Thanks. :)
 
Currently the path planning and vehicle control logic is hand-coded in C++. That portion is not using a Neural Net. So the “driving behavior” was not learned from anyone but coded with logic.
Interesting, but to be clear, are you saying this is the case for "FSD Beta 8.x", i.e. the City Streets Beta shared by several testers on YouTube? It was my understanding that, unlike released AP, NoA, etc., City Streets Beta is now heavily concentrated on NN machine-learning cycles (the so-called "Software 2.0" approach they're all discussing online) to minimize and in principle eliminate most of the need for hand-coded rules.

If rule-coded, it's even harder to understand certain behaviors e.g. one of my favorite examples:
Tesla's Northbound-to-Westbound left turn path starts with too sharp a radius, causing the Tesla to briefly threaten a collision with a stopped Eastbound car in the leftmost (or turn) lane of the cross street. Tesla FSD realizes the problem as the gap closes, then awkwardly corrects, sometimes with an indecisive and striking wiggle of the steering wheel, finally negotiating the corrected turn path to reach its target Westbound lane. Even worse, I think I've observed this when the cross street is divided by a concrete median.

Theory 1 is what you're explaining: it's simply an incorrectly calculated but deliberately coded turn path plan, creating a lot of very awkward left turns (at least as of late-January FSD videos), that hasn't yet been addressed by the C++ jockeys. That's a little hard to fathom, but perhaps so.

Theory 2 is what I concluded: It's a poor but understandable learned behavior resulting from imitating a very common human-driver habit: "If there is no car occupying the cross-street lane (and no hard median in my way), then I'll just cut the corner off my left turn and cross right through that unoccupied Eastbound lane... Well, of course I can't do that if there's a car sitting there, but usually there isn't so that's how I drive!"

So I was thinking Theory 2 is at least a logical result of the training regime:
  • Machine trains on left turns from reams of example recordings.
  • Gets ingrained with a bad path-planning habit from seeing thousands of typical fast & loose unobstructed left turns.
  • Unsurprisingly runs into trouble in real life when it encounters the rightful occupant of the lane it wanted to cut through.
Overall I'm very intrigued by this auto-learning paradigm being developed on a somewhat public stage. But I do wonder how easy it is to guide it toward a deemed-correct habit rather than a more common but faulty one.
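On that last question, one hedged guess at how you would "guide" an imitation-learned planner: curate the training set so the faulty-but-common habit never gets rewarded. The sketch below is entirely hypothetical (made-up path and lane geometry, nothing to do with Tesla's pipeline), just to show the idea of filtering out corner-cutting left-turn demonstrations before training:

```python
# Purely hypothetical data-curation sketch for Theory 2's remedy: discard
# demonstrated left turns whose path cuts through the oncoming lane.
# The lane rectangle, path format, and coordinates are invented for illustration.

def cuts_oncoming_lane(path_xy, lane_min_xy, lane_max_xy):
    """True if any sampled point of the demonstrated turn path falls inside
    the (axis-aligned) rectangle approximating the oncoming lane."""
    (xmin, ymin), (xmax, ymax) = lane_min_xy, lane_max_xy
    return any(xmin <= x <= xmax and ymin <= y <= ymax for x, y in path_xy)

# A corner-cutting demonstration vs. a wide, correct one (made-up coordinates).
cutting_demo = [(0, 0), (2, 3), (3, 6), (3, 12)]
wide_demo    = [(0, 0), (4, 2), (6, 6), (6, 12)]
lane = ((0, 4), (4, 10))   # oncoming lane occupies x in [0, 4], y in [4, 10]

demos = [cutting_demo, wide_demo]
clean = [d for d in demos if not cuts_oncoming_lane(d, *lane)]
print(len(clean))  # 1 -> only the wide turn survives filtering
```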
 
My concern is with fog. I am originally from the UK and have been in fog so dense that I could hardly see my feet when standing, and could not see the curb more than 5 away from the car even with the fog lights on, making driving almost impossible. A vision-only system would have virtually zero usable signal in that situation and be almost useless.
With radar and ultrasonic sensors, the car could get around, albeit at a considerably slower pace in most situations, so that it could stop within the range of the higher-resolution ultrasonic sensors. The real issue is which sensors should be believed and used in which conditions. I think radar is a must-have for these situations. Other situations it helps with are smoke from a fire and sandstorms.
For boats navigating in fog (which is common), radar is the primary method of "seeing", along with ship horns (a primitive kind of sensor).
Am I missing something?
 
My concern is with fog. I am originally from the UK and have been in fog so dense that I could hardly see my feet when standing, and could not see the curb more than 5 away from the car even with the fog lights on, making driving almost impossible. A vision-only system would have virtually zero usable signal in that situation and be almost useless.
With radar and ultrasonic sensors, the car could get around, albeit at a considerably slower pace in most situations, so that it could stop within the range of the higher-resolution ultrasonic sensors. The real issue is which sensors should be believed and used in which conditions. I think radar is a must-have for these situations. Other situations it helps with are smoke from a fire and sandstorms.
For boats navigating in fog (which is common), radar is the primary method of "seeing", along with ship horns (a primitive kind of sensor).
Am I missing something?
Did you drive in fog so dense that you could barely see your feet? If so, what did you use to navigate?
 
One other improvement Tesla can make is to change over to stereo vision (they have already hinted at that, saying they can find distance by using stereo vision from the two front cameras). Of course, color-blind people, as well as one-eyed people, drive fairly well, but stereo vision would add to the accuracy.

In the above cases, my guess is we can tell those are walls because of stereo vision.
There’s also the small detail that the car wouldn’t attempt to drive through a wall because it’s not part of its GPS route.
 
I read some of these posts and think I really want to reply and articulate how wrong they are, but as clearly and concisely as possible, so they can understand and maybe see the light.
Then you reply to those posts, exactly how I wanted to reply, but much better than I would have in every way.
Thanks. :)
Indeed. In fact, one must question whether gearcruncher is just trolling...
 
I honestly bet that vision-only can't tell these aren't tunnels, or at best gets confused most of the time. Radar & Lidar would say "Wall" from any distance or approach angle.

[Image: StreetArt_Painted_Tunnel_02.jpg]

[Image: StreetArt_Tunnel_To_Nowhere.jpg]
Tesla is working on the Road Runner FSD program, which will easily pass through that picture. The previous Coyote version didn't work.
 
No, I really really won't.


Right now, if I have a 30-minute drive, that's 30 wasted minutes during which I can't do much else besides maybe talking to someone or listening to some audio.

If an L5 drive takes 36 minutes, that's 35 of them that are useful minutes. I could be working on a phone or laptop, I could be reading a book, playing a game... hell, I could be sleeping, so the 6 minutes I lost getting up early are repaid five-fold.

You'd have to be nuts to be upset about that trade. You end up with way less wasted time overall in your life.





Humans already do this all the time- and sometimes DON'T pull off at those.




Yes I can.

It'd prevent gridlock

Most traffic is caused by idiot humans. The fewer of them driving, the less traffic you'll have.




That still sounds great.

That was a terrible way to fund anything in the first place.




If there are far fewer traffic violations, you need far fewer traffic cops.

They can go get other jobs.



BTW, the revenue issues you list aren't really as widespread nationally as many imagine; the last study I saw on this found that the majority of places getting a LOT of revenue this way were in only a handful of states: Arkansas, Georgia, Louisiana, New York, Oklahoma and Texas.

Since most states don't heavily rely on this in the first place, surely the rest can move to whatever those other states do.
"Won't something think of the poor police departments that are funded from excessive speeding tickets" has to be the worst argument against autonomous vehicles that I've ever heard :D. (I know this wasn't your only point and I agree with it be an interesting secondary effect from autonomous vehicles)

In all seriousness, police should be funded exclusively from taxes, with department size and funding scaled up or down as needed to meet the actual level of crime. Funding mechanisms like traffic violations and civil asset forfeiture that create perverse incentives for law enforcement are never a good, sustainable long-term solution. It's the nature of every bureaucratic organization to try to expand its scope, budget, and importance, and that's exactly why it's a bad idea to require that organizations like the police generate "revenue" to survive.
 
How do you code "reasonable and prudent"?

This is yet another great example of where the self-driving car will be far better at it.

It knows the EXACT distance to the car in front, the EXACT speed of the car in front, and the EXACT distance required to stop without hitting it.

And it adjusts the follow distance according to physics and math.

Unlike a human, who just guesses broadly and far less often gets the right answer.
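A minimal sketch of that physics-and-math version, with assumed braking limits and latency (these numbers are mine, not from any standard or from Tesla); the point is only that the computation is straightforward once perception supplies range and speed:

```python
# Hypothetical following-distance rule: keep enough gap that ego can stop
# without hitting a lead car that brakes at its assumed maximum.
# All deceleration and latency values are assumptions for the example.

def min_follow_distance_m(speed_mps,
                          actuation_latency_s=0.5,
                          ego_max_brake_mps2=7.0,    # assumed ego braking limit
                          lead_max_brake_mps2=9.0):  # assume the lead can brake harder
    """Worst-case gap so ego stops behind the lead (both starting at speed_mps)."""
    ego_stop = speed_mps * actuation_latency_s + speed_mps ** 2 / (2 * ego_max_brake_mps2)
    lead_stop = speed_mps ** 2 / (2 * lead_max_brake_mps2)
    return max(ego_stop - lead_stop, 0.0)

print(round(min_follow_distance_m(30.0), 1))  # ~29.3 m at ~108 km/h
```

Of course the "exact" answer shifts with whichever braking limits you assume, which is the other side of the "reasonable and prudent" argument.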


I guess one question: Should a self driving car be allowed to drive in a city where it has not been fully vetted to be in complete compliance with every law on the books for that location?

Humans aren't vetted like that before they can drive- why should FSD be?

And FSD is far more likely to actually know all the relevant laws, since they can be easily, reliably, and permanently stored by the system.
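A sketch of what "easily, reliably, permanently stored" might look like in practice: a per-jurisdiction rule table the planner consults by location. The entries and field names are invented for illustration (only the Oregon behavior is loosely based on the statute quoted earlier), and real rules rarely reduce to a flag:

```python
# Hypothetical per-jurisdiction rule lookup; entries are illustrative only.
TRAFFIC_RULES = {
    ("OR", None): {"yellow_light": "stop_if_able_to_stop_safely"},
    ("CA", None): {"yellow_light": "may_enter_on_yellow"},
    # ... in principle one entry per state, county, and incorporated place
}

def rules_for(state, city=None):
    """Most specific jurisdiction wins: a city entry overrides its state."""
    return TRAFFIC_RULES.get((state, city)) or TRAFFIC_RULES.get((state, None), {})

print(rules_for("OR")["yellow_light"])  # stop_if_able_to_stop_safely
```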


It's remarkable how almost every "problem" you raise is actually a great case for why the self-driving car is better.


There are 19,500 incorporated places in the USA- every one could have a unique restriction. If it does break a law, who pays?

Depends on the laws regarding self driving cars in that state.

In most it would be the human who activated the self-driving system for that particular drive (who, interestingly, in some states need not even have a driver's license).



BTW to the other guy asking- last report I saw from Green was that NNs continue to ONLY be used for perception.

All driving policy is still hard-coded.
 
Going to cross-post this here, since I found it interesting, and this seems to be where the discussion is happening rather than in the radar-removal thread.

I guess this likely captures the state of V8.3 (?), but it's interesting that they are apparently undistorting (and stitching together) the camera images in that version as well. I wonder if V9 does it differently somehow.

A whole 18 frames of memory, allegedly. So much processing! I wonder how close HW3 is to being fully loaded? 25%? That would be pretty high at this point; hopefully it is not that high. No comments on that in this specific thread, though you can peruse the guy's replies for some speculation.

Discussion of some analysis of internals of FSD
 