It is gently assertive at nosing its way into busy intersections or around pedestrians in situations where FSD would just stop completely. In fact it may be “too good” at being assertive around pedestrians: it sometimes approaches them a little rudely, getting too close and then barely waiting for them to get out of the way.

I think Cruise can be a little more than assertive around pedestrians. It's just a single anecdotal case, but it doesn't look like an appropriate response to pedestrians:


Cruise replied saying it had behaved appropriately to "minimize risk for both passengers and pedestrians," but I don't think the passengers were at any risk:

 
...and pedestrian traffic so dense that the only way to clear an intersection is to nose into the crosswalk and go after the light has turned red.

Not sure how this could even be coded/taught given the rolling stop debacle.

Rolling stop was a convenience feature that Tesla tried hard to keep. The NHTSA came down hard on them because the law on stop signs is clear and there is no other practical justification for not stopping.

But just as the NHTSA doesn't stop Tesla speeding in AP/FSD, I expect it to be flexible with AV behavior in dense urban environments.
 
I think Cruise can be a little more than assertive around pedestrians. It's just a single anecdotal case, but it doesn't look like an appropriate response to pedestrians…
I just posted a video of my first daytime test ride on Cruise a few weeks ago.

At the 1:30 mark you will see a somewhat similar example, this time with an older woman with a cane, where the car seemingly steers towards the pedestrian. In this case, to the best of my memory, there was nobody off-screen lunging in from the right towards the car.

I did not perceive this as a real safety threat but certainly it was a bit weird and inappropriate and it could easily have seemed menacing to the pedestrian. In fact, she sort of gives the car a “what the hell are you doing?” glance.

 
I just posted a video of my first daytime test ride on Cruise a few weeks ago.

At the 1:30 mark you will see a somewhat similar example, this time with an older woman with a cane, where the car seemingly steers towards the pedestrian. In this case, to the best of my memory, there was nobody off-screen lunging in from the right towards the car.

I did not perceive this as a real safety threat but certainly it was a bit weird and inappropriate and it could easily have seemed menacing to the pedestrian. In fact, she sort of gives the car a “what the hell are you doing?” glance.

Whoa that’s pretty bad.. I would personally feel threatened if I were that pedestrian with the cane. The Cruise AV is basically in the other lane at that point, and oncoming cars must’ve been like “wtf is this stupid AV doing?”

Then it does something else weird a minute later while behind a bus, almost like it’s trying to drive around the bus but into a divider of some sort, and almost into oncoming traffic again.

Surprised to see these L4 mistakes tbh. Any time I see Cruise, Waymo or Zoox AVs in SF, they’re driving more slowly/carefully than I would, but at least they don’t seem to be exhibiting stupid FSD-like behavior.
 
I don't recall Tesla ever claiming that FSD would solve for that level of environment though.
Huh? That’s exactly what they have been marketing and what they need to solve for their own robotaxi. The new platform they are building is designed for city driving, and this is what they will be dealing with every day.

If they don’t solve this then FSD will never be anything more than a glorified driver assist…
 
One idea would be to just keep training the prediction/planning module, hoping the AV can get smarter at reading when it can be assertive.
Who on earth thinks this is going to work?

We have hundreds of examples now of this not working for any company. Yet somehow people think this machine-learning training nonsense is going to work.

It’s a dead end. ML/generative AI seem like they’ll never ever get there. Just too fragile, and no one has figured out how to keep them from making errors and hallucinating! We need more than just that! They are spectacularly powerful and capable tools which will have enormous impact in general. Just not enough.

“Just a bit more powerful hardware,” they said.
“Just a little more training,” they said.

😂

What a nightmare.

What they need is billions of dollars of continued development and research on neural nets and a comprehensive approach to somehow figure out how to emulate human behavior, while ensuring way better than human performance. It will require a breakthrough.
 
...Yet somehow people think this machine-learning training nonsense is going to work.
Starting to sound a bit grumpy, aren't we now? ;)
What they need is billions of dollars of continued development and research on neural nets and a comprehensive approach to somehow figure out how to emulate human behavior...
Hmm, isn't that what is happening now?

You want billions of dollars of R&D effort in neural nets, but you've ruled out the efficacy of all that "training nonsense".

You may well be onto something good and important, but I have to admit that I don't get what it is.
 
You want billions of dollars of R&D effort in neural nets, but you've ruled out the efficacy of all that "training nonsense".

You may well be onto something good and important, but I have to admit that I don't get what it is.
As I said: they need a breakthrough. One that prevents hallucination and possibly looks very, very different in structure and behavior from what is used for LLMs and the generative AI of today. This will require billions of continued investment with unknown prospects of success. It seems extremely unlikely to “just” require “a bit more training.”

Yes this is happening now! I did not suggest it was not.

Remember, it has to be right all the time. (It can’t be wrong once then correct itself.) This is not a characteristic of current architectures it seems.
 
As I said: they need a breakthrough. One that prevents hallucination and possibly looks very, very different in structure and behavior from what is used for LLMs and the generative AI of today. This will require billions of continued investment with unknown prospects of success. It seems extremely unlikely to “just” require “a bit more training.”

Yes this is happening now! I did not suggest it was not.
I don't disagree, in a very general sense, that breakthrough(s) are needed. But this seems hardly insightful, as it's almost by definition how any nascent technology progresses: not a linear or predictably smooth function of measurable progress until the core technology is well understood and modeled.

I might point to Moore's Law as one of the most famous examples of progress prediction in a (now) relatively mature discipline. The end of the line for Moore's Law continues to be pushed out by mini-breakthroughs which are individually not easy to predict, but in aggregate the curve continues to be followed, with perhaps a minor diminishing of the growth exponent. But there were some decades of bumpy and less certain electronic and computing development before the overall paradigm of shrinking feature size and formalized design processes could become predictable (and credit Gordon Moore et al. for the insight to capture it in a macro-level prediction).

I don't think we're at that point yet with ML, more specifically self-driving and more generally AI. You and others are constantly reminding everyone of how seemingly slow and disappointing the progress is, but from a historical perspective that might seem to be unfounded heckling from the peanut gallery. Perhaps kind of a spoiled reaction to what is actually an amazing (and somewhat unsettling) dawn of an entirely new and disruptive technology - probably at least as significant as the last century's world-changing developments in transportation, telecommunications, computing and the internet.

Regarding "billions of dollars needed", well yes, but that can go in so many different directions. Raising (or printing) billions of dollars to throw into a pot stirred by councils of "experts" is probably not the right starting point (and I know you didn't say that). Billions of investment dollars are already flowing; fortunately not yet highly captured, directed and grifted by governments and high councils.

I don't think either of us will be surprised if, in 20 years, we can look back and see some dead ends, some correct but initially failed approaches, and some conclusions that will seem easy and obvious later, but not readily apparent today. That's just how it goes with new technologies and industries. Billions of dollars may be spent today in searching for and identifying the right techniques; it is after that point that further billions will be directed towards industrial scaling and execution of the increasingly predictable implementations. Maybe some of that will be like today's training on supercomputers, and maybe some of it will be something quite different.
 
But this seems hardly insightful, as it's almost by definition how any nascent technology progresses

I didn’t say it was insightful!!! Was just pointing out the obvious.

but from a historical perspective that might seem to be unfounded heckling from the peanut gallery
There’s no heckling. It’s just about setting expectations reasonably. This stuff is amazing and it’s the sort of thing that creates its own new applications - it’s not necessarily directly applicable to existing applications.

Regarding expectations, in the early days of Moore’s Law you would not have been saying that the next year you’d have billions of transistors on a chip, enabling massive training through high operations count.

Perhaps kind of a spoiled reaction to what is actually an amazing (and somewhat unsettling) dawn of an entirely new and disruptive technology
No. As I said the new AI tech is pretty amazing and seems likely to change and disrupt certain activities - will be very interesting to see it unfold.

Prospects for driving at the current time appear poor to me though - namely because 90% correct or 99.9% correct is thousands of times worse than it needs to be. So need to address that.
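
As a rough back-of-envelope illustration (every figure here is an assumption picked for the arithmetic, not measured data):

```python
# Back-of-envelope check of "thousands of times worse than it needs to be".
# All numbers are illustrative assumptions, not measurements.

decisions_per_hour = 3600           # assume ~1 safety-relevant decision per second
per_decision_accuracy = 0.999       # "99.9% correct"

av_errors_per_hour = decisions_per_hour * (1 - per_decision_accuracy)

# Assume a competent human commits roughly one serious driving error
# per ~1,000 hours behind the wheel (a placeholder figure).
human_errors_per_hour = 1 / 1000

print(f"AV errors/hour:    {av_errors_per_hour:.1f}")                # ~3.6
print(f"Human errors/hour: {human_errors_per_hour}")                 # 0.001
print(f"Ratio: {av_errors_per_hour / human_errors_per_hour:,.0f}x")  # ~3,600x
```

Under those assumptions, 99.9% per-decision accuracy really does come out thousands of times short.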

If there is any “insight” in the post that would be it. But really just stating the obvious I think.
 
Starting to sound a bit grumpy, aren't we now? ;)

Hmm, isn't that what is happening now?

You want billions of dollars of R&D effort in neural nets, but you've ruled out the efficacy of all that "training nonsense".

You may well be onto something good and important, but I have to admit that I don't get what it is.
Starting? Alan is vying with Grumpy Cat for top honors.
 
Billions of investment dollars are already flowing; fortunately not yet highly captured, directed and grifted by governments and high councils.
Why do people keep posting political / ideological statements that never get the notice of the mods?

But when anyone responds to these political statements in any form, those responses get banned / deleted. It's as if only one form of ideology is allowed here without being challenged.
 
Why do people keep posting political / ideological statements that never get the notice of the mods?

But when anyone responds to these political statements in any form, those responses get banned / deleted. It's as if only one form of ideology is allowed here without being challenged.
Well, I happen to think that what I said is not highly controversial, or would not have been until recently. I appreciate much of what you write, but I also note that you frequently throw out opinions or random comments that I would consider politically motivated and that I often (though not always) disagree with.

Yet when a fairly innocuous and certainly inoffensive observation is made by someone else, that you disagree with, you're wondering where are the mods.

Anyway thanks for the Disagree and maybe we can agree on the next topic. :)
 
Well, I happen to think that what I said is not highly controversial, or would not have been until recently.
You would be wrong on both counts. Please go back to the FDR era and read history.

I appreciate much of what you write, but I also note that you frequently throw out opinions or random comments that I would consider politically motivated and that I often (though not always) disagree with.
You are welcome to point those out. I'll try to edit those ... I hope those are not the type where people consider non-whites' right to live with dignity itself to be controversial ;)
 
Looking right at the damn road, bored as hell, and it just gives me a strike.

I haven't confirmed it myself, but based on some reports like these and some edge cases I've seen in my car, I'm starting to suspect that whether your eyes are moving may be a factor.

An attentive driver is at least shifting their eyeballs constantly (scanning mirrors, etc), and often moving their head at least slightly. Maybe if you're staring at the road with your head and eyes locked forward for too long, it assumes you've zoned out and warns about attention?
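
If that guess is right, the check might look something like this toy heuristic — purely speculative on my part, with made-up names and threshold values; Tesla has not published how its driver monitoring actually works:

```python
import numpy as np

# Purely speculative toy heuristic -- NOT Tesla's actual logic.
# Idea: an attentive driver's gaze direction varies (mirror checks,
# small head movements); a locked-forward stare barely varies at all.

GAZE_VARIANCE_THRESHOLD = 0.5   # deg^2, made-up tuning value
# (samples below are assumed to span a sliding window of ~20 s)

def looks_zoned_out(gaze_angles_deg: np.ndarray) -> bool:
    """gaze_angles_deg: (N, 2) array of (yaw, pitch) samples over the window."""
    total_variance = gaze_angles_deg.var(axis=0).sum()
    return total_variance < GAZE_VARIANCE_THRESHOLD

rng = np.random.default_rng(0)
scanning = rng.normal(0.0, 3.0, size=(200, 2))  # mirror-checking driver
staring = rng.normal(0.0, 0.1, size=(200, 2))   # head and eyes locked forward

print(looks_zoned_out(scanning))  # False -> no warning
print(looks_zoned_out(staring))   # True  -> attention warning / strike
```

A heuristic like that would, perversely, flag exactly the intent, locked-forward stare described above.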
 