
Autonomous Car Progress

Update from Wayve. Their LLM called LINGO1 can segment the video to show what it is thinking, not just say what it is thinking:

Our latest breakthrough @wayve_ai ! Introducing #LINGO1 with 'Show&Tell' capability. It identifies hazards at road scenes, improves response trust, and reduces errors for better AI reasoning & planning. https://wayve.ai/thinking/lingo-1-referential-segmentation/…#AI #LLMs #Selfdriving #AutonomousVehicles
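
To make the "Show&Tell" idea concrete, here is a toy sketch of the input/output shape such a model might have. Wayve has not published LINGO-1's interface, so every name below (answer_with_segmentation, the dict layout, the canned answer) is my own invention, not their API; the point is just that the model grounds each phrase it mentions in pixels, so it can show, not only tell:

```python
import numpy as np

def answer_with_segmentation(frame: np.ndarray, question: str) -> dict:
    """Stand-in for a vision-language model with referential segmentation.

    The real model would map (camera frame, question) -> commentary text
    plus a per-pixel mask for each object phrase it refers to. Here the
    mask is faked so the output structure is runnable end to end.
    """
    h, w, _ = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 2 :, : w // 2] = True  # pretend the hazard is bottom-left
    return {
        "text": "Slowing down: a cyclist is merging from the left.",
        "grounding": {"cyclist": mask},  # phrase -> the pixels it refers to
    }

frame = np.zeros((360, 640, 3), dtype=np.uint8)  # dummy camera frame
out = answer_with_segmentation(frame, "What hazard do you see?")
print(out["text"])
print("pixels attributed to 'cyclist':", out["grounding"]["cyclist"].sum())
```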

 
Update from Wayve. Their LLM called LINGO1 can segment the video to show what it is thinking, not just say what it is thinking:



Not “thinking”. Rather, “what it finds in the scene and what it is going to do”. I am allergic to anthropomorphism, sorry.

I'd love to understand whether it is reliable and what the point of it is. And what input it trains on and does inference on.

Is it only for customer communication? They likely can't run it in the car right now.
 
Not “thinking”. Rather, “what it finds in the scene and what it is going to do”. I am allergic to anthropomorphism, sorry.
Keep up the good fight! I slip into this all the time, and while I certainly know my car isn't thinking, it is easy to do, especially since my car has a name (and email and YouTube accounts!). It is important to understand the difference and not let our lazy language let the programmers off the hook.

I started my working days selling microcomputers to end users. The guys were all gung-ho about getting the things, and the wives used to say they would never be able to program (back then, you had to program specific tasks because there was NOT 'an app for that'). I used to tell them I bet they already knew programming, and they'd stare at me as if I had horns. Then I'd ask if they could use the Time Bake feature on their oven. They could. I explained they had programmed their oven to do what they wanted it to do.

If they wanted to, they could start by typing in the instructions found in computer magazines to add programs to the home computer. Or, if they wanted to go beyond that, they could learn the instruction language and create their own programs. Most importantly, they didn't have to think that they lacked the basic concept of setting out instructions to get a dumb thing to do something. After all, they were wives; they had lots of experience dealing with dumb things!
 
Not “thinking”. Rather, “what it finds in the scene and what it is going to do”. I am allergic to anthropomorphism, sorry.

Fair enough. I can try to be more careful in the words I use.

I'd love to understand whether it is reliable and what the point of it is. And what input it trains on and does inference on.

Yeah, I would love to know more about how reliable it is. It is a great feature, but if it is not 99.99999% reliable then I don't think it is much use for training end-to-end. In fact, it could be harmful if the dev team trusts its responses too much.
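
As a rough back-of-envelope (my assumed numbers, not Wayve's): if the explanations were fed back as training signal at fleet scale, even a small error rate produces a flood of wrong labels, which is why the bar has to be so high.

```python
# Back-of-envelope only: labels_per_day is an assumed auto-labeling volume.
# Expected wrong labels grow linearly with volume, so even 1% error is a
# lot at scale, and each wrong label is a wrong training signal.
for accuracy in (0.99, 0.9999, 0.9999999):
    labels_per_day = 1_000_000
    bad = labels_per_day * (1 - accuracy)
    print(f"accuracy {accuracy:.7f} -> ~{bad:,.0f} wrong labels/day")
```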

Is it only for customer communication? They likely can't run it in the car right now.

Well, one purpose is PR. Wayve is really promoting this new LLM as a way of showing their progress with end-to-end training. They are basically saying "look how smart our autonomous driving AI is, it understands how to interact with roads, traffic lights and other vehicles! And it can even answer when asked to explain its actions!". Additionally, it does not just put out a text answer; it can actually show why it did what it did.

I suspect the other purpose is internal development and troubleshooting, not for the customer. It is designed to make the end-to-end NN less of a black box, since now they can ask the AI why it is doing what it is doing. For example, Wayve can put a training scenario into LINGO1 and ask the AI to explain what it would do and why, to see if the end-to-end training worked to produce the desired outcome. Wayve could plug in scenarios where their autonomous driving required a human intervention and prompt the AI to explain its actions, to get a sense of why the intervention was needed or whether it was needed at all. Imagine if we could ask FSD beta why it screwed up and FSD beta could actually tell us and show us. That is basically what LINGO1 does.
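
If I had to guess at how that troubleshooting loop might look in practice, it would be something like the sketch below. Everything here (Intervention, explain, the canned answers) is hypothetical scaffolding, not Wayve's actual tooling: replay each logged disengagement through the explainer and flag cases where the model's stated plan disagrees with what the human driver actually did.

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    scenario_id: str
    human_action: str  # what the safety driver actually did

def explain(scenario_id: str) -> str:
    """Stand-in for querying a LINGO1-style model about a logged scene."""
    canned = {
        "diseng-001": "proceed through the intersection",
        "diseng-002": "yield to the pedestrian",
    }
    return canned[scenario_id]

interventions = [
    Intervention("diseng-001", "braked hard"),
    Intervention("diseng-002", "yield to the pedestrian"),
]

# Disagreement between the model's stated plan and the human's action
# is exactly the case a developer would want to review first.
for iv in interventions:
    plan = explain(iv.scenario_id)
    status = "REVIEW" if plan != iv.human_action else "agrees"
    print(f"{iv.scenario_id}: model '{plan}' vs human '{iv.human_action}' -> {status}")
```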
 
Well, one purpose is PR.
We have a winner! :D
I suspect the other purpose is internal development and troubleshooting, not for the customer. It is designed to make the end-to-end NN less of a black box, since now they can ask the AI why it is doing what it is doing. For example, Wayve can put a training scenario into LINGO1 and ask the AI to explain what it would do and why, to see if the end-to-end training worked to produce the desired outcome. Wayve could plug in scenarios where their autonomous driving required a human intervention and prompt the AI to explain its actions, to get a sense of why the intervention was needed or whether it was needed at all.
Sure, but it has basically zero value for the core mission at this point. It does make a really good tweet, though.
Imagine if we could ask FSD beta why it screwed up and FSD beta could actually tell us and show us. That is basically what LINGO1 does.
Ohh, I'd love that.

Q: Why did you veer into oncoming traffic?
A: I'm sorry, but I'm just a large language model on top of a driver-assistance system and can't answer that (or)
A: It was the closest route.
 
Thinking is not limited to humans. There is nothing wrong with saying that an animal or an AI thinks. That is not anthropomorphism.
I never claimed thinking was limited to humans?

The point was that neither computers nor ML models can reason or think. If you claim they can, when they clearly can't, that must surely be anthropomorphism ("the attribution of human characteristics or behaviour to a god, animal, or object")? If you have a better word for it, let me know!
 
Whatever happened to Nuro delivering groceries and food to people? Domino's was supposed to be delivering pizzas in self-driving cars. What happened to that plan?
"Robot delivery startup Nuro announced plans to layoff a portion of its workforce and to pause its commercial operations as it pivots to more research and development. The news comes amid a broader set of financial challenges for the burgeoning autonomous vehicle sector."
May 2023

 
Japan is rushing to lay the groundwork for autonomous vehicles, thinking they will go mainstream around 2040. In April, the government lifted a ban on Level 4 automated driving on public roads. Pilot programs using self-driving cars are expected to launch in about 50 locations nationally around fiscal 2025.

The government is discussing rules regarding who would be liable for traffic accidents involving autonomous vehicles. The private sector is also progressing, with Honda Motor and General Motors joining forces to launch driverless taxis in Japan in 2026.

 
That paper is really good. I get what they're trying to do, but it is likely years and years away from commercialisation. The major obstacle, besides latency, is hallucination. Direct link: https://arxiv.org/pdf/2311.10813.pdf

Yes, there are many challenges and it will be years before commercialization. Same with Tesla's V12. It will not be solved overnight. But I find it interesting to see several companies now jumping on the LLM/e2e bandwagon for autonomous driving. LLMs are clearly the next Big Thing.
 
Yes, there are many challenges and it will be years before commercialization. Same with Tesla's V12. It will not be solved overnight. But I find it interesting to see several companies now jumping on the LLM/e2e bandwagon for autonomous driving. LLMs are clearly the next Big Thing.
Yeah, it's not great, but it's better than not using it (for research). Clearly there is a need to fuse human knowledge into the agents.
 
Bloomberg has a piece on Hyundai, which makes robotaxis (based on the Ioniq 5) for Motional.

Also, Hyundai robotaxis coming in 2023 | Hyundai Worldwide

I love this little bit from their press release:

We’ve also celebrated the driverless technology in our exterior design by displaying our suite of sensors prominently. This way, passengers will know right away that they’re getting into a driverless vehicle, and we think this will excite them. In this way, the robotaxi design has taken a similar approach to the design process for WRC rally cars as the performance-enhancing ducts on our rally cars are also displayed prominently on the exterior to show everyone that they’re looking at, or stepping into, a high-performance vehicle.

Next we'll have car makers taping phony "performance-enhancing Velodynes" to the roof just like they do with fake hood scoops and pointless spoilers.
 