Dec 21 MiC and no alarm bug that I can see/hear.
Off topic, but I want to raise it: yes, software testing is seen as a last-minute thing, and always has been. Long periods of UAT are seen as bureaucratic and time-wasting, an unfashionable throwback to the olden days of waterfall design. Tesla are not doing this - they clearly use "Agile" development and have been quoted as having a "move fast and break things" approach to software development. I have said this before: that's fine if you're building a web site, but irresponsible for something safety-critical like a car.

We all know what this means - dangerous phantom braking, mostly. You can add the performance of the auto wipers and lights to that list, and now inconsistent speed limit recognition (a safety issue if it increased the TACC speed automatically, which I think it doesn't). The system also raises alerts when it misreads junctions and can apply corrective steering in response (very dangerous - it happened to me on a motorway last week). And don't forget: if a loud alarm goes off in the car while you are driving, you are likely to react improperly out of shock, which is also dangerous.

However, much of what the car does falls into that grey area called "AI", so you have a system being "trained" on how to react to driving situations, rather than an algorithm someone has thought out and has absolute control over. I see a flaw with this approach..! How do you debug a sophisticated AI system that has been trained rather than coded? How do you know a change made in one area won't have repercussions elsewhere?