2021.12.25.7

one model of system I worked on had 3 partitions: 2 that were flashable (the A and B parts) and a C that was ROM and a boot-of-last-resort. if you ping-ponged between A and B and back to A again (the watchdog caught hangs), it would eventually boot to C, and C could properly reflash A or B, or at least fsck them if the filesystem was not cleanly unmounted.
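a minimal sketch of that ping-pong-then-fall-back selection, assuming a boot-attempt counter kept in nonvolatile storage and cleared once a boot completes cleanly (the names and threshold here are illustrative, not from the actual product):

```c
/* sketch of A/B/C boot selection; assumes the fail counter lives in
 * NVRAM and userspace zeroes it after a clean boot */
#include <stdint.h>

enum part { PART_A, PART_B, PART_C };

struct bootstate {
    enum part active;    /* partition tried on the previous attempt */
    uint8_t   fail_count; /* consecutive attempts without a clean boot */
};

#define MAX_FAILS 4      /* e.g. A -> B -> A -> B, then fall back to C */

/* called by the ROM loader before jumping to a kernel */
enum part select_boot_partition(struct bootstate *s)
{
    if (s->fail_count >= MAX_FAILS)
        return PART_C;   /* boot-of-last-resort: C can reflash or fsck A/B */

    if (s->fail_count > 0)   /* last attempt hung; try the other side */
        s->active = (s->active == PART_A) ? PART_B : PART_A;

    s->fail_count++;     /* a clean boot clears this later */
    return s->active;
}
```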

sadly, I have not seen that model in the car world, even though I've pushed for it many times.

the model where we had 3 partitions was on microwave IP radios that you'd mount on very high commercial towers, for ISP-to-ISP use. apparently a tower climb can cost $10k (really? they say so, though), so you REALLY don't want the customer to have to suffer that because your product hung or was in a bad firmware state. so we had that 3-partition scheme and it worked out very well.

I really wish the car guys would catch on and even let us have 1 level of UNDO.

it's not too much to ask. but it does mean it has to be CAR-WIDE. you don't just undo one ECU; you find the dependency tree for the version you want to revert to and you flash back every ECU that needs it.
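a hedged sketch of what the car-wide part could look like, assuming each release ships a manifest pinning every ECU to one known-good version combination (all ECU names and version numbers below are made up for illustration):

```c
/* illustrative only: "car-wide undo" as a manifest diff. a release
 * manifest pins every ECU to one known-good combination, so reverting
 * any one ECU means reverting every ECU that differs from that manifest. */
#include <stdio.h>

struct ecu {
    const char *name;
    int installed;   /* version currently flashed */
    int target;      /* version pinned by the release we revert to */
};

int plan_rollback(const struct ecu *fleet, int n)
{
    int work = 0;
    for (int i = 0; i < n; i++) {
        if (fleet[i].installed != fleet[i].target) {
            printf("flash %s: v%d -> v%d\n",
                   fleet[i].name, fleet[i].installed, fleet[i].target);
            work++;
        }
    }
    return work;     /* ECUs that must move together, atomically */
}

int main(void)
{
    const struct ecu fleet[] = {
        { "gateway",   42, 41 },
        { "autopilot", 17, 15 },
        { "mcu",       90, 90 },   /* already compatible; left alone */
    };
    plan_rollback(fleet, 3);
    return 0;
}
```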
 
  • Like
Reactions: ElectricIAC
when we did software versioning for a system, we would break it up into several significant pieces, for example db, ui, and intersystem comm. Each major chunk had to be upgraded as a whole, but the version checking only had to figure out a small number of dependencies, and we'd force an upgrade at some point. Like db 35.0.0 requires a min of ui 31.2.0, or something like that.
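a minimal sketch of that kind of min-version check, reusing the hypothetical db/ui numbers from above:

```c
/* minimal sketch of the min-version rule above: db 35.0.0 requires
 * ui >= 31.2.0. versions and the rule itself are illustrative. */
#include <stdbool.h>
#include <stdio.h>

struct ver { int major, minor, patch; };

/* lexicographic compare, strcmp-style: <0, 0, >0 */
static int vercmp(struct ver a, struct ver b)
{
    if (a.major != b.major) return a.major - b.major;
    if (a.minor != b.minor) return a.minor - b.minor;
    return a.patch - b.patch;
}

int main(void)
{
    struct ver ui_installed  = { 31, 2, 0 };  /* ui chunk on the system */
    struct ver ui_min_for_db = { 31, 2, 0 };  /* floor demanded by db 35.0.0 */

    bool ok = vercmp(ui_installed, ui_min_for_db) >= 0;
    printf("db 35.0.0: %s\n", ok ? "compatible" : "force-upgrade ui first");
    return 0;
}
```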
 
I hear you. I did have the .25.6 and figured going to the .25.7 would not be a big deal.
If anything, when they push an update that quickly, there's usually something wrong with the current version that they're trying to fix, so I was more inclined to do the update.
yes, agree, typically those little updates are not a big change. That's exactly what I was thinking about 4 years ago, when I did a minor update a couple of days before a long trip. Got burned by that update, so I said, never again will I do that. ;)
 
  • Helpful
Reactions: T-Mom
yes, but your sacrifice will not go unnoticed. Years from now our children's children, and their children, will gather around the fire and recall your story. "This is the story of our great-grandfather Sluri," they'll shout. "Let it not be forgotten, that time he went on a trip with only the front speakers," they'll scream. and thus was born the modern saying "where there's a will there's a way". Blessings, Sluri, for your sacrifice 🙏
You forgot about the books and movies that will come from this sacrifice of mine. I like your positive spin on this, much appreciated, and it made me feel better.
 
Saturday, July 31st, I went from 2021.4.18.2 to 2021.12.25.7. Drove 2 hours on Saturday and 1 hour on Sunday, using FSD AutoPilot about 40%-50% of the time. I did not see any unusual behavior, but was not offered AutoPark at a parallel parking spot where I have used it successfully in the past.
 
  • Informative
Reactions: Relayer
I'm still back on pre-xmas firmware. for weeks back then, it nagged me each time I entered the car (and I have it set to the non-advanced update stream, so not sure why the nag).

then the nags went away for months. for months, it was a pleasure getting in the car with no nags.

now they're back (since last week). I'm still a refusenik and I'm mostly happy with the ancient fw that still controls THIS car.

car wash mode would be nice, but I'm not willing to push this into my working partition. maybe not ever, as long as I can refuse it. my car is radar-based, btw, so that's even more reason to keep on with the old fw.

with reports that things break and AP performance is worse, I'm still going to refuse. it will be a bad day if/when I finally can't refuse anymore. it's the thing about the tesla fw model that I hate the most. even with android (being a pita) you -can- make an image backup and revert if things go sideways. no such luck with cars. not even ONE level of undo!

absurd that after all these years, not one level of undo after experimental updates.

shameful, tesla. we don't need GAMES. how about an undo, elon? huh? is that too hard for you? maybe you should disable more sensors....

(yeah, this bugs the hell out of me, being a sw guy)
Sorry, but with no radar the car is much more confident: no phantom braking, and the visualization is more stable. I have seen many videos, the last one yesterday, and with no radar AP is more stable.
 
More confident, yes. And we're still early in the autonomous game.

But cameras (and ultrasonics) aren’t going to work in every situation. Radar, LiDAR, predictive and whatever else can help fill those gaps.

The current target seems to be along the lines of: if the human eye can see it, then the cameras can see it. And there are more cameras, in more places, than we have eyes, even with mirrors. Which is great.

The longer term goal should be to use anything that gives us an improvement that’s not achievable without it. But I’m fine with getting it as perfect as we can with cameras and then layering other tech in along the way.

The argument that if it's not safe for a human to drive because they can't see enough, then the car shouldn't be driving either, is fine and good for arguing that cameras are all we need: a car shouldn't be driving with cameras if a human can't drive in the same scenario. Except there are scenarios where something comes up all of a sudden and a human (or cameras) is going to have a hard (and unsafe) time trying to stop, pull over, etc., where having radar and/or LiDAR could have provided that next level of safety.

Safer than a human, as much as possible, in as many scenarios as possible, should be the end-game objective.

I already paid for FSD and use what's available now daily. Not thrilled with radar being gone, but if it gets us a better FSD sooner, instead of taking more time to try and leverage both, then go right ahead with no radar, for now anyway.
 
  • Like
Reactions: pilotSteve
the PBs (phantom braking) that happen on the level of sw I have kept back are minimal. it's one reason I don't want to dump my radar-based sw. it works OK enough and I don't want to go through more bugs just to get back to where I am now. my sw version is as bug-free as I've seen in a tesla, which is why I'm NOT dumping it. if tesla forces me to upgrade, I'll have no say, but until then, this version is the best one I've seen so far. and yes, it's 2020-dated. way before the xmas version, in fact.
 
Again, if the car physically has radar, even the newest SW still uses it.

You appear to be forcing yourself to keep an old version for no actual reason.
 
The current target seems to be along the lines of: if the human eye can see it, then the cameras can see it. And there are more cameras, in more places, than we have eyes, even with mirrors. Which is great.
I've recently been watching a lot of YT videos from a philosophy guy (and great thinker), dan dennett. in one of his videos, talking about the brain and how it builds a model of the world it sees, he made a fine point about how our eyes dart back and forth: we capture very high resolution for what is straight ahead, but our periphery is utter crap while that happens. and yet the brain does its 'stitching' of the last good memories of the other regions (above, below, left, right) along with the high-res forward capture, and builds a seamless 'world map'.

my point is that the resolution of OUR maps is extremely high. we have much better dynamic range than most camera systems; we can squint and even do things like look away a little and capture some of the image we WANT while not looking directly into the sun. we do intelligent things in order to keep that internal high-res map fresh.

does the tesla logic do this to the same level the brain does? CAN it, even? I don't even know if we're at the point of being ABLE to do that in the best labs we have in the world, much less at a single company. (I keep harping on why this really has to be a world-wide research effort, as it's way bigger than any single company can solve, at least if we want this solved in our lifetimes.)

the sensors are too thin, the cameras don't angle or tilt, and nothing self-cleans except the front windshield. the cameras don't even have heating elements, which would be the least they could do to try to clear debris from snow and maybe rain.

the argument of 'well, humans can drive with just their 2 eyes' is such a misleading simplification of how human eye/brain perception works.

if I can find that dan dennett video, I'll post the link.
 
  • Like
Reactions: GlmnAlyAirCar
the sensors are too thin, the cameras don't angle or tilt


Outside of a couple of -really- close-to-the-car spots, the cameras see 360 degrees, so they don't need to tilt or angle for driving purposes.... and most of the really-close blind spot is in front, where tilting won't help anyway since the hood exists, and while driving you'd see anything before it got INTO the blind spot.


the cameras don't even have heating elements, which would be the least they could do to try to clear debris from snow and maybe rain.

The cameras do have heating elements. Always have.


That's from 2016 pointing this out.
 
AI doesn't need to "move the camera or focus point" like our eyes do. The eyes get great resolution for a small area per frame and must move to get good resolution all around, as the guy says. With multiple frames, the brain creates a suitable representation of the world. Cameras get good resolution all around in every frame. There is a short-range and a long-range camera up front on Teslas, to get great close resolution and good far resolution. There are also side cameras, pointing front and back, to give a constant view at all angles. That replaces the constant turning of your head through almost 360 degrees.
AI is a complicated field; I won't pretend I can explain it concisely. But it is a worldwide research field and has been for many, many years. Image recognition is probably the most researched, most used, and best understood subfield... Tesla does 3D recognition, and that might be less mainstream, but still... 3D scene reconstruction from images has been a thing for 10-15 years already.
We might call these techniques artificial intelligence, but I don't think you should compare them directly to how the brain works. They are computer science techniques to gather information / detect things from data, in this case sensor (camera) input.
 
  • Like
Reactions: Zhe Wiz
I don't believe Tesla has reached the limits of the hardware yet, although more processing power is always better and will most likely be needed at some point. My feeling is that the weakest point right now is the human aspect: the algorithms that involve the decision making. It still does dumb stuff it shouldn't (like unprotected left turns, or almost running into obstacles that are clearly in view). Is there a talent gap that needs to be filled?

Overall, it's still impressive, and will improve, but I can't help but feel the AI coders / engineering talent is lacking. What do others think about this?
 
I don't believe Tesla has reached the limits of the hardware yet, although more processing power is always better and will most likely be needed at some point.

For L2 that's true.

If we expect higher levels will require redundancy of compute, they're already past the limits of HW3.

They've had to borrow compute from node B since mid-last year, which means they can't run the full stack on a single node with the second acting as failover redundancy.

You'll need HW4 for that (which may, or may not, be enough; we can't know that until they "solve" vision, since before then it's just random guessing how much is enough).
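for illustration, here is the textbook shape of the active/standby redundancy being described, emphatically a generic sketch and not Tesla's actual design; the point is that the standby node can only take over if its compute isn't already spoken for:

```c
/* generic active/standby heartbeat sketch, not Tesla's design:
 * node B idles, watches node A's heartbeat, and promotes itself if
 * A goes silent. that only works if the full stack fits on one node. */
#include <stdbool.h>
#include <stdint.h>

#define HEARTBEAT_TIMEOUT_MS 50   /* illustrative deadline */

struct node {
    uint32_t last_heartbeat_ms;   /* last "alive" message from the peer */
    bool     is_active;
};

/* runs periodically on the standby node */
void standby_tick(struct node *self, uint32_t now_ms)
{
    if (self->is_active)
        return;                   /* already driving */

    if (now_ms - self->last_heartbeat_ms > HEARTBEAT_TIMEOUT_MS)
        self->is_active = true;   /* peer went silent: take over */
}
```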
 
Good info, thanks. I haven't been following the FSD discussions recently. Yes, hardware aside, L3 is a ways away if they can't get these fundamental issues (obstacle avoidance, safe left turns, etc.) resolved, or at least improved, at L2.

I was a little disappointed with some of the stuff I saw in this latest Beta. Some things have improved, but they seem to have regressed in others, which brings me back to my point above about coding talent.

It didn't take very long to see these blunders on the streets in some cities. I'd love to see their testing methodology.