Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

What will happen within the next 6 1/2 weeks?

Which new FSD features will be released by end of year and to whom?

  • None - on Jan 1 'later this year' will simply become end of 2020!
    Votes: 106 (55.5%)
  • One or more major features (stop lights and/or turns) to a small number of EAP HW 3.0 vehicles.
    Votes: 55 (28.8%)
  • One or more major features (stop lights and/or turns) to a small number of EAP HW 2.x/3.0 vehicles.
    Votes: 7 (3.7%)
  • One or more major features (stop lights and/or turns) to all HW 3.0 FSD owners!
    Votes: 8 (4.2%)
  • One or more major features (stop lights and/or turns) to all FSD owners!
    Votes: 15 (7.9%)
  • Total voters: 191
Sure, accidents will happen with City FSD.

If FSD is the cause of those collisions (we don't call them accidents anymore, they're collisions), then that's a problem. Being t-boned by someone running a light is one thing. FSD running a light is a complete failure and means the system is not working. Saying "there will be collisions" is entirely misguided here. FSD must not cause a collision where a human would not have.

And before anybody says "BuT HUmaNs rUn REd LigHtS", humans that are paying attention do not. And since the computer can't not pay attention, this isn't a reasonable comparison in any way.
 
If FSD is the cause of those collisions (we don't call them accidents anymore, they're collisions), then that's a problem. Being t-boned by someone running a light is one thing. FSD running a light is a complete failure and means the system is not working. Saying "there will be collisions" is entirely misguided here. FSD must not cause a collision where a human would not have.

And before anybody says "BuT HUmaNs rUn REd LigHtS", humans that are paying attention do not. And since the computer can't not pay attention, this isn't a reasonable comparison in any way.

Humans paying attention while FSD is on won't run red lights either.
 
I do think part of the problem is that Tesla insists on reinventing the wheel by building their own in-house NNs to do everything. I am pretty sure there are commercially available, off-the-shelf NNs for reading speed limit signs, traffic lights, etc. that Tesla could have simply purchased and plugged in. But Tesla wants to do everything in house. It's like the issue with auto wipers: Tesla could have simply purchased a rain sensor, but instead they insisted on reinventing the wheel with their own in-house deep rain NN.

They can reinvent all they want, but I am not sure they have a solution yet! Again, the Mobileye patent specifies in a general way how the speed limit is read from the sign. It seems very hard to circumvent.

https://patentimages.storage.googleapis.com/3b/34/b0/78bbf6df59bfb3/US20080137908A1.pdf

[0007] For the classification task, most approaches utilize well known techniques, such as template matching, multi layer perceptrons, radial basis function networks, and Laplace kernel classifiers. A few approaches employ a temporal fusion of multiple frame detection to obtain a more robust overall detection.

It's not enough to deviate slightly from the patent by using a tiny variation of an algorithm.
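The "template matching" the patent paragraph mentions is easy to illustrate. Below is a minimal normalized cross-correlation sketch in NumPy, a toy illustration of the classical technique only, not anything from the patent or from Tesla's stack:

```python
import numpy as np

def match_template(image, template):
    """Slide a template over a grayscale image and return the best
    normalized cross-correlation score and its (row, col) position."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat patch: correlation undefined, skip it
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_score, best_pos

# Toy example: find a diagonal 2x2 pattern hidden in a blank image.
img = np.zeros((8, 8))
img[3, 5] = 1.0
img[4, 6] = 1.0
tmpl = np.array([[1.0, 0.0], [0.0, 1.0]])
score, pos = match_template(img, tmpl)  # perfect match at (3, 5)
```

Classical pipelines hand-tuned stages like this one; the later shift to deep CNNs replaced that hand-tuned stage end to end.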
 
Humans paying attention while FSD is on won't run red lights either.

We've already covered the fact that humans with AP enabled pay less attention. If they call it "full self driving" then you can expect far fewer people to pay attention. In any case, you're ignoring the point I'm making, so I'm going to guess you agree but it hurts to accept that Tesla is wrong and your 6+ months of hard dedication to defending them has been for naught.

FSD isn't coming by end of year, robotaxi fleet isn't coming next year, Tesla isn't running shadow mode, they aren't sending data sample tasks down to the fleet. They embellished their story for marketing purposes. It was an investor meeting, so they were never ever going to show their warts.

I just hope this gives people the chance to be more skeptical in life. Especially when a company is selling their own story.
 
The second phase will be Elon's definition of "It's perfect, I'm just waiting for the regulators" but in reality will still feature the car getting overwhelmed or uncertain of the situation and demanding you take over as it occasionally does today.

Musk's argument will be "statistically safer than a human driver". But yes, we will hear a lot about the evil regulators when Tesla makes substantial improvements. In reality the regulators will keep Tesla from assuming massive potential liability by releasing level 4/5.
 
In any case, you're ignoring the point I'm making, so I'm going to guess you agree but it hurts to accept that Tesla is wrong and your 6+ months of hard dedication to defending them has been for naught.

What point was that again? That Tesla will miss their end of the year deadline? We already knew that. Or is your point that it would be a big problem if FSD causes collisions? Yes, it would be a problem... if the system were certified as a fully autonomous system where the driver is not responsible. But you are forgetting that Tesla says that the driver must pay attention. So the driver will be responsible for any accident.

FSD isn't coming by end of year, robotaxi fleet isn't coming next year, Tesla isn't running shadow mode, they aren't sending data sample tasks down to the fleet. They embellished their story for marketing purposes. It was an investor meeting, so they were never ever going to show their warts.

I've been consistent that City FSD is not coming this year and that robotaxis are not coming next year either.
 
... During this phase, I expect the nags to go away and you'd reasonably be able to expect to not have to pay attention until the car tells you to, but with some risk since you'd have to ascertain your surroundings and react quickly. I expect it to live in that state for years before it's actually capable of driving on its own in most scenarios, much less approved by regulators to do so. But I'd be happy with that level and think it's fair to call it "full self driving" - it will drive itself, except when it can't. The leap from Level 2 to 3 seems imminent to me. The leap from 3 to 4 is massive.

In city driving, where everything happens more quickly (kid runs out into the street, car pulls out from a driveway unexpectedly, driver swerves in front of you or makes a turn from the wrong lane or stops suddenly because somebody stepped out in front of him) you will have to pay absolutely continuous attention until the car reaches level 3. And level 3 is a big leap up from level 2. We've been at level 2 for years and no hint of level 3 in sight other than the speculation from the more optimistic among us that HW3 will be a sea change. It will be much harder to get from level 2 to level 3 in the city than it is on the highway, and they don't even have level 2 in the city.

It's definitely doable to intervene in time if you are paying attention and thinking ahead.

My point is that in city driving you have much less time to react.

This problem [braking for a cross-turning car] has largely been solved in my driving experience. I find that AP does not brake anymore in those instances when it does not need to brake.

Mine still does. 2019.36.2.1. HW 2.5 EAP (no FSD package.)

I remain optimistic that HW3 will be a quantum leap in capability.

I agree. But I think you meant to say a big leap. A quantum leap would be the smallest leap the laws of physics allow. ;) I expect a quantum (tiny) leap. :D

If FSD is the cause of those collisions (we don't call them accidents anymore, they're collisions), then that's a problem. Being t-boned by someone running a light is one thing. FSD running a light is a complete failure and means the system is not working. Saying "there will be collisions" is entirely misguided here. FSD must not cause a collision where a human would not have.

This is actually the wrong measure, and I've encountered this idea before, that FSD is unacceptable if it ever causes an accident.

Accidents (or collisions) will happen. And the kinds of collisions FSD causes will be very different from the kinds of collisions humans cause. Just as in chess, the kinds of mistakes a computer makes are very different from the kinds of mistakes a human makes. A cheap chess program will beat nearly all human players, but when it makes a mistake it's one a human would never make. FSD will cause collisions and deaths, and will make different kinds of errors than humans make. The goal is to have fewer accidents, fewer injuries, and fewer deaths than humans cause. Zero, or even just "none that a human would not do" is an impossible goal. We will save lives when FSD is mature, but there will still be deaths, and some of those will be accidents a human would not have had.

Musk's argument will be "statistically safer than a human driver". But yes, we will hear a lot about the evil regulators when Tesla makes substantial improvements. In reality the regulators will keep Tesla from assuming massive potential liability by releasing level 4/5.

Regulators will be 100% on board with FSD because insurance companies with their lobbying power will be fully on board, because it will reduce collisions and deaths and injuries. And Musk will not knowingly release an unsafe product because Tesla has shown how dedicated it is to safety. (But he might use "regulators" as an excuse for missed deadlines. Nobody should ever believe Musk's time-lines. He's not lying. He's just way too optimistic.)

Buy a Tesla. It's the best car ever. But buy it for what it does when you buy it, not for what Musk promises it will do in a week or a year.
 
My point is that in city driving you have much less time to react.

And my point is that if you are paying attention, you will still have enough reaction time to intervene if needed so it won't be an issue. That's because the reaction time to intervene if you are paying attention is exactly the same whether you have AP on or off.

Of course, if you are not paying attention, you won't have enough time to intervene but that is no different than if you were driving manually.

I agree. But I think you meant to say a big leap. A quantum leap would be the smallest leap the laws of physics allow. ;) I expect a quantum (tiny) leap. :D

Yes, I meant a big leap.
 
Regulators will be 100% on board with FSD because insurance companies with their lobbying power will be fully on board, because it will reduce collisions and deaths and injuries.

This is wrong in several ways. First, regulators do not often do what insurance companies want them to. In fact, regulations are often drafted, receive comments, and are refined by industry experts and researchers. Insurers infrequently have any real impact on this at all.

As for regulators being 100% on board with FSD, we already see strong evidence that they are absolutely not. There are already regulations in place that impact driver assistance and self driving vehicles in the US, and EU member nations at the very least. These regulations are only likely to get more strict rather than lax as we move forward.

And finally, insurance companies don't want to eliminate drivers, because once FSD exists, you the owner aren't responsible for what the vehicle does when it's driving itself. The manufacturer is. This means private insurance will in effect disappear, and the manufacturers will either provide blanket insurance (like Tesla is starting to do), or if you have private insurance it will be a much smaller policy with lower premiums. That's a huge blow to the revenue of the entire industry.

And Musk will not knowingly release an unsafe product because Tesla has shown how dedicated it is to safety.

Do not ever delude yourself this much in real life. With the right pressure, anybody will do nearly anything. Musk is no different. He's a person, he's fallible, and he's corruptible. Don't ever forget that, and you'll be taken advantage of much less in life.

(But he might use "regulators" as an excuse for missed deadlines. Nobody should ever believe Musk's time-lines. He's not lying. He's just way too optimistic.)

You just said he'd lie, then said he's not lying. You can't hold both of those positions. Either a deadline is missed and that's the reason a feature doesn't exist, or regulators disallow a feature that is present and functional to some degree. If the function doesn't exist and someone says "regulators" prevent it from being used, that's a blatant lie. If the function does exist, is deployed, and cannot be enabled by the operator, then it's not a lie.
 
They can reinvent all they want, but I am not sure they have a solution yet! Again, the Mobileye patent specifies in a general way how the speed limit is read from the sign. It seems very hard to circumvent.

https://patentimages.storage.googleapis.com/3b/34/b0/78bbf6df59bfb3/US20080137908A1.pdf

[0007] For the classification task, most approaches utilize well known techniques, such as template matching, multi layer perceptrons, radial basis function networks, and Laplace kernel classifiers. A few approaches employ a temporal fusion of multiple frame detection to obtain a more robust overall detection.

It's not enough to deviate slightly from the patent by using a tiny variation of an algorithm.

This patent sounds really dated, like from the days before deep CNNs took over all the machine vision papers and people used to hand tune filters for classification.
 
As for regulators being 100% on board with FSD, we already see strong evidence that they are absolutely not. There are already regulations in place that impact driver assistance and self driving vehicles in the US, and EU member nations at the very least. These regulations are only likely to get more strict rather than lax as we move forward.

I disagree with this. If you look at the rules that have been implemented, I think they are 100% supportive of FSD. You could register a Level 3-5 self-driving vehicle today in California (if such a vehicle existed). With regard to driver-assistance features, I expect there is going to be much more rigorous study of the safety of such systems.

And finally, insurance companies don't want to eliminate drivers, because once FSD exists, you the owner aren't responsible for what the vehicle does when it's driving itself. The manufacturer is. This means private insurance will in effect disappear, and the manufacturers will either provide blanket insurance (like Tesla is starting to do), or if you have private insurance it will be a much smaller policy with lower premiums. That's a huge blow to the revenue of the entire industry.

I agree. I'm pretty sure that insurance companies will make less money when they're insuring manufacturers instead of drivers.
 
I'm curious to see some Tesla owners track on the compute load on their FSD Computers. In April, Elon tweeted:

“The Tesla Full Self-Driving Computer now in production is at about 5% compute load for these tasks [i.e. Navigate on Autopilot] or 10% with full fail-over redundancy”
The same day, Elon also tweeted that the compute load on HW2.5 was “~80%”.

Presumably Tesla wants to use most if not all of the FSD Computer's compute. Before the city driving features go to wide release, presumably Tesla will want to run them passively, i.e. in shadow mode. So, if Tesla owners with HW3 can track the compute load in their cars, we should be able to tell when the city driving features are released in shadow mode.

I'm game. How do I track the compute load in my HW3 MS?
 
This patent sounds really dated, like from the days before deep CNNs took over all the machine vision papers and people used to hand tune filters for classification.

Indeed, but I am not sure it's enough to avoid being affected by the patent. It seems like it's not, since they haven't shipped speed sign reading yet. I could be wrong, but the dependency on (often incorrect) maps makes me worried.

The specific method examples are dated, but the patent covers the same overall methodology.
 
Indeed, but I am not sure it's enough to avoid being affected by the patent. It seems like it's not, since they haven't shipped speed sign reading yet. I could be wrong, but the dependency on (often incorrect) maps makes me worried.

The specific method examples are dated, but the patent covers the same overall methodology.

The only thing that actually matters in a patent is the list of claims on the last page, not the title or abstract or specification or whatever. If any one step in a claim is missing from Tesla's implementation then it doesn't infringe. For example, in Claim 1 of the Mobileye patent (that the other claims are based on) it states...

The method comprising the steps of...
(a) programming the processor for performing the detecting of the traffic sign and for performing another driver assistance function;

(b) first partitioning a first portion of the image frames to the image processor for the detecting of the traffic sign and second partitioning a second portion of the image frames for said other driver assistance function; and

(c) upon detecting an image suspected to be of the traffic sign in at least one of said image frames of said first portion, tracking said image in at least one of said image frames of said second portion.


If Tesla has steps A, B, and C then they're infringing. If they have A but not B or C, then it doesn't infringe. So if they don't divide the image frames into separate partitions then I guess they're good. Seems pretty easy to avoid to me: just don't partition the image frames; process each frame in the same way.

This whole partitioning system MobileEye came up with, i.e. keeping one set of camera settings for traffic signs, and another set for other driver-assistance stuff seems to be the actual invention they're claiming.
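For intuition, here is a minimal sketch of the distinction being described, reading "portion of the image frames" as a subset of the frame stream (one possible interpretation). The claimed step (b) partitions frames between the two tasks, while the alternative hands every frame to every task. All names here are hypothetical placeholders, not Tesla's or Mobileye's actual code:

```python
def partitioned_pipeline(frames):
    """Patent-style partitioning: alternate frames between traffic-sign
    detection and the other driver-assistance function."""
    sign_frames = frames[0::2]   # first partition: sign detection
    other_frames = frames[1::2]  # second partition: other function
    return sign_frames, other_frames

def uniform_pipeline(frames):
    """Full-frame alternative: every frame goes to every task, so there
    is no partitioning step at all."""
    return frames, frames

frames = list(range(10))  # integers as stand-ins for camera frames
signs, other = partitioned_pipeline(frames)   # [0, 2, 4, ...] / [1, 3, 5, ...]
all_a, all_b = uniform_pipeline(frames)       # both see all ten frames
```

Under the all-elements reading described above, only the first pipeline would contain the claimed partitioning step.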
 
The only thing that actually matters in a patent is the list of claims on the last page, not the title or abstract or specification or whatever. If any one step in a claim is missing from Tesla's implementation then it doesn't infringe. For example, in Claim 1 of the Mobileye patent (that the other claims are based on) it states...

The method comprising the steps of...
(a) programming the processor for performing the detecting of the traffic sign and for performing another driver assistance function;

(b) first partitioning a first portion of the image frames to the image processor for the detecting of the traffic sign and second partitioning a second portion of the image frames for said other driver assistance function; and

(c) upon detecting an image suspected to be of the traffic sign in at least one of said image frames of said first portion, tracking said image in at least one of said image frames of said second portion.


If Tesla has steps A, B, and C then they're infringing. If they have A but not B or C, then it doesn't infringe. So if they don't divide the image frames into separate partitions then I guess they're good. Seems pretty easy to avoid to me: just don't partition the image frames; process each frame in the same way.

This whole partitioning system MobileEye came up with, i.e. keeping one set of camera settings for traffic signs, and another set for other driver-assistance stuff seems to be the actual invention they're claiming.

Hope you're right! I am scratching my head, though. Why wasn't this out, like last year?
 
Something nobody has mentioned yet is that overly generic or overly broad patents can be invalidated. So, for example, the claim of separating an image into "sign" and "not sign" is basically junk. That's both an obvious technology and overly broad. So there's a high chance that a good lawyer could have those claims thrown out.

We'd just need a few companies willing to pool resources, discover prior art, and litigate.
 
As a holder of a number of patents, I’d be very surprised if that patent would actually hold up. It basically patents a system that reads speed limit signs and says “and there are all these ways it can be done, and etcetera.” But, ultimately, the point of a patent is to expressly lay out to a person skilled in the art how to achieve the aims of the invention. The trade-off is that by disclosing that info, you get protection from the government to pursue your business with monopolistic protection. So you can’t patent bread and say “this invention describes the fashioning of foodstuffs from grain using heat and bacteria.” It’s far too broad. There needs to be a “preferred embodiment” that anyone skilled in the art can follow and replicate. With NNs, I’d say that’s very vague. That’s akin to your bread’s preferred embodiment being a recipe out of Better Homes and Gardens with “yeast” replaced with “bacteria.” You’re liable to get a tuberculosis marble rye.

On the other end, trivial subtleties of description can yield defensible patents. I remember one regarding answering machines. Somewhere there’s a totally valid patent from AT&T (or Lucent - I don’t remember) that codified the ability for a message for which less than X seconds had been replayed to be skipped and marked as “new.” So you come home, press Play, and “Hi Honey, it’s Mom...” <skip/> and it’s still a new message. Totally upheld. Seemed stupid to me, but I guess someone skilled in the art could follow that description and replicate it.
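That answering-machine claim is simple enough to sketch. A toy version follows, with the threshold value and field names invented purely for illustration (the patent presumably specifies its own):

```python
SKIP_THRESHOLD_S = 3  # hypothetical value of the patent's "X" seconds

def mark_after_playback(message, seconds_played):
    """Mark a message 'old' only if it was replayed past the threshold;
    a quickly skipped message stays 'new'."""
    if seconds_played >= SKIP_THRESHOLD_S:
        message["new"] = False
    return message

msg = {"from": "Mom", "new": True}
mark_after_playback(msg, 1)   # skipped after one second: still new
assert msg["new"] is True
mark_after_playback(msg, 10)  # played through: no longer new
```

The entire "invention" is that one threshold comparison, which is the point about trivially narrow but defensible claims.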

Ultimately, I don’t think the MobileEye patent is holding up speed limit sign recognition. It’s really hard and requires significant contextual understanding. And MobileEye’s doesn’t work that well either.
 
The only thing that actually matters in a patent is the list of claims on the last page, not the title or abstract or specification or whatever. If any one step in a claim is missing from Tesla's implementation then it doesn't infringe. For example, in Claim 1 of the Mobileye patent (that the other claims are based on) it states...

The method comprising the steps of...
(a) programming the processor for performing the detecting of the traffic sign and for performing another driver assistance function;

(b) first partitioning a first portion of the image frames to the image processor for the detecting of the traffic sign and second partitioning a second portion of the image frames for said other driver assistance function; and

(c) upon detecting an image suspected to be of the traffic sign in at least one of said image frames of said first portion, tracking said image in at least one of said image frames of said second portion.


If Tesla has steps A, B, and C then they're infringing. If they have A but not B or C, then it doesn't infringe. So if they don't divide the image frames into separate partitions then I guess they're good. Seems pretty easy to avoid to me: just don't partition the image frames; process each frame in the same way.

This whole partitioning system MobileEye came up with, i.e. keeping one set of camera settings for traffic signs, and another set for other driver-assistance stuff seems to be the actual invention they're claiming.

So as long as you don't send only portions of a frame for traffic sign detection, but rather send the entire frame (i.e. as long as you would obey a speed limit sign in the middle of the road), it sounds like you wouldn't violate this patent. Ergo, the only thing this patent covers is a particular optimization they do for performance reasons, to save CPU horsepower. And in a modern world, where every object in the image has to be classified, this patent doesn't seem to cover much.

Unless by "portion of the image frames", it means a subset of the frames (e.g. send every tenth frame for speed limit sign detection), in which case the patent is even worse.

I'd imagine the patent's robustness goes rapidly downhill from there. Then again, IANAL, so whatever. :)
 