You're all missing the point... London to Edinburgh on a single charge at 37mph while binge-watching Netflix. Just add the adult incontinence pads and a 12V microwave for the TV dinners in the coolbox - a mere 12hrs.
> Just noticed an article on the Top Gear website where VW are talking about charging €7 (about £6) an hour for Level 4 autonomous driving. […]

If we assume an average speed of ~30mph over 15,000 miles, that will take ~500 hours to drive. At £6/hour, that's £3,000 per year.
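The arithmetic above is easy to check (a quick sketch, assuming VW's mooted £6/hour rate and Tesla's £6,800 upfront FSD price from the quoted post):

```python
# Rough cost comparison: per-hour autonomy fee vs. upfront FSD purchase.
# Assumes ~30 mph average over 15,000 miles/year and a £6/hour rate.

HOURLY_RATE = 6        # GBP per hour of autonomous driving
UPFRONT_COST = 6800    # GBP, one-off FSD price

annual_hours = 15000 / 30                  # 500 hours behind the wheel per year
annual_cost = annual_hours * HOURLY_RATE   # 3,000 GBP per year on the hourly plan

# Hours of use at which the upfront price pays for itself
breakeven_hours = UPFRONT_COST / HOURLY_RATE          # ~1,133 hours
breakeven_per_week_3yr = breakeven_hours / (3 * 52)   # ~7.3 hours/week over 3 years

print(annual_cost, round(breakeven_per_week_3yr, 1))
```

The ~7.3 hours/week figure matches the "about 7 hours a week over 3 years" break-even point quoted elsewhere in the thread.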
> I'm surprised the govt is using the term Self Driving.

Just politics. The govt decided a while back that they wanted the UK to be seen as cutting edge in adopting self-driving technology. That's fine, but they didn't really look at the details of how hard it would actually be to achieve. Their timescales for implementation were also a complete joke. It didn't help that the likes of Musk thought self-driving would be done and dusted several years ago. Now the goalposts are starting to move and "self-driving" doesn't actually literally mean "self-driving" anymore.
> From a computer AI perspective, none of this was possible prior to compute reaching minimum requirements around 2017.

It reminds me of some joker on PistonHeads a few years ago seriously arguing that within a year or so a fully automated car would be "easily" capable of out-pacing a human driver along UK B-roads. I'm tempted to dig up the ancient thread and ask him when it's going to happen, lol.
Just noticed an article on the Top Gear website where VW are talking about charging €7 (about £6) an hour for Level 4 autonomous driving. No timescales for when this will be available (don't think they've gone past Level 2 yet), just that they think that is the level they'll need to cover the cost of the hardware and be profitable. It does sound better value than Tesla's £6,800 up-front cost, unless of course you do a reasonable amount of driving (break-even point is about 7 hours a week over 3 years). Will be interesting to see where Tesla pitch their subscription service, should that ever appear.
> Hmm, not very cheap for people who drive a lot and would mostly benefit from FSD. For me, for example, based on VW's charging it would cost me around £3k per year. £6,800 up front makes more sense. IF we ever get it though.

By my maths you're doing 18k+ miles a year with active AP engaged; given when you can use it, you must be doing double that mileage in total, which isn't typical. You're also not on Level 4, not even Level 3 - you're at Level 2, and if Tesla did ever deliver Level 4 then Musk has said he'd charge a lot more. You've really paid for a punt that you'll still own the car when/if it happens.
> From a computer AI perspective, none of this was possible prior to compute reaching minimum requirements around 2017.

It's hard to know. Let's not forget that in nearly 6 years of development so far, Tesla haven't been able to get windscreen wipers to work reliably. What they need is not evolution, edging ever closer with diminishing returns; they need some form of step-change breakthrough which isn't just more grunt. Maybe the 4D modelling will be it, but I have my doubts, as the sensor suite isn't there.

I've worked in AI/NN/ML etc. for years; my university project 30 years ago was solving NP-complete problems using statistical methods rather than brute-forcing all the permutations. The one key takeaway is that you never know what the right answer is, just what is statistically the best answer you can come up with in the time available. That is limited by local minima and ergodicity barriers: the car (or whatever) simply can't see options beyond a defined set of parameters from its current best answer, resulting in stalemate. The current crop of techniques are derivatives of those mathematical approaches - it wasn't new back then 30 years ago, it's just that the computing power meant the problems needed to be simpler.
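The "best answer you can come up with in the time, limited by local minima" point can be illustrated with a toy stochastic search (a sketch of the general technique, not anything Tesla-specific: simulated annealing on a bumpy 1-D function):

```python
import math
import random

# Toy illustration of statistical search: simulated annealing minimising a
# function with several local minima. The search returns the best answer
# found in the time budget, with no guarantee it is the global optimum.

def f(x):
    return x * x + 10 * math.sin(3 * x)   # bumpy landscape, many local minima

random.seed(0)
x = 8.0                      # start far from the global minimum
best_x, best_val = x, f(x)
temp = 5.0                   # "temperature" controls tolerance of uphill moves
for step in range(5000):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = f(candidate) - f(x)
    # Always accept downhill moves; accept uphill moves with a
    # temperature-dependent probability, which lets the search
    # escape (some) local minima.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    if f(x) < best_val:
        best_x, best_val = x, f(x)
    temp *= 0.999            # cool down: fewer uphill moves over time

print(round(best_x, 2), round(best_val, 2))
```

Run it with a shorter budget or faster cooling and it routinely stalls in a worse local minimum, which is exactly the "stalemate" behaviour described above.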
Assuming adequate funding is applied I predict significantly better than a human driving system to be out within this decade. You can bookmark this and laugh at me in 2030 if you like.
I worry when Musk talks about not being able to work out the truth when mixing together multiple sensor types, and hence dropping radar. It seems he'd prefer to place his bets on one sensor (which ironically is three for the straight-ahead camera) rather than be able to resolve the differences. If he said the radar was just rubbish, badly calibrated, told them nothing etc. then it would make more sense, but why on earth has it taken them 6 years to work that out? They're the same sensors today as back then. I think it's because the 4D approach has dumbed down the task, as it's too hard to resolve multiple sensors over time.
> My biggest concern is that, for similar reasons to why they struggle to synchronise radar with the other sensors, they may have issues synchronising the cameras with each other. When I used to work in broadcast TV graphics, we often had to synchronise multiple video streams. It was dead easy, as a synchronisation signal was piped around with all the other signals so everything could sync to that. However, if you are trying to sync multiple sources with no way of synchronising them (can you synchronise the multiple cameras?), you are going to get very mixed results.

It's a good point you make. Maybe they don't overlap the cameras and just work with different resolutions, i.e. the long-range forward camera fills in the middle bit at high resolution, the medium-range camera the edges, and the wide-angle the sides. But that approach then unpicks the redundancy aspects when you start looking at overlapping camera feeds, which was also the idea. The world is also seen through a monocle, without the depth perception we get from two eyes (we know people drive with one eye, but I suspect that's more a nod to civil liberties than thinking they're just as safe), so how they're working that out reliably from an image is anybody's guess.
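The sync concern can be made concrete with a toy example (a sketch assuming free-running cameras with only per-frame timestamps; it's not a claim about Tesla's actual pipeline): pair each frame from one camera with the nearest-in-time frame from the other and look at the residual offset.

```python
# Toy illustration: aligning two free-running 30 fps cameras by timestamp.
# Without a shared sync signal (genlock), the best you can do is pair each
# frame with the nearest frame from the other camera, leaving a residual
# offset of up to half a frame period (~16.7 ms at 30 fps).

FRAME_PERIOD = 1 / 30                                    # seconds between frames

cam_a = [i * FRAME_PERIOD for i in range(10)]            # starts at t = 0
cam_b = [i * FRAME_PERIOD + 0.012 for i in range(10)]    # starts 12 ms later

def nearest(t, frames):
    """Return the frame timestamp closest to time t."""
    return min(frames, key=lambda f: abs(f - t))

# Residual timing error after nearest-neighbour pairing
errors = [abs(nearest(t, cam_b) - t) for t in cam_a]
worst = max(errors)
print(round(worst * 1000, 1), "ms")   # 12 ms here, bounded by half a frame
```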
IIRC the cameras are running at 30fps (just halve the distances for 60fps, quarter them for 120fps, etc.), so from a bit of schoolboy maths: if you are travelling at 60mph (~27m/s), the 1/30th of a second for each frame (33ms) represents a distance window of approx 0.9m. If you are looking at the output of 2 cameras, let alone 8, is that level of uncertainty between the position of objects in each frame going to be acceptable? That is getting close to half a car's width, or 1/4 of an average motorway lane.
Of course, Tesla may have the ability to synchronise their sensors, so this may not be an issue.
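The frame-window numbers above are easy to verify (a quick check using the 60mph / 30fps figures from the post):

```python
# How far does a car travel during one camera frame?
MPH_TO_MS = 0.44704           # metres per second in one mph

def metres_per_frame(speed_mph, fps):
    """Distance covered between consecutive frames at the given speed."""
    return speed_mph * MPH_TO_MS / fps

# At 60 mph and 30 fps, roughly 0.9 m passes between frames,
# about a quarter of a ~3.65 m motorway lane.
for fps in (30, 60, 120):
    print(fps, round(metres_per_frame(60, fps), 2))
```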
> AP fails at depth perception all the time - it's why lorries jump from one lane to the other... it can't tell whether it's a big lorry further away or a smaller one close to you.

I agree. I was just commenting that its issues are not a limitation of mono-vision. It's just not clever enough to build an accurate 3D model of the world like our brains do effortlessly with one or two eyes. When you shut one eye, the world around you doesn't suddenly turn into a flat 2D surface.
Radar will give you distance, but Elon doesn't want to use that... so it's entirely possible FSD will have the same problem.
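The big-lorry/small-lorry ambiguity is just the angular-size relation: a single camera only measures the angle an object subtends, so width/distance pairs with the same ratio look identical without another cue (stereo, radar, motion parallax). A toy illustration, not Tesla's code:

```python
import math

# A pinhole camera measures only the angle an object subtends:
#   angle = 2 * atan(width / (2 * distance))
# so a wide lorry far away and a narrow one nearby can produce
# the same image size.

def subtended_angle(width_m, distance_m):
    return 2 * math.atan(width_m / (2 * distance_m))

big_far = subtended_angle(2.5, 50)      # 2.5 m wide lorry, 50 m away
small_near = subtended_angle(1.25, 25)  # half the width, half the distance

print(math.isclose(big_far, small_near))  # same image size, different worlds
```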
I struggle to understand how we can get reliable L3 or L4 in the UK within a reasonable timeframe even though I already use Autopilot on motorways, dual carriageways and some A roads. Occasional erratic behaviour means I will continue to keep my hand on the wheel regardless of this forthcoming legislation for traffic queue situations.
> AP fails at depth perception all the time - it's why lorries jump from one lane to the other... it can't tell whether it's a big lorry further away or a smaller one close to you.

From what I have read, this could be solved in the upcoming V9 release, where an "actual probability distribution of objects" should give a smooth representation of the real world. More details were in this Electrek article:
> It's hard to know. Let's not forget that in nearly 6 years of development so far, Tesla haven't been able to get windscreen wipers to work reliably. […]

Not about radar being rubbish, just the video data now being interpreted as much more accurate by the current analysis. The radar is now providing a very low "importance score" in most models. I wonder how well this has been tested in poor weather.
> I worry when Musk talks about not being able to work out the truth when mixing together multiple sensor types and hence dropping radar […]

However, if you're talking about the whole industry doing it, then maybe somebody has something we're not aware of.
> Occasional erratic behaviour

I've really not seen this while just cruising down a motorway; frankly it's fine, and I'm simply not getting involved until a lane change is required.
> is it capable of acting when faced with eventualities that go beyond what it was programmed for

It's not being "programmed for" anything - that's what makes this AI rather than what we've had so far (which has far more programmed behaviour). It's been trained with real-world experience, just like how human drivers learn. I would add that there is a wide range of driving abilities on the road, like the utter morons blocking yellow boxes on my commute this morning. It's common for us to overestimate our own ability and overlook the times we mess up; just look at how many accidents there are every day - each of those is a human reaching the limit of their ability. I'll be happy to trust a machine, just as I'm happy to trust a plane, train, tube etc.