
Tesla has blocked Teslafi from getting information from our cars.

It’s ALL about scale. When your monthly AWS bill is in the mid eight figures, you figure out real quick that running things yourself might actually be cheaper. (And yes, I have a customer doing exactly this - re-homing from AWS.)

Lift and shift monolith applications to the cloud and you pay the price. Refactor your applications to leverage Lambda (serverless) functions, RDS, and a microservices architecture and you will be far ahead of anyone running this in a local DC - not to mention the scalability, fault tolerance, and redundancy. When the apps are idle you are not paying for the serverless functions; when that Black Friday sale hits and you need to scale quickly, the resources do it automagically. Run all this as a monolithic application on servers in a DC and you are paying for all those idle resources, the time to provision, the budget battles with finance over capital expenditures, etc. The benefits are real for companies that embrace cloud-native application architecture. Those with a 1995 mindset will never realize them.
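
To put the idle-cost point in rough numbers (ballpark list prices and a made-up light workload, purely for illustration):

    # Rough comparison: always-on VM vs. pay-per-invocation serverless
    # for a mostly idle app. Prices are approximate list prices and will vary.

    HOURS_PER_MONTH = 730

    # Always-on small VM (~$0.04/hour class instance), billed 24/7 even when idle
    vm_hourly_rate = 0.0416          # USD/hour, approximate
    vm_monthly = vm_hourly_rate * HOURS_PER_MONTH

    # Serverless: pay only while a request is actually running
    requests_per_month = 100_000     # invented light traffic
    avg_duration_s = 0.2             # 200 ms per request
    memory_gb = 0.25                 # 256 MB
    gb_second_rate = 0.0000166667    # USD per GB-second, approximate
    request_rate = 0.20 / 1_000_000  # USD per request, approximate

    lambda_compute = requests_per_month * avg_duration_s * memory_gb * gb_second_rate
    lambda_requests = requests_per_month * request_rate
    lambda_monthly = lambda_compute + lambda_requests

    print(f"Always-on VM: ${vm_monthly:.2f}/month (billed even when idle)")
    print(f"Serverless:   ${lambda_monthly:.2f}/month for the same light traffic")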
 

100x this. I'd forgotten the term "cloud native".

It's small beans, but I was on a team that moved a ~$5000/month service from EC2+RDS to various more specific cloud services... for essentially free. I think it ended up at something like $20/month instead, and it would actually scale once it got more use instead of falling over.

It's completely ridiculous what you can do for a low cost, but it must be designed with that optimisation in mind, often requiring a lot more engineering effort (but not necessarily). Even at its most basic, things like network latency, RAM usage, CPU utilization, etc. all start to matter again - something many newer applications have lost sight of with the ridiculous amounts of resources cheaply available.

Which is partly why a service like TeslaFi might actually be a bad fit for something like Lambda. Their function runtime is at the mercy of the Tesla API, which in many normal use cases takes multiple seconds to respond. But there are other ways to cost-optimise it and still scale.
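
A rough illustration of why that hurts on duration-billed serverless - every workload number here is invented (not TeslaFi's actual figures), and the rate is an approximate list price:

    # Illustrative only: the cost of a Lambda-style function sitting and
    # waiting on a slow upstream API. All workload numbers are made up.

    cars = 10_000                     # hypothetical fleet being polled
    polls_per_day_per_car = 1_440     # once a minute
    wait_seconds = 5.0                # time spent blocked on the vehicle API
    memory_gb = 0.125                 # 128 MB, the smallest size
    gb_second_rate = 0.0000166667     # USD per GB-second, approximate

    invocations_per_month = cars * polls_per_day_per_car * 30
    gb_seconds = invocations_per_month * wait_seconds * memory_gb
    compute_cost = gb_seconds * gb_second_rate

    print(f"{invocations_per_month:,} invocations/month")
    print(f"~${compute_cost:,.0f}/month of compute spent just waiting on the upstream API")

Add the per-request charge on top and it only gets worse, which is why idle-waiting compute is the first thing you'd design away in that kind of service.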
 
This would do both Tesla and the users a favour at this point. Is it still blocked?

I'm glad I abandoned work on my Tesla API service since it would've been based on AWS. Phew.



This is overlooking huge issues.
  • Public IP address space is neither cheap nor widely available.
  • Servers come with maintenance. Something the size of TeslaFi absolutely cannot justify running its own server farm and employing people to maintain it around the clock.


I've often wondered why Tesla even tolerates it. Surely they must see the massive amount of traffic coming from services like TeslaFi.

The app only uses the API briefly, and slowly. Maybe a few requests per day. But these other services? They're pounding the Tesla servers with requests. Some of them poll every few seconds. It's completely ridiculous, orders of magnitude more traffic than they should be serving for just app usage.

Even if Tesla is running this out of their own data centre, this surely impacts their load, number of servers needed, bandwidth, etc. All of those ultimately have costs associated with them, which is why someone like Amazon charges for all of those things (and various other granular things).
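
To put rough numbers on that polling gap (the rates are guesses, purely to show the orders of magnitude involved):

    # Back-of-the-envelope on the polling gap between the official app and an
    # aggressive third-party poller. All rates are guesses.

    app_requests_per_day = 10                  # occasional use of the official app
    aggressive_poll_interval_s = 5             # a service polling constantly
    service_requests_per_day = 24 * 3600 // aggressive_poll_interval_s

    print(f"Official app:      ~{app_requests_per_day} requests/day")
    print(f"Aggressive poller: ~{service_requests_per_day:,} requests/day")
    print(f"Ratio:             ~{service_requests_per_day / app_requests_per_day:,.0f}x")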



Yeeeaah, no. Elon is still the CEO of a company that needs to survive and make business decisions.

I'd wager that third-party services already make up >90% of the load on the servers that serve the API. If I were on that team, I'd start asking questions like...
  • Who is the "customer"? (it's supposed to be the official Tesla app only, but TeslaFi perhaps? TezLab? TeslaMate?)
  • What changes can we make without upsetting our customers? (Again, they normally have full control over the app so they'd only upset themselves... but if they change how something works that TeslaFi depends on, who has the right to get angry?)
  • What are the design/testing requirements? (With only the app, the load is well defined; with other services, the requirements change completely.)
And those are just the questions from the developer perspective. A product manager has further, harder questions to ask. The API sits in some weird in-between where it's not an official product (it's not even documented externally), but people use it as one. Heck, people are paying third parties to use it. That's really weird!



Anyone and their dog can run a service on AWS. Being on AWS does not imply you have the means, ability, employees, or other logistics to handle running and owning a physical server farm at a real, owned building.



This. But AWS bill optimisation is a thing, and I personally find it fun: transforming services so they're more cost-efficient on AWS's offerings (basically, leveraging the more specific AWS services rather than just running an EC2 [VM] fleet hooked to RDS [managed database]). Plenty of success stories there.
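
A sketch of what the first step of that usually looks like - pulling a per-service cost breakdown with boto3's Cost Explorer API. It assumes credentials with Cost Explorer access are already configured, and the dates are just example values:

    # Sketch: first step of a bill optimisation pass - see which AWS services
    # actually drive the monthly cost.
    import boto3

    ce = boto3.client("ce", region_name="us-east-1")

    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    groups = resp["ResultsByTime"][0]["Groups"]
    groups.sort(key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]), reverse=True)

    # Print the ten most expensive services for the month
    for g in groups[:10]:
        service = g["Keys"][0]
        amount = float(g["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service:45s} ${amount:,.2f}")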

Well that’s why I want Musk to know how many owners use it, that we’re unhappy it’s blocked, and that we want it back. We don’t have to have the unofficial one back, but supporting the idea officially would be awesome. Maybe they build it in house, or maybe they just release an API with official guidance on what’s accepted: define ping frequencies and use cases. Maybe charge services that want to access the API. It would suck to pay more, but honestly, if TeslaFi had to pass on a $1.99/mo charge to customers, wouldn’t you pay that? I would for the same current functionality... I’ve got to believe that would cover the cost of bandwidth and resources for Tesla, at least in the grand scheme of it.
 
I helped architect and build out our SaaS solution in AWS using Lambda functions for the entire front end and API Gateway for all our REST endpoints. Our monthly costs to host the front end are next to nothing.
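
For anyone curious what that looks like, here's a minimal sketch of a Python Lambda handler behind an API Gateway proxy integration - the route and payload are made-up placeholders:

    # Minimal Lambda handler for an API Gateway (proxy integration) REST endpoint.
    # Billed only while a request is actually being handled.
    import json

    def handler(event, context):
        # API Gateway passes the HTTP request details in `event`
        path = event.get("path", "/")
        method = event.get("httpMethod", "GET")

        body = {"message": f"hello from {method} {path}"}

        # Proxy integration expects this response shape
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(body),
        }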