
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

I didn't see this posted and couldn't find it via search.

Tesla sales did well in Turkey and then trailed off (well, I haven't checked for ages, and no one includes Turkey in regional stats - certainly not in "Europe" sales).

I knew there had been tax changes, but around two months ago Tesla responded by lowering Model Y power. Turkey levies at least 80% tax on ICE vehicles, and the lower-power Model Y RWD qualifies for the 10% bracket instead of the 50-60% it would have faced at the normal power level.

It's similar in Singapore: a US$8,455 difference (per the article; it varies with each auction), plus some details about COE bids. "A successful COE bid gives you the right to own a vehicle that can be used on the road for 10 years."


"In Turkey, for example, the 159 kW Model Y RWD has let Tesla offer a price up to 50% lower compared to its peak tag if it would've slotted in the tax bracket for cars with 160 kW and above power output. Turkey's "special consumption tax" slaps at least 80% over all ICE cars.

To encourage EV adoption, that tax is 10%-40% depending on their power output and price bracket. Unsurprisingly, the software-limited Tesla Model Y RWD falls into the 10% tax category, while a 160 kW version would've been taxed either 50% or 60% depending on the price, so the move has allowed Tesla to offer the Model Y way cheaper without doing much but a "soft performance limit" flagging.

The 110 kW Model 3 in Singapore follows a similar logic that avoids high local taxation for more powerful vehicles. As can be seen in the Tesla Singapore screenshot below, the price difference between the Model 3 RWD 110 and the regular Model 3 is not significant.

The software-limited 110 kW model, however, now falls into Singapore's Certificate of Entitlement (COE) category A which is capped at 110 kW output for EVs. Were it to go into the next category B, the COE tax difference would've been the whopping US$8,455 equivalent, so Tesla and other EV makers are giving buyers there a choice."

Singapore has a quota/auction system - better to be in Cat A rather than Cat B: Latest COE Prices and Bidding Results 2024 | Motorist Singapore

Prices are in SGD - the Cat A vs Cat B difference is SGD 11,445 (about USD 8,428).
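
To make the bracket effect concrete, here's a quick illustration in Python. The pre-tax base price is a made-up placeholder; only the Turkish bracket rates (10% for the sub-160 kW EV bracket, 50-60% at 160 kW and above, 80%+ for ICE) come from the article quoted above:

```python
# Rough illustration of how Turkey's power-based "special consumption tax"
# brackets change the sticker price of the same car.
# The pre-tax base price is a hypothetical placeholder, not a real figure.

base_price = 1_000_000  # hypothetical pre-tax price in TRY

brackets = {
    "EV < 160 kW (software-limited Model Y RWD)": 0.10,
    "EV >= 160 kW (unrestricted power)": 0.50,
    "ICE (minimum rate)": 0.80,
}

for label, rate in brackets.items():
    print(f"{label}: {base_price * (1 + rate):,.0f} TRY")
```

Same car, same hardware - the software power cap alone moves it from the 50%+ column to the 10% column.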

 
I wonder if this means Tesla will need to upgrade the FSD computer in all vehicles. Could be expensive.

Elon said: Our next-gen AI model after this has a lot of promise: ~5X increase in parameter count, which is very difficult to achieve without upgrading the vehicle inference computer.

Feels like salesmanship from Elon: yes, it's hard to do, but we're up to the task.
Likely they're doing quantization. Example: if you need roughly 5x compression, 8-bit weights / 5 ≈ 1.6 bits per weight, i.e. ternary quantization, also known as BitNet. See "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits".
Things that can be done to lower latency:
A quantized model is usually faster to decode than FP32. You can also make a model wider rather than taller to keep latency down (this assumes the hardware can do plenty in parallel). You can also do tricky things like run two models, one fast and one not so fast: the fast model reacts to emergency situations while the slower model handles most everything else. Other ideas include architecture improvements like block transformers: Block Transformer: Global-to-Local Language Modeling for Fast Inference. (A rough sketch of ternary quantization is below.)
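
For anyone curious what ternary quantization looks like mechanically, here's a minimal sketch in Python/numpy, loosely following the absmean recipe described in the BitNet b1.58 paper. It's illustrative only, not anything we know about Tesla's actual pipeline:

```python
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize weights to {-1, 0, +1} with a per-tensor scale,
    roughly in the spirit of BitNet b1.58's absmean scheme."""
    scale = np.abs(w).mean() + eps           # per-tensor scaling factor
    q = np.clip(np.round(w / scale), -1, 1)  # snap each weight to -1, 0, or +1
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate full-precision weights from the ternary codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = ternary_quantize(w)
print(q)                                      # ternary weights
print(np.abs(w - dequantize(q, s)).mean())    # mean reconstruction error
```

The point is just that storage drops from 32 (or 8) bits per weight to ~1.58, and the matmuls reduce to adds/subtracts, which is why a 5x larger model could conceivably still fit the same inference budget.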
 
Do we know if HW3 is upgradable in a Service Center? They could just swap the computer if the cameras don't change... So theoretically it would be an opt-in thing: the customer would take the time and pay for it, and the car would otherwise be the same. Not a recall.

I dimly remember that the HW4 computer doesn't fit in the place where the HW3 one sits (form factor? power requirements?). That memory dates back to around when HW4 was announced.
 
Elon said: Our next-gen AI model after this has a lot of promise: ~5X increase in parameter count, which is very difficult to achieve without upgrading the vehicle inference computer.

Feels like salesmanship from Elon: yes, it's hard to do, but we're up to the task.
Likely they're doing quantization. Example: if you need roughly 5x compression, 8-bit weights / 5 ≈ 1.6 bits per weight, i.e. ternary quantization, also known as BitNet. See "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits".
Things that can be done to lower latency:
A quantized model is usually faster to decode than FP32. You can also make a model wider rather than taller to keep latency down (this assumes the hardware can do plenty in parallel). You can also do tricky things like run two models, one fast and one not so fast: the fast model reacts to emergency situations while the slower model handles most everything else. Other ideas include architecture improvements like block transformers: Block Transformer: Global-to-Local Language Modeling for Fast Inference.
That's some detail there, thanks. I'm still trying to figure it out myself. (Maybe this next paragraph helps dumb it down some.) This is actually important if we want to avoid having the AI wool pulled over our eyes.

From a higher level, I just read about the basic difference between an AI chip (like the new AMD one in Windows laptops) and legacy silicon. They talk about using only 8 bits for the word size, I assume to quantize the full range of each parameter (like a neuron's weight). But you mention 1.58 bits? Likely OT, I'm going to have to read up, I think. But the idea is that 256 levels of anything offers fairly good control. My digital pots are still only 8 bits for this same reason. It affects speed and cost.
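
For what it's worth, the 1.58 figure just comes from counting levels; a quick back-of-envelope (illustrative only):

```python
import math

# 8-bit weights give 2**8 = 256 discrete levels per parameter.
levels_8bit = 2 ** 8

# Ternary weights use only 3 levels: -1, 0, +1. The information content
# per weight is log2(3) ≈ 1.58 bits, which is where the paper's
# "1.58 bits" number comes from.
bits_per_ternary_weight = math.log2(3)

print(levels_8bit)                        # 256
print(round(bits_per_ternary_weight, 2))  # 1.58
```

So 8-bit gives you 256 levels per weight, ternary only 3, and the surprise of the BitNet result is that big networks apparently don't lose much from that coarseness.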

The other thing mentioned was that these laptops would have local chat NNs that would integrate into Windows, Office, etc. This is some earth-shaking stuff here. The integration especially.
 
...Why would I pay extra money for a thing I was promised with a prior purchase for no additional cost? That seems extremely foolish advice...
Since I did that multiple times, I'll respond to that question only. I (not necessarily anyone else) bought that option each time it was available for my new Tesla, explicitly because I knew I was defraying cash needed for further development. It's not in the P&L, since Tesla did not recognize it as income, but cash flow counts for me.

Silly, perhaps, but I never expected it to become as good as quickly as it has now with v12.x.
I still seriously doubt there will be widely distributed robotaxis anytime soon. That skepticism does not in any way make me regret my purchases. In context, my most recent purchase did include a transfer of FSD. Frankly, for me only, I'd have bought without the transfer and would have paid again. From a solely economic perspective, absolutely stupid. From a developmental perspective, I'm sure it will have been worth all the 'wasted' money.
 
Why would I pay extra money for a thing I was promised with a prior purchase for no additional cost? That seems extremely foolish advice.

Doubly so when we have no idea if HW4 will be enough either, and my existing HW3 car continues to perform flawlessly - what sense would spending tens of thousands of extra dollars make here?

Are you holding on to the old car just to see if Tesla will eventually make good on the HW3 robotaxi promise (or otherwise make you whole)?
 
I hope people can read between the lines here.

Robotaxi-level FSD will not operate on HW3

Notice that, in order to get the purported 5x-20x improvement in reliability (according to Musk), the model had to increase 5x in parameter count.

So no, Tesla could not simply train with more data for longer to get that large of an improvement. They needed to increase model size. As I've speculated in the past.

Now the number of parameters barely fits on HW3, and probably only thanks to efficiency moves like reducing compute precision.

But we are still, say, 50x away from robotaxi-level reliability.

How much larger is the model going to need to get? Certainly at least another 10x in size.

No way in the world HW3 will handle that. And HW4 will also be challenged.
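
Just to put rough numbers on the scaling argument, here's a hypothetical back-of-envelope in Python. None of these numbers are real Tesla or HW3/HW4 specs; they only show how quickly model size eats a fixed memory budget, and how much of it quantization can buy back:

```python
# Back-of-envelope: how parameter count and precision trade off against a
# fixed on-board memory budget. All numbers here are hypothetical
# placeholders, NOT actual Tesla or HW3/HW4 figures.

def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

budget_gb = 8.0       # hypothetical memory budget for weights
base_params_b = 1.0   # hypothetical current model size, in billions

for scale, bits in [(1, 8), (5, 8), (5, 1.58), (50, 1.58)]:
    size = weights_gb(base_params_b * scale, bits)
    verdict = "fits" if size <= budget_gb else "does not fit"
    print(f"{scale:>3}x params @ {bits} bits/param -> {size:6.2f} GB ({verdict} in {budget_gb} GB)")
```

Under those made-up numbers, one 5x jump can be absorbed by cutting precision, but another 10x on top of that blows the budget no matter what, which is the core of my skepticism about HW3.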
Here we go again…..
 
Are you holding on to the old car just to see if Tesla will eventually make good on the HW3 robotaxi promise (or otherwise make you whole)?

I'm holding on to it because it still works flawlessly, is fully paid for, still has a couple years of warranty left on it, and I don't see anything especially compelling, personally, in the refreshed version of it.

Certainly some QOL improvements, but nothing compelling enough to throw north of $30,000 cash ($30k plus interest if financed) at it, above the trade-in value of the existing car, which still works perfectly and is under warranty.

The fact it ALSO has a version of FSD that's no longer sold is something I keep in mind, but if that $30k number were a ton smaller it wouldn't be something that held me back or anything.