Welcome to Tesla Motors Club

Software quality and security (out of Market Action)

It may be more than just you, but it appears you belong to a minority group that believes this.

It seems to me everyone is moving to this model led by Apple and Google.



Is it just me, or is the trend towards remotely pushed updates of everything seriously problematic? I believe there should always be a step of having to personally approve updates on site, because otherwise you've introduced a single point of failure for the entire system. A malicious attacker could take down the entire Supercharger network from a single point of access. And since you can always introduce an update which prevents the device from accepting further updates, it's an extremely high-risk attack.

Except in scenarios where there's extreme urgency in getting updates out, the extra time required to go to locations to "pull" updates seems well worth it in security terms; it mitigates a whole class of remote attacks.
 
While it's true that figuring out how the stuff is set up is part of the attack, notice that this person is extremely vague, *and* his information is years old.

Much of that isn't vague at all, and those are things that aren't going to change often. His complete lack of vagueness is precisely what made that discussion so interesting.

Suppose you have a SQL database which has widespread read access by a fairly generous front-end program, because access to the data is needed on a routine basis for business purposes by a very large number of low-level people. (Such as an insurance company's customer data, or Tesla's customer data.) Anyone doing a targeted attack knows the columns and tables *for essentially legitimate reasons* before even starting the attack.

Not even remotely true. That's the entire purpose of a frontend: an abstraction of the backend.

I don't know about you, but I've actually done SQL injection attacks before (and a couple of non-SQL injection attacks, such as bash injection... all too many people forget that injection attacks can apply to any command parser! I once discovered a bug in the text parser of a small MMORPG, Eternal Lands, that would let you run arbitrary commands on the user's system if they clicked an innocent-looking URL in chat... the rest of the dev team didn't believe me until I wrote an exploit. They were opening a browser to view URLs with popen... ugh). As a general rule, when it comes to SQL injection, knowing what you need to change is the limiting factor. If you've got access to an end-user-facing frontend for a database, as a general rule you have no clue how the database is structured. And far more often than not, you're working blind - you don't get to see the feedback of your injected commands. For good reason: that "security through obscurity" you hate makes it "best practice" to hide detailed errors from users and give them only vague ones.

Say it's a vehicle management database, and you want to uncork "performance" on all Model 3 AWDs. By all means, go on and tell me what commands you need to inject to do so. The simple fact is, you're very unlikely to just know what you need to do. The "obscurity" is your worst enemy. Anything that gives you a "flashlight" into the darkness - any way you can trick it into reporting more detailed errors, for example, or anything that helps you learn their naming scheme (failing that, brute-force guessing) - is your key to success.
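To make the contrast concrete, here's a minimal Python/sqlite3 sketch. The schema is entirely invented - the table and column names are hypothetical, which is exactly the point being argued: a blind attacker doesn't know them, so a schema-free probe can dump data, but a *targeted* write needs names the attacker has to guess.

```python
import sqlite3

# Hypothetical toy schema -- the names "vehicles", "vin", "model" are
# invented for illustration, not anyone's real database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE vehicles (vin TEXT, model TEXT)")
db.execute("INSERT INTO vehicles VALUES ('VIN001', 'Model 3 AWD')")

def lookup_unsafe(vin):
    # Vulnerable: user input is spliced straight into the SQL text.
    return db.execute(
        f"SELECT model FROM vehicles WHERE vin = '{vin}'").fetchall()

def lookup_safe(vin):
    # Parameterized: the driver treats vin purely as data, never as SQL.
    return db.execute(
        "SELECT model FROM vehicles WHERE vin = ?", (vin,)).fetchall()

payload = "x' OR '1'='1"       # classic probe: needs zero schema knowledge...
print(lookup_unsafe(payload))  # ...and dumps every row: [('Model 3 AWD',)]
print(lookup_safe(payload))    # treated as a literal VIN, matches nothing: []

# A targeted write (e.g. flipping some hypothetical performance flag)
# would additionally require knowing the real table and column names --
# the "obscurity as limiting factor" argument above.
```

The parameterized version closes the injection hole entirely; the point about obscurity is that even against the unsafe version, the attacker's generic probe and a useful targeted command are very different things.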

40,000 employees. You can't prevent most internal information from getting out, and you shouldn't even try.

You absolutely should try. Security leaks are not okay.

This is actually the mistake that the US Department of Defense is making; the best thing they could do would be to declassify *nearly everything*.

Great, and compromise all sources and facilitate all attacks, both physical (designing weapons systems against all of the freshly disclosed weaknesses in US hardware) and digital (basically handing them an attack plan).

If you have a problem with hacked and bulk-leaked data, the best you can do is honeypots: fake sources and targets that make it hard for an opponent to know what data is legitimate and what is simply designed to deceive them.

For making sure a person who does leak gets caught, subtle automatic data watermarking allows you to determine who the leaker was. Reality Winner was caught precisely through this means: all NSA printers print subtle yellow markings on printouts in a unique pattern, allowing them to match a physical document with the person who printed it. Watermarking can be automatically applied to almost any data - subtle wording / spelling / spacing changes, altering insignificant digits in numbers, minor alterations to graphics or positioning, etc.
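As a toy illustration of that kind of per-copy watermarking (a hypothetical scheme invented here, not the NSA's actual one): encode a recipient ID in invisible variations of the text - below, zero-width spaces after certain words - so that a leaked copy can be traced back to the person it was issued to.

```python
ZWSP = "\u200b"  # zero-width space: invisible in most renderings

def watermark(text, recipient_id, bits=8):
    """Hide recipient_id in the text by appending a zero-width space
    after word i whenever bit i of the ID is set."""
    words = text.split(" ")
    assert len(words) > bits, "text too short to carry the ID"
    return " ".join(
        w + ZWSP if i < bits and (recipient_id >> i) & 1 else w
        for i, w in enumerate(words))

def recover(marked, bits=8):
    """Read the hidden ID back out of a (possibly leaked) copy."""
    words = marked.split(" ")
    return sum(1 << i for i, w in enumerate(words[:bits]) if w.endswith(ZWSP))

doc = "The quick brown fox jumps over the lazy dog near the bank"
copy_for_42 = watermark(doc, 42)
assert copy_for_42.replace(ZWSP, "") == doc   # visually identical to the original
assert recover(copy_for_42) == 42             # leaked copy traced to recipient 42
```

Real schemes spread the mark across wording, spacing, and insignificant digits precisely so that stripping one channel (as `replace` does here) doesn't destroy the others.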

The answer to good security isn't "give up", as you seem to think it is.

This actually dates back to *safecracking*.

You're mixing up security of algorithms (aka, technique) vs. architectural security, and security through minority.

I'll let Wikipedia explain the difference:

Security through obscurity - Wikipedia

Obscurity in Architecture vs. Technique
Knowledge of how the system is built differs from concealment and camouflage. The efficacy of obscurity in Operations Security depends on whether the obscurity lives on top of other good security practices, or if it is being used alone.[7] When used as an independent layer, obscurity is considered a valid security tool.[8]

In recent years, security through obscurity has gained support as a methodology in cybersecurity through Moving Target Defense and cyber deception.[9] NIST's cyber resiliency framework, 800-160 Volume 2, recommends the usage of security through obscurity as a complementary part of a resilient and secure computing environment.[10] The research firm Forrester recommends the usage of environment concealment to protect messages against Advanced Persistent Threats.[11]

In short, you're saying NIST is wrong. Sorry, but I'm going to go with NIST.
 
Super cool code.

Thanks :) Found that __isleap was a macro, not a function, by... ahem... taking advantage of the lack of obscurity in commonly available include files. ;) The rest flowed from there, as macros are dangerous to begin with. And I'd already had the localtime-overwriting issue in the back of my mind. Pretty much any code where the man page warns you of a caveat or bug or whatnot is a ripe target for a hidden exploit.

But that entry doesn't hold a candle to what I planned to submit in the next competition. I mean, it's serious weapons-grade evil stuff. ;) You look at it and you see... "There's nothing here". But then you run it, and not only does it inexplicably do whatever evilness the contest wants, it then goes on to ruin your target's life. Executing hundreds to thousands of lines of unseen code, with nothing visible in the program, many executing long after the program stops running. I skipped the 2016 competition, and now I'm so disappointed that they haven't held it again :Þ
 
Software and security at a company like Tesla is going to be dynamic.

Old holes get closed, new ones open.

As new holes are closed, even newer ones open. Look at the never-ending Spectre/Meltdown shitshow. I'm tired of ensuring compliance only to be non-compliant the next week when Intel announces "but wait... THERE'S MORE".

Tesla has grown explosively since the alleged IT guy was there.

When companies form, security is really a non-consideration; it's all about deploying rapidly.

As an organization matures, it starts specializing and moving away from the one or three people who did EVERYTHING.

Security departments form, change management is introduced, outside organizations are hired to pen test.

Due to all the high-profile hacks in recent years, with examples like Sony and Target, security departments have become more powerful than ever. They have strong mandates not to be in the news, at all costs. The line about being "afraid to change things" can be broken by an edict from IT Security.

TL;DR - nothing said by the "Tesla IT guy" validates his employment, or his capacity in that employment, with certainty. Even assuming it was all true, I don't think the shared "knowledge" would pose a risk today.
 
Asking those of you with better programming/system chops: do this guy's criticisms seem to have merit?
Yes. Everything he said rings true... for any large software project. It's just the nature of the beast: the code always sucks, it's always the other guy's fault, my code isn't like that, and any random on the internet will tell you they can do it better.

But they don't.

Why not use the car’s connection for that?
It does. At least according to @Ingineer's posts from the past.

I would not be so sure.
Superchargers don't have any VIN blacklists; all the "supercharging allowed" logic and billing is handled inside the car. Hack the car = free supercharging, and supercharging allowed no matter whether it's a salvage car or not.
Indeed.
 
Asking those of you with better programming/system chops: do this guy's criticisms seem to have merit? Also, do we have insight into the general level of satisfaction of the current IT team?
Everyone's software sucks. I'm VP of Software Engineering.
Realistically, Tesla is not on par with Google, Facebook, and the like, but it still has very significant software chops, and no direct competitor is even in the same ballpark - except for Waymo/Google - at least in the Western world. I don't know about China.
 
Did anyone else catch the bit where this guy mentioned he was the one who (tried to) lock out @wk057 when he found the P100 images in the code?
What is your nickname on somethingawful?

The only difference from the usual prank is the use of an old account this time.
It's always something hot and popular elsewhere on social media, and it's always in the open part of the forum.
You have a "woken-up" storyteller, you have a few mild "critics" "relativizing" the given message, and you have "discoverers" who "find" matching pieces of the provided "story".
There is always preparation, and there is always the planting of "credible" elements. It is the essence of a good prank.

He did make a few essential mistakes; for example, he attributed to one person actions done in multiple different groups (to say nothing of individual positions). There are also factual mistakes about Tesla hardware.
BTW, Tesla has enough open positions with job descriptions to correlate the proper dependencies.
They clearly did a search.
The dude is clearly a programmer, but apparently they didn't have enough time to prepare. A paid job?
In any case, it was sloppy work.
 
Some remarks about his claims:

"tesla isn't encrypting their firmware and it's really easy to glean information from the vpn with a packet cap because nothing inside the vpn (was) encrypted. dumping tegra 3 model s and x is trivial and tesla's cars are nowhere near as secure as they'd have you believe."
From the video: "firmware locked"; hackers downloaded partial firmware using a man-in-the-middle attack (via the diagnostic Ethernet connector and a few tricks in between). By the time of the conference this option had also been locked. Remember, the video is from 2015.
half 3G connection
LOL, I just couldn't resist quoting it.

also on the supercharger note - you can get blacklisted from using them if you charge on them all the time. that's because the supercharger bypasses the charging regulator boards and dumps directly into the pack at 300A/450v which creates a ton of wear on the battery. want to keep your range high? don't supercharge often.
I hope there is no need to comment on this drivel here on a Tesla users' forum.
the infotainment system and gateway don't have a battery-backed rtc. when the system reboots (sleep, deep sleep, reboot, whatever) the car is at tyool 1970 until it gets ntp again.
No need to comment...
it isn't advertised for obvious reasons. tesla was partially able to bring autopilot to market so quickly because they had seeded all cars after a certain time with autopilot sensors and hardware, just not activated - they then dialed up the tracking across the customer fleet to gather the data - and with the density of cars they had at the time with hardware in, they had a goldmine within hours. pii was stripped, but i don't think they could have pulled it off quite the same way if the US had GDPR.
Factually untrue: Tesla started with MobilEye.
What is interesting: everything, and I mean everything, he wrote can be traced to web pieces somehow related to Tesla.
Nevertheless, plenty of his claims cannot be factual, and there are plenty of traceable mistakes.
 
Much of that isn't vague at all, and those are things that aren't going to change often. His complete lack of vagueness is precisely what made that discussion so interesting.
I basically saw a list of generic frameworks, and a rather substantial complaint that different teams were hacking together different frameworks. There was some more detail about the single-point-of-failure server architecture, which I'm pretty sure everyone already knows about, since there have been DoS attacks.

Did I miss something more detailed? Login IDs, table column names, server logins?

Not even remotely true. That's the entire purpose of a frontend: an abstraction of the backend.

I don't know about you, but I've actually done SQL injection attacks before (and a couple of non-SQL injection attacks, such as bash injection... all too many people forget that injection attacks can apply to any command parser! I once discovered a bug in the text parser of a small MMORPG, Eternal Lands, that would let you run arbitrary commands on the user's system if they clicked an innocent-looking URL in chat... the rest of the dev team didn't believe me until I wrote an exploit. They were opening a browser to view URLs with popen... ugh). As a general rule, when it comes to SQL injection, knowing what you need to change is the limiting factor. If you've got access to an end-user-facing frontend for a database, as a general rule you have no clue how the database is structured.
If the front-end is truly restricted, so that the only widespread access to the computer running the database is through the frontend, then sure. If the front-end communicates with the database through any open channel, everything the front-end does is public and should be assumed public. If the front end is on a widely accessed computer, *it is insecure* -- I can copy it, disassemble it, and figure out what it does.

And far more often than not, you're working blind - you don't get to see the feedback of your injected commands. For good reason: that "security through obscurity" you hate makes it "best practice" to hide detailed errors from users and give them only vague ones.

Say it's a vehicle management database, and you want to uncork "performance" on all Model 3 AWDs. By all means, go on and tell me what commands you need to inject to do so. The simple fact is, you're very unlikely to just know what you need to do. The "obscurity" is your worst enemy. Anything that gives you a "flashlight" into the darkness - any way you can trick it into reporting more detailed errors, for example, or anything that helps you learn their naming scheme (failing that, brute-force guessing) - is your key to success.



You absolutely should try. Security leaks are not okay.
NO. You are BEYOND TOTALLY WRONG here. You do NOT know what you're talking about.

Fundamentally, if you try to secure too many things, you *create a culture where people bypass the security routinely*, because that's *the only way to get any work done*. This is *already the culture in the US military*.

The psychological aspects of security are *primary*, and substantially more important than anything else.

Great, and compromise all sources and facilitate all attacks, both physical (designing weapons systems against all of the freshly disclosed weaknesses in US hardware) and digital (basically handing them an attack plan).
You really, really don't get it. While it's important to keep things like battle plans secret, and secret weapons if you actually *have* any (the US does not), trying to keep other stuff secret is just dumb. The Iranians and Chinese already know how to attack everything the US has and defeat it, and the smarter people in the US military *already know that they know this*.

If you have a problem with hacked and bulk-leaked data, the best you can do is honeypots: fake sources and targets that make it hard for an opponent to know what data is legitimate and what is simply designed to deceive them.
This is true. This isn't security, though; this is disinformation.

For making sure a person who does leak gets caught, subtle automatic data watermarking allows you to determine who the leaker was.
And this is effectively auditing, or an attempt to know when your data gets out, not an actual attempt to keep the data secure. It doesn't seem to have a deterrent effect.

FWIW, in some situations auditing is all that's needed. Banking, specifically, can get by with awful data security (and they have awful data security) because they have full auditing.
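That "full auditing" point can be made concrete. A tamper-evident audit log is essentially a hash chain: each entry's hash commits to the previous entry's hash, so editing or deleting any past record breaks every hash after it. A minimal sketch (an invented illustration, not any bank's actual scheme):

```python
import hashlib
import json

def append(log, actor, action):
    """Append an entry whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"actor": actor, "action": action, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    """Re-walk the chain; any retroactive edit breaks it."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("actor", "action", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log = []
append(log, "teller_7", "read account 1234")
append(log, "teller_7", "transfer $500")
assert verify(log)

log[0]["action"] = "read account 9999"   # retroactive tampering...
assert not verify(log)                   # ...is immediately detectable
```

This doesn't *prevent* anything - it just guarantees you can tell, after the fact, exactly what happened and whether the record itself was altered, which is the sense in which auditing can substitute for stronger access control.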

Reality Winner was caught precisely through this means: all NSA printers print subtle yellow markings on printouts in a unique pattern, allowing them to match a physical document with the person who printed it. Watermarking can be automatically applied to almost any data - subtle wording / spelling / spacing changes, altering insignificant digits in numbers, minor alterations to graphics or positioning, etc.

The answer to good security isn't "give up", as you seem to think it is.



You're mixing up security of algorithms (aka, technique) vs. architectural security, and security through minority.
Has the Tesla ex-employee revealed anything other than what software is being used, and some algorithm choice? I didn't see anything other than that. Did I miss something?

Security through minority is a deprecated approach.

I'll let Wikipedia explain the difference:

Security through obscurity - Wikipedia



In short, you're saying NIST is wrong. Sorry, but I'm going to go with NIST.

The writers of that one NIST draft paper are wrong.

Regarding security through minority, good old Wikipedia is correct:
Security through minority may be helpful for organizations who will not be subject to targeted attacks,
Tesla and the US military are both subject to targeted attacks.
 
Thanks :) Found that __isleap was a macro, not a function, by... ahem... taking advantage of the lack of obscurity in commonly available include files.
All macros are non-obscure by definition; I can expand them on demand.

;) The rest flowed from there, as macros are dangerous to begin with. And I'd already had the localtime-overwriting issue in the back of my mind. Pretty much any code where the man page will warn you of a caveat or bug or whatnot is a ripe target for a hidden exploit.

But that entry doesn't hold a candle to what I planned to submit in the next competition. I mean, it's serious weapons-grade evil stuff. ;) You look at it and you see... "There's nothing here". But then you run it, and not only does it inexplicably do whatever evilness the contest wants, it then goes on to ruin your target's life. Executing hundreds to thousands of lines of unseen code, with nothing visible in the program, many executing long after the program stops running. I skipped the 2016 competition, and now I'm so disappointed that they haven't held it again :Þ

There are some inherent problems with the standard "flat memory" von Neumann computer architecture, which wasn't designed for security; there's been a lot of work trying to retrofit this, without much success, though virtual memory maps help substantially. The pile of hacks necessary to make modern RISC chips backwards compatible with the 8086 creates its own problems.

C is practically assembler, and a poor choice for anything above systems-level work; I'd rather be using ALGOL. (I am reminded of the saying about ALGOL-60: "Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors.")

And most of computing goes on and on like this: underlying design problems that nobody can fix, because fixing them would require starting from scratch, and nobody wants to do that.