While it's true the process of figuring out how the stuff is set up is part of the attack, if you notice, this person is extremely vague, *and* his information is years old
Much of that isn't vague at all, and it covers things that aren't going to change often. His complete lack of vagueness is precisely what made that discussion so interesting.
Suppose you have a SQL database which has widespread read access by a fairly generous front-end program, because access to the data is needed on a routine basis for business purposes by a very large number of low-level people. (Such as an insurance company's customer data, or Tesla's customer data.) Anyone doing a targeted attack knows the columns and tables *for essentially legitimate reasons* before even starting the attack.
Not even remotely true. That's the entire purpose of a frontend: an abstraction of the backend.
I don't know about you, but I've actually done SQL injection attacks before (and a couple of non-SQL injection attacks, such as bash injection... all too many people forget that injection attacks can apply to any command parser! I once discovered a bug in the text parser of a small MMORPG, Eternal Lands, that would let you run arbitrary commands on a user's system if they clicked an innocent-looking URL in chat... the rest of the dev team didn't believe me until I wrote an exploit. They were opening a browser to view URLs with popen... ugh). As a general rule, when it comes to SQL injection, knowing *what you need to change* is the limiting factor. If you've got access to an end-user-facing frontend for a database, you almost certainly have *no clue* how the database is structured. And far more often than not, you're working blind - you don't get to see the feedback of your injected commands. For good reason: that "security through obscurity" you hate makes it "best practice" to hide detailed errors from users and only give them vague ones.
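What that popen mistake looks like, sketched in Python - a hypothetical reconstruction, not the actual Eternal Lands code; the function names and the simulated shell split are mine:

```python
def browser_command(url: str) -> str:
    # The dangerous pattern: build one command string and hand it to a
    # shell, e.g. popen("firefox " + url). Shell metacharacters embedded
    # in chat text survive into the command line.
    return "firefox " + url

def shell_split(cmdline: str):
    # Crude illustration of the first thing a shell does with that string:
    # ';' separates commands, so attacker-controlled text becomes a
    # second command of its own.
    return [part.strip() for part in cmdline.split(";")]

def browser_argv(url: str):
    # The fix: pass the URL as a single argv element (execvp-style, or
    # subprocess.run(["firefox", url])); no shell ever parses it.
    return ["firefox", url]

evil = "http://example.com/; rm -rf ~"
print(shell_split(browser_command(evil)))  # two commands: the browser, then rm
print(browser_argv(evil))                  # one inert argument
```

Nothing here actually executes a browser - the point is just how the string form hands the parser a second command while the argument-vector form cannot.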
Say it's a vehicle management database, and you want to uncork "performance" on all Model 3 AWDs. By all means, go on and tell me what commands you need to inject to do so. The simple fact is, you're very unlikely to just know what you need to do. The "obscurity" is your worst enemy. Anything that gives you a "flashlight" into the darkness - any way to trick the system into reporting more detailed errors, for example, or anything that helps you learn their naming scheme (failing that, brute-force guessing) - is your key to success.
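Here's a toy sketch of that guessing game, boolean-blind style. The schema, table names, and endpoint are all invented for illustration, and an in-memory SQLite database stands in for the backend the attacker can't see - all they observe is found/not-found:

```python
import sqlite3

# Stand-in backend. The attacker does not know these names.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE veh_config (vin TEXT, perf_unlocked INTEGER)")
db.execute("INSERT INTO veh_config VALUES ('5YJ3E1EA0000000', 0)")

def lookup(vin: str) -> bool:
    # Vulnerable endpoint: string-concatenated SQL. But the caller only
    # learns "found" or "not found" - errors are swallowed into a vague no.
    try:
        row = db.execute(
            "SELECT 1 FROM veh_config WHERE vin = '" + vin + "'"
        ).fetchone()
        return row is not None
    except sqlite3.Error:
        return False  # vague error: the attacker learns nothing from it

def table_exists(guess: str) -> bool:
    # Blind probe: inject a condition that's only true if a guessed table
    # exists, and watch whether the endpoint's behavior flips.
    payload = ("x' OR EXISTS (SELECT 1 FROM sqlite_master "
               "WHERE type='table' AND name='" + guess + "') --")
    return lookup(payload)

for name in ["vehicles", "cars", "veh_config"]:
    print(name, table_exists(name))  # only the real name comes back True
```

The attacker gets exactly one bit per guess, which is why the naming scheme being obscure is such an obstacle: every wrong guess costs a round trip and teaches almost nothing.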
40,000 employees. You can't prevent most internal information from getting out, and you shouldn't even try.
You *absolutely* should try. Security leaks are *not* okay.
This is actually the mistake that the US Department of Defense is making; the best thing they could do would be to declassify *nearly everything*.
Great, and compromise all sources and facilitate all attacks, both physical (designing weapons systems against all of the freshly disclosed weaknesses in US hardware) and digital (basically handing adversaries an attack plan).
If you have a problem with hacked and bulk-leaked data, the best you can do is honeypots: fake sources and targets that make it hard for an opponent to know what data is legitimate and what is simply designed to deceive them.
To make sure a person who does leak gets caught, subtle automatic data watermarking lets you determine who the leaker was. Reality Winner was caught precisely through this means: NSA printers print subtle yellow markings on printouts in a unique pattern, allowing a physical document to be matched with the person who printed it. Watermarking can be automatically applied to almost any data - subtle wording / spelling / spacing changes, altering insignificant digits in numbers, minor alterations to graphics or positioning, etc.
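As a toy sketch of that kind of watermarking - the specific scheme here, encoding a recipient ID in the choice of single vs. double spaces after sentences, is invented for illustration:

```python
import re

def watermark(text: str, recipient_id: int, bits: int = 8) -> str:
    # Encode one ID bit per sentence boundary:
    # bit 1 -> two spaces after the period, bit 0 -> one space.
    pieces = text.split(". ")
    out = []
    for j, piece in enumerate(pieces):
        out.append(piece)
        if j < len(pieces) - 1:
            bit = (recipient_id >> (j % bits)) & 1
            out.append(".  " if bit else ". ")
    return "".join(out)

def recover_id(marked: str, bits: int = 8) -> int:
    # Read the spacing pattern back out of a leaked copy.
    rid = 0
    for j, gap in enumerate(re.findall(r"\.( +)", marked)):
        if len(gap) == 2:
            rid |= 1 << (j % bits)
    return rid

doc = "One. Two. Three. Four. Five."
marked = watermark(doc, 11, bits=4)   # 11 = 0b1011, one bit per gap
print(recover_id(marked, bits=4))     # 11
```

Each copy handed out gets a different ID, so the leaked copy itself names the leaker - and a reader has essentially no chance of noticing the spacing pattern, let alone scrubbing it.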
The answer to good security isn't "give up", as you seem to think it is.
This actually dates back to *safecracking*
You're mixing up security of *algorithms* (aka technique) vs. architectural security, and security through minority.
I'll let Wikipedia explain the difference:
Security through obscurity - Wikipedia
Obscurity in Architecture vs. Technique
Knowledge of how the system is built differs from concealment and camouflage. The efficacy of obscurity in Operations Security depends on whether the obscurity lives on top of other good security practices, or if it is being used alone.[7] When used as an independent layer, obscurity is considered a valid security tool.[8]
In recent years, security through obscurity has gained support as a methodology in cybersecurity through Moving Target Defense and cyber deception.[9] NIST's cyber resiliency framework, 800-160 Volume 2, recommends the usage of security through obscurity as a complementary part of a resilient and secure computing environment.[10] The research firm Forrester recommends the usage of environment concealment to protect messages against Advanced Persistent Threats.[11]
In short, you're saying NIST is wrong. Sorry, but I'm going to go with NIST.