Finally getting around to watching DefCon videos, and I started out with Bruce Potter’s Dirty Secrets of the Security Industry presentation. I’ve seen recordings of Bruce Potter talks before at ShmooCon, and I’ve enjoyed his presence. Definitely a cool guy with a lot of passion for the industry, and I think he’s open to creating discussion, even if he knows he’s wrong and is just trying to get everyone to think. I can’t help but admire that! Here are some notes, followed by some reactions of mine. I definitely recommend watching this talk. Everything in blockquotes is a paraphrase or quote lifted from the slides and the presentation.
Bruce opened by talking about some foundational concepts and history of security. He made a point to show that security is still growing and making more and more money. He then went into his dirty little secrets.
Secret #1 – Defense in Depth is Dead – The problem is in the code. We’ve always had bad code. Fix the code. Firewalls don’t help things that have to be inherently open, like port 25 to the Internet for the mail server. Spending way too much money and time on defense in depth! Need type safety (programming), secure coding taught in schools, and trusted computing. We need better software controls on our systems, not better firewalls.
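To make the “fix the code” point concrete (this sketch is mine, not something from the talk): a mail server has to listen on port 25 to the whole Internet, so no firewall rule sits between attacker input and the code that parses it. A minimal C sketch of the bug class he’s talking about, and the code-level fix, might look like this (the function names and buffer size are made up for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical SMTP command handler: the kind of code that sits directly
 * behind port 25, where a firewall can't filter anything because the port
 * has to be open to the world. */

/* The bug class in question: an unbounded copy into a fixed-size buffer. */
void handle_command_unsafe(const char *line)
{
    char cmd[32];
    strcpy(cmd, line);  /* no length check; a long enough line overruns cmd */
    printf("got command: %s\n", cmd);
}

/* "Fix the code" version: validate the input and bound the copy. */
void handle_command_safe(const char *line)
{
    char cmd[32];
    if (strlen(line) >= sizeof(cmd)) {
        fprintf(stderr, "command too long, rejected\n");
        return;
    }
    strncpy(cmd, line, sizeof(cmd) - 1);
    cmd[sizeof(cmd) - 1] = '\0';  /* guarantee termination */
    printf("got command: %s\n", cmd);
}

int main(void)
{
    handle_command_safe("HELO example.com");                              /* accepted */
    handle_command_safe("HELO aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"); /* rejected, not exploited */
    return 0;
}
```

A type-safe language takes the unsafe version off the table entirely, which is the kind of foundation-level fix Bruce is pushing for; the firewall never even enters the picture.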
I’m hearing a lot more about this lately, about how we need inherently secure systems and devices and protocols. 🙂 All his points are good, and I really don’t outright oppose a viewpoint like this. We need better training for software developers, and we really do spend a shit-ton of money on more and more defenses that are band-aids over deeper problems.
However, I don’t think defense in depth is dead. I think he has great points, but I’d throw a shmoo ball at him for the sensational title of the secret. 🙂 We’re humans, and humans are producing code. It just takes one incident (which he says in a later slide) and defenses can break. That’s the point of defense in depth. Not necessarily about band-aiding insecure code, but rather ensuring that 1) we account for mistakes and unknown holes, and 2) we make sure attackers have to really try, or collude, or take a lot of time. If I can solve issue GER, and that’s your only defense, I win. If I have to solve issue GER plus LIG, I’m stuck…or I have to find help or spend more time breaking in.
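To put rough numbers on that (the numbers are mine, not Bruce’s): if each of two truly independent defenses has, say, a 1-in-10 chance of being bypassed, an attacker who has to get through both is down to roughly a 1-in-100 chance, because the bypass odds multiply. Real layers are never fully independent, so the gain is smaller in practice, but that multiplication is the whole argument for depth.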
This defense in depth approach only makes it *look like* we’re just band-aiding insecure code, which we kind of are, but that’s just an ancillary issue. To put it better: it’s an arguable position. (Marcin, if you’re reading this, yes, I use these $10 words all the time!)
Secret #2 – We are over a decade away from professionalizing the workforce – Much of what we do in our jobs is learned through self-education, not professional education centered around security. How do we codify and instruct the next generation? Security is everyone’s problem…because no one really knows how to properly do it. We can’t train all our professionals; how do we expect to train all our users? Users need tools that they can’t screw up – tools that don’t require education to be used securely. Years and years away from making this better.
A-fucking-men. First of all, he’s got a point about us not being professionalized yet. I went to school and got an MIS degree, which is, in effect, Comp Sci Lite. Did I get any information about security? Not a bit. Hell, I was barely prepared for a real technical job…I was more prepared to be a clueless analyst than a technical one. Bruce is absolutely fucking right that we’re almost all completely self-taught, either on the job or on our own. That’s not a professional workforce or industry. Not yet, anyway.
I love his point that security shouldn’t have to be everyone’s problem. I love his mention that users don’t need training, they need tools they can’t fuck up. Absolutely! Likewise, if we pour a bunch of money into training, and an idiot or new user shows up and makes a mistake, all of that is wasted. We need the technological controls, the secure systems, and the simplicity more than we need high-end training that makes security everybody’s problem. That’s not to say I’m all about de-perimeterization!
But that gets back to defense in depth. Users will make mistakes, which is also what defense in depth helps to mitigate. Yes, I think the industry has gone overboard and yes, we spend way too much money on many levels of defense, and we need to start spending that money smarter, on better defenses, and more secure foundations. More on that coming up…
Secret #3 – Many of the security product vendors are about to be at odds with the rest of IT – The security industry has sold a lot of defense in depth – a lot of money that isn’t going toward securing the foundation. Bruce uses Microsoft as a case study: Microsoft tries to make a more secure foundation, but then the vendors start complaining, and Microsoft has to bend and allow unsigned driver interaction.
Excellent points. In fact, this is an issue in more than just security. Lots of money is being spent on software and systems and security, and we’re starting to question, “Why?” “Why did I spend XXX on 3 years of Software Assurance for MS SQL Server, when no new product came out from 2000 until 2005?” I have used the example of Microsoft trying to secure its own product before, because it dramatically illustrates how our landscape has changed, and how a maturing security industry now has agendas to protect. I’ve been saying that Microsoft can’t just up and create a secure OS anymore. The vendors won’t let them. They’ll have to do it slowly, like boiling a lobster.
Defense in depth may not be dead, but he has a point that we really are spending too much on it.
Secret #4 – Full Disclosure is Dead – There is too much money to be made in selling bugs; even companies are paying for vulnerabilities. We want to make live systems more resilient to attack, but this market for vulns means those companies are (potentially) profiting at the expense of the end user.
Again, very true, and that last point is something I don’t think I had consciously realized before this talk. I still believe in full and/or responsible disclosure, but at least now I have some logic behind the bad taste those “pay for my vuln” scams leave in my mouth.
Bruce quickly closed out by re-emphasizing some of his suggestions:
Recognize that the landscape has changed. Push vendors to make products that actually create a secure foundation, not just more layers. We need to create a more formal body of knowledge for info security, and hold each other accountable.
This is an excellent talk, and I really love what he brings to the table. He wants to stir things up a bit, open discussions, and maybe even be wrong. But that’s the sort of openness we need to keep striving for. He made a really brief mention of being open and sharing information rather than bottling it up to sell under non-disclosure; of not standing politely in line and toeing it, but keeping the energy we know we have. I can only imagine a group conversation at a bar about this stuff lasting all night long!
i’m afraid that you and bruce and probably a large chunk of IT and mainstream security are in for either a rude awakening or a long fruitless wait because what you’re hoping for is just not coming…
for secret #1 – defense in depth is not going anywhere because we need it for malware… your quote of bruce mentions fixing code but that’s a vulnerability issue, not a malware issue… when it comes to malware, the idea of an ‘inherently secure’ system is a pipe dream – we have 20+ year old proofs (not ideas, not theories, not opinions, but proofs) that say the ability to support viruses (and therefore by extension malware) is inherent to the general purpose computing platform… given that and the facts that special purpose computers are basically glorified calculators and that there can be no ‘almost’ general purpose computer, malware and our need to defend against it is here to stay…
for secret #2 – security IS everyone’s problem… people are responsible for their own security in the physical world (you lock your car, you lock your house, etc) so the expectation that they shouldn’t need to be responsible for their own computer security is disingenuous… while there is the problem that people’s computer security practices haven’t caught up to the computer security realities yet, the fact is that if you think about it the same is true in the physical world too (how many people simply toss out mail with personally identifying info on it without shredding or otherwise destroying it first?)…
for secret #3 – microsoft TRYING to make the security of their operating system better is not the same as microsoft SUCCEEDING in making the security of their operating system better… locking down one avenue of attack, and locking out entire classes of security controls in the process (and they admitted that there were currently no other supported ways to get the equivalent functionality), is not a net security gain…
I think you and I would actually agree on #1 and #2. I don’t think defense in depth is going anywhere either. I like trying to approach inherently secure, but I won’t take it to an extreme and say I want fully inherently secure. That’s not possible, imo. I also think people should have some sort of responsibility for security, but right now, I still think we’re putting it on “everybody’s” shoulders because we don’t know what else to do. I think the pendulum should shift slightly more towards technology, not user training.
I like the middle of the road, as you can see. 🙂
I know this is a late response but I just saw this discussion.
You make some very good points, but the problem with this discussion is that it is based on conventional or status quo approaches to security. Solutions that take innovative approaches to security may provide answers, but we know the status quo is not providing them. While the jury is out on de-perimeterization, the real issue there is data-centric vs. network security, but what happens when that evolves? When we have whitelist access controls that are granular at the data file level, endpoint device issues will likely go away. We will have defense in depth, but it will start at the core.
Another example: Kurt is a very smart guy, but suppose that you alter the nature of the kernel so that it is alien in nature to malware? Can malware still bite onto the system? Never is a long time; how can he be sure that we will not beat this problem?