the big gamble of security

Gawker recently had an issue that exposed the security of their web code (and overall posture) as crap. Not surprising. Reading the comments on an article about it at The Register also yields no surprises.

There are plenty of managers and others who don’t understand the consequences and risks of not paying proper respect to security. They truly do need to be educated.

But there are others who *do* understand the risks, and who *still* make decisions that leave security lacking. This is what I call the big security gamble. It comes down to how much risk a company wants to accept, or at least put off until such a time (if ever) that something does happen. See, it’s that “if ever” part that really starts the shoving matches. In security, we really should be talking about the inevitability of an incident. But human nature won’t necessarily accept that inevitability. You really might be able to go for many, many years without suffering (or at least knowing you suffered) an incident. Kinda like not having car insurance and yet still driving…

It’s hard to argue that deadlines should be pushed in order to get security done right, especially when a product may be new and no one even knows if it is viable yet or going to succeed at all! What comes first, the product (and resultant revenues) or security spend? [I like to also say, to head off a natural line of argument: which comes first, learning how to assign a variable or learning how to assign a properly bounded and verified variable?] Of course, once it does succeed, that inertia of ignoring security is hard to turn around until something bad happens…

The fact is, economics will trump security. Hell, economics trumps *safety* even (though few people like to talk about that). This is life.

That sounds exceedingly defeatist and cynical, and in a way it is. But it really, really helps keep a security geek sane to come to terms with reality every now and then. That won’t stop me from always giving the ideal suggestions when asked, or trying to gain as much security ground as possible when given the chance. Or striving to do security correctly in the first place.

If I got pissed off at everyone who had a security incident or lapse or who didn’t cover every hole and feasible issue, I’d be pissed off at everyone. Granted, there is negligence and stupidity… but you get my drift, I’m sure.

bad things still happen to good systems

I’ve been quiet about the whole Wikileaks thing, and I likely will remain so. I don’t have anything to add that hasn’t been said already, and I gravitate closer to the fence than even I probably admit to myself.

Nonetheless, I won’t refrain from linking to good articles on said subject, like this one from Chris Swan posted at Fudsec. I like his practical thoughts on the subject.

To add: this was a failure of trust, a trusted user leaking docs. Would technology have prevented or alerted on this? Perhaps. But ultimately this still boils down to humans (talented staff, not just in security log-watching…) solving human problems (background checks, education, management…).

Now, maybe if they had body scanners and pat-downs whenever you enter or leave locations where you can view/manipulate sensitive data…

a little bit of blog history

Just because I was curious, I did some checking on my site here. I have 1,454 posts here on Terminal23.net dating back to 8/9/2004. That’s about 19 posts per month. Prior to that, I made all my posts on my personal blog at HoldInfinity.com (less geek, more personal blog), which has 268 posts since 10/05/2001. I’d say I’ve been blogging about security since 2004.
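The per-month figure checks out, by the way; here’s the quick back-of-the-envelope math (assuming the count runs from August 2004 through the end of 2010, which is when I’m writing this):

```python
# Rough posts-per-month check for Terminal23.net.
# Assumes the 1,454 posts span Aug 2004 through Dec 2010 inclusive.
posts = 1454
months = (2010 - 2004) * 12 + (12 - 8) + 1  # Aug 2004 .. Dec 2010 = 77 months
rate = posts / months
print(f"{months} months, about {rate:.0f} posts per month")
# → 77 months, about 19 posts per month
```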

Even prior to that, I’ve had a web site since 1997 (maybe late 1996 if I really push the definition), but those pages are no longer available except maybe on a floppy somewhere in a desk.

jay adds 5 infosec rules to live by

I like lists. Jay Jacobs over at his Behavioral Security blog posted a list of infosec “rules to live by.” Can’t say I disagree with any of them, but thought I’d add to the discussion a bit!

Rule 1: Don’t order steak in a burger joint. I don’t really have much to add to this excellent point!

Rule 2: Assume the hired help may actually want to help. I agree with this, but I’d also play with changing the wording in one of two ways. First: “Don’t assume anything.” Second: “Assume the hired help will follow the path of least resistance.” I know, I’m twisting that rule around almost 180 degrees. I get that awareness can (and does!) foster the ability for people to make proper decisions. But I can’t assume or rely on that enough to call it a rule. I really like the last line in Jay’s paragraph on this, though. Still, I think he makes a similar point to the one he went after here in the next few rules.

Rule 3: Whatever you are thinking of doing it’s probably been done before, been done better, by someone smarter, and there is a book about it. Absolutely! This is where being in touch with the greater security community is invaluable.

Rule 4: Don’t be afraid to look dumb. I can’t say this enough, especially to myself. Don’t be afraid to look dumb! We only get one life, usually one shot at things like first or lasting impressions. Don’t waste yours and other people’s time with false facades. Take a shot, fail, learn, do it better the next time. Lay your balls out there. As I’m fond of saying in the sysadmin world: we learn the most only when we’re troubleshooting issues or in the middle of failure. This is why “fail” and looking dumb need to be intrinsic cultural values in an IT organization.

Rule 5: Find someone to mock you. I’d probably reword this rule, but the point absolutely stands: find people who will honestly challenge you, mutually. This is the age-old, “Surround yourself with people smarter than you,” maxim. But really, it’s about mutual respect and being able to follow rule #3 and still be a man (or woman).

exotic liability 70 on honeypots

I have made my opinions on honeypots known: while I think they’re fun and useful to those who have the time or focus to analyze attackers and their tools (I can’t stress enough that there *are* orgs that *should* be using honeypots [like F-Secure!]), they’re just not useful to most organizations (in fact, almost all, if you ask me).

So I was a little skeptical when listening to Exotic Liability #70 and Lenny Zeltser came on and the topic of his recent blog post about honeypots came up (skip to 56:30). Chris Nickerson gave excellent reasons against bothering with honeypots. That could have been me talking, almost word for word. Researchers love honeypots, but that’s part of the problem: researchers sometimes just don’t get what really gives value to an organization *right now* in its security posture when it has limited resources (not grants or research funding).

But Lenny made one interesting observation about giving your talented staff a honeypot to play with, otherwise they may get bored and quit the organization for somewhere more exciting. I think that’s an interesting point, but probably not one that will matter too much. First, not many orgs have honeypots, so a staff member lacking one to play with can’t exactly jump ship to another org that has one. Second, if sec staff is bored, something is wrong. I can’t imagine that any real security pro is ever bored. Frustrated and disheartened, yes. But truly bored? Never. Truly, never.

Lenny’s article makes a bit more sense when you dismiss the idea of putting honeypots out on the public internet, which Lizzie helped expose in the interview. Then you’re really just using honeypots as another internal tripwire (or for those with the time and talent, a way to examine attacks). Honestly, I’d still suggest deploying other tripwires in the environment. Just like Chris says, I can’t think of any situation where I would ever suggest a company try out a honeypot in their environment. There are far, far, far too many other things that can be done.
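For what it’s worth, the “internal tripwire” flavor of this doesn’t even need a honeypot product. A minimal sketch (port number and alerting destination are made up for illustration): a listener on an otherwise-unused port, where *any* connection is suspicious because nothing legitimate should ever touch it.

```python
# Minimal internal "tripwire": listen on an otherwise-unused port and
# record any connection attempt. Nothing legitimate should ever connect,
# so every hit is worth alerting on.
import socket

def tripwire(host="127.0.0.1", port=0, max_hits=1):
    """Listen until max_hits connections arrive; return the source addresses.

    port=0 lets the OS pick a free port (handy for testing); in real use
    you'd pin an unused, tempting-looking port and forward hits to
    syslog/SIEM instead of returning them.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    hits = []
    for _ in range(max_hits):
        conn, addr = srv.accept()
        hits.append(addr)  # in practice: fire an alert here
        conn.close()
    srv.close()
    return hits
```

That’s a dozen lines, no product purchase, and it catches the same lateral-movement scanning a low-interaction honeypot would, which is kind of my point.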

(In economics, this is called opportunity cost.)

Next, Lenny’s article mentioned that honeypots are really just for mature security programs. But how many executives and even middle managers will *think* they have a mature security program, then hear about honeypots and how infosec researchers said they’re useful, and make that a new project or outright purchase? I really don’t think anyone should think about honeypots until outside infosec professionals “certify” their program as mature *and* they have some vested reason to analyze attackers and their tools (i.e. you research and then sell security). It’s important that an outside entity labels you as “mature.”

Lenny also mentioned the idea that an IPS could, instead of just preventing an attack, actually pretend that the attack will work and entice more interaction with the attacker. This is also interesting, but really does break down once you analyze it with any experience in security teams in real organizations. First, the level of sophistication in that IPS/IDS or whatever tool would have to be huge in order to entice anything except very specific scripted events. Second, why bother? I would rather my IDS/IPS present me with packet captures of what it alarms on, and not bother with enticing those attacks and giving me even more captures. And so on… it’s an interesting idea, but way too sophisticated for any of these companies or boxes that try to be “turnkey” or automated. This still all comes back down to talented staff, as usual, anyway.

german hackers target celebrities

German hackers gain access to celeb computers [namedrop Lady Gaga for more attention]. I know it is fairly common to have a Twitter or Facebook account hijacked, but I’m always surprised we don’t hear about more celebrity accounts being hacked. Then again, just because we don’t hear about it doesn’t mean it isn’t happening on a regular basis.

What’s really fun is how Twitter/Facebook expose the interaction between celebrities. You want to target a high-profile celeb? Maybe start by examining all those people whom they follow on Twitter and find the normal joes they trust/listen to. (I can’t be the only one who sometimes wonders who that 1,000,000-follower celeb has on their tiny 75 followed people list.) And so on. You can really spread some damage once you get into a few systems and start preying on the cyber-social aspects.
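To make the targeting idea concrete, here’s a toy sketch. The account names and follower counts are entirely made up, and a real attacker would pull this data from the social network’s public API; the point is just that sorting a celeb’s followed list by audience size surfaces the likely personal contacts first:

```python
# Toy illustration: given the accounts a high-profile target follows,
# surface the low-follower "normal joes" an attacker might go after
# first. All data here is invented for the example.
followed = {
    "record_label":  2_400_000,  # account name -> its follower count
    "famous_friend": 8_900_000,
    "old_roommate":  312,
    "personal_chef": 145,
    "tour_manager":  980,
}

# Smallest audiences first: likely personal contacts, likely softer targets.
soft_targets = sorted(followed, key=followed.get)[:3]
print(soft_targets)
# → ['personal_chef', 'old_roommate', 'tour_manager']
```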

I once had a dream (as in, a daydream, not a life ambition) about being a security/computer expert for celebrities. I mean, they’re just the same as any old joe (or any old C-level) and have the same issues and lack of knowledge as anyone else. Plus extra money to throw at dedicated service. I imagine that market would be lucrative with some word-of-mouth.

Though I guess PR agencies and agents would rather cover those zones. Who knows.

Article via infosecnews mailing list.