wild leaps of logic from tech journalists

Cringely has a strange article that continues the RSA SecurID attack mystery: InsecureID: No more secrets?. I can’t say I’ve ever read Cringely before, so maybe he’s just some tech commentator with no real insight here, other than a wide following and sensational, wild speculations… (After writing this, scanning his recent articles pretty much shows me he’s just a tech blogger and that’s it. Yes, I mean it when I say that’s “it.” And yes, I’m being ornery today and particularly spiteful in my dislike of tech commentators dipping a crotchety toe into deeper discussions than they’re suited for.)

It seems likely that whoever hacked the RSA network got the algorithm for the current tokens and then managed to get a key-logger installed on one or more computers used to access the intranet at this company.

Wow, that’s quite the leap in logic (on multiple fronts), especially since RSA hasn’t revealed what was pilfered from their network. Common speculation holds that the most likely divulged piece is the master list mapping token seeds to the organizations issued those tokens (probably keyed by serial number).

How would a keylogger assist in this? Well, first, a keylogger alone could be enough to divulge login credentials, although any captured credentials are quite ephemeral when using a SecurID token. Second, it could reveal usernames and any static PINs in use. I assume the PIN is the “second password” mentioned in the article. *If* (and that’s a big if) the attacker was the same one who *may* have the seed list and algorithm, that attacker could theoretically match up a user and their fob based on keylogged information.
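To make that concrete, here’s a minimal sketch in the spirit of a time-based token (RFC 6238 flavor). To be clear: RSA’s actual SecurID algorithm is proprietary, and the seed below is made up; this just shows why the seed is the only secret, and why seed + keylogged PIN is game over.

```python
# Minimal TOTP-style sketch (RFC 6238 flavor) -- NOT RSA's proprietary
# SecurID algorithm -- just to show the moving parts: the only secret
# is the per-token seed; the time and the math are public.
import hmac, hashlib, struct, time

def token_code(seed: bytes, t: int = None, step: int = 60, digits: int = 6) -> str:
    """Derive the current code from a shared seed and the clock."""
    counter = int(t if t is not None else time.time()) // step
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# If an attacker holds the seed list, all they're missing is the static
# part (username + PIN) -- exactly what a keylogger coughs up.
print(token_code(b"hypothetical-stolen-seed-000123456"))
```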

Does this mean “admin files have probably been compromised?” No; that’s an even bigger leap in logic. Possible, sure. But only with the correct access and/or expanded control inside the network. Hell, I’m not even sure Cringely knows what he means by “admin files.”

Of bigger concern is how a keylogger got installed on such a system and stayed there long enough to cause this issue. Granted, something was detected (though I suspect it was *after* a theft or attempted VPN connection), but being able to spot such incidents on the endpoints or in the network itself should be a big priority.

microsoft’s waca v2.0 released: web/app/sql scanner

Microsoft has recently released their Microsoft Web Application Configuration Analyzer v2.0 tool. This is such a straightforward tool to use, and includes rather clear checks and fixes, that it’s really not acceptable to *not* run something like this, especially if you run Microsoft IIS web servers or SQL instances.

The tool has a nice array of checks when pointed against an IIS box, and even does decent surface checks against SQL. While this tool does include “web app” in the name, I don’t think it goes much beyond inspecting a site’s web.config file on that front. It also requires Microsoft .NET 4.0 on the system you install the tool on, and predictably needs admin rights on any target systems it scans. If you’re curious about any checks, they’re pretty clearly spelled out. Also, if you want to suppress any checks because they don’t apply, you can do so. The report then mentions the presence of suppressions (yay!), and you can even remove the suppressions after the fact, since the tool still does the checks but just doesn’t include them in the end tallies.
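For flavor, here’s a toy sketch of the *kind* of check such a tool runs against a web.config. The flagged settings are real ASP.NET options, but this mini-scanner is hypothetical and nowhere near how WACA itself works:

```python
# Toy illustration of web.config-style checks; this hypothetical
# scanner flags two classic ASP.NET misconfigurations.
import xml.etree.ElementTree as ET

def check_web_config(path: str) -> list:
    findings = []
    root = ET.parse(path).getroot()  # root is the <configuration> element
    compilation = root.find("./system.web/compilation")
    if compilation is not None and compilation.get("debug", "").lower() == "true":
        findings.append("compilation debug='true' (leaks detail, hurts perf)")
    errors = root.find("./system.web/customErrors")
    if errors is not None and errors.get("mode", "").lower() == "off":
        findings.append("customErrors mode='Off' (raw errors shown to clients)")
    return findings

for finding in check_web_config("web.config"):
    print("FLAG:", finding)
```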

This does make a great companion scan tool to add to your toolbelt for appropriate systems, even if it has a herky-jerky interface.

As a sort of cautionary piece of advice, I wouldn’t be totally surprised if some organizations request this tool be run by potential vendors/service providers whose systems meet the tool’s criteria. Which means you hopefully will have run this tool before such a request! It’s much more palatable to request something like this as part of an initial security/fit checkbox when it is an official Microsoft tool. Just sayin’…

some security practice hypotheses

I’m not sure if I jotted these notes down here or not, but wanted to move these from a napkin to something more permanent.

What is the hardest part of security? My thought: Telling someone they’re doing it wrong when they don’t know how to do it right, and you can’t explain it properly. The more technical, the worse it is?

Two examples: First, someone makes a request that is just kinda dumb and gets denied. They come back with, “Why?” And you have to figure out why the value of saying no is higher than the value of just doing it, or what it would cost to accomplish the request while maintaining security and administrative efficiency. (i.e. You want *what* installed on the web server?!) This can be highly frustrating in a non-rigid environment. It’s also the source of quite a lot of security bullshitting.

Second, a codemonkey makes poor code and you point it out. Codemonkey asks how it should be done. If you’re going to point it out, I’d really kinda hope for some specific guidance appropriate to the tech level of your audience. This brings up the pseudo-rhetorical question: Should you be pointing out poor code if you don’t know how to give suggestions on fixing it? (Answer: depends. On one hand, don’t be a dick. On the other, anyone should be able to point out security issues; otherwise they’d never get pointed out! It’s extremely nice when someone *can* help those with questions, though, with actionable answers beyond just “go read OWASP.”)

And here’s a hypothesis: You’re not doing security if you’re not breaking things, i.e. pushing boundaries. Follow-up: Security pursuit breaks things unless you have expert knowledge and experience.

answering some questions on siem

(I should name this: how I can’t type SIEM and keep typing SEIM…) Thought I’d ramble about SIEM for a moment (as I’m also in the midst of waiting on a report to spin up in my own SIEM), sparked by Adrian Lane’s post, SIEM: Out with the Old, which also channels Anton Chuvakin’s How to Replace a SIEM?

Adrian echoed some rhetorical questions that I wanted to humbly poke at!

“We collect every event in the data center, but we can’t answer security questions, only run basic security reports.” – That probably means you got the tool you wanted in the first place: to run reports and get auditors off your butt! More seriously, this is a good question as it somewhat illustrates a maturing outlook on digital security. I’d consider this a good reason to find a new vendor. That or your auditors are worth more than you’re paying them, and asking harder questions than usual. Good on them! (Though I’d honestly hope your security or security-aware staff are asking the questions instead…)

“We can barely manage our SIEM today, and we plan on rolling event collection out across the rest of the organization in the coming months.” – Run. Now.

“I don’t want to manage these appliances – can this be outsourced?” – You want to…outsource…your…security…? You may as well just implement a signature-based SIEM and forget about it, because that’s the value you’ll get from a group that isn’t intimately aware of or caring about your environment. Sorry, I would love to say otherwise and I’m sure there are quality firms here and there, but I just can’t bring myself to do so. It is hard enough to manage a SEIM when you know every single system and its purpose.

“Do I really need a SIEM, or is log management and ad hoc reporting enough?” – That’s a good question! You’d think the answer goes along the lines of, “Well, if you want it to do work for you, get the SEIM, otherwise you’ll need to spend time on the ad hoc reports.” But really, it’s the opposite: you need to spend time with the SEIM, but the reports you likely can poop out and turn in to your auditors. This might also depend on whether you do security once a quarter or want to do it as part of ongoing ops. It amazes me that people know about this question, have it asked to their face, but then go about life in the opposite direction.

“Can we please have a tool that does what it says?” – Probably the most valid question. The purchasing process for tools like this is too often like speed dating, when really it should be about multiple, intimate dates with each candidate; you might even spend some memorable moments together! With a tool as advanced as a SIEM, which has an infinite number of ways it can be run and an infinite number of types of logs it can slice, you can’t believe what the marketing team throws at you. Hell, you can’t even listen to what the purchasing manager says either. You need the people with their hands in the trenches to talk to the sales engineers and get real hands-on time. Nothing can fast-track that other than some real solid peers (industry networking! oh shit!) who can give you the real-deal information on living with a tool.

The biggest issue in this? No SIEM reads and understands every log you throw at it, especially your internal custom apps! No matter what the sales dude says! (Some will read anything you send in, but they’ll lump the contents into the “Log Contents” field, rather than truly parse or understand it.)
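To make that concrete, here’s a hypothetical sketch of the per-app parsing work no SIEM vendor can ship ahead of time. The app, the log format, and the field names are all made up; the point is that without this work, the whole line just lands in a generic blob:

```python
# Hypothetical custom parser for a made-up internal app's log line.
# Without it, a SIEM dumps the whole line into a "Log Contents" field
# instead of fields it can actually correlate and report on.
import re

LINE = '2011-05-19 14:02:07 app=payroll user=jdoe action=LOGIN result=FAIL src=10.1.2.3'

PATTERN = re.compile(
    r'(?P<ts>\S+ \S+) app=(?P<app>\S+) user=(?P<user>\S+) '
    r'action=(?P<action>\S+) result=(?P<result>\S+) src=(?P<src>\S+)'
)

def parse(line: str) -> dict:
    m = PATTERN.match(line)
    return m.groupdict() if m else {"log_contents": line}  # the vendor fallback

event = parse(LINE)
if event.get("result") == "FAIL":
    print("failed %s by %s from %s" % (event["action"], event["user"], event["src"]))
```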

“Why is this product so effing hard to manage?” – Well, I’ve not seen a SIEM that is *easy* to manage, so who is in the wrong here?

Anton had this awesome paragraph:

By the way, I have seen more than a few organizations start from an open source SIEM or home-grown log management tool, learn all the lessons they can without paying any license fees – and then migrate to a commercial SIEM tool. Their projects are successful more often than just pure “buy commercial SIEM on day 1” projects and this might be a model to follow (I once called this “build then buy” approach)

I think this is a great way to go! But I’d caution: teams that have the time and skill to roll their own or run open source tools are also the ones that will have the time and skill to manage a commercial solution. However, the real point is valid: you’ll learn a ton by doing it yourself first, and can go into the “real” selection process armed with experience. To build on the analogy above, you’ve lived with someone for a while, broken up, and now know what is *really* important in a concub…I mean, partner.

a case of digitally spilled milk

A security researcher presenting at BSides-Australia demonstrated Facebook privacy issues by targeting the wife of a fellow security researcher without permission. Sounds exciting, yes?

1. What he did was in bad taste, and maybe even unethical. Let’s get beyond that…

2. We all know in security that you don’t get shit done unless someone gets slapped in the face, hacked, or embarrassed/shamed. This is human and group psychology. So, in a way, this guy probably made more of an impression on people who might read this than would otherwise have happened. Sad, but true. Will it get out beyond the security ranks? Probably not, unfortunately.

3. It doesn’t sound like anything embarrassing or harmful was actually found. I mean, seriously, are people uploading kinky or extremely embarrassing photos to Flickr/Facebook and truly not wanting them seen by anyone else? If so, you’ve already failed. (People who upload such content for public consumption and leave them up for future employers to harvest are a different sort of dumb.)

4. Intent does count for a lot, both for the perpetrator of a crime and for the negligence of the victim, but Heinrich does have an interesting point: “‘I have no ethical qualms about publishing the photos,’ he said. ‘They are in the public domain.’” Facebook may not intend to put them in the public domain, but they may not be doing enough. Honestly, I’d consider the end result of this to be public domain, yes. Sorry, fix your shit. Wishing and hoping and merely *saying* so doesn’t matter. (Yes, I know if I leave my door open and someone breaks in, it wasn’t enticement, but still, shame on me…)

5. In addition, I’m not sure how pissed I’d be if it were my wife and/or kids. I mean, I’ve opted to put my photos up. As security aware as I am, I have the opportunity to know the risks. A real attacker is going to do much worse if they have it out for me, such as photoshopping images into even worse content, and so on. I’d rather have someone helpfully hack me and expose issues than a real attacker do so with vengeance, especially in something that doesn’t harm me any more than a little public ribbing and feeling a little used, like being the brunt of a non-harmful joke. In another way of thinking, don’t spend effort getting pissed over little things; know what’s important in life.

At some point in security, the “kid gloves” do have to come off, if you want to get shit done. And we’re all a little “grey hat” every now and then…or at least Bob is…

(Snagged off the InfosecNews wire via article.)

impassioned people tend to do quality stuff

I read this article about how Process Kills Developer Passion. I’m not a formal coder (more like a scripter), and I only work with a subset of coders (web devs), but I really believe this article hits several points squarely, particularly in how process can kill creativity.

The caveat is that this applies to most anything, really. Process can be necessary, for instance in documentation on how things work or why they are the way they are. Or to cover your own ass when requirements try to change in 6 months.

But the point remains: passionate people tend to do quality things; don’t kill the passion.

I’d also point out something that has been permeating a few of my recent posts: There are not always going to be universal, blanket answers for everything!

You won’t appease every developer you hire with any single coding process methodology. You won’t cover every situation with a monolithic security tool. You won’t reach every student with a singular approach to learning. You won’t block every breach…

Picked this article up via Securosis.

why aren’t they using our technology? (tears)

ITWorld has an article: “Apps to stop data breaches are too complicated to use”, which itself is a rehash of this article on The Reg. The article makes 2 (obvious to us anyway) points:

1. Security software is too damned complicated to use. No shit.

2. “…the tendency of customers to not use even the security products they’ve already bought.” I think many of these tools don’t get used because they’re complicated, require experts to feed-and-love-it and review logs constantly, and when they get in the way business gets pissed. They cost money directly, they cost operational money, they cost CPU cycles, they cost frustration from users…

(I’m trying desperately, futilely to avoid the assertion in the second article: “…needs to change so that the technology can be up and running in hours rather than months…” Trying to meet that sort of goal is ludicrous…)

Strangely, the article finishes with this odd moment:

Security systems, intrusion protection, for example, are often left in passive mode, which logs unauthorized attempts at penetration, but doesn’t identify or actively block attackers from making another try.

“It’s a mature market – please turn it on,” Vecchi told TheReg.

I’m not going to deny or accept that these are mature markets, but I will say most *businesses* aren’t mature enough to just turn security shit on. There are 2 very common results when you “turn on” technologies to do active blocking or whatever you have in mind.

a. It blocks shit you wanted to allow. This pisses off users, gets your manager in trouble, and requires experts to configure the tools and anticipate problem points, or extra time to figure it out (with the danger of some nitwit essentially doing an “allow all” setting).

b. It doesn’t get in the way, but doesn’t block much of anything by default. I imagine far too many orgs leave it this way thinking they’re safe, when in fact it’s only blocking the absolute most obvious worm traffic and port probes (31337). In order to get it better tuned, you need experts who know what to look for and block.

The ideal is a state where you bounce between those two outcomes: you tune security to butt right up against the point where it starts negatively impacting people while still providing protection. Unless you’re a perfect security god, you will forever bounce between those two states.
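As a toy sketch of that tuning knob (a pretend inline filter, not any real IPS product; the scoring and the threshold are entirely made up):

```python
# Toy sketch of the passive-vs-blocking trade-off. The 'threshold' knob
# is the whole game: too aggressive and you hit outcome (a), too timid
# and you hit outcome (b).
SUSPICION_THRESHOLD = 70  # tuned against *your* traffic, forever

def score(event: dict) -> int:
    """Pretend scoring; real engines use signatures, anomaly models, etc."""
    s = 0
    if event.get("dst_port") == 31337:
        s += 80  # the absolute most obvious stuff
    if event.get("payload", "").count("%") > 10:
        s += 40  # crude encoded-payload hint
    return s

def handle(event: dict, mode: str = "passive") -> str:
    if score(event) < SUSPICION_THRESHOLD:
        return "allow"
    # Passive mode only logs; active mode actually gets in the way.
    return "alert" if mode == "passive" else "drop"

print(handle({"dst_port": 31337}, mode="passive"))  # alert
print(handle({"dst_port": 31337}, mode="active"))   # drop
```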

Business doesn’t like that. They want to create a project with a definite start and finish, implement a product, leave it alone, and have it never get in the way of legitimate business.

This is bound to fail. It’s the same concept as a security checkpoint or guard at a door: it’s intended to *slightly* get in the way when something or someone is suspicious, and it does so forever. This is why I have yet to buy into “security as enabler.” Security is designed to get in the way, even security implemented to meet a requirement so you can keep doing business: the requirement is the part that delivers the security, and it gets in the way.

There are companies that “get” security, but I guarantee they are also companies filled with employees who can tell plenty of stories about how security gets in their way on a daily basis, whether justified or not. That’s how it is, and business hates that. Even something “simple” like encryption on all laptops is a pain in the ass to support.

To dive into a tangent at the end of this post, let me posit that security toolmakers are just plain doing it wrong. They too often want to make monolithic suites of tools that cover every base and every customer and every use case and every sort of organization. This creates tools with tons of features that any single org will never ever have a chance in hell of using. This creates bloat, performance issues, overwhelmed staff, and mistakes. It leaves open lots of little holes; chinks in the armor. I’d liken it to expecting a baseball player to perform exceptionally at every position and in every situation. It’s not going to happen. Vendors need to offer solid training as part of their standard sale (not an extra tacked on that is always declined by the buyer).

It starts with staff and they start with smaller, scalpel-like tools. Only when staff and companies are “security-mature” will they get any below-the-surface value out of larger security tools.

Maybe over the long haul we’ll all (security and ops) get used to these huge tools in a way that we can start to really use them properly. Oh, wait, but these vendors keep buying shit and shoving it in there and releasing new versions that change crap that didn’t need changing. And IT in general is changing so fast (new OSes, new tech, new languages, new solutions) that these tools can’t keep up while also remaining useful. So…in my opinion, still doing it wrong. The difference between real useful security tools and crappy monolithic security tools, as a kneejerk thought: good tools don’t change their core or even their interface much at all; they just continue to tack on new stuff inside (snort?). Bad tools keep changing the interface and expanding core uses, essentially resetting analyst knowledge with every yearly release.

Picked this article up via Securosis.

social media in the classroom

Saw an article linked from HardOCP about social media in the classroom (in Iowa, in fact), and I also read the accompanying forum comments. This is one of those situations where almost every comment is correct.

We, as an American culture, often seem to stumble when it comes to our strange drive to find the one right universal answer, even in a subject that really doesn’t *have* one single blanket answer, such as education (IT and security suffer the same problem). What about class size? What about subject matter? What about teacher personality? What about the extroverts? The introverts? The ones who actually need special attention? And so on. All of these factors really, to me as a non-teacher, stress that every situation is going to be different.

It is exciting to see the role of technology such as tablets (and the internet) working into education, and I think you should try anything you can to engage as many students as possible in the way they respond to best, on an individual level.

the tools do not make the analyst

Harlan echoed some of my own feelings in a recent post of his:

…I keep coming back to the same thought…that the best tool available to an analyst is that grey matter between their ears.

…Over the years, knowing how things work and knowing what I needed to look for really helped me a lot…it wasn’t a matter of having to have a specific tool as much as it was knowing the process and being able to justify the purchase of a product, if need be.

Totally agree. This should apply to IT in general. If the tools replace knowledge, then you become a slave to the tool and its capabilities and weaknesses, and you lose the ability to ever work around the inevitable gaps in these tools.

skype 0day: pwning through messages

Every now and then I have to give reasons against something like Skype in the enterprise. Here’s a great reason why: 0day Skype messages. Wormable. (via @hdmoore)

The point is not to waggle fingers at Skype (though you could, since they’re closed and not very talkative), but to illustrate the risks inherent in any new technologies brought into the enterprise. (Not that I wouldn’t waggle fingers at Skype anyway, since I believe something like Skype wouldn’t be allowed to be so popular unless there were ways to tap into the voice streams.)

nsa publishes home network security tips

The NSA has published a nifty Best Practices for Keeping Your Home Network Safe fact sheet. This is a pretty good document which mixes easy-to-understand concepts with some more challenging ones. I really feel that people can get overwhelmed with the technical stuff, but usually do react favorably when given manageable challenges.

I’d like to have seen more emphasis placed on unique, complex passwords and the importance of passwords, but these are still excellent bullet points to cover with people. Entire books can’t cover the breadth of tips for good security these days, even for the layman…
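A back-of-envelope illustration of why I harp on this (my numbers and math, not anything from the NSA sheet): a password’s strength is roughly length × log2(alphabet size) bits, which is why length wins, and why reuse loses (one breach burns every account sharing the password).

```python
# Rough password entropy: length * log2(alphabet size). Illustrative
# back-of-envelope math only, not from the NSA fact sheet.
import math

def entropy_bits(length: int, alphabet: int) -> float:
    return length * math.log2(alphabet)

print(entropy_bits(8, 26))    # 8 lowercase letters: ~37.6 bits
print(entropy_bits(8, 94))    # 8 printable-ASCII chars: ~52.6 bits
print(entropy_bits(16, 26))   # 16 lowercase letters: ~75.2 bits -- length wins
```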

just passed my five-year job anniversary

As I earlier mentioned as an afterthought, I just passed my 5-year anniversary in my current job.

My timeline:
1996-2001 college (5 years = changed studies halfway through)
2001-2002 yeah, the tech hiring bust!
2002-2006 first job
2006-2011 second job

Let me tell ya, time flies.