least user access

I almost always read “least privilege” or “least user access” in a headline and click into the article wondering what it will cover. Without fail, it turns out to be that age-old discussion of whether users should be running as admins on their local machines or not.
What about the other aspect of least user privilege? Namely, the file servers. How are company file server resources allocated? How are requests for access to information handled? Not everything is in databases or web applications. So, what about this very important topic?
I wonder if this is because very few people (IT journalists included) understand the nuances of managing security permissions in anything but a tiny environment. While it might seem easy to isolate developer files, what about when we start talking about collaboration, or dynamic teams that span multiple departments?
Weird, considering I would expect many organizations to be quite bad at tracking and reporting on actual user access, or even at managing that access at all.

on the forefront of technology

A quote from an ITBusiness article:

“You gotta be mobile, regardless. While it may pose great [security] risks, it’s a greater risk to fall behind,” Levy said.

It goes without saying that you can’t let your networks and systems linger and gather dust so much that we get another “it’s 2004, why are you still running Windows 98 systems?” situation. As support drops off, so too should use. Just look at SCADA systems for a lesson in what not to do…
However, there is still something to be said for being at the forefront of technology and not sitting around playing catch-up, five or more years behind. I think it could help IT’s perception if IT were closer to the forefront of technology, enabling and assisting employees more. This might be a bit dangerous in some cases, but in most cases the only real danger is overspending on new things that may or may not work out in the long run. Thankfully, a technology decision these days does not have to be one you live with for 20 years…or even 5. Everyone in business makes mistakes; IT should be held in no different regard. If we move forward with mobile devices before they become fully mainstream and it doesn’t work out, so what?
I could go into a lot of the benefits and risks, the goods and bads, but I think it is interesting to imagine the change in approach of just doing some things and figuring out the security later. Perhaps this is a bad idea for most, but it is still something to think about. Why wait 3 more years before encouraging mobility in the organization? Why not do it now and deal with the risks, issues, and technology? Why wait for users to clamor louder for IM instead of moving forward and dealing with IM in the organization now?
Now, this is weird for me to be saying. I typically am not an early adopter, but I do have an excuse: in college and beyond I have not had much leisure money at my disposal to delve into new things. My attitude is certainly ready to change now that I am crawling far enough out of debt to see the edge clearly.
Another quote from the same article:

“Levy suggested that access-based protections (like dual-function authentication) are imperative, and end-to-end encryption is necessary. These technical failsafes should form the foundation for rigorous employee training from the IT department, said Levy… The employees need to become experts in mobile security, he says.”

I don’t like this statement. I think the average user needs to get used to doing things with security in mind, but it is ridiculous to ask that employees become experts in mobile security. Mobile security is tough enough for professionals working with it every day, let alone everyone else trying to do their own jobs. While training is necessary and employees do need to be at least a little security-conscious and accepting, it is up to technology and technology professionals to be the experts in security. We do not expect everyone to be an expert on the internal workings of their car or the proper use of complicated and ephemeral security measures. Instead, those things just work, and we take our cars to the professionals for anything beyond our control or understanding.

month of no posts

Wow, it looks like I’ve gone an entire month without making a post here. That was certainly a quick month, and I do have a backlog of things and links and tools to look at and post about.

My reasons for the lack of posts are two-fold, really. First, I have been holding back on a lot of stuff since I really want to convert this space into more of a wiki format. A wiki is much more appropriate for what I am using this site as. I had some issues last month getting Apache 2 and PHP5 to get along, so I have to check and see whether that was resolved.

Second, I’ve moved a lot of my more discussion-style technical posts to my main blog instead of here. I am not sure if that is how I will do it in the future, as all my own non-technical stuff is being diluted by the technical jargon that many of my family and friends know nothing about. Maybe I’ll load it all back here once I get the wiki up, and still have a sort of techie blog/news listing on the front page.

In the meantime, I hope to post some more things here anyway, regardless of the wiki progress.

when security goes too far

An article just ran across my desk about a bank whose legitimate (albeit poorly implemented) email announcement to customers was mistaken for a phishing attempt. This is an example of a false positive. But just how damaging can a simple false positive be?
What we do now:
– automatic spam filters that “learn” what spam is
– manually populated spam filters
– spam blacklisting which can blacklist sources or content across a wide swath of customers
– heuristic and behavior-based virus scanning
– phishing site blacklisting
– blacklisting of DNS, domains, or IPs based on complaints or automatic alerts (one such lookup is sketched below)
– network and system shunning via IDS/IPS linked to firewalls
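Just to show how blunt these mechanisms are, here is a minimal sketch of a DNSBL (DNS blacklist) lookup, the same kind of check mail servers make before accepting a message. The zone name is one well-known example, and the convention (reverse the octets, query the zone, any A record means “listed”) is essentially the whole protocol:

    # Minimal DNSBL lookup sketch: reverse the IP's octets, prepend them to the
    # blacklist zone, and do an ordinary DNS A lookup. An answer (conventionally
    # 127.0.0.x) means listed; NXDOMAIN means not listed.
    import socket

    def is_blacklisted(ip, zone="zen.spamhaus.org"):   # zone is an example DNSBL
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)   # any A record = listed
            return True
        except socket.gaierror:           # NXDOMAIN = not listed
            return False

    print(is_blacklisted("127.0.0.2"))    # conventional always-listed test address

One bad judgment call by a blacklist operator, and that function starts returning True for your mail servers.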
That’s a lot of machinery reacting to security incidents. So what might have happened to this bank? Someone may have reported them to a phishing blacklist, or automated alerts may have done so, blocking perhaps the domain, emails, website IP, or even DNS for the bank. That could cost tons of money in lost business, public relations damage, and the direct costs to fix or work around the issue.
In a previous job, we sometimes were blocked from emailing AOL members because, after a complaint or two, AOL would block our email servers for 24 hours. The sad thing is, we never spammed anyone; people only got email that they or their employer had requested or agreed to. Also, one of our clients, a major financial institution, at one point had its domain blacklisted for spamming. Maybe they really were spamming, but due to the disruption of being blacklisted, they had to change their domain name and all the infrastructure that used it. Wow!
And as much as people like this stuff, mistakes will still be made. People will make bad judgments, misconfigurations, or poor decisions like the bank email campaign linked above. One mistake that costs your company millions is just a bad situation waiting to happen.
Dan Kaminsky was correct in his talk last year (Black Ops of TCP/IP 2005), describing how scary it is to have IDS/IPS automatically writing firewall rules and shunning networks. It means attackers can effectively write your firewall rules, with results as disastrous as your own network shunning its own name servers and being set up for DNS poisoning.
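If you must automate shunning, the least you can do is hard-code a list of addresses the system may never block. A tiny sketch of that guard, with made-up addresses and a stand-in for whatever firewall API you actually have:

    # "Never shun" guard for automated IDS/IPS responses. Addresses and the
    # firewall call are placeholders; the point is that spoofed attack traffic
    # must never be able to get your own name servers or gateway blocked.
    NEVER_SHUN = {
        "10.0.0.53",  # internal DNS server (placeholder)
        "10.0.0.54",  # secondary DNS (placeholder)
        "10.0.0.1",   # default gateway (placeholder)
    }

    def block_at_firewall(ip):
        print("firewall: blocking", ip)   # stand-in for a real firewall rule push

    def shun(source_ip):
        if source_ip in NEVER_SHUN:
            print("refusing to shun critical host", source_ip, "- alert a human instead")
            return
        block_at_firewall(source_ip)

    shun("203.0.113.9")  # the attacker gets blocked
    shun("10.0.0.53")    # a spoofed "attack" from our own DNS server does not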

linux as main box – part 5: windows strikes back

So, I have a VM of Windows XP running on my Ubuntu laptop now, so that I can do those few things I still need Windows for. Sadly, Windows and the activation nag don’t seem to be on the same page. No matter how many days I wait, it nags me that I have 30 days of activation left, yet I am unable to activate Windows, even manually by inputting the key found on my laptop case. Well, as long as it stays perpetually at 30 days, that is at least tolerable, but I need to research why this happens and whether I can fix it or redo the VM creation to alleviate the problem. I remember a popup warning about it when creating the VM, and I may have done something wrong.
Of note, the only thing I do on a daily basis that has not been moved over to Linux is my email from Thunderbird. I guess I could take some time and just move over, but it is all the older email that I need to wade through and catch up on first. I’ll maybe just end up losing all that mailing list email I’ve built up…
Watching HOPE 6 presentations this weekend gave me more excuse to shore up Ubuntu’s media-playing issues, including mp3 support. Very happy with XMMS and MPlayer.

barriers to sharing information

At work my IDS popped up an alert that IP 123 performed a host sweep against our webservers on ports 80 and 443 (and maybe more, but the IDS is not that good…sigh). I check out the IP and it is a webserver for an NBA team. The website itself has little mention of how to contact someone about the site, but I do find an email address in the privacy notice. Great. In the privacy notice I also see a blurb about how the site is highly secure, blah blah. Great. So I send an email to that legal address and get an immediate undeliverable message. Great.
By now I have other things to be doing, so I just drop the issue. That web site might be rooted; I might be seeing actual traffic from a malicious script, an attacker, or something bad inside their network that I can’t see. Perhaps it is legitimate traffic and someone is just spending spare time scanning every website on the Internets to help with the Google. But unless there are clear avenues to report these things, site owners can only hope their own internal detection will catch it when something is really wrong. :\

taking back security

After reading far too much vendor crap this week, along with publications and reports whose basis is in the industry (“We now need to get away from firewalls and IDS and protect data…” translates into “We’ve saturated the firewall and IDS markets and need to drum up the next big market to hawk our warez in…”), I’ve decided that security professionals (and IT in general) need to work hard to take back our reporting. We need to wade through and chase away the ghosts of all these vendors pushing their own agendas as the next big thing, and get back to reality and what really needs to happen.
For all the hype and reports, you’d think we don’t need patch management, inventory control, or firewalls anymore. At all. Or that once these things are implemented, that’s it. Move on. Fuh-geddaboutit! Oh wait, we need to monitor and update and take care of these things and check logs and stuff? Wha…?
Yes, we need to take this all back and let the vendors shout noise at each other in the ad-driven mags. We need to make doubly sure that all this noise doesn’t blow in the face of our managers like so much thick hot air, sending them off to chase the next big thing and dragging us all with them whether it works or not.

movie insider causes revenue loss

We need more technical reports of incidents, damn it! However, it is fun to infer various tidbits from traditional media reports like this article about a former manager causing revenue loss at a movie theater chain. The man was able to make the chain’s e-commerce sites stop processing online ticket sales for a period of time.
What I found most interesting is that a wireless adapter was identified as a culprit. This implies that the movie chain had wireless deployed, and deployed such that this former manager was able to get into it and also reach the web servers or other critical infrastructure. That is terrible network design, security, and architecture.
This man was the former director of information technology. Perhaps they didn’t have anyone around after they eliminated his position to ensure that passwords and access were revoked. Maybe they did change everything and he broke in of his own accord, but any time an employee is removed against his or her will, evaluation and action must be taken to ensure they cannot retaliate.

the future battle in computing architectures

Every now and then an article is published that is not only a pleasure to read but is packed with information and true forward-thinking content. I just read such an article from Wired.com about the future of searching and computing.
This article intertwines the stories of Google, Ask.com, and other search engines with the future of technology. The rise of RAM. The age of low-cost massively parallel computing (cloud computing) and the fight it will have against decentralized computing (and information). The emphasis on network speeds. The usually unthought-of challenges and costs of electricity and cooling for such huge data centers. China and their pursuit of nuclear power.
An excellent article packed with tons of tidbits around the core themes and dressed up with beautiful writing.

incident disclosure and information sharing

They don’t post all that often, but when they do, they post excellent stuff over at ClearnetSec. The latest post touches on an investigation at a financial institution regarding an apparent compromise.
We desperately need more reports like this. No, I don’t need specifics or enough detail to identify the victim, but we need to know how these things are found, what worked, what didn’t, why it stayed undetected for a year, and what else the attacker did. Was it just one mistake that let them in, after which they could slowly own the whole network?
We have tons of journalists and media reporting on best practices, on how to theoretically protect data, and on what should and shouldn’t have been done in retrospect after the big media-covered incidents. Very few of these reports seem to be written by people experienced in the trenches, with the trials and realities of the network. They are all very pundit-sounding: academic dreams of puppy dogs and sunshine and flowers.
We need to move away from those media reports and theoreticals. We need to divulge information amongst ourselves and figure out the reality. It is golden when you can take out a pen tester for some beers and start shooting the shit about how they’ve yet to test a company that wasn’t rooted, or what works most of the time and what doesn’t, or where some of the oft-overlooked nooks and crannies of networks are, or the most obscure attacks they’ve completed.
We need more surveys and reports like Jeremiah Grossman’s surveys on web application assessments and security, only about actual compromises, whether real malicious ones or pen-tested ones. We can’t pretend they aren’t there, nor can we wait for the budget or big media events to remind the C-levels about the risks. We need real, technical reports. Give me a technical report, and I can distill it down to language my parents could understand. That’s what I soak up.

the pen testing team

Been thinking now and then about being on a pen-testing team. Oh how I would love doing that job! So, sometimes I think about the make-up of such a team. How would I design one? Now, I’m not a business manager, so while a 50-person team may sound great, it is likely not cost-effective. So, I’ll give my take on a “perfect” pen-testing team and its roles, as sketched in my own head. Note that some of these roles can be combined into single people.
The Lead – You need a lead person: a presentable, articulate senior member who is the face of the team to the client. This person should also have coordination and delegation duties and be almost like a manager, ideally with some managerial experience, to run the team properly and keep it motivated, but also able to relate to client managers. This is the coach and mentor.
The Interviewer – This role is the expert when it comes to policies, regulations, standards, and interviewing the proper people in the proper way to get definitive answers on a company’s strength in its people, processes, and policies. Someone should, at the very least, be able to interview others properly and understand regulations inside and out (COBIT, PCI, etc.). This person should be able to evaluate whether reality matches policy. This is as close to an auditor as the team gets, and the role could also cover risk analysis.
The Writer – Every pen-test includes reports and deliverables, and the more polished those deliverables look, the better. Every team should have someone strong at writing documentation, compiling information, evaluating results, correlating the risks to the client, and dealing with information in a constructive manner. This person can also be the information-gatherer who can utilize search engines, DNS queries, and other reconnaissance means to profile a target. Even better, this person should be adept at vulnerability assessments and at determining how important particular vulnerabilities are.
The Junior – Let’s get this guy out of the way early. There should always be some new blood on the team in the form of a junior member. This guy may have any level of skill, but he is the one doing the “easier” errands on the team: host sweeps, port scans, vuln scans, password cracking, and coffee-fetching. In fact, this guy can also do some of the widespread, repetitive things like exploiting various systems using automated tools and sifting through confiscated data and systems for juicy information, and he might also be best suited to support the systems for the rest of the team.
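For flavor, this is the level of tooling the junior ends up scripting: a bare-bones TCP connect sweep. The targets are placeholder addresses, and on a real engagement you would reach for nmap, but the grunt work looks about like this:

    # Bare-bones TCP connect scan sketch. Targets are placeholder addresses;
    # real engagements would use nmap, but this is the flavor of the grunt work.
    import socket

    def scan(host, ports):
        open_ports = []
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                    open_ports.append(port)
            finally:
                s.close()
        return open_ports

    for host in ("192.0.2.10", "192.0.2.11"):         # example sweep targets
        print(host, scan(host, (22, 25, 80, 443, 3389)))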
The Web – Any real pen-testing team should have someone proficient with web coding practices and languages, and their security. He or she should be the lead when it comes to source code analysis, web app scanning, fuzzing, SQL injections and queries, and best-practice approaches. A background in web servers and database servers would be beneficial.
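As a trivial illustration of this role’s bread and butter, here is a naive single-quote probe: toss a quote into a parameter and look for database error text in the response. The URL and parameter are made up, and real web app testing is far more methodical (and authorized):

    # Naive SQL injection probe sketch: append a single quote to a parameter
    # and check the response for database error strings. The URL and parameter
    # are hypothetical; real testing is far more careful than this.
    import urllib.request, urllib.parse, urllib.error

    ERROR_SIGNS = ("SQL syntax", "ODBC", "ORA-", "unterminated quoted string")

    def quick_sqli_check(base_url, param):
        url = base_url + "?" + urllib.parse.urlencode({param: "1'"})
        try:
            resp = urllib.request.urlopen(url)
        except urllib.error.HTTPError as err:
            resp = err                     # 500 error pages often carry the DB error text
        body = resp.read().decode("utf-8", "replace")
        return any(sign in body for sign in ERROR_SIGNS)

    print(quick_sqli_check("http://test.example/app", "id"))   # hypothetical target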
The Exploit – Someone on the team should also be proficient with other coding disciplines such as Perl, Python, C++, and so on. They can work with exploits discovered by outside sources or devise custom scripts to discover new ones. This person should also be able to evaluate, fuzz, and test applications beyond web-based ones: web servers, email servers, DNS, etc. If a port is open on a server, this member should be the one poking at it the most. This guy should be an expert on buffer overflows (stack and heap), and most likely on malware creation and reversing.
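The crudest possible version of this role’s day job is a dumb network fuzzer: blast random bytes at a listening service and note when the connection dies. Host and port are placeholders; real fuzzers are protocol-aware and pair this with a debugger watching the target:

    # Dumb fuzzer sketch: send random bytes at a TCP service and flag payloads
    # that kill the connection. Host/port are placeholders; a real fuzzer is
    # protocol-aware and watches the target process with a debugger.
    import random, socket

    def fuzz_once(host, port, max_len=512):
        payload = bytes(random.randrange(256) for _ in range(random.randrange(1, max_len)))
        try:
            with socket.create_connection((host, port), timeout=3) as s:
                s.sendall(payload)
                s.recv(1024)
            return "survived"
        except OSError:
            return "connection died on a %d-byte payload - investigate" % len(payload)

    for i in range(100):
        print(i, fuzz_once("192.0.2.20", 25))   # hypothetical mail server in the lab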
The Packet Hound – Part of any pen-test should include networking devices and information leakage directly on the wire. The Packet Hound loves to sniff traffic and tinker with networking devices, knows the ins and outs (and arounds) of IDS/IPS and firewalls, can map the network, and can penetrate and evaluate network devices and configurations. This guy should also be familiar with VoIP, phone systems, and wardialing. If you want a meaningful network tap in a crowded server room, this is your man.
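A taste of the packet hound’s world, assuming Scapy is installed and you are authorized to sniff the segment: print a one-line summary of any cleartext HTTP request crossing the wire.

    # Sniffing sketch using Scapy (assumes Scapy is installed and you have the
    # privileges and authorization to sniff). Summarizes cleartext HTTP requests.
    from scapy.all import sniff, TCP, Raw

    def show_http(pkt):
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            if pkt[Raw].load.startswith((b"GET ", b"POST ")):
                print(pkt.summary())

    sniff(filter="tcp port 80", prn=show_http, store=0)   # Ctrl-C to stop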
The Wireless Expert – These days, wireless and mobility are a big thing. It is a benefit to have a team member who is proficient with wireless technologies to evaluate and penetrate the security of mobile devices. This should include PDAs, laptops, and wireless networking.
The Social Engineer/Thief – Any team doing black box or physical assessments should have someone skilled in social engineering. There is no more successful an approach to breaking into a network than social engineering. This person should be adept at the common ways of getting people to divulge information or do something that is otherwise a security risk, from opening email attachments to holding the door open after a smoke break. Knowledge of lock-picking, physical security alarms, and countermeasures is necessary; perhaps even someone with burgling experience and a willingness to get dirty with dumpster diving. (Note: since this is a rather fun and different task, other team members could enjoy helping out, as long as someone on the team can act as the lead expert for this activity.)

my skills of the future: web coding

One thing I try to be cognizant of as my career starts to move forward is what skills are going to be in demand in the future. I don’t want to be awesome in Windows XP, only to find myself someday outdated like so many Windows 98 admins. Not that I support Windows XP on a desktop level right now, but that is just an illustration.
A manager just emailed out an Excel document that has maps of our building and numbers pointing to all our conference rooms (about a dozen) because people tend to ask, “Where is such-and-such room?”
It occurred to me that this is exactly the kind of issue a web developer who knows his stuff could solve. Carve out a small section of an intranet, tackle the issue, code up a solution, present it, and voila: a one-stop web-enabled location, so that people don’t have to save an instantly-outdated spreadsheet “hack” of a solution living at some mysterious location on a file server that I may or may not have access to.
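To make the point concrete, here is roughly how little code a “where is that room?” page takes. The rooms and locations are made up, and this is a toy, not a production intranet app:

    # Toy intranet conference-room directory sketch. Room names and locations
    # are made-up placeholders; a real version would live on the intranet
    # server and pull from wherever the facilities data actually is.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ROOMS = {
        "Boardroom": "2nd floor, northeast corner",
        "Fishbowl": "1st floor, next to reception",
        "War Room": "basement, past the server room",
    }

    class RoomHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            rows = "".join("<tr><td>%s</td><td>%s</td></tr>" % (name, where)
                           for name, where in sorted(ROOMS.items()))
            body = ("<html><body><h1>Conference Rooms</h1>"
                    "<table border='1'>%s</table></body></html>" % rows).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), RoomHandler).serve_forever()   # browse to http://localhost:8080/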
Web application coding skills are amazingly useful and awesome these days. And the work is rather exciting when you can focus down on it and really pursue it as a team that can teach each other. Gone are the days when any stay-at-home kid could pick up a few clients and create cheesy web pages using straight HTML. Now, real web design skills are in demand and needed, coupled with code that more and more resembles actual programming languages in operation, suitable to those who can think in that way (not just make pretty pictures in Paint and arrange them in tables with possibly some database backend code in php…). .Net, Java, Ruby, Python, Ajax.
In fact, before I was in IT I wanted to become a web developer. That was my idea when I switched my major to MIS 2.5 years into college, and I graduated with thoughts of making web pages for a living. Thankfully, I’ve had opportunities elsewhere to expand myself, but I still appreciate web development.
Someday, a ways down the road, I can still see myself satisfying my coding bug and doing some more web coding and application coding. I would love to be able to just throw out a quick solution to problems using an internal web site. Given experience and practice, that kind of stuff is amazingly easy and simple to do (ongoing support is always the hard thing). And with web and application security the hot topic for the year in security, this makes sense from that viewpoint as well.
However, for now, I want to remain grounded and focused where I want to be. Right now I am directing my career toward networking and security: moving toward certifications, learning networking since it is still something I’m working on, plus learning Linux and deeper security topics and pursuits. I’ve also decided I want to make wireless security a specialty, as I believe the future is in wireless and mobility. Web coding as a major focus has simply been pushed aside for now…but someday I’d love to dive back in and learn the new stuff.
I must say, if an opportunity opened up right now in an exciting and competitively-paying (for junior level) company to start learning and participating in Ruby or Ajax development, I would seriously think about it.

security really can stifle business initiatives

(Sometimes I do some thinking on my walk to my car for lunch; sadly, the time when I usually don’t have anything upon which to take notes…)
Since I openly contrasted my latest two jobs earlier, I was thinking about their differences. My previous job preferred to get things done, and think about security later. My current job has a few people who prefer to wave security around as a business barrier.
But perhaps that is just something security will very often be. Something tacked on only after it is known that something will work. Why stifle a business or initiative with security when you don’t even yet know if the business or initiative is even viable?
I think this is why developers and programming instructors have such a hard time with security in applications. Functionality is the key component. If it has security but is too late to save the business, what good is it? If it can be delivered on time and let the company flourish, but with less security, is that not better?
But how far do you go with security or insecurity? Therein lies the art of risk (which I truly think is an art, and more difficult than anyone really expects). Do you kill a business by paralyzing it with security paranoia and control? Do you let it run rampant with zero security and not even locks on the doors? Do you do just enough to fend off claims of negligence? Do you fling up stop signs, or just directional cones?
Like every discussion on security, there are exceptions, there are varying levels and tolerances between technologies, companies, managers, and so on. Not only do we not have a silver bullet device to provide security (and never will), but we also don’t have silver bullet methodologies or even approaches that can cover all those differences. Therein also lies friction between finance/auditors, management, and IT/security. It can be artful, subjective, which flies in the face of objective approaches…
One thing we do need, as security practitioners, is the constant harping of the media about security issues, whether accurate or not. Too often security is only focused upon after an incident, or after some insightful awareness presentation to management in dreams of angels and fire…but at least the media can help keep the minds that be where they ought to be.

linux as main box – part 4: migration

I put my Ubuntu move on hold for a few weeks, but I’m back to it now. Having set up many Windows systems in the past, I know how important it can be to document the process, especially for something new like Ubuntu (hence some of my previous posts on this subject). I’ve taken to keeping a log of the apps installed, changes, and commands I run.
In migrating to the new system, I’m really happy when programs include easy-to-use exports and imports to transfer information from one system, or even one OS, to another. Firefox allows me to export my bookmarks (which have swelled terribly!) and then import them into Ubuntu’s Firefox. Wahoo! Sadly, Thunderbird does not allow this with mail and mail settings. I can do it from one Windows box to another (just copy the profile folder), but I have not yet figured out how to do this over on a Linux box. Ah well, it would only take a few hours to set everything up as I had it before anyway. This just shows how valuable remote services like Gmail and Yahoo are for less technical users. Lose your system or get a new one? Just log into webmail and you’re back where you were before!
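For what it’s worth, my understanding (untested on my part) is that the Thunderbird profile folder itself is portable across OSes, so the move should amount to copying it over and pointing profiles.ini at it. The paths below are examples only:

    # Untested sketch of the Thunderbird migration idea: copy the Windows
    # profile folder to the Linux box, then edit profiles.ini to point at it.
    # All paths here are examples; the profile name is whatever yours is.
    import os, shutil

    src = "/mnt/windows/Documents and Settings/me/Application Data/Thunderbird/Profiles/abc123.default"
    dst = os.path.expanduser("~/.mozilla-thunderbird/abc123.default")

    shutil.copytree(src, dst)
    print("Now edit ~/.mozilla-thunderbird/profiles.ini so Path= points at", dst)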
So, the migration is moving forward. The last task to (nearly) fully get away from booting Windows is to utilize Wine and VMware. I searched for some information and stories on installing VMware Workstation and found this amazing checklist for an Ubuntu install. As with so much coding, why reinvent the wheel and make my own when I can just borrow chunks of this guy’s checklist? He even covers most of the steps I’ve already gone through, and it looks current! Definitely an inspiration and a great help in making sure I have what I want.
Hopefully by the end of the week I will have a vm set up for Windows which I can pop open when I need to quickly use some Windows program without booting over to my Windows install. In addition, I’d like to get one or two things to work in Wine as well, but the VM is an easier and quicker step for me right now.
As far as getting more things to work, I’ve become very happy with MPlayer as opposed to Totem (the default Ubuntu media player). Totem did not like DivX files (I’ve been downloading HOPE presentations), but MPlayer rolled with the punches and played them back just fine.

call it 0day please

For now, I am refusing to use the term “less-than-zero-day” for a vulnerability that is unknown but actively exploited. Zero-day would then refer to an exploit in the wild that is known but not yet patched (the window between notification of the vendor and the vendor-issued patch). I see no use in this cutesy term…just call anything before a patch or vendor-issued workaround a 0day, for all our sakes…