passed giac certified forensic analyst (gcfa) exam

This past Friday I had the pleasure of sitting for the GCFA (GIAC Certified Forensic Analyst) exam and passing with a 94% score. Quite the relief after a summer of (somewhat slow) study progress. In May, I attended the SANS FOR508 training at SANS West (San Diego). Shortly after, I took a bit of a break, and since then I have slowly studied and gotten ready for my exam attempt. I’ve blogged about the course before, so I’ll try not to rehash anything. The course was my first SANS experience, and this exam was thus my first GIAC exam experience as well.

Did you take the practice exams? Yes I did. In late August I took the first practice exam and scored an 83% with only about 9 minutes remaining at the end. At that point I was pretty nervous, but I also was not quite done with my study plans. A week later I took the second practice exam and scored an improved 93% with 30 minutes to spare. They were definitely helpful for seeing the exam format, getting familiar with the interface, and getting a feel for the question style. The real exam felt extremely similar, and while the questions were not duplicated, they felt like they were written by the same author(s) in the same style as the practice ones. For the second practice exam, I turned on the ability to see explanations for both correct and wrong answers; on the first attempt I didn’t know that option existed and just saw my missed answers. I also limited myself to my books and my digital index with no spreadsheet search functions; just scrolling and eyeballing. I kept paper nearby to write down any concepts I missed, or those I got correct but struggled with, for review later.

Would you recommend the practice exams? Yes! I probably could have passed if I had skipped them, but they did wonders for giving me feedback on where I stood, and they gave me a chance to gain confidence and familiarity with the question styles. The practice exams also gave me two chances to test my index, hone it, and become even more familiar with the books, adding to my efficiency in an exam where time is precious. Most importantly, this whole study process helped me grasp and “get” the content much better than the course alone.

Did you have your own index for the exam? Of course! My goal was not necessarily for the index to answer every question for me, but to give me enough information to reach a probable conclusion, and to then point me to the correct places in the materials to confirm that answer. The books were my true source of answers, and I wanted enough context to look up the appropriate information in the right place whenever I came across a term or subject in the exam. My index ended up being about 45 pages in landscape, with 1536 rows at 8-point font. Having it top-bound was wonderful (about $13 printed online at FedEx/Kinkos).

When creating my index, I started out with a spreadsheet tab for each book. I had four columns: SUBJECT, TERM, DESCRIPTION, BOOK-PAGE. In retrospect, I never used the SUBJECT column, and I’ll leave it out for future exams. On the per-book tabs, I left the notes in chronological order. On a separate MASTER tab, I would regularly copy/paste the contents of the other tabs and sort by the TERM column to see my MASTER index. This MASTER tab was what I later printed out.
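
For anyone who wants to automate that copy/paste-and-sort step, here is a minimal sketch (assuming each book’s tab gets exported to CSV; the file names and three-column layout are my illustration, not my exact spreadsheet):

    # merge_index.py - minimal sketch: merge per-book index CSVs into a
    # sorted master index. File names and columns are assumed for illustration.
    import csv
    import glob

    rows = []
    for path in sorted(glob.glob("book*.csv")):  # e.g. book1.csv ... book6.csv
        with open(path, newline="") as f:
            rows.extend(csv.DictReader(f))  # expects TERM,DESCRIPTION,BOOK-PAGE

    rows.sort(key=lambda r: r["TERM"].lower())  # the MASTER view, sorted by TERM

    with open("master.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["TERM", "DESCRIPTION", "BOOK-PAGE"],
                                extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)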

If a term appeared more than once, it got more than one entry; I didn’t want to squish multiple BOOK-PAGE numbers into a single row. For topics spanning consecutive pages, I drew highlighter arrows in the books to prompt me to look ahead if the topic continued. If a topic had multiple terms or an acronym, each got its own entry. I tried to avoid the whole “See Topic X” approach. I did it early on, hated it, and abandoned it later (the one time I came across such an entry during the exam, I cursed myself). The goal was to go from index to books, not index to index to index. I tried to make the index reasonably complete, but questions would invariably ask for very detailed specifics, and I didn’t want to trust solely that I had transcribed the terms correctly, so I didn’t try to be exhaustive; as I said earlier, get to the books efficiently! I also indexed terms on the blue and red posters. (I used both of them in the exam, though much of that information can, in fact, be found in the books.)

I initially limited myself to a single line of description per term, but eventually I acquiesced and allowed myself multiple lines (hold Shift when pressing Enter while in entry mode to add a newline inside a cell). My index would have been longer and even more immediately useful had I not made that decision so late.

I also used sticky tabs at the top of the books to mark key pages and sections. This way I had the option to skip my index altogether if I knew what general section I wanted to flip to. I used them a lot, too, not just during the exams, but when studying as well! I honestly think doing this saved my butt.

To be honest, I’m a natural information organizer. If I were more of a social person, I’d probably be a project manager! I’m also a note-taker, so building this index was a labor of love rather than a chore. It also helps to remember that this index is a one-time-use item. It doesn’t need to be perfect or pass muster with an editor. Everyone needs a different level of polish, but I know my index isn’t without mistakes, has holes, and maybe contains more or less than it should. That’s why I wanted to make sure it led me to the books as much as needed; trust myself, but verify the answer!

What was your study plan? After the 6-day course in San Diego, I took a good two weeks off. After that, I started going through the course books again. My goal was to read every word of the books (slides and notes), and yes, that took a while. I highlighted every tool mentioned in the books in orange and wrote it into a separate notebook (my own personal list of tools). I highlighted key topics and statements in green. After about two books, I started adding key terms, concepts, tools, and topics into a spreadsheet to begin my actual index. I then went back and caught the first two books up with a quicker pass.

Once done reading the books, I accessed the OnDemand content to listen to the lectures again, follow the slides, and follow along in my books. This was essentially another pass through the material, and a second full pass to populate my index with things I had missed or wanted to flesh out. For instance, I didn’t decide to include full command examples until my second pass. While winding down the OnDemand materials, I also started going back and redoing the lab exercises, at least as much as I could (some tools had expired). (I did *not* include the exercise workbook notes in my index, and I wish I had.) Doing all of the above really helped cement the material in my head, and also made me really, actually *get* it, if that makes sense. Context fell into place, reasons for various things became clear, and it all feels natural now.

In the day or two before the exam, I limited myself to just flipping through the books. I took the early part of that week off, which let me get familiar with the tabbed sections again for quick reference and flipping.

How was your actual exam experience? Pretty good! I got in early and got going pretty quickly. The exam itself is a brutal 3-hour slog, and I definitely made plenty of use of that time to be as sure of my answers as I could be. Even with my index, a few questions had me somewhat stumped or utterly unsure where to look for the information. Thankfully, even where my index lacked the proper information, my knowledge of the books would lead me to the right sections. The exam questions were some of the best-written questions I’ve seen: to the point, clear, proper English, but you still have to read them carefully to pick up on any twists or tricks afoot. Honestly, the questions and answers were wonderful and did nothing to detract from the experience or the ability to demonstrate mastery of the topics.

Is there anything you’d do differently with the materials you brought on-site for the exam? Without getting too specific, I think it would have been useful to better document or print screenshots of the output of the tools mentioned. Not all of them, since there are a ton! But any of them is fair game for questions. Ideally, it should be enough to have used all of the tools during the labs or while redoing the labs in self-study, but that takes effort, as the labs themselves do not use every plugin and tool mentioned in the books. I’m also not sure how one would consume such printouts efficiently while taking the exam, so maybe I was better off without them!

warren buffett advice for people

Many, many idealistic years ago I used to collect books of neat sayings, zen koans, and various other bits of wisdom for finding serenity in life. Always very helpful. I used to keep many of these as a rotating email signature back in the 90s, which I honestly rarely ever enabled on messages.

Also very helpful are good lists of advice from people older and wiser. I wanted to keep this wonderful, achievable list from Warren Buffett and apply it to all people, not just the young:

Advice for all the young people:

  1. read and write more
  2. stay healthy and fit
  3. networking is about giving
  4. practice public speaking
  5. stay teachable
  6. find a mentor
  7. keep in touch with friends
  8. you are not your job
  9. know when to leave
  10. don’t spend what you don’t have

an oscp journey with a m0nk3h

I see lots of OSCP reviews, and I usually don’t post or point to many since, well, there are so many, and students should start learning to Google early on. But this one by m0nk3h is amazingly detailed and quite useful for anyone hoping to take the OSCP or in the middle of it. It goes above and beyond the usual PWK experience review, and also includes tips, tricks, and useful commands in one shot.

In fact, scrolling back in the dude’s history, I like lots of his posts and their formats; good stuff! (And, like everyone who attempts the OSCP exam, I was curious about his background to see how it compares.) I particularly like the SpecterOps course review, as I’d love to take that course some day.

a journey into a red team

It’s been a pretty busy year so far, and I’ve been remiss in posting things. I think one influence has been my use of Pocket. Instead of posting things here, I put things I want to check out later on into Pocket. Of course, then I just don’t get back to Pocket!

Anyway, here’s a slide deck for “A journey into a red team” by @MrUn1k0d3r. I imagine the presentation will pop up on the NorthSec YouTube channel at some point.

my first netwars experience and netwars coin

During my first SANS experience last week, I also opted to participate in NetWars Core on the nights of Day 4 and Day 5. This was my first NetWars experience, and I came in with pretty modest goals, since I didn’t really know what it was all about. I basically wanted to appear on the top 10 leaderboard for first-timers and unlock Level 3 by the end of the event. Turns out, I unlocked Level 4, held onto overall first for several hours, and finished in 2nd place amongst the individuals (and 6th overall), earning me a NetWars coin and an invite to the Tournament of Champions in December!

For those unfamiliar, NetWars Core is a two-day event held on-site for 3 hours each night. You show up with a laptop in hand and get handed a USB stick containing some supporting documents and a virtual machine image to load up. Once you’re loaded up and signed into the event, the countdown begins! Once started, the event website allows access to a battery of questions whose answers are found either in the supporting documents or on the VM itself. These questions cover a wide variety of information security topics, from Linux and Windows systems administration commands and technical trivia to analyzing forensic evidence, examining network traffic, decoding hidden messages, and reversing malware. There are things for defenders and attackers alike.

Upon arriving, I was given the event USB stick, an instructional piece of paper, and 2 drink tickets good for free drinks from the open bar in back. The tickets certainly beat paying up to $12 for a glass of wine! And as each night moved on, you could get more tickets from the minders handing them out. The dimly lit room slowly filled up with fellow geeks, the glow of laptop monitors, and the swell of some light techno music from the speakers up front.

The desks all had power provided and a wired switch for anyone who didn’t want to trust the wireless network, which itself was solid the entire event. The instruction sheet gave details on signing up for an account at CounterHack and the username and password for the virtual machine. You could connect to a web-based VM from the CounterHack site (with added latency, of course), or just use the VM included on the USB stick.

The USB stick included the event VM and some additional files. Before the event truly started, I copied all of the files to my local Windows system and fired up the VM in VMware Workstation 14 Pro (trial). It converted and booted without issue, and the default networking settings allowed it access out to the Internet (needed) through my host laptop. I rebooted it and gave it some more RAM for good measure.

After waiting around a bit, the event kicked off! I started out super slowly as I reacclimated to the Linux VM, sheepishly Googling some commands I should have known, and otherwise struggling to find a groove. But eventually I did hit one.

Level 1 and Level 2 are basically traditional Jeopardy-style CTF questions. You are asked something about a file or command or binary or whatever, and you work to provide the answer. Sometimes that just means running a few commands; sometimes it means doing some forensics or attacks on various artifacts. Each question has points assigned to it. The more questions you answer, the more points you get, and the more questions open up below those. The first incorrect answer on a question didn’t cost anything, but subsequent tries would cost more and more points (up to a cap).
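
Just to illustrate the shape of that penalty scheme (with made-up numbers; the real NetWars point values may differ), a toy sketch:

    # scoring.py - toy sketch of the escalating-penalty scheme described
    # above, with made-up numbers: first wrong answer is free, later tries
    # cost more, capped so a question can't drain unlimited points.
    def wrong_answer_penalty(attempt: int, step: int = 2, cap: int = 10) -> int:
        """Penalty for the Nth incorrect answer on a question (1-indexed)."""
        if attempt <= 1:
            return 0                      # first miss costs nothing
        return min((attempt - 1) * step, cap)

    print([wrong_answer_penalty(n) for n in range(1, 8)])
    # [0, 2, 4, 6, 8, 10, 10]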

Most questions have hints you can unlock. These hints do not cost any points, and you can unlock as many or as few as you want. The only role they play is as a tiebreaker in the unlikely event of a tie in points. The hints themselves range from terms you can Google all the way up to actually giving you the command(s) to run to find the answer. This hint system makes the early parts of the event very accessible to anyone with even passing Linux experience.

After hitting a groove and getting a feel for the questions and VM, I appeared on the top 10 leaderboard, and for the rest of the 3 hours on Day 1 I skipped up and down with my fellow geeks from about 10th place up to a peak of 3rd place or so for brief moments. Early on, one team surged very far into the lead, and the admins of the event sought them out. Within a few minutes, their team was removed from the boards. Turns out they had some veterans on the team, and to avoid discouraging other teams, the admins “ghosted” their score off the board.

The scoreboards are broken down into a top 10 list of individuals and a top 10 list of teams. I believe teams can have up to 5 members. In other events, I think veterans and first-timers are also separated out, but for this event, we were all put onto the same boards (they gave a reason why, but I didn’t follow it).

At the end of Day 1, I was firmly into Level 2 with 100 points, sitting at 8th place on the boards. The first-place individual had 146 points, and the highest (visible) team had 155 points. I retired to my room, then spent the next 3-4 hours working on various questions that I hadn’t gotten to. I think the small break I took to get to my room and settle in really helped, as I hit a stride over those few hours, solved quite a few puzzles, and staged up answers for Day 2. While the game system itself is closed overnight, as long as you keep the question window up and have any hints you want already opened, you can still see and work on the questions in the local VM. This is true for Levels 1 and 2, but not for subsequent levels.

Day 2
As Day 2 started, I had a flurry of points submitted in the first 60 minutes, and I actually surged into first place overall with 241 points and 2.5 hours remaining. At this point, I hit a wall. Over the next 1.5 hours, I held onto my lead with 269 points, but others were slowly closing in as I stalled out.

Eventually, at Level 3, you attack and attempt to infiltrate remote systems and networks as an attacker, and from there begin attacking other systems in that DMZ network, answering questions on a separate scoreboard. I was rusty with this, as my day job does not involve attacking systems. The Level 3 hints also became time sucks, with some seriously deep trivia that required heavy Googling and searching. It would help immensely to have a wingman doing just those items! Level 4 involves even deeper access. Ultimately, contestants who reach Level 5 have to defend their servers and services against the others at Level 5, while attacking theirs and earning points through uptime.

I spent the next hour watching as a rival passed me for first place, and the teams jostled for their positions. With 30 minutes left, the admins replaced the scoreboard with a countdown clock. After time ran out, winners were announced, and I found out that no one else had managed to knock me out of second place. I finished with 275 points, with first place sitting at 297. The ghosted team of veterans finished with the top score of 370, but the admins also awarded prizes to the next team, which had 301 points; only 6 points ahead of their next rival!

Honestly, had I tackled this event last year, fresh off my OSCP certification, I realistically would have expected at least another 40 points or so. I had lots of time left, but not much comfort with the attacks I needed to perform at the later stages.

Overall
I had an absolute blast with this event and the question formats. I’m looking forward to doing another one of these, or trying out the DFIR and Defense versions as well.

Tips from a First-Timer
Spend Day 1 trying to unlock everything you can, including hints. You want to get as far into the levels as possible, with the ultimate goal of getting into the Level 3 and Level 4 stages.

Try to make sure you finish the question that yields root access to the local VM. This is important in order to progress further.

With Level 3 unlocked, start attacking it right away. Again, it’s more about getting as far as possible than about clearing each level completely. The points per question trend slightly upward as you go.

One could conceivably unlock Level 4 without doing much at Level 3. Just from my perspective, I think getting dug into Level 3 is more important.

The night between the two days can be spent researching Level 3+ strategies and hints, and backfilling Level 1/2 questions.

Day 2 should be spent trying to open up Level 5 by performing successful attacks and eventual pivoting into internal networks.

For some added drama, the admins turn off the scoreboard for the final 30 minutes. If you’re feeling brave, feel free to bank some points to score during this time. This would be an excellent moment to finish submitting any level 1 and level 2 answers that weren’t needed to open up the higher levels. Of course, the downside might be encountering technical issues that prevent more scores from being posted, so do so at your own peril!

I strongly suggest writing down and saving answers to a text file in the crazy event the VM crashes or becomes unstable. Near the end of Day 2, my VM’s Xfce desktop often became unresponsive, and I wasn’t in a position where I wanted to reboot it fully. I probably lost a good 30 minutes of productivity this way.

Lastly, have fun. Use those drink tickets if you are so inclined, and enjoy!

my first sans event with for508 in san diego

This past week I attended my first SANS event, SANS West in San Diego. I took the FOR508 course, Advanced Digital Forensics, Incident Response, and Threat Hunting, with Eric Zimmerman. Overall, the course and the SANS experience were excellent, and I hope to do it again next year!

I chose this course because forensics and incident response at this depth aren’t something I’ve done heavily. I’ve looked into malware incidents and done Windows admin troubleshooting for years, but this course takes things to another level, dissecting memory and disk images to find badness. My goal is to continue being well-rounded: I can attack systems, perform forensics on the attacks, inform my defenses to improve them, and complete the loop by doing better attacks. This course directly improved one of those areas.

I’ve also never had the opportunity to take training like SANS. There’s a whole list of courses I’d like to take and not nearly enough time to do them all, so I wanted to aim high and make sure I had plenty to learn for the experience. I think I was right on with my pick!

This course turned out wonderfully for me. Days 1 through 4 were spent looking for artifacts in Windows disk images and memory dumps using the SIFT Workstation. I knew enough from years of Windows admin troubleshooting to immediately grasp about 60% of it, and the remaining 40% was very accessible for me. On Day 3 in particular I learned some nuances I didn’t know before, like the shimcache and prefetch files and how to use powerful automated tools to make the work easier. Honestly, I can’t imagine the tedious work of finding artifacts in gigs of data before these automated tools were around!
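
As a tiny illustration of why an artifact like prefetch is so useful (a hedged sketch of my own, not a course tool): prefetch files live in C:\Windows\Prefetch with names like EXENAME-HASH.pf, so even a simple directory listing yields a rough execution history. Real parsers decode the .pf format for accurate run counts and timestamps; the file modification time below is only an approximation.

    # prefetch_list.py - minimal sketch: list prefetch artifacts as a rough
    # execution history. File mtime approximates (not equals) last run time.
    from datetime import datetime
    from pathlib import Path

    prefetch_dir = Path(r"C:\Windows\Prefetch")   # or a path in a mounted image
    for pf in sorted(prefetch_dir.glob("*.pf")):
        exe_name = pf.stem.rsplit("-", 1)[0]      # EXENAME-HASH.pf -> EXENAME
        mtime = datetime.fromtimestamp(pf.stat().st_mtime)
        print(f"{mtime:%Y-%m-%d %H:%M:%S}  {exe_name}")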

Day 5 was super dense and covered relatively new territory for me, diving into the deep end with NTFS forensics. It was definitely the hardest day for me, and judging by the long stares from just about everyone in the class, I wasn’t alone in that!

Day 6 involved a day-long capstone event where we broke into groups and did a blitz investigation of an incident. This was pretty fun even though my group didn’t get a coin, and I feel like I learned a lot more by not only putting the tools to work, but also finding many of the actual correct answers from the incident. It certainly helps the confidence level!

I also really love the process of forensics. It’s not about following a list of commands or a rigid sequence to find answers. It’s about running all sorts of things to find artifacts, and then stitching together a picture of what happened through facts and some gut feel. You run 10 commands, put some things together, and maybe even go back and run some of the commands again, but with better information like specific times or locations for a deeper dive. Each piece of the puzzle found allows the investigator to look at every other piece of evidence in a new light. I also learned the benefit of good corporate baselining and having the capability to pull full disk and memory images. This is a big deal for success with forensics capabilities.

What’s next?
First, I have plenty of studying and practice to tackle before the GCFA certification exam. After that, I can start planning my course next year, with the front-runner being SEC 542/GWAPT. Yes, this is an offensive cert, but it’s compelling right now to do something red team and shore up what I feel I’m weaker with: web app testing. If this training cycle continues, I’d like to alternate defense and offense each year.

Any lessons learned?
I hesitated over bringing a second, portable laptop monitor, but several people in class had space at their spot for one. Considering my laptop of choice already has a lower resolution than current systems, I would bring the second monitor if I did it again. Worst case, we’re packed into our seats and I don’t have room during the day, but I could still use it for NetWars or for working in the hotel room.

That said, I wouldn’t mind a slightly more modern laptop, just from a resolution/screen real estate standpoint. My main hacking system is a ~2013 Lenovo X230 with upgraded disks and RAM. It’s wonderful for the most part, but I could use a newer model 470 or something that remains portable, but allows for good screen resolution.

Turn off host AV. Be sure you are comfortable using your VM host of choice. Don’t use a work computer unless you have full administrative control over it and its protections. This includes turning off malware tools, but also being able to access things like Google Docs.

I’m saving the details for a separate post, but always be sure to sign up for and try out NetWars!

desirable red team candidates article

I liked this post by Tim MalcomVetter: How to Pass a Red Team Interview. Some takeaways from it are definitions of what red team means and characteristics of a good red team candidate.

Trustworthiness – I tend to stick to the term integrity, but mostly because I think it has a similar but broader meaning.

Know the role/know yourself – Kind of goes without saying.

Healthy competition – I like this one; it should go without saying, but unfortunately it still needs to be said. The offensive teams exist to help test, inform, and improve the blue team. This often just means helping the blue team stop attacks that got through and address missed weaknesses, but it could mean much deeper interaction.

Creativity – This is one thing I really like about security. In normal IT operations, sure, you can be creative with solutions and dealing with people, but often you’re still playing within the bumpers of a bowling lane, i.e. technology capabilities and limitations (developers excepted). With security, you can creatively look between the lanes, over the lanes, under the lanes. On a red team, you get to poke in the places not normally poked, and do so in creative ways. To me, good security is as much an art as an objective discipline.

Operational IT experience – I like seeing this item here, though I’m sure entry-level security aspirants hate seeing it. But it continues to be true, even more so for a red team member whose goal is to inform the blue team intelligently. To do that, you need some measure of understanding of what the blue team is doing, how they do it, why they do it, and how the business needs get woven into that. It’s not just about knowing the gaps in the blue team defenses (because you’ve felt those gaps from being on the blue team). It also helps when being creative with attacks and when setting up testing labs.

Development skills – This tends to be one of the harder places to get started: 1) learn some language or scripting tool, 2) find ways to get practice, and 3) find more ways to stay practiced. It’s those last two that can be difficult and often take real effort, unless you have some corporate project set in front of you that you can apply that knowledge to. The author’s point here is excellent (though I would add in some Bash knowledge): “Red Team candidates should at least script in python or powershell. Candidates who can build web apps, implants in C/C++, and manage infrastructure will have a huge leg up.” I really like the inclusion of being able to build a web app, maybe not necessarily an “app” so much as a dynamic web page, because along with that comes valuable knowledge of web architecture, server configuration, coding, SQL, etc.

Unique skills – I also like seeing this item, though it’s a hard pill to swallow for many. But that’s the point of true red teams: a team of people who fill various roles and specializations. A team of people who all kinda do the same thing isn’t very efficient. That’s not to say every person should come into a new team as the absolute expert on a particular thing or technology or technique, but they should be the expert of that thing on their team. Until you find a good team to call home for a long time, it’s good to be broad and/or have things you’re better at, but definitely look for the gaps in any team you interview with and see if you can fill those openings. Chances are, good candidates can adapt and use their experience, integrity, and creativity to fill most gaps in a red team.

Lastly, I wanted to just flat out quote the author: “…if you can phish and think like a covert systems administrator, then you can probably be successful on a red team.” But also know that “if you want to end up doing red team work, then do yourself a favor and get a variety of roles and exposure before moving into red team — it will still be there when you’re ready.”

threat hunting – a quick introduction

If you’re in infosec and you blinked, you’ll notice today that “threat hunting” is a thing. It’s more than a thing; it’s quite the movement (though probably ranking below blockchain, machine learning, and AI as far as infosec marketing buzzwords go). Wikipedia’s page on the subject was created in mid-2016. I spent about an hour doing Google searches trying to find earlier mentions of the term/process, but really found nothing. I think Sqrrl/David Bianco [pdf] was one of the earlier describers of threat hunting.

So, where did threat hunting come from?

First, threat hunting is about looking for threats already in your environment. Underlying the process is the assumption that attackers can and have gotten in, and that your current protections are not enough (often with an implied “advanced threat” assumed). The process identifies weaknesses and compromises, and informs defenses. This stands in contrast to reactive technologies like IPS and SIEM alerts, or preventative processes like antimalware defenses. In threat hunting, skilled analysts creatively hypothesize about attackers, examine evidence and logs for confirmation of a weakness or the presence of an attacker, and then inform defenses to protect against them.
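
To make that hypothesize-and-examine loop concrete, here is a minimal sketch of one classic hunting technique, least-frequency-of-occurrence “stacking” (the CSV input and its column names are my assumptions for illustration): count how often each process path appears across hosts and surface the rare outliers for a human to investigate.

    # stack_processes.py - minimal sketch of "stacking" for threat hunting.
    # Hypothesis: legitimate binaries show up broadly across the fleet, while
    # attacker tooling tends to appear on only one or two hosts.
    import csv
    from collections import Counter, defaultdict

    counts = Counter()
    hosts = defaultdict(set)
    with open("process_logs.csv", newline="") as f:   # hypothetical log export
        for row in csv.DictReader(f):                 # expects host,process_path
            path = row["process_path"].lower()
            counts[path] += 1
            hosts[path].add(row["host"])

    for path, n in sorted(counts.items(), key=lambda kv: kv[1]):
        if len(hosts[path]) <= 2:                     # rare = worth a look
            print(f"{n:6d} hits on {len(hosts[path])} host(s): {path}")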

Threat intelligence is often pulled into this mix, which is just a fancy way of saying “increasingly detailed, technical information about attackers and the things they do when attacking.” If you have a mature threat hunting process, you’re helping create that internal intelligence, rather than just consuming outside sources.

Cool, but where did this come from? Why weren’t we doing this 10 years ago?

Ten years ago, we had blue teams (defenders) against red teams, pen testers, and real attackers. The blue team would wield various security tools and try to prevent, detect, and mitigate against those attackers with somewhat standard playbooks and relatively rigid signatures and alerts.

We’ve had various enlightened members on both sides who have contributed to threat intelligence over the years, and plenty of blue team members who have tried (probably largely in vain) to squeeze more out of their tools and high-level access in their environments to sleep better at night. So, many of the tasks a threat hunter goes through are known, but they had certainly never before been truly organized, planned, or laid down in writing; they’re tasks many of us have tried to go through when work is light (hah!). One difference today is the ability to gather and parse massive amounts of data collected in an environment.

It has long been recognized that environments are becoming more complex (i.e. the attack surface keeps increasing, networks are becoming less efficient to defend, and no external consulting source can quickly give guidance beyond broad strokes and “well, it depends…” conversations) and that relatively static, signature-based tools don’t find even mediocre incidents. Blue teams have traditionally been poor at internalizing red team findings and creating effective mitigations. And red teams, particularly vulnerability assessment and pen testing teams, have become a crowded market, with several of their tasks somewhat commoditized through automation and point-in-time limitations.

Threat hunting teams can shore up plenty of these weaknesses. Rather than a point-in-time external pen test, threat hunting teams can act as internal, always-on pen testers; they think like attackers, study attacker methods, determine IOCs, and inform and test defenses.

Threat hunting helps turn an infosec team from believing they are not compromised, to at least knowing they are not compromised via X, Y, or Z. And in today’s landscape, the X, Y, and Z incidents hitting the news are what board members ask about.

Threat hunting also acts as an outlet from some otherwise mundane infosec tasks like analyzing logs and alerts and directing the rollout of a new patch every week. It’s a way to exercise some creativity, learn more about attacker methods and tools (poppin’ root shells, my dudes!), and actually test internal controls and visibility.

They also fill a small gap in between security tools, blue teams, and red teams. Sure, red teams can detect weaknesses and security tools can act as a locked door or tripwire, but what about things the red team misses, or what about attackers who have already leveraged those weaknesses to get into an organization and evade security tools?

Ten years ago, threat hunting probably only went as far as a blue team admin finding something “weird” happening on a server or workstation and investigating it to find an existing compromise. They would fix the compromise, wonder how someone or something got in, wonder if it impacted anything else, and then likely move on when those answers were not going to be efficiently forthcoming.

Sounds like a no-brainer to start doing this, yeah? Well, not so fast.

To my mind, the main challenge with threat hunting is being able to log, capture, and know everything (or enough) in an environment such that testing a hypothesis is efficient. An organization that logs things poorly, has poor standards, and struggles to control its environment will face a very uphill battle doing any real threat hunting. At that point, you’re walking into a forest inferno and looking for dumpsters that are on fire.

cis top 20 critical security controls version 7 released

I missed this getting announced, but a few weeks ago a new CIS Top 20 Critical Security Controls doc came out: version 7. There is a registration wall to get past (one of the few times I’ve ever had mailinator denied), but that shouldn’t be a problem for anyone in security.

This new version chops the list of 20 slightly differently, and actually moves the previous #3 item about secure configurations down to #5. Where the typical advice used to be to tackle the top 5 items as a priority, the top 6 items are now considered “Basic” controls. Items 7 through 16 are “Foundational” controls, and 17 through 20 are “Organizational.” While this adds a different visualization/chunking to the list, it really doesn’t change anything. I do like that vulnerability management now appears at #3, right below the two inventory controls. I think this is appropriate. Secure configurations are hard (“yuck, documentation!”), and so many people lose steam at that control.

I do like the change to #17, away from the awkward skills-assessment-and-gaps wording and toward the more standard “Security Awareness and Training Program.” Previously, it always sounded like a way to train internal security staff (and was probably worded that way to promote training from the list’s previous custodians, SANS), which left questions about security awareness programs in general.

There are also many more individual sub-controls under each control, which I really like. In the past, I could usually add one or two extra bullets under each control just to fill it out a bit, but I feel like these are fairly solid so far.

There are still some minor gaps, however. For instance, traditional physical security isn’t present, though that usually falls into a facilities sort of department. Cloud security isn’t really a thing on its own, though clearly every control on-prem will have analogous controls in the cloud, depending on what sort of cloud presence is maintained. I’d love to see Threat Hunting functions get rolled into Pen Test/Red Team Exercises. I always have to think twice about control #7: Web and Email Security. It just seems like it should be included in other items, but it’s a large enough attack surface that I get pulling it out. And I also always wince that an entire appsecdev section gets shoved into a single control down at #18.

There are also very few call-outs to documentation and diagrams. They’re so valuable for insiders, outsiders, new employees, new vendors, and so on to get a quick handle on how a network is laid out, critical data flows, attack surface, and high level posture. No space ship general lacks for a holographic display of the important pieces, and I wish most items called out more artifacts like policies, diagrams, and such in the sub-controls.

Lastly, #10 Data Recovery Capabilities is always a tough one for me. In my mind, this is control #0 that every organization should have: Backups! And it’s also probably the one control that has the smallest infosec scope in the list. Do you do it? Is recovery proven and ready? Retention is set in policy? Cool, move on! Eliminating this as a top 20 control would free up a slot for something else. Some inflate this item into BCP/DR processes or otherwise blow it up into Availability or Resiliency in general. I get it, but the sub-controls don’t reflect it.

Overall, are these huge changes? No, but they do reflect incremental changes in our landscape that bring the list up to modern standards. And this list remains one of my primary and initial roadmaps for infosec in organizations.

case study of fail of the week so far – panera

Panera has been going through a bit of a debacle since last night. The original security researcher posted about it in detail after shit hit the fan. Near the end, he poses a few reflective questions. I figured I would poke at them!

“1. We could collectively afford to be more critical of companies when they issue reactionary statements to do damage control. We need to hold them to a higher standard of accountability. I honestly don’t know what that looks like for the media, but there has to be a better way to do thorough, comprehensive reporting on this.”

I think everyone in media and public relations would say this is about understanding the short attention span of media and their audience. Plus, every media outlet wants to get out there first, at least amongst their peers. (This is probably why, as a fan of CNN, I continue to be appalled at their spelling and grammar lapses over the past couple of years…) But, yes, I’ve been sick of the vague, cookie-cutter statements for 15 years now: “We take security seriously…” and “affected by a sophisticated, advanced attack…” and “no further signs of abuse/disclosure (within only 60 minutes of discovery)…” The problem is one of transparency. The company usually has no reason to be very detailed, which means someone in the know (an insider, the researcher, or someone who pieces it together and rediscovers it independently) needs to reveal the details responsibly. And I usually fall on the side of full disclosure, as opposed to no disclosure or “responsible disclosure,” which really just means stifling it with a smile. It’s the best way for all of us to learn and get better, and also to make educated choices about where to do business.

As far as holding companies accountable overall, that’s a rough one. While security companies and other business-to-business (B2B) firms can struggle after a breach, I don’t know of any retailers or food service companies that have been terribly impacted by one. Food quality scares can threaten Chipotle, but breaches seem to get ignored. This Panera one is a little different, as the affected platform was a mobile ordering app, which is probably used by slightly more connected and savvy users; those users may never use the app again due to this, but they will probably still eat at Panera.

“2. We need to collectively examine what the incentives are that enabled this to happen. I do not believe it was a singular failure with any particular employee. It’s easy to point to certain individuals, but they do not end up in those positions unless that behavior is fundamentally compatible with the broader corporate culture and priorities.”

This is a meaty issue, for sure. I think many security issues don’t get exposed or talked about, because it is always (always!) easier to consciously or unconsciously ignore it. It’s hard finding security issues; sometimes you have to really try. And many people are just trying to complete their tasks and get through their days. The security team (if there is one) has this responsibility, but in a world where development wants to go at the speed of agile, no one can slow them down. Security has to move at that speed as well, which is difficult since security inherently is always behind the curve a little bit, and often perceived as adding no value.

This starts with a mandate, or at the very least interest in security, from a high level. It’s fine enough to say, “We don’t want to be the next Equifax/Panera/Boeing.” From there, security management needs to be based on fact, not belief. It also needs to be constantly questioned and improved. None of us knows how to secure everything; we learn incrementally or seek assistance and ideas elsewhere, and then weave that into our internal security fabric over and over and over. When that improvement process stops, gaps appear. (To me, this is why threat hunting is becoming a thing: you can get pretty good with your security posture and do the basic things well, but to keep improving, you need that unit that keeps hunting, poking, learning, and injecting further information into the whole.)

“3. If you are a security professional, please, I implore you, set up a basic page describing a non-threatening process for submitting security vulnerability disclosures. Make this process obviously distinct from the, “Hi I think my account is hacked” customer support process. Make sure this is immediately read by someone qualified and engaged to investigate those reports, both technically and practically speaking. You do not need to offer a bug bounty or a reward. Just offering a way to allow people to easily contact you with confidence would go a long way.”

I basically agree with this, but also make sure that investigations are done properly (carefully) by your IR team, and make sure someone has a quick line to PR/legal if questions arise or things escalate. A lack of response can be appropriate, but that should be stated, and you should at least give the correspondent some indication that the email was successfully delivered. Basically, a security@ email address should suffice, along with an optional quick mention on an appropriate Contact Us page.

a rant about rants about password rotations

Here’s a rant that makes me look pretty stable. 🙂 Nick Selby’s post: “Do You Make Users Rotate Passwords? Well, Cut It Out.” I agree with the general sentiment, and I get the annoyance, but not so much the sweeping way it’s presented without qualifications.

Just to get the elephant in the room out of the way: all of this discussion is somewhat moot once we throw in the requirement of multi-factor authentication. That requirement makes sense, especially as biometrics (slowly) continue to make headway; a biometric is like a password we won’t ever be able to change.

I’m also not making any assumptions on password strength, either chosen or forced. I can’t expect every user to practice good password hygiene, so I can’t really add that to my arguments. I’m also not going to make assumptions on complexity requirements forced on users by the system.

OK, let’s first make some distinctions: corporate/employee* accounts vs consumer sites. There are two major types of accounts I have in mind for this discussion. First, there are accounts that an enterprise uses to identify its employees, usually set up and managed by IAM/Help Desk folks, rotated every 90 days or so, and removed upon termination. These employees typically come to an office and sit at a workstation they log into. Second, there are accounts used on consumer sites such as Gmail, Amazon, DreamHost, Facebook, etc. These are usually set up via self-service and probably don’t force changes except when compromise is suspected to some degree.

In the former case, there are arguably times when these account passwords may get divulged or known, such as a Help Desk worker doing “something” on your system over a lunch break to troubleshoot an error and wanting to log in after the screensaver kicks in (hey, another rant-worthy piece of bait!). All too often there are small one-offs where an account password is shared. I hate it, but I understand it happens. There are thus two use cases for enterprises to keep proactive password rotations: if a user has a shared password and needs that friendly reminder to change it, or if an unknown compromise of a password has occurred, the forced rotation closes both of these gaps.

For consumer site accounts, users are left to shoulder the responsibility of the confidentiality of their passwords. If they share it with someone else, it’s on them to change the password after usage expectation has ended (ever “borrow” someone’s Hulu account longer than intended?). For consumer sites, the onus for keeping your password to yourself is on the user, with the only exception being a failure in the security of the site, which can have two outcomes. First, a compromise is suspected/known and all affected users are asked to change their password. Second, the compromise and exposure of a database that contains user passwords in reversible format, but where this compromise may not be known for months or years.

In the past, this conversation has sort of been about rotating passwords faster than most attackers can crack them, but I’d argue that’s less the case now. The real way it should be worded is: limit the window of opportunity during which an actor possesses something still valid that they should not possess. Whether that window is about cracking times or exposure from an unknown compromise, I don’t really care.

Has password rotation ever “increased security”? I’d argue not really, but it helps deal with *decreased security* scenarios: someone has your password and you didn’t know it, or you did know and failed to change the password after their need to know it had passed. In the past, this also included the scenario of password cracking. On the other hand, perhaps rotating passwords has helped prevent users from using the same password for many accounts. Rotating passwords at varying times may decrease password reuse across accounts and actually be slightly better for security, but that’s just strange to think about.

Users create predictable variations from existing passwords, though! I suppose, if you know the base form of that password. For some, it’s easy to guess. New account in April? Try Spring18 or Spring2018. Then things get a bit more predictable, sure. For corporate accounts, at that point we’re already starting to get close to account lockouts or other alarms. For consumer sites, it’s harder to guess that base form, in my estimation. That said, I would say this argument has some weight. I’d bet some old passwords for users, if cracked, probably would inform their future choices (Bulls2015 may today be Bulls2016 or Bulls2017 or Bulls2018). But I suppose a cracked password that requires guesses to get the current password is better than a cracked password still being valid?
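
As a toy illustration of how cheap those follow-up guesses are (a hedged sketch of my own; real cracking tools automate this kind of mutation with rule files), here’s one way an attacker might expand a cracked base password into current candidates:

    # mutate.py - toy sketch: expand a cracked base password into likely
    # current variants by bumping a trailing 2- or 4-digit year, the kind
    # of mutation real password-cracking rule sets automate.
    import re

    def year_variants(password: str, years: range) -> list[str]:
        match = re.search(r"^(.*?)(\d{2}|\d{4})$", password)
        if not match:
            return []                      # no trailing year to bump
        base, digits = match.groups()
        if len(digits) == 2:
            return [f"{base}{y % 100:02d}" for y in years]
        return [f"{base}{y}" for y in years]

    print(year_variants("Bulls2015", range(2016, 2019)))
    # ['Bulls2016', 'Bulls2017', 'Bulls2018']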

Users hate it and are inconvenienced! This is less a substantive argument and more a commentary that security and convenience are always at odds, and finding the sweet spot between them is a dance between objective and subjective discourse. Ask 20 infosec pros any scenario and you’ll get 22 different answers, all of which are varying shades of correct.

Why 90 days? Why not 30 minutes? I don’t know, I don’t think that’s the point? I think this is just acknowledging that security is not about finding *THE* right answer, but finding what works between the goals and people. Ok, fine, I think 30/60/90 days requirements back in 2003 were about cracking times for typical Windows account hashes. Roughly. Very roughly…

At any rate, all of this is pretty anecdotal, as is probably 99% of the discussion on this topic. Even places that claim there is “lots of evidence” one way or another never really seem scientific or defensible; the statistics can be twisted around to form an opposing hypothesis, or the sample is just too small. (Yeah, this is my way of dismissing stats and not wasting my time poring over academic papers to support whatever my position is or will be. If paid to do that, I’ll be happy to!)

(Now, all of this said…I’m playing a little bit o’ Devil’s Advocate here. I won’t defend either position on password rotations terribly hard or to the death. But I think defaulting to password rotation in corporate account cases is the better approach and defaulting to unchanging passwords for consumer sites is also the better approach, with MFA swinging things away from rotations.)

* There is also the small slice of the puzzle involving actual shared enterprise accounts and service accounts. The former might be a shared login to something low value (for instance, where each concurrent user costs money). The latter are your normal service accounts, where various admins may be able to retrieve the password or set it up on a new server. I won’t deal with these, since it should be obvious they are rotated regularly, especially as employees leave the enterprise or lose their need to know.

lots of people looking for mentors these days

Everyone wants a course or cert or mentor that will teach them how to be a master hacker or to hit the ground running with a cheatsheet in hand. But the only real path requires two legs: practice and curiosity (self-study to learn the things that are still unknown). A course can help with the former, but courses can only cover a minuscule amount of all the unknown things you’ll run into. Dive in, get guidance as needed, but there is no substitute for practice and doing things yourself. Apply for and land a job, find a mentor there, or find yourself being the mentor. Do, do, do.

That said, just ask direct questions.

the soc or siem of tomorrow

Monster post of the week goes to Gunter Ollmann’s NextGen SIEM Isn’t SIEM blog post.

To paraphrase the first half: basically, SIEM’s main weaknesses have been fidelity and integrating newer sources, and these newer sources are pushing that fidelity and active response away from the SIEM and down closer to the endpoints/attackers/events.

…Interjecting my own thoughts for a moment: in the past, a SIEM was also only as good as the intelligence behind it, which was often fueled entirely by the staff sitting directly in front of it. I’m sure every SIEM and MSSP vendor has been asked, “So, what do we look for out of these million log entries?” by every single one of their clients, and the answer is always, “It depends,” or, “What do you want to find?” (You’d think intelligence would get pushed downhill, but I think only the most obvious intelligence ever gets outside a single organization’s walls, even among those that share it!) The best (luckiest) shops have SIEM ninjas in house, but most just flounder wetly about in the hallway. Now, back to Gunter…

And he frames the transition correctly by saying these new tools typically do only a narrow scope of things really well.

Honestly, I’m not sure asking what the “next gen SIEM looks like” is exactly the right question. I’d take a small step back and ask, “What does the next SOC look like?” (I’m writing this as I read the article, and Gunter goes in the same direction!) Do we still strive for one pane of glass? Do we have many panes of glass with best-of-breed tools?

I like Gunter’s bullet points on what the next SOC/SIEM should do or look like.

But I do want to add one other factor into this. The shops that have the budgets for things like big SIEM tools and various other threat hunting or SOC-supporting tools are also the ones fighting a ridiculous pace of technology change in their own networks, while those with manageable environments are too small for the best tools. Between cloud, IoT, mobile devices, and advancing system sprawl, it’s a huge endeavor for a SOC just to keep up with its own organization.

Anyway, just to interject a wonderful (or nightmarish) vision of the future…we keep taking steps forward towards actual Gibson/Shadowrun-like ICE!

q2 2018 training and learning plans

So, what’s on my structured training list now that I’ve finished CCNA Cyber Ops? I have a 2018 goals post, but obviously things can change… I don’t blog about many of my training plans, largely because I have a separate, private OneNote instance with a huge breakdown of things I want to do this year and next, plus notes on everything else my career may entail beyond that. I have a long-term section and a list of things I’m basically doing right now.

Right now, I have a small lull until I head to SANS West in two months and pick up my first SANS course, FOR508 (GCFA). I’ve decided to forego some courses people tend to take early in their SANS experience, and dive into the deep end by skipping GSEC (SEC401), GCIH (SEC504), GPEN (SEC560), and GCFE (FOR500). I’ve never had the opportunity to do SANS courses before, and rather than go easy with something I may already know pretty well, I decided not to wait years and instead jump to a course that will certainly be a challenge.

To that end, I’m already doing a little prep work to brush up on some forensics/IR topics so that I don’t need to catch my mindset up much and can hit day 1 at a brisk walk. I’ll be watching some random YouTube clips on the course and related topics, reading a few books I have sitting around on forensics and data collection, and otherwise preparing my workstation.

Beyond that, I’m likely going to do a little preparation for NetWars as well, though to be honest, I don’t expect much as a first-timer. But I want to finish a bit more of my RHCSA/LFCS courses, refresh with Metasploit Unleashed (a course I’ve just never gotten through) to get my mind back into offense, and then do some retired HTB boxes to oil those wheels further.

I’ll also be at C2E2 (Chicago) in the middle of all of this, so that likely is enough planning for now to see me through to the midpoint of 2018.