Red Team Tools or BAS, It’s Still About Validating Your Controls

I was catching up on some blogs and came across a thought-provoking post from Augusto Barros titled “From my Gartner Blog – It’s Not (Only) That The Basics Are Hard…” In it, he talks about how basic controls fail; for example, keeping an accurate inventory breaks down when someone forgets to follow the process. In other words, how do you make sure you’re still doing the basics accurately?

I don’t necessarily see what is new with BAS (Breach and Attack Simulation) tools, or whether they signify the coming of age of internal red teams or just a new way to market these tools. But making sure basic controls are in place is part of their purpose, and I can see how, from a certain point of view, attacker-style tools play into that.

In the case of inventory control, this is where network discovery and internal recon (vuln scans, NSM…) or tripwires (NAC, ISE…) catch things that slipped past the inventory process. You find them and treat them as rogue until proven otherwise. Along the way, you also care about some zones more than others: an isolated server deployed on an internal segment is one thing, but a server in the DMZ with a few ports exposed to the Internet is another. In the latter case, another potential detection point is external footprint scanning, which is important to have because that is where attacker eyeballs will also be looking.
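To make that concrete, here is a minimal sketch of the "rogue until proven otherwise" idea: diff a discovery scan against the asset inventory and weight anything unknown in the DMZ higher. The file names, column names, and zone labels are my own assumptions, not tied to any particular tool.

```python
# Minimal sketch: diff a discovery scan against the asset inventory and
# flag anything unknown as rogue until proven otherwise.
# Assumptions: inventory.csv and discovered_hosts.csv are hypothetical
# exports with "ip" and "zone" columns; zone names are illustrative.

import csv


def load_hosts(path):
    """Return {ip: zone} from a CSV export with 'ip' and 'zone' columns."""
    with open(path, newline="") as f:
        return {row["ip"]: row.get("zone", "unknown") for row in csv.DictReader(f)}


def find_rogues(inventory_path, discovered_path):
    """Hosts seen on the network but missing from inventory are rogue."""
    inventory = load_hosts(inventory_path)
    discovered = load_hosts(discovered_path)
    rogues = []
    for ip, zone in discovered.items():
        if ip not in inventory:
            # A DMZ host with exposure to the Internet deserves more urgency
            # than an isolated box on an internal segment.
            severity = "high" if zone.lower() == "dmz" else "medium"
            rogues.append({"ip": ip, "zone": zone, "severity": severity})
    return rogues


if __name__ == "__main__":
    for rogue in find_rogues("inventory.csv", "discovered_hosts.csv"):
        print(f"Rogue host {rogue['ip']} in {rogue['zone']} (severity: {rogue['severity']})")
```

In practice the discovery side would be fed by whatever you already run (vuln scans, NSM sensors, NAC events), but the control being validated is the same: does reality on the wire match the inventory?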

Maybe this fits more into internal threat hunting, or simply having an internal security team that thinks about and designs controls and internal intelligence with an eye toward how an attacker would see things.
