There is a growing chorus of folks talking about simulating targeted attacks from known adversaries as a valuable security service.
The argument goes like this: penetration testers are vulnerability focused, and their toolset and style reflect that focus. This approach finds security problems, and it helps, but it does little to prepare the customer for the targeted attacks they will experience.
Adversary simulation is different. It focuses on the customer’s ability to deal with an attack, post-compromise. These assessments look at incident response and provide a valuable “live fire” training opportunity for the analysts who hunt for and respond to incidents each day.
The organizations that buy security products and services are starting to see that compromise is inevitable. These organizations spend money on blinky boxes, people, services, and processes to deal with this situation. They need a way to know whether or not this investment is effective. Adversary simulation is a way to do this.
There’s no standard definition for adversary simulation yet. It doesn’t even have an agreed-upon term. I’ve heard threat emulation, purple teaming, and attack simulation used to describe roughly the same concept. I feel like several of us are wearing blindfolds, feeling around our immediate vicinity, and working to describe an elephant to each other.
From the discussions on this concept, I see a few common elements:
The goal of adversary simulation is to prepare network defense staff for the highly sophisticated targeted attacks their organization may face.
Adversary simulation assumes compromise. The access vector doesn’t matter as much as the post-compromise actions. This makes sense to me. If an adversary lives in your network for years, the 0-day used three years ago doesn’t really matter. Offensive techniques, like the Golden Ticket, turn long-term persistence on its head. An adversary may return to your network and resume complete control of your domain at any time. This is happening.
Adversary simulation is a white box activity, sometimes driven by a sequence of events or a storyboard. It is not the goal of an adversary simulation exercise to demonstrate a novel attack path. There are different ways to come up with this sequence of events. You could use a novel attack from a prior red team assessment or a real-world intrusion. You could also host a meeting to discuss threat models and derive a plausible scenario from that.
There’s some understanding that adversary simulation involves meaningful network and host-based indicators. These are the observables a network defense team will use to detect and understand the attacker. The simulated indicators should allow the network defense team to exercise the same steps they would take if they had to respond to the real attacker. This requires creative adversary operators with an open mind about other ways to hack. These operators must learn the adversary’s tradecraft and cobble together something that resembles their process. They must pay attention to the protocols the adversary uses, the indicators in the communication, the tempo of the communication, and whether or not the actor relies on distributed infrastructure. Host-based indicators and persistence techniques matter too. The best training results will come from simulating these elements very closely.
Adversary simulation is inherently cooperative. Sometimes, the adversary operator executes the scenario with the network defense team present. Other times the operator debriefs after all of the actions are executed. In both cases, the adversary operators give up their indicators and techniques to allow the network defense team to learn from the experience and come up with ways to improve their process. This requirement places a great burden on an adversary simulation toolkit. The adversary operators need ways to execute the same scenario with new indicators or twists to measure improvement.
Hacking to get Caught – A Concept for Adversary Replication and Penetration Testing
Threat Models that Exercise your SIEM and Incident Response
Comprehensive Testing: Red and Blue Make Purple
Seeing Purple: Hybrid Security Teams for the Enterprise
I see a simulated attack as different from a red team or full scope assessment. Red Team assessments exercise a mature security program in a comprehensive way. A skilled team conducts a real-world attack, stays in the network, and steals information. At the end, they reveal a (novel?) attack path and demonstrate risk. The red team’s report becomes a tool to inform decision makers about their security program and justify added resources or changes to the security program.
A useful full scope assessment requires ample time, and these engagements are expensive.
Adversary simulation does not have to be expensive or elaborate. You can spend a day running through scenarios once each quarter. You can start simple and improve your approach as time goes on. This is an activity that is accessible to security programs with different levels of budget and maturity.
I participate in a lot of Cyber Defense Exercises. Some events are set up as live-fire training against a credible simulated adversary. These exercises are driven by a narrative, and the red team executes the actions the narrative requires. The narrative drives the discussion post-action. All red team activities are white box, as the red team is not the training audience. These elements make cyber defense exercises very similar to adversary simulation as I’m describing it here. This is probably why the discussion perks up my ears: it’s familiar ground to me.
There are some differences though.
These exercises don’t happen in production networks. They happen in labs. This introduces a lot of artificiality. The participants don’t get to “train as they fight,” as many of the tools and sensors they use at home probably do not exist in the lab. There is also no element of surprise to help the attacker: network defense teams come to these events ready to defend. These events usually involve multiple teams, which creates an element of competition. A safe adversary simulation, conducted on a production network, does not need to suffer from these drawbacks.
Purple Teaming is a discussion about how red teams and blue teams can work together. Ideas about how to do this differ. I wouldn’t refer to Adversary Simulation as Purple Teaming. You could argue that Adversary Simulation is a form of Purple Teaming. It’s not the only form though. Some forms of purple teaming have a penetration tester sit with a network defense team and dissect penetration tester tradecraft. There are other ways to hack beyond the favored tricks and tools of penetration testers.
Let’s use lateral movement as an example:
A penetration tester might use Metasploit’s PsExec to demonstrate lateral movement, help a blue team zero in on this behavior, and call it a day. A red team member might drop to a shell and use native tools to demonstrate lateral movement, help a blue team understand these options, and move on.
An adversary operator tasked to replicate the recent behavior of “a nation-state affiliated” actor might load a Golden Ticket into their session and use that trust to remotely set up a sticky keys-like backdoor on targets and control them with RDP. This is a form of lateral movement and it’s tied to an observed adversary tactic. The debrief in this case focuses on the novel tactic and potential strategies to detect and mitigate it.
Do you see the difference? A penetration tester or red team member will show something that works for them. An adversary operator will simulate a target adversary and help their customer understand and improve their posture against that adversary. Giving defenders exposure and training on tactics, techniques, and procedures beyond the typical penetration tester’s arsenal is one of the reasons adversary simulation is so important.
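If you want to make the sticky keys example above concrete for the network defense team, the debrief can include the host-based indicator it leaves behind. Here is a minimal sketch, assuming Python 3 on a Windows host, of how an analyst might hunt for one common variant of that backdoor: a Debugger value registered under Image File Execution Options for an accessibility binary. The list of binaries is only an illustrative starting point, and a file-replacement variant of the same backdoor would need a different check.

```python
# Minimal sketch: check a Windows host for a "sticky keys"-style backdoor,
# i.e. a Debugger value registered under Image File Execution Options for an
# accessibility binary. Assumes Python 3 on Windows; run with admin rights.
import winreg

IFEO = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options"
ACCESSIBILITY_BINARIES = ["sethc.exe", "utilman.exe", "osk.exe", "narrator.exe"]

def find_ifeo_debuggers():
    """Return (binary, debugger) pairs where an IFEO Debugger value is set."""
    hits = []
    for exe in ACCESSIBILITY_BINARIES:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IFEO + "\\" + exe) as key:
                debugger, _ = winreg.QueryValueEx(key, "Debugger")
                hits.append((exe, debugger))
        except FileNotFoundError:
            continue  # no IFEO key or no Debugger value for this binary
    return hits

if __name__ == "__main__":
    for exe, debugger in find_ifeo_debuggers():
        print(f"[!] {exe} is configured to launch a debugger: {debugger}")
```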
Adversary Simulation is a developing area. There are several approaches and I’m sure others will emerge over time…
One way to simulate an adversary is to simulate their traffic on the wire. This is an opportunity to validate custom rules and to verify that sensors are firing. It’s a low-cost way to drill intrusion response and intrusion detection staff too. Fire off something obvious and see how long it takes the team to detect it. If they never do, you immediately know you have a problem.
Marcus Carey’s vSploit is an example of this approach. Keep an eye on his company, FireDrill.me, as he’s expanding on his original ideas there as well.
DEF CON 19 – Metasploit vSploit Modules
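To show the idea, rather than vSploit itself, here is a minimal sketch of this kind of drill: replay a transaction with deliberately suspicious indicators against a server you control and time how long it takes the monitoring team to flag it. The hostname, URI, and User-Agent below are hypothetical placeholders; substitute indicators your rules and sensors are supposed to catch.

```python
# Minimal "traffic on the wire" drill: send one obviously suspicious HTTP
# transaction to a server you operate and see whether sensors fire. The
# indicator values here are hypothetical placeholders.
import urllib.error
import urllib.request

DRILL_TARGET = "http://drill-server.example.com"       # a web server you control
SUSPICIOUS_URI = "/gate.php?id=TEST-INDICATOR"          # e.g. a URI from a published IOC list
SUSPICIOUS_UA = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"  # dated UA many rules flag

def fire_drill():
    req = urllib.request.Request(DRILL_TARGET + SUSPICIOUS_URI,
                                 headers={"User-Agent": SUSPICIOUS_UA})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"Drill request sent; server answered HTTP {resp.status}")
    except urllib.error.HTTPError as err:
        print(f"Drill request sent; server answered HTTP {err.code}")

if __name__ == "__main__":
    fire_drill()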
Another approach is to use public malware on your customer’s network. Load up DarkComet, GhostRAT, or Bifrost and execute attacker-like actions. Of course, before you use this public malware, you have to audit it for backdoors and make sure you’re not introducing an adversary into your network. On the bright side, it’s free.
This approach is restrictive though. You’re limiting yourself to malware that you have a full toolchain for [the user interface, the C2 server, and the agent]. This is also the malware that off-the-shelf products catch best. I like to joke that some anti-APT products catch 100% of APT malware, so long as you limit your definition of APT malware to DarkComet.
This is probably a good approach with a new team, but as the network security monitoring team matures, you’ll need better capability to challenge them and keep their interest.
Penetration Testing Tools are NOT adequate adversary simulation tools. Penetration Testing Tools usually have one post-exploitation agent with limited protocols and fixed communication options. If you use a penetration testing tool and give up its indicators, it’s burned after that. A lack of communication flexibility and options makes most penetration testing tools poor options for adversary simulation.
Cobalt Strike overcomes some of these problems. Cobalt Strike’s Beacon payload does bi-directional communication over named pipes (SMB), DNS TXT records, DNS A records, HTTP, and HTTPS. Beacon also gives you the flexibility to call home to multiple places and to vary the interval at which it calls home. This allows you to simulate an adversary that uses asynchronous bots and distributed infrastructure.
The above features make Beacon a better post-exploitation agent. They don’t address the adversary replication problem. One difference between a post-exploitation agent and an adversary replication tool is user-controlled indicators. Beacon’s Malleable C2 gives you this. Malleable C2 is a technology that lets you, the end user, change Beacon’s network indicators to look like something else. It takes two minutes to craft a profile that accurately mimics legitimate traffic or other malware. I took a lot of care to make this process as easy as possible.
Malleable Command and Control
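To picture what an asynchronous check-in with a variable interval, distributed infrastructure, and user-controlled indicators looks like on the wire, here is a conceptual sketch. This is not Beacon or Malleable C2; it is just the traffic pattern in plain Python, with hypothetical hosts and header shapes, and nothing behind it but a harmless marker.

```python
# Conceptual sketch only -- not Beacon or Malleable C2. It generates the
# traffic pattern discussed above: an asynchronous check-in that rotates
# through multiple call-home hosts, sleeps for a jittered interval, and
# dresses a meaningless marker up as an ordinary-looking web request.
# Hostnames and header shapes are hypothetical; point it at infrastructure
# you control. Ctrl-C to stop.
import base64
import itertools
import random
import time
import urllib.request

CALL_HOME_HOSTS = ["http://cdn1.example.com", "http://cdn2.example.com"]  # distributed infrastructure
BASE_INTERVAL = 300   # seconds between check-ins
JITTER = 0.5          # +/- 50% variation on each sleep

def check_in(host: str) -> None:
    """Send one benign check-in whose only 'payload' is a session marker in a cookie."""
    marker = base64.b64encode(b"drill-session-0001").decode()
    req = urllib.request.Request(
        host + "/s/ref=nb_sb_noss",                     # URI shaped like routine browsing
        headers={"User-Agent": "Mozilla/5.0",
                 "Cookie": "session-token=" + marker},  # metadata hidden in a cookie
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            resp.read()
    except OSError as err:
        print(f"check-in to {host} failed: {err}")

if __name__ == "__main__":
    for host in itertools.cycle(CALL_HOME_HOSTS):
        check_in(host)
        time.sleep(BASE_INTERVAL * random.uniform(1 - JITTER, 1 + JITTER))
```

The point is the shape of the traffic: where it goes, how often, and what it looks like, because those are the observables a network defense team has to work with.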
Cobalt Strike isn’t the only tool with this approach either. Encripto released Maligno, a Python agent that downloads shellcode and injects it into memory. This agent allows you to customize its network indicators to provide a trail for an intrusion analyst to follow.
Malleable C2 is a good start toward supporting adversary simulation from a red team tool, but it’s not the whole picture. Adversary Simulation requires new storytelling tools, other types of customizable indicators, and a rethink of the workflows for lateral movement and post-exploitation. There’s a lot of work to do yet.
Putter Panda – Threat Replication Case Study
Is there a need for dedicated adversary simulation tools and services? I think so. I’d like to close this post with an observation, taken across various exercises:
In the beginning, it’s easy to challenge and exercise a network defense team. You will find that many network defenders do not have a lot of experience (actively) dealing with a sophisticated adversary. This is part of what allows these adversaries the freedom to live and work on so many networks. An inability to find these adversaries creates a sense of complacency. If I can’t see them, maybe they’re not there?
By exercising a network defense team and providing actionable feedback with useful details, you’re giving that team a way to understand their level. The teams that take the debrief seriously will figure out how to improve and get better.
Over time, you will find that these teams, spurred by your efforts, are operating at a level that will challenge your ability to create a meaningful experience for them. I’ve provided repeat red team support to many events since 2011. Each year I see the growth of the returning teams that my friends and I provide an offensive service to. It’s rewarding work and we see the difference.
Heed my words though: the strongest network defense teams require a credible challenge to get better. Without adversary simulation tools [built or bought], you will quickly exhaust your ability to challenge these teams and keep up with their growth.