Acting Red — Seeing Blue
Observing your macOS red team attack footprint like an EDR with a HELK lab
As you run your red team attack in a macOS environment, you might find yourself wondering what your blue team is seeing from their Endpoint Detection & Response (EDR) solution.
In this article, I will explain how to build a lab from the following three components. It lets you observe, in real time, the Apple Endpoint Security Framework (ESF) events coming from your macOS client (test victim) as you perform red team techniques against it:
- Roberto Rodriguez's HELK
- Elastic’s Filebeat
- Christopher Ross’ Appmon
With this lab, you can develop and hone your macOS tradecraft by seeing your own footprints. Use this test setup to learn what information your blue team's EDR will gather when you run the same techniques against your target organization's infrastructure. You can also help your blue team create detections for the techniques before the operation, and have custom queries ready post-op or for your follow-on purple team.
The Pieces of Your Lab
HELK is the "Hunting ELK" stack, ELK being Elasticsearch, Logstash, and Kibana. HELK is an open-source threat hunting platform built on the ELK stack, and it makes standing up a threat hunting server fast and easy.
I won't get into what all of the pieces of the Elastic Stack do, but note that Kibana is the user-interface piece of the stack and is where you will visualize the ESF events as they happen.
Appmon is a command-line tool for capturing events from your macOS client's ESF. Since Apple deprecated kernel extensions as of macOS Catalina 10.15.4, EDR solutions now rely on ESF to produce their events, so this is a great way to imitate what they are doing.
Filebeat is another Elastic product that we will use to ship our Appmon logs to Logstash and thus our HELK stack.
Standing Up Your HELK Server in the Cloud
Let’s begin by setting up our HELK server. You can do this on-prem or in the cloud. My setup is in the cloud, so that is what I will guide you through. I’m not going to walk you through all of the steps for building an EC2 instance in AWS or a Compute instance in GCP, but you should be aware of the system requirements:
- Cores: A minimum of 4 cores (whether logical or physical)
- RAM: 5GB
- DISK: 20GB available
- Operating System: Ubuntu ≥ 16, or CentOS ≥ 7 (with or without SELinux in enforcing mode)
You can see the rest of the HELK installation requirements here. I built a t2.large EC2 instance in AWS from an Ubuntu 18.04 AMI with 100GB of Disk space. Initially, I gave it a 20GB disk, but that was not enough after updating the instance.
Your security group will need to allow inbound connections on port 22 for instance administration over SSH, and inbound from your victim machines on port 443 for Kibana and port 5044 for Logstash (the port Filebeat ships to).
Downloading, Configuring, and Installing HELK
You will now need to clone the latest version of HELK to your server instance from git:
git clone https://github.com/Cyb3rWard0g/HELK.git
Next, you will need to modify the HELK/docker/helk-logstash/pipeline/0098-all-filter.conf file by adding the json filter section (the json block in the file below):
# HELK All filter conf file
# HELK build Stage: Alpha
# Author: Roberto Rodriguez (@Cyb3rWard0g), Nate Guagenti (@neu5ron)
# License: GPL-3.0

filter {
  if [message] {
    mutate {
      add_field => {
        "etl_pipeline" => "all-filter-0098"
        "etl_version" => "2020.04.19.01"
      }
      rename => {
        #"[@metadata][kafka][consumer_group]" => "etl_kafka_consumer_group"
        #"[@metadata][kafka][key]" => "etl_kafka_key"
        "[@metadata][kafka][offset]" => "etl_kafka_offset"
        "[@metadata][kafka][partition]" => "etl_kafka_partition"
        "[@metadata][kafka][timestamp]" => "etl_kafka_time"
        "[@metadata][kafka][topic]" => "etl_kafka_topic"
      }
      copy => { "message" => "event_original_message" }
    }
    json {
      source => "message"
      target => "esf"
    }
    ruby {
      code => "event.set('etl_processed_time', Time.now().utc);"
      add_field => { "etl_pipeline" => "all-add_processed_timestamp" }
    }
  }
}
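To see what that added json section accomplishes, here is a rough shell simulation (this is a sketch, not Logstash itself, and the sample event fields are illustrative assumptions): the filter parses the raw message string as JSON and nests the result under an esf key, which is what gives you queryable esf.* fields in Kibana later on.

```shell
# Simulate the effect of json { source => "message" target => "esf" }:
# parse the raw log line as JSON and nest it under "esf".
MSG='{"event_type":"ES_EVENT_TYPE_NOTIFY_EXEC","process":{"path":"/bin/ls"}}'
python3 -c '
import json, sys
event = {"message": sys.argv[1]}          # the raw line as Logstash receives it
event["esf"] = json.loads(event["message"])  # what the json filter adds
print(json.dumps(event, indent=2))
' "$MSG"
```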
Now you are ready to install HELK:
cd HELK/docker
sudo ./helk_install.sh
For the build choice, enter 1.
Next, set the HELK IP to the IP address of your HELK server instance.
Enter a password, re-enter it, and remember it. You will need this password to log in to Kibana.
HELK will now build and run.
Your HELK server is now all set! It is THAT easy.
Download and Install Filebeat on Your macOS Client
On your client machine, you will need to download Filebeat from Elastic. Once you expand the Filebeat archive on your client, you will need to edit the filebeat.yml file.
Create a file somewhere on your system called esf.log; in this example, /Users/antman/esf.log. Filebeat will tail this log and ship it to Logstash on your HELK server.
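You can create the log file up front so Filebeat has something to tail. A minimal sketch (using $HOME/esf.log to mirror the /Users/antman/esf.log example; adjust for your own username):

```shell
# Create the empty log file that Appmon will append to and Filebeat will tail.
# $HOME/esf.log mirrors the article's /Users/antman/esf.log example path.
touch "$HOME/esf.log"
ls -l "$HOME/esf.log"
```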
Point the filebeat.yml config file at the log.
- Ensure the type: log input is uncommented.
- Ensure enabled is set to true.
- Add your log file path.
- Make sure the other Filebeat inputs are commented out.
You can see these settings in the example yml file below:
# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.

# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /Users/antman/esf.log
Also, point the Logstash output section of the same filebeat.yml config file to your HELK server instance, as shown below:
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["10.0.1.226:5044"]
Also, ensure you comment out output.elasticsearch and its options. If Filebeat is configured to send to both Elasticsearch and Logstash, it will fail with a conflict error.
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
Next, run Filebeat and point it to your edited config file:
./filebeat -e -c filebeat.yml
Download and Run Appmon on Your macOS Client
Download Appmon here and move it to your Applications directory on your macOS client.
Ensure your terminal has Full Disk Access (FDA) via your machine’s privacy and security settings.
From your terminal, run Appmon as root and direct the output to your esf.log file:
sudo /Applications/appmon.app/Contents/MacOS/appmon >> /Users/antman/esf.log
Filebeat is now shipping the ESF events from your Appmon logs to your HELK server!
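Before moving on to Kibana, you can sanity-check the log end of the pipeline from the shell. This sketch assumes Appmon writes one JSON object per line (which is what the Logstash json filter configured earlier expects); the sample event and its field names are illustrative, not Appmon's exact schema:

```shell
# Append a sample event (illustrative fields only) and pretty-print the newest
# line to confirm the log contains one parseable JSON object per line.
ESF_LOG=/tmp/esf.log   # substitute your real log path, e.g. /Users/antman/esf.log
echo '{"event_type":"ES_EVENT_TYPE_NOTIFY_EXEC","process":{"path":"/bin/ls"}}' >> "$ESF_LOG"
tail -n 1 "$ESF_LOG" | python3 -m json.tool
```

If json.tool reports an error here, Logstash will fail to parse that line as well.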
Log in to Kibana to See Your ESF Events
From your Google Chrome browser, open an incognito window and navigate to your Kibana instance over HTTPS. In my experience, you might get errors if you don't use an incognito window.
Kibana will prompt you for the HELK credentials that you made when standing up the server instance.
Click the Discover Compass Icon on the left to navigate to the query page, where you will see your ESF events coming into HELK.
If you click on a down arrow for one of the events, you might notice yellow warning triangles in many of the fields. If this is the case, you will need to complete the following steps to correct the issue:
1. Click the Management gear icon in the lower-left menu.
2. Click Index Patterns under the Kibana menu.
3. Click logs-* under Pattern.
4. On the logs-* page, click the refresh icon.
You are all set! Now you can see all of the ESF data from your macOS endpoint in Kibana!
Wrapping Up
That is all there is to it. You can now test your red team techniques on your macOS test client and see the same footprints your blue team sees from their EDR.
Note that you can install Appmon and Filebeat on more than one client and aggregate data across all the macOS endpoints in the target range. You can also create custom queries in Kibana to home in on specific techniques. Finally, you can create dashboards to visualize custom metrics.
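As a quick complement to Kibana queries, you can also filter the raw Appmon log for a single ESF event type straight from the shell. The JSON schema below is an assumption for illustration (ES_EVENT_TYPE_NOTIFY_EXEC is a real ESF event constant, but the field layout may differ from Appmon's actual output):

```shell
# Write two sample events (illustrative schema), then keep only exec events,
# roughly mirroring a Kibana query that narrows on one technique's telemetry.
LOG=/tmp/esf-sample.log
printf '%s\n' \
  '{"event_type":"ES_EVENT_TYPE_NOTIFY_EXEC","process":{"path":"/usr/bin/whoami"}}' \
  '{"event_type":"ES_EVENT_TYPE_NOTIFY_OPEN","process":{"path":"/bin/cat"}}' > "$LOG"
grep 'ES_EVENT_TYPE_NOTIFY_EXEC' "$LOG"
```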
Hopefully, this lab setup is as helpful for your red team as it has been for me. Let me know what interesting footprints you spot using this setup.
As always, thanks for reading!
Special Thanks
I was one of the fortunate few to take SpecterOps’ very first run of their Adversary Tactics: Mac Tradecraft (ATMT) course back in November 2020. One of the MANY cool things this fantastic course had was a HELK lab like the one discussed in this article. I recently reached out to Leo Pitt of SpecterOps for help in troubleshooting my setup. All thanks to Leo for getting me up and running!