Continuous Deployment for Offensive Red Team Operators
Intro
Red team operators and their malware development / tooling team are tasked with an essential role: to launch and maintain sophisticated cyber attacks, much like a real-world adversary. When managing a Command and Control (C2) server during a penetration test, it’s critical to ensure that the infrastructure is as resilient and responsive as an attacker’s would be (if not better). This is where Continuous Deployment (CD) becomes a pivotal component. CD, the practice of automatically deploying all code changes to a production environment after the test and build stage, offers a multitude of strategic advantages for red team operations.
CI, or Continuous Integration, involves regularly merging code changes into a central repository, followed by automated testing. This ensures that new code integrates well with the existing codebase and doesn’t introduce new vulnerabilities or bugs, which is especially critical in a field where reliability and stealth are paramount.
CD, or Continuous Deployment, allows for rapid iteration and deployment, which is crucial when operating under the changing conditions of a penetration test. You can introduce changes, updates, or new payloads to your C2 server seamlessly and discreetly, often necessary when adapting to shifting defensive measures. By implementing a robust CD pipeline, you can minimise downtime and ensure your tools are constantly at the ready, mirroring the persistent threat posed by actual attackers. This automated process not only increases efficiency but also enhances the operational security (OpSec) of the red team, as manual errors are significantly reduced and changes can be propagated without drawing undue attention. This level of agility is what sets competent red teams apart from their less prepared counterparts, allowing them to maintain the momentum of their campaign and adapt to counteractions in real time.
CI/CD really shines when you approach red teaming with a ‘SaaS-based’ mindset; at the very least, it saves you time in the long run by eliminating the constant, repetitive redeployments and sysadmin work of setting up your infrastructure by hand.
Deployment, simplified.
In this Continuous Deployment example, we are going to use GitHub Actions and Docker for our CI/CD pipeline. If you aren’t familiar with Docker, it is a platform that uses OS-level virtualisation to deliver software in packages called containers. These containers are isolated from each other but can communicate through well-defined channels. Docker is lightweight and modular, making it ideal for creating replicable, scalable environments - a key requirement for effective red team operations.
A huge benefit of Docker for red team operations is the ease with which you can deploy any number of services side-by-side, agnostic of your actual hosted environment. Imagine developing (alone or with a team) ‘locally’ on Windows, Mac and Linux workstations, and the pain of debugging issues that are machine or architecture specific. Docker resolves this problem because your container is the environment of your final application or service, reducing the likelihood of encountering the dreaded ‘it works on my machine’ problem.
GitHub Actions is a CI/CD tool that automates your workflows, allowing you to build, test, and deploy code right from GitHub in response to events in your repositories, such as push events or pull requests. By combining GitHub Actions with Docker, you can ensure that every code change is automatically built, tested in a containerised environment, and then deployed to your C2 infrastructure.
The build we are about to do will allow you to deploy changes to your C2 team server in real time after pushing updates to your main GitHub branch. In time, this will allow you to manage multiple “in the wild” command and control servers simply by committing your code to GitHub.
When implementing CI/CD in your development process, particularly for C2 operations, it’s crucial to factor in the management of local configurations. These configurations might include specific environment variables, custom routing rules, or specialised payloads tailored for certain targets. The risk lies in the fact that automated deployment processes could inadvertently overwrite these local settings with each new update pushed from your main GitHub branch. To mitigate this, it’s advisable to separate your core application code from these configurations. One effective strategy is to use external configuration files that are not tracked in your source control. If you have a ‘c2 deployment pipeline’ for commissioning new servers, that pipeline can also handle applying these separated configurations.
Setting it up
First, I am assuming you already have a repository on GitHub with your code in it, raring and ready to go. This tutorial will focus on a Go (Golang) web application C2, though you can use any language or structure. As an example, we can use the SIMAP command and control server.
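SIMAP's internals aren't shown in this article, so if you don't have a server to hand, a minimal Go stand-in with a single check-in route is enough to follow along (the /beacon endpoint here is hypothetical, not part of SIMAP):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// newMux wires up the c2's HTTP routes. A real server would add
// authentication, tasking, and result-collection endpoints.
func newMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/beacon", func(w http.ResponseWriter, r *http.Request) {
		// placeholder check-in endpoint for implants
		fmt.Fprintln(w, "ok")
	})
	return mux
}

func main() {
	// 8080 matches the port EXPOSEd by the Dockerfile we build below
	log.Fatal(http.ListenAndServe(":8080", newMux()))
}
```

Anything that compiles to a single binary and listens on a port will work with the pipeline below in exactly the same way.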
Docker
Docker is a tool which manages containerised environments. A containerised environment is a method of packaging and running applications in units called containers. You can think of containers as lightweight, standalone packages that contain everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings. Unlike traditional virtual machines that require a full operating system, containers share the host system’s kernel and are isolated from each other, making them more efficient, faster to start, and requiring less computational resources. This approach simplifies deployment since a containerised application runs the same, regardless of where or on what system it’s deployed. For red team operations, this means you can develop, test, and deploy tools consistently across various environments, whether it’s on a local machine or a remote server, without worrying about discrepancies due to different system configurations.
In the root folder of your GitHub project, create a new file called Dockerfile, with no file extension.
To containerise our Go-based application, we start by creating a Dockerfile, which serves as a set of instructions to build our Docker container. For our use case, we’ll use the official Golang Docker image, which comes pre-installed with the latest version of Go on a Linux base. This choice simplifies our setup process and ensures we have a consistent, up-to-date Go environment.
Within our Dockerfile, we first specify golang:latest as our base image. Then, we set /app as our working directory in the container, a conventional choice for many applications. This directory is where our application code and dependencies will reside within the container. We also specify two environment variables, accessible within the container: simap_poc_username and simap_poc_password. When a new container is spun up, it will inherit any environment variables we specify in the Dockerfile.
Next, we copy the contents of our current directory into the container. This directory is expected to be the root of our GitHub repository, which contains our Go based c2 server (or application of your choosing). Following this, we execute go mod download to install the application’s dependencies. After dependencies are installed, we compile our Go application into a statically linked binary optimised for Linux.
We then specify EXPOSE 8080 in our Dockerfile to indicate that our application within the container will listen on port 8080. This doesn’t actually publish the port to the host, but it serves as a form of documentation to indicate which ports the container will use. To make the application accessible from outside the container, we’d typically set up a reverse proxy like NGINX on the host. This reverse proxy would forward external traffic from a standard web port (such as 80 or 443) to the internal port 8080 of the container.
Finally, we use the CMD instruction to specify the command that will be executed when the Docker container starts, which in this case, is running our compiled Go application. The below should go into your Dockerfile:
FROM golang:latest
WORKDIR /app
ENV simap_poc_username=defaultUsername
ENV simap_poc_password=defaultPassword
COPY . .
RUN go mod download
RUN GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o c2server
EXPOSE 8080
CMD ["./c2server"]
SSH Keys
On your local machine, create a new SSH key with ssh-keygen -t rsa -b 4096 -C "github-actions-key". By default, this key will be added to the folder ~/.ssh/. Do not include a passphrase on this key. Print the content of ~/.ssh/id_rsa.pub (or whatever you called your public key) to the terminal and copy it to your clipboard, or keep note of it, as we will need it in a moment.
On your remote c2 / cloud server, you need to add this key to ~/.ssh/authorized_keys. It is likely you already have at least one key in there; if that is the case, what we add will go on the next line, as authorized_keys stores one key per line.
Open this file up with nano ~/.ssh/authorized_keys (or Vim if you are ‘that way inclined’) and paste your copied public key onto that new line. We are adding this key to authorized_keys so that GitHub Actions can authenticate to the server over SSH. Although it may not be necessary, I’d recommend restarting the SSH daemon with sudo systemctl restart sshd to prevent potential errors later.
Before exiting your server, make sure to install Docker at this point; you should be able to do this with sudo yum install docker or sudo apt install docker. You should also create a folder where the automation will send your code; I’d recommend creating one now at ~/staging.
Next, open your GitHub repo and, in the repo settings, find Secrets and Variables, then open the sub-menu Actions. In here, you will want to add a new secret called SSH_PRIVATE_KEY, and make its value the private key you generated with ssh-keygen earlier. Note, this is not the .pub file; if you used default settings, it will be the content of id_rsa.
In the same secrets page, set up a new secret called HOST; the value for this is the username@ip you use when SSH’ing into your remote host, for example flux@203.0.113.10. If you use AWS, this may look like ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com.
GitHub Actions
Finally, we can set up GitHub Actions to automate the deployment of your code to the remote server whenever you push to the main branch, including when merging a pull request.
GitHub Actions are configured through a file in .github/workflows/, where we’ll create main.yml. This file defines the actions to be taken in response to events in your repository, like pushing to a branch. In this case, we’re focusing on Continuous Deployment.
Our GitHub Action will do the following upon a push to the main branch (or ‘master’ if you use that): check out the repository, build the Docker image, and securely transfer it to the server using SCP. On the server, it will load the image into Docker, replace any running instance of the application, and clean up residual files.
Below is the main.yml for deploying to an AWS EC2 instance. Remember to adjust file paths and settings to match your environment:
NOTE, I have made some updates to improve how this is handled: there is now logic that removes previous containers to free up space, and the image name includes the GitHub SHA value to prevent Docker from caching an old image and running that instead of the updated one.
name: Deploy to server

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Build Docker Image
        run: |
          # build the image with --pull to always pull newer versions of the base image
          # tag the image with the SHA of the commit to ensure uniqueness
          docker build --pull -t c2server:${GITHUB_SHA} .

      - name: Save Docker Image
        run: |
          docker save c2server:${GITHUB_SHA} > c2server.tar

      - name: Compress Docker Image
        run: |
          # using gzip for compression
          tar -czvf c2server.tar.gz c2server.tar

      - name: Create SSH Key File
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa

      - name: Transfer Docker Image to server
        run: |
          scp -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa c2server.tar.gz ${{ secrets.HOST }}:/home/ec2-user/staging

      - name: Deploy on server
        env:
          GITHUB_SHA: ${{ github.sha }}
        run: |
          ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa ${{ secrets.HOST }} "
            cd /home/ec2-user/staging/
            tar -xzvf c2server.tar.gz
            # clean up any existing Docker containers
            docker stop c2server || true
            docker rm c2server || true
            # clean up any existing Docker images with the repository name c2server
            docker rmi \$(docker images -q c2server) || true
            # load the new Docker image
            docker load < ./c2server.tar
            echo 'Available images:'
            docker images
            echo \"Attempting to run image with SHA: $GITHUB_SHA\"
            # check if GITHUB_SHA is set and not empty
            if [ -z \"$GITHUB_SHA\" ]; then
              echo 'GITHUB_SHA is not set or empty. Cannot run the container without a valid tag.'
              exit 1
            fi
            # run the new container using the unique tag
            docker run -d --name c2server -p 8080:8080 c2server:\"$GITHUB_SHA\"
          "

      - name: Cleanup after deployment
        run: |
          ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa ${{ secrets.HOST }} '
            while [ -z "$(docker ps -q -f name=c2server)" ]; do sleep 1; done
            rm -f /home/ec2-user/staging/c2server.tar
            rm -f /home/ec2-user/staging/c2server.tar.gz
          '
The secrets SSH_PRIVATE_KEY and HOST, added earlier to your repo, are used securely in the workflow. GitHub manages these secrets, ensuring they’re not exposed. For more details on using secrets in GitHub Actions, you can refer to the documentation.
Now you can test whether it works by pushing to your main branch and looking in the Actions tab. If for some reason nothing appears in there, you might have to create your main.yml file through the GUI in GitHub: just copy and paste the content over, delete the local file, and pull the new one created by the GUI.
Taking it further
CI/CD is not just for web applications; you can include it in any type of application where deployment is involved, e.g. a second-stage implant you are serving on your c2.
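For example, if your pipeline drops freshly built implants into a directory on the team server, a small Go handler can serve whatever is currently there, so a push to main changes the payload your c2 delivers with no manual step (the /stage2 route and the drop directory are hypothetical names for illustration):

```go
package main

import (
	"log"
	"net/http"
)

// implantHandler serves second-stage payloads from dir, which the
// CD pipeline overwrites on each deployment.
func implantHandler(dir string) http.Handler {
	return http.StripPrefix("/stage2/", http.FileServer(http.Dir(dir)))
}

func main() {
	// /opt/c2/payloads is a hypothetical directory your pipeline
	// deploys built implants into
	http.Handle("/stage2/", implantHandler("/opt/c2/payloads"))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The same Dockerfile-plus-Actions pattern shown above then covers both the server and the artefacts it stages.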
To take this further, you can manage any number of remote servers through GitHub Actions if you consider your pipelines early enough in a project. Equally, you can use GitHub Actions to automate the provisioning of new C2 infrastructure, feeding the details of those live servers back into the repository’s Actions so that updates to the main repo are pushed out to them as well (something I aim to cover in the future).