Self hosting Capture-the-Flag with OWASP Juice Shop, MultiJuicer and CTFd on AWS EC2

Craig Gray

DevOps Engineer

We hosted a fully remote CTF event for our Engineering team using OWASP Juice Shop, MultiJuicer and CTFd. The event was a fun way to raise awareness of offensive security across our team.

Amazon EC2 is one of the eight AWS services that can be used for penetration testing without prior approval, so we hosted our infrastructure on EC2 and Amazon EKS with EC2 node groups.

Overview

OWASP Juice Shop is a deliberately vulnerable web application used in security training and awareness demos. Juice Shop is not designed for multiple users; MultiJuicer extends it, allowing participants to sign up and launch their own Juice Shop instances. MultiJuicer runs on Kubernetes (K8s).

CTFd is a capture the flag (CTF) platform which includes a challenge list, flag submission and scoreboard.

juice-shop-ctf exports challenges from Juice Shop for import into CTF platforms including CTFd.

Stringing these tools together on AWS EC2 and Amazon Elastic Kubernetes Service (EKS), we operated a system which enabled engineers to sign themselves up for a Juice Shop instance, find flags and submit them to CTFd to gain points.

System Architecture

Setting Up

There is a comprehensive CTF hosting guide in the Juice Shop documentation, which details the required setup for a CTF event.

What follows is how we set up our event.

Juice Shop and MultiJuicer

The setup of MultiJuicer on EKS has been adapted from the MultiJuicer documentation. These steps put Juice Shop into CTF mode; use the official documentation for further clarification on the following steps.

1. Deploying the EKS Cluster with eksctl

Create an EKS cluster with a node group of two t3.medium EC2 instances.

eksctl create cluster \
  --name multi-juicer \
  --version 1.14 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 4 \
  --node-ami auto

2. Deploying MultiJuicer

Create a custom values.yaml configuration to put Juice Shop in CTF mode. Download values.yaml and update it with the keys below. Add a random string for ctfKey.

juiceShop:
  maxInstances: -1
  ctfKey: <ADD_RANDOM_STRING_HERE>
  config: |
    challenges:
      safetyOverride: true
    ctf:
      showFlagsInNotifications: true
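The ctfKey value must match the key given to juice-shop-ctf later, so it is worth generating and noting one up front. A quick way to produce a suitably random string (any random-string generator will do):

```shell
# Generate a 32-character random string for juiceShop.ctfKey.
# 24 random bytes base64-encode to exactly 32 characters with no padding.
CTF_KEY="$(head -c 24 /dev/urandom | base64 | tr -d '=\n')"
echo "${CTF_KEY}"
```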

Additional configuration options for CTF mode are available in ctf.yaml.

Use helm to deploy the custom values.yaml.

helm repo add multi-juicer https://iteratec.github.io/multi-juicer/
helm install -f values.yaml multi-juicer multi-juicer/multi-juicer
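Before wiring up the ingress, it is worth confirming that the MultiJuicer pods come up healthy. A helper along these lines can poll for readiness (a sketch; it assumes kubectl's current context points at the new cluster and that MultiJuicer was installed into the current namespace):

```shell
# Poll until every pod in the current namespace reports all containers Ready,
# giving up after roughly two minutes.
wait_for_pods() {
  for _ in $(seq 1 24); do
    total="$(kubectl get pods --no-headers 2>/dev/null | wc -l)"
    # The READY column looks like "1/1"; count pods where the two numbers differ.
    not_ready="$(kubectl get pods --no-headers 2>/dev/null \
      | awk '{ split($2, a, "/"); if (a[1] != a[2]) print $1 }' | wc -l)"
    [ "${total}" -gt 0 ] && [ "${not_ready}" -eq 0 ] && return 0
    sleep 5
  done
  kubectl get pods >&2
  return 1
}
```

Running `wait_for_pods && echo "MultiJuicer is up"` after the helm install gives a clear go/no-go signal.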

3. Add an ingress to expose MultiJuicer

Create an IAM policy giving the EKS cluster access to manage the AWS ALB.

# Download the policy document first; AWS CLI v2 no longer fetches URL parameters.
curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/iam-policy.json
aws iam create-policy \
  --policy-name ALBIngressControllerIAMPolicy \
  --policy-document file://iam-policy.json

Attach the policy to the cluster by downloading cluster-iam.yaml and updating region and attachPolicyARNs with the region of the EKS deployment and the policy ARN from the previous command.

metadata:
  region: <YOUR_REGION>
iam:
  attachPolicyARNs:
    - "<INSERT_IAM_POLICY_ARN>"

Create the ingress controller.

eksctl utils associate-iam-oidc-provider --config-file=cluster-iam.yaml --approve
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml
eksctl create iamserviceaccount --config-file=cluster-iam.yaml --approve --override-existing-serviceaccounts
kubectl apply -f https://raw.githubusercontent.com/iteratec/multi-juicer/master/guides/aws/alb-ingress-controller.yaml

4. Optional: Apply External DNS to attach a CNAME to the load balancer

To put a custom domain name on the load balancer, apply External DNS.

5. Optional: Apply the Cluster Autoscaler to scale out the EC2 instances

To autoscale the cluster nodes, apply the Cluster Autoscaler, or alternatively scale manually with eksctl.

eksctl scale nodegroup --cluster=multi-juicer --nodes=6 --nodes-max=6 --name=standard-workers

Scoreboard with CTFd

CTFd offers a deployment option with docker-compose, which is the simplest way to self-host the scoreboard. The following CloudFormation template launches the scoreboard on port 80 in an isolated VPC; EC2 instance user data is used to start the scoreboard.

---
AWSTemplateFormatVersion: "2010-09-09"
Description: "Launch an EC2 instance with CTFd"
Parameters:
  AMI:
    Type: "AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>"
    Default: "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
Resources:
  # VPC
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 192.168.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  Subnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 192.168.0.0/24
      MapPublicIpOnLaunch: true
  # Gateway
  IGW:
    Type: AWS::EC2::InternetGateway
  IGWAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref IGW
      VpcId: !Ref VPC
  # Routing
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
  DefaultRoute:
    Type: AWS::EC2::Route
    # Ensure the internet gateway is attached before the route is created.
    DependsOn: IGWAttachment
    Properties:
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref IGW
      RouteTableId: !Ref RouteTable
  RTA:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref Subnet
      RouteTableId: !Ref RouteTable
  # Instance
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.small
      ImageId: !Ref AMI
      SubnetId: !Ref Subnet
      SecurityGroupIds:
        - !Ref SG
      BlockDeviceMappings:
        - DeviceName: "/dev/xvda"
          Ebs:
            DeleteOnTermination: true
            VolumeType: "gp2"
            VolumeSize: 30
      UserData:
        Fn::Base64: |
          #!/bin/bash -xe
          yum -y install docker git
          curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/bin/docker-compose
          chmod +x /usr/bin/docker-compose
          systemctl enable docker
          systemctl start docker
          git clone https://github.com/CTFd/CTFd.git
          cd CTFd
          docker-compose up -d
          # Redirect incoming traffic on port 80 to CTFd on port 8000.
          # PREROUTING (rather than OUTPUT) is needed so traffic from outside
          # the instance is caught by the redirect.
          iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000
  # SG
  SG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Access
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

1. Deploy the scoreboard

Save the CloudFormation template as scoreboard.yaml.

aws cloudformation deploy \
  --template-file scoreboard.yaml \
  --stack-name scoreboard
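The template deliberately has no Outputs section, so the scoreboard's address needs to be looked up after the deploy. One way to find it (a sketch; it assumes the AWS CLI is configured with credentials and the same region as the stack):

```shell
# Print the public IP of the CTFd instance created by the "scoreboard" stack:
# resolve the stack's Instance resource to an instance ID, then query its IP.
scoreboard_ip() {
  instance_id="$(aws cloudformation describe-stack-resources \
    --stack-name scoreboard \
    --query "StackResources[?LogicalResourceId=='Instance'].PhysicalResourceId" \
    --output text)"
  aws ec2 describe-instances \
    --instance-ids "${instance_id}" \
    --query "Reservations[0].Instances[0].PublicIpAddress" \
    --output text
}
```

Browsing to http://$(scoreboard_ip) should then show the CTFd first-run wizard.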

Import Challenges to the Scoreboard

1. Create a Juice Shop

Access the MultiJuicer UI to create a Juice Shop instance. Use kubectl get ingress to find the MultiJuicer UI endpoint.

2. Port forward to the instance

This port forward will be used to export challenges. Keep kubectl port-forward running or put it into the background with &.

POD=$(kubectl get pods -o name | grep juiceshop | head -1)
kubectl port-forward ${POD} 8080:3000
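With the port-forward in place, a quick check confirms that the Juice Shop answers locally before exporting challenges (a sketch; it simply expects an HTTP 200 from the forwarded port):

```shell
# Succeed once http://localhost:8080/ answers with HTTP 200 through the
# port-forward; otherwise print the status code and fail.
check_juice_shop() {
  code="$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/)"
  if [ "${code}" = "200" ]; then
    echo "Juice Shop reachable"
  else
    echo "unexpected HTTP status: ${code}" >&2
    return 1
  fi
}
```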

3. Create a config.yml file for juice-shop-ctf to export challenges.

The CTF key should match the key set previously.

ctfFramework: CTFd
juiceShopUrl: http://localhost:8080
ctfKey: <ADD_KEY_HERE>
insertHints: none
insertHintUrls: none

4. Export the challenges

juice-shop-ctf -c config.yml

5. Import the challenges into CTFd

  • Browse to the CTFd UI and create an Admin user and a name for the CTF.
  • Import the challenges under Admin > Config > Backup.
  • Recreate the CTF via the UI and ensure challenges show up.

The event

The infrastructure was deployed just before 9am; it took around thirty minutes to provision the full stack. At 9am the unofficial start of the event was announced. To be conscious of our engineers' time, we left it to each individual to decide when to get started.

Kick Off!

The official start of the remote event was scheduled for 1pm, however it didn't take engineers long to jump in and start working through the challenges.

Coffee making an early start

At 1pm, we jumped on video conferencing and went through the set up instructions, to get those who didn't start in the morning set up. Leaving the conference call open, engineers could discuss exploits they were working on and get ideas and help from the others. SQL injection (SQLi), Cross Site Scripting (XSS) and Remote Code Execution (RCE) came up as topics of discussion.

OWASP Juice Shop is a learning tool and solutions are available online. Penalties were swift for those who "accidentally" googled the answer.

Awarding those who Google answers

Heading into the final stages of the event, competition was fierce and eventually a winner emerged.

Dan goes into orbit 🚀

In the final hour we regrouped for some knowledge sharing. Some of the more difficult challenges were selected, and engineers presented the novel ways in which they had performed the exploits.

Tearing it down

Deleting the Scoreboard

aws cloudformation delete-stack --stack-name scoreboard
aws cloudformation wait stack-delete-complete --stack-name scoreboard

Deleting MultiJuicer

helm delete multi-juicer
kubectl delete -f https://raw.githubusercontent.com/iteratec/multi-juicer/master/guides/aws/alb-ingress-controller.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml
kubectl delete -f external-dns.yaml
eksctl delete cluster multi-juicer

Learning

  • Four hours was a good amount of time for the official event to run.
  • Engineers proved capable of addressing a wide variety of issues in a relatively short amount of time.
  • There was some lag in the system, which seemed to come from the Juice Balancer. Consider increasing the CPU and memory on the Juice Balancer and Juice Shop pods.
  • The focus should be on understanding the issues, not on accruing points, precious points.
  • Security is fun.