saarCTF infrastructure | Attack-defense CTF server setup scripts
See also: Hetzner Cloud Playbook.
This repository contains setup scripts for all servers required to host an attack-defense CTF with the saarCTF framework.
It consists of three server types: controller, vpn, and checker.
First, you need a git configuration repository and a local `config.json` for the build process.
Second, you need a Packer configuration to create the images.
See Configuration for details on both.
To prepare the server images (for libvirt), run from the root directory of this repo:
packer init basis
packer build -var-file global-variables.pkrvars.hcl basis
packer build -var-file global-variables.pkrvars.hcl controller
packer build -var-file global-variables.pkrvars.hcl vpn
packer build -var-file global-variables.pkrvars.hcl checker
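If you rebuild images regularly, the same builds can be scripted. A minimal sketch, assuming the directory names above and that `basis` has to be built before the images derived from it:

```bash
# validate, then build every image in order (basis first)
for dir in basis controller vpn checker; do
  packer validate -var-file global-variables.pkrvars.hcl "$dir" || exit 1
  packer build    -var-file global-variables.pkrvars.hcl "$dir" || exit 1
done
```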
To create and launch the VMs (in libvirt), see `libvirt-test-setup/README.md`.
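For a quick ad-hoc test outside of that setup, importing a built image into libvirt can look roughly like the following sketch (VM name, image path, resources, and network are assumptions; `libvirt-test-setup` remains the supported way):

```bash
# minimal sketch: import a built controller image into libvirt (all values are examples)
virt-install \
  --name saarctf-controller \
  --memory 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/saarctf-controller.qcow2,format=qcow2 \
  --import \
  --network network=default \
  --os-variant generic \
  --noautoconsole
```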
By default, these IPs are used:
- 10.32.250.1 (controller)
- 10.32.250.2 (vpn)
- 10.32.250.3+ (checkers)
- 10.32.N.0/24 (network of team N, e.g. 10.32.1.0/24 for team 1)
- 10.32.0.0/24
Controller

Interesting commands: `update-server` (pull updates and rebuild things).
Checker scripts belong on this machine (`/home/saarctf/checkers`, owned by the user `saarctf`).
The CTF timer needs a manual start (on exactly one machine), and new hosts must be added to Prometheus manually for monitoring (`/root/prometheus-add-server.sh <ip>`).
A bunch of interesting scripts are in `/root`; check them out.
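For example, to register the other default hosts with the Prometheus instance on this machine (the IP-to-role mapping follows the defaults listed above; adjust to your deployment):

```bash
/root/prometheus-add-server.sh 10.32.250.2   # vpn gateway
/root/prometheus-add-server.sh 10.32.250.3   # first checker
```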
This image can be reconfigured to fill almost any role by using the scripts in `/root` to disable individual components. We have in mind backup servers (databases as slaves replicating the original databases), a dedicated management/monitoring server (db as slave, monitoring disabled on the original host), or a dedicated scoreboard/submitter server (db off, monitoring off, `systemctl start scoreboard`).
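The scripts in `/root` are the intended way to toggle components; purely as an illustration, the "db off, monitoring off" scoreboard/submitter variant corresponds to roughly the following (the database unit names are assumed stock Debian names, not taken from this repo):

```bash
# rough sketch only; prefer the helper scripts in /root
systemctl disable --now postgresql redis-server rabbitmq-server   # "db off" (assumed unit names)
systemctl disable --now prometheus grafana                        # "monitoring off"
systemctl enable --now scoreboard                                 # serve the scoreboard from here
```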
Databases: PostgreSQL (:5432), Redis (:6379), RabbitMQ.
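A quick way to confirm all three are up (the client tools and access rights are assumptions; credentials live in the saarCTF configuration):

```bash
pg_isready -h 127.0.0.1 -p 5432                      # PostgreSQL accepting connections?
redis-cli -h 127.0.0.1 -p 6379 ping                  # Redis should answer PONG
rabbitmqctl status >/dev/null && echo "RabbitMQ up"  # run as root
```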
Control server (web control panel): Flask app running under uwsgi as user `saarctf`, with an nginx frontend.
- URL: http://<ip>:8080/
- Restart: `systemctl restart uwsgi`
- Logs: `/var/log/uwsgi/app/controlserver.log`
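If the panel does not respond, a typical first check is to restart the app and look at the end of its log (the IP below is the default controller address from above; adjust to yours):

```bash
systemctl restart uwsgi
sleep 2
curl -sI http://10.32.250.1:8080/ | head -n 1    # expect an HTTP status line
tail -n 30 /var/log/uwsgi/app/controlserver.log
```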
Flower (Celery monitoring): Tornado app (systemd), nginx frontend.
- URL: http://<ip>:8081/ / http://127.0.0.1:20000
- Restart: `systemctl restart flower`
- Logs: `/var/log/flower.log`
Coder: runs in a Docker container as user `saarctf`.
- URL: http://<ip>:8082/
- Restart: `docker restart coder-server`
- Logs: `docker logs coder-server`
Scoreboard: static folder with files, served by nginx.
- URL: http://<ip>/
- Restart: `systemctl restart nginx`
- Logs: `/var/log/nginx/access.log` and `/var/log/nginx/error.log`
- Config: `/etc/nginx/sites-available/scoreboard`

The scoreboard is created automatically if the CTF timer is running on this machine. If it is not, use the scoreboard daemon instead (`systemctl start scoreboard`), but never in parallel with the CTF timer.
CTF timer (needs manual start): triggers time-based events (new round, scoreboard rebuild, start/stop of the CTF). Exactly one instance on exactly one server must be running at any time.
- Start: `systemctl start ctftimer`
- Restart: `systemctl restart ctftimer`
- Logs: `/var/log/ctftimer.log` (interesting messages are usually also shown on the control panel dashboard)
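Because only one timer may run CTF-wide, it is worth checking the unit state before starting it (and making sure no backup controller is running it as well):

```bash
systemctl is-active ctftimer    # prints active / inactive / failed
systemctl start ctftimer
tail -n 5 /var/log/ctftimer.log # sanity check: the timer should start logging
```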
Submission server: C++ application that receives flags from teams.
- Access: `nc <ip> 31337`
- Restart: `systemctl restart submission-server` / `update-server`
- Logs: `/var/log/submission-server.log`
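A manual smoke test from inside the game network might look like this (the IP and flag string are placeholders; the exact flag format and response messages depend on your game configuration):

```bash
echo 'SAAR{dummy-flag}' | nc -w 3 10.32.250.1 31337
```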
Prometheus: monitors itself, Grafana, and localhost by default; other servers should be added manually using `/root/prometheus-add-server.sh <ip>`. Results can be seen in Grafana.
- URL: http://localhost:9090/
- Restart: `systemctl restart prometheus`
- Logs: `journalctl -u prometheus`
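To verify that newly added servers are actually being scraped, the Prometheus HTTP API can be queried directly on the host:

```bash
# list scrape targets and their health
curl -s http://localhost:9090/api/v1/targets | python3 -m json.tool | grep -E '"(scrapeUrl|health)"'
```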
Grafana: configured to display stats from the database and Prometheus.
- URL: http://<ip>:3000/
- Restart: `systemctl restart grafana`
- Logs: `/var/log/grafana/grafana.log`
- Config: `/etc/grafana/grafana.ini`
tcpdump needs manual start / stop
VPN

Runs many OpenVPN instances and the network management. The OpenVPN configuration files should be (re)built at least once on this machine.
OpenVPN: three servers per team, managed by systemd. The services are:
- `vpn@teamXYZ` (tunX) for the single, self-hosted VPN (whole /24 in one connection)
- `vpn2@teamXYZ-cloud` (tun100X) for the cloud-hosted team-member endpoint (upper /25, multiple connections possible)
- `vpn@teamXYZ-vulnbox` (tun200X) for the single, cloud-hosted vulnbox connection (/30 for the cloud box, config not given to players)

Activation rules:
- `vpn@teamX-vulnbox` is always active (players can't mess too much with it, except by booting a vulnbox).
- If `vpn@teamX-vulnbox` is connected, the team-hosted VPN `vpn@teamX` is down (avoiding conflicts with team members using the old config).
- If `vpn@teamX` is connected, the cloud-hosted player VPN `vpn2@teamX-cloud` is down (avoiding both configs being used at the same time).

- Ports: <ip>:10000+X / <ip>:12000+X / <ip>:14000+X (udp)
- Interfaces: `tunX` / `tun100X` / `tun200X`
- Start: `systemctl start vpn@teamX` / `systemctl start vpn`
- Restart: `systemctl restart vpn@teamX` / `systemctl restart vpn vpn@*`
- Stop: `systemctl stop vpn@teamX` / `systemctl stop vpn vpn@*`
- Logs: `/var/log/openvpn/output-teamX.log` and `/var/log/openvpn/openvpn-status-teamX.log`
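To get an overview of the per-team units and tunnels (team 7 is just an example; use whatever team-name format your deployment generates):

```bash
systemctl list-units --all 'vpn@*' 'vpn2@*'                    # state of every per-team unit
systemctl status vpn@team7 vpn2@team7-cloud vpn@team7-vulnbox  # one team's three services
ip -brief link show | grep '^tun'                              # tun interfaces that exist right now
```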
Traffic stats: writes a traffic summary to the database.
- Restart: `systemctl restart trafficstats`
- Logs: `/var/log/trafficstats.log`
Firewall: based on iptables. Edit `/opt/gameserver/vpn/iptables.sh` if you need to change something permanently. On restart, the `INPUT` and `FORWARD` chains are replaced, and rules for NAT and TCP timestamp removal are inserted.
- Restart: `systemctl restart firewall`
- Logs: `/var/log/firewall.log`
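To inspect what the firewall service generated (read-only; permanent changes belong in `/opt/gameserver/vpn/iptables.sh`):

```bash
iptables -L INPUT   -n -v --line-numbers | head -n 20
iptables -L FORWARD -n -v --line-numbers | head -n 20
iptables -t nat -L -n -v | head -n 20      # NAT rules mentioned above
```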
manage-iptables: inserts rules into iptables that open/close the VPN or ban single teams.
- Restart: `systemctl restart manage-iptables`
- Logs: `/var/log/firewall.log`
tcpdump (needs manual start): captures game traffic (between gameservers and teams) and team traffic (between teams).
- Start: `systemctl start tcpdump-game tcpdump-team`
- Restart: `systemctl restart tcpdump-game tcpdump-team`
- Logs: `/var/log/tcpdump.log`
VPN board: website that displays the connection status of the VPN connections and tests connectivity using ping. The site is served by nginx and includes a background worker (service `vpnboard-celery`).
- Restart: `systemctl restart vpnboard`
- Logs: `/var/log/vpnboard.log`
Checker

Runs only a Celery worker; no checker scripts need to be placed on this machine.
Needs a manual start, because each Celery worker needs a unique name: after creation, run `celery-configure <SERVER-NUMBER>`. From then on, the Celery worker will start on boot.
- Config: `/etc/celery.conf`
- Restart: `systemctl restart celery`
- Logs: `/var/log/celery/XYZ.log`
Manual worker invocation: `screen -R celery`, then `celery-run <unique-hostname> <number-of-processes>`.
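Putting it together, bringing up a fresh checker VM via systemd typically looks like this (the server number 4 is an example; it must be unique across all checker machines):

```bash
celery-configure 4            # set the unique worker name; see /etc/celery.conf
systemctl restart celery
tail -n 20 /var/log/celery/*.log
```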