Merged

Changes from all commits (61 commits)
6be538a
fix: update usage comment in kubectl.sh to reflect correct script name
The0mikkel Dec 21, 2025
ee876dd
fix: correct tfvars filename generation for test environment to inclu…
The0mikkel Dec 21, 2025
5d656af
Initial documentation
The0mikkel Dec 21, 2025
e167434
docs: clarify usage of environment flags in initialization commands
The0mikkel Dec 21, 2025
8dbb502
docs: update usage instructions in kubectl.sh to reflect sourcing
The0mikkel Dec 21, 2025
26f8525
Start on architecture section
The0mikkel Dec 21, 2025
ecb98bc
Reorganize structure
The0mikkel Dec 24, 2025
b2e3a88
feat: add loading of S3 backend credentials from automated.tfvars
The0mikkel Dec 26, 2025
49afd8b
refactor: update template.automated.tfvars with S3 backend credentials
The0mikkel Dec 26, 2025
b9cb418
refactor: add error handling for missing S3 backend credentials in au…
The0mikkel Dec 26, 2025
97f6ae6
refactor: correct typo in log message for tool availability check
The0mikkel Dec 26, 2025
0e5f22f
refactor: streamline loading of S3 backend credentials from automated…
The0mikkel Dec 26, 2025
f6b0434
fix: update placeholder check to allow GitHub URLs
The0mikkel Dec 26, 2025
bb08350
feat: add dedicated challenges node type
The0mikkel Dec 26, 2025
456c395
refactor: remove example environment file to enhance security
The0mikkel Dec 26, 2025
8673094
Feat/remove env file (#12)
The0mikkel Dec 26, 2025
c6a0f42
Continued work on documentation
The0mikkel Dec 26, 2025
3c04016
Continued work on documentation
The0mikkel Dec 26, 2025
10f42e1
Continued work on documentation
The0mikkel Dec 26, 2025
436b28d
Add generate-backend command
The0mikkel Dec 26, 2025
a7e2e4f
Continued work on documentation
The0mikkel Dec 26, 2025
ed74bcd
Continued work on documentation
The0mikkel Dec 26, 2025
2570a6d
Continued work on documentation
The0mikkel Dec 26, 2025
118b4e2
Continued work on documentation
The0mikkel Dec 26, 2025
168eacb
Restructure commands list for CLI Tool
The0mikkel Dec 26, 2025
bbc6ff0
Restructure of CLI tool wording
The0mikkel Dec 26, 2025
4b00c0b
Restructure the Commands section to better align with overall hirecki
The0mikkel Dec 26, 2025
cb3d4e8
Fix typos and improve clarity in README documentation
The0mikkel Dec 26, 2025
f12b1fe
Refactor formatting
The0mikkel Dec 26, 2025
7b0089f
Clarify restrictions on commercial use in README
The0mikkel Dec 26, 2025
d11d7d5
Add more guides to how-to-run
The0mikkel Dec 26, 2025
4b230e6
Correct formatting
The0mikkel Dec 27, 2025
b1530d2
Add CLI bypass guide
The0mikkel Dec 27, 2025
9b33cc6
Add initial architecture diagrams
The0mikkel Dec 27, 2025
84b83c6
Update architecture overview
The0mikkel Dec 27, 2025
cf199fb
Update cluster configuration documentation for improved clarity and r…
The0mikkel Dec 27, 2025
aedee8c
Add networking diagrams
The0mikkel Dec 27, 2025
0306576
Change svg diagrams to png
The0mikkel Dec 27, 2025
ce332c7
Revert to svg images
The0mikkel Dec 27, 2025
1a18380
Update architecture diagrams to correct colors
The0mikkel Dec 27, 2025
d8b66a9
Update cluster network diagram to include Cloudflare proxy
The0mikkel Dec 27, 2025
4211682
Add generate-backend to quickstart guide
The0mikkel Dec 27, 2025
1d6755b
Correct formatting
The0mikkel Dec 27, 2025
8767675
Correct heading position
The0mikkel Dec 27, 2025
1ee0a94
Correct formatting
The0mikkel Dec 27, 2025
baab6ae
Started on cluster architecture overview
The0mikkel Dec 27, 2025
54e6e24
Add ops, platform and challenges architecture overview
The0mikkel Dec 27, 2025
50ec6a6
Update formatting of headers:
The0mikkel Dec 27, 2025
2cb5a1a
Clarify challenge instance scheduling requirements in documentation
The0mikkel Dec 27, 2025
5b145a8
Add overview of challenge deployment system and its components
The0mikkel Dec 27, 2025
6064496
Update challenge deployment architecture section
The0mikkel Dec 27, 2025
3febb1d
Add cluster networking documentation
The0mikkel Dec 27, 2025
14a2a33
Update getting help documentation
The0mikkel Dec 27, 2025
51befe3
Correct formatting
The0mikkel Dec 27, 2025
c5c788e
Update cluster network architecture to show traefik as being scaled
The0mikkel Dec 27, 2025
ad46afa
Add challenge networking documentation
The0mikkel Dec 27, 2025
e573c63
Clarify TCP endpoint handling and custom port limitations in document…
The0mikkel Dec 27, 2025
58c4908
Update grammar
The0mikkel Dec 27, 2025
ac50d96
Fix punctuation in networking section of documentation
The0mikkel Dec 27, 2025
2debb38
Fix grammar and punctuation in README documentation
The0mikkel Dec 27, 2025
afb665c
Fix spelling errors in resource descriptions in tfvars templates
The0mikkel Dec 27, 2025
2 changes: 0 additions & 2 deletions .env.example

This file was deleted.

957 changes: 952 additions & 5 deletions README.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion cluster/kube.tf
@@ -227,7 +227,7 @@ module "kube-hetzner" {
},
{
name = "challs-1",
server_type = var.scale_type,
server_type = var.challs_type,
location = var.region_1,
labels = [
"ressource-type=node",
25 changes: 17 additions & 8 deletions cluster/tfvars/template.tfvars
@@ -35,35 +35,44 @@ cluster_dns_ctf = "<dns-ctf-domain>" # The domain name to use for
# Cluster configuration
# ------------------------
# WARNING: Changing region while the cluster is running will cause all servers in the group to be destroyed and recreated.
# For optimal performance, it is recommended to use the same region for all servers.
# Region 1 is used for scale nodes and loadbalancer.
# Possible values: fsn1, hel1, nbg1
region_1 = "fsn1" # Region for subgroup 1
region_2 = "fsn1" # Region for subgroup 2
region_3 = "fsn1" # Region for subgroup 3
# For optimal performance, it is recommended to use the same region for all servers. If you want redundancy, use different regions for each group.
# Region 1 is used for challs nodes, scale nodes and loadbalancer.
# Possible values: fsn1, hel1, nbg1, ash, hil, sin - See https://docs.hetzner.com/cloud/general/locations/
region_1 = "nbg1" # Region for group 1, challs nodes, scale nodes and loadbalancer
region_2 = "nbg1" # Region for group 2
region_3 = "nbg1" # Region for group 3
network_zone = "eu-central" # Hetzner network zone. Possible values: "eu-central", "us-east", "us-west", "ap-southeast". Regions must be within the network zone.

# Servers
# Server definitions are split into three groups: Control Plane, Agents, and Scale. Control plane and agents has three groups each, and scale has one group.
# Server definitions are split into four groups: Control Plane, Agents, Challs and Scale. Control plane and agents have three groups each, while challs and scale have one group each.
# Each group can be scaled and defined independently, to allow for smooth transitions between different server types and sizes.
# Control planes are the servers that run the Kubernetes control plane, and are responsible for managing the cluster.
# Agents are the servers that run the workloads, and scale is used to scale the cluster up or down dynamically.
# Scale is automatically scaled agent nodes, which is handled by the cluster autoscaler. It is optional, and can be used to scale the cluster up or down dynamically.
# Challs are the servers that run the CTF challenges.
# Scale is automatically scaled agent nodes, which is handled by the cluster autoscaler. It is optional, and can be used to scale the cluster up or down dynamically if there are not enough resources in the cluster.
# Challs and scale nodes are placed in region_1, and are tainted to make normal resources prefer agent nodes, but allow scheduling on challs and scale nodes if needed.

# Server types. See https://www.hetzner.com/cloud
# Control plane nodes - Nodes that run the Kubernetes control plane components.
control_plane_type_1 = "cx23" # Control plane group 1
control_plane_type_2 = "cx23" # Control plane group 2
control_plane_type_3 = "cx23" # Control plane group 3
# Agent nodes - Nodes that run general workloads, excluding CTF challenges.
agent_type_1 = "cx33" # Agent group 1
agent_type_2 = "cx33" # Agent group 2
agent_type_3 = "cx33" # Agent group 3
# Challenge nodes - Nodes dedicated to running CTF challenges.
challs_type = "cx33" # CTF challenge nodes
# Scale nodes - Nodes that are automatically scaled by the cluster autoscaler. These nodes are used to scale the cluster up or down dynamically.
scale_type = "cx33" # Scale group

# Server count
# Control plane nodes - Nodes that run the Kubernetes control plane components.
# Minimum of 1 control plane across all groups. 1 in each group is recommended for HA.
control_plane_count_1 = 1 # Number of control plane nodes in group 1
control_plane_count_2 = 1 # Number of control plane nodes in group 2
control_plane_count_3 = 1 # Number of control plane nodes in group 3
# Agent nodes - Nodes that run general workloads, excluding CTF challenges.
# Minimum of 1 agent across all groups. 1 in each group is recommended for HA.
agent_count_1 = 1 # Number of agent nodes in group 1
agent_count_2 = 1 # Number of agent nodes in group 2
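The template above relies on angle-bracket placeholders such as `<dns-ctf-domain>`, and the CLI refuses to run until they are replaced. Below is a minimal sketch of that kind of check, mirroring the `check_placeholders` logic shown in the ctfp.py diff further down; the tiny line-based parser, the default path and the function names are illustrative assumptions, not the project's actual loader.

```python
import re
import sys


def load_simple_tfvars(path):
    """Tiny parser for flat `key = "value"` tfvars lines (illustration only)."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop trailing comments
            match = re.match(r'^(\w+)\s*=\s*"?(.*?)"?\s*$', line)
            if match:
                values[match.group(1)] = match.group(2)
    return values


def is_placeholder(value):
    # Mirrors the check in ctfp.py: plain "<...>" placeholders, or GitHub URLs
    # that still contain an unfilled "<...>" segment.
    return isinstance(value, str) and (
        (value.startswith("<") or value.startswith("https://github.com/<"))
        and value.endswith(">")
    )


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "cluster/tfvars/template.tfvars"
    data = load_simple_tfvars(path)
    unfilled = [key for key, value in data.items() if is_placeholder(value)]
    if unfilled:
        print("Placeholders still present:", ", ".join(unfilled))
        sys.exit(1)
    print("tfvars looks filled out")
```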
5 changes: 5 additions & 0 deletions cluster/variables.tf
@@ -138,6 +138,11 @@ variable "agent_type_3" {
default = "cx32"
}

variable "challs_type" {
type = string
description = "CTF challenge nodes server type"
default = "cx32"
}
variable "scale_type" {
type = string
description = "Scale group server type"
39 changes: 29 additions & 10 deletions ctfp.py
@@ -701,12 +701,8 @@ def get_filename_tfvars(environment="test"):
:param environment: The environment name (test, dev, prod)
:return: The filename for the tfvars file
'''

prefix = ""
if environment != "test":
prefix = f"{environment}."

return f"automated.{prefix}tfvars"
return f"automated.{environment}.tfvars"

@staticmethod
def load_tfvars(file_path: str):
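As a quick illustration of the filename change above (an editor's sketch, not part of the diff): every environment now gets an explicit suffix, so the test environment resolves to automated.test.tfvars instead of the bare automated.tfvars produced by the old prefix logic. The environment names come from the docstring above.

```python
def get_filename_tfvars(environment="test"):
    # After this change every environment gets an explicit suffix.
    return f"automated.{environment}.tfvars"


for env in ("test", "dev", "prod"):
    print(get_filename_tfvars(env))
# automated.test.tfvars
# automated.dev.tfvars
# automated.prod.tfvars
```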
@@ -895,6 +891,9 @@ def init_terraform(self, path, components: str = ""):
try:
# Check if tfvars file exists and is valid
self.check_values()

# Load backend connection credentials
self.load_backend_credentials()

# Check if backend config exists
if not TFBackend.backend_exists(components):
@@ -1032,7 +1031,7 @@ def check_values(self):

# Check if fields include "<" or ">"
def check_placeholders(value):
if isinstance(value, str) and value.startswith("<") and value.endswith(">"):
if isinstance(value, str) and (value.startswith("<") or value.startswith("https://github.com/<")) and value.endswith(">"):
return True
elif isinstance(value, dict):
for v in value.values():
@@ -1050,6 +1049,25 @@ def check_placeholders(value):

Logger.info(f"{self.get_filename_tfvars()} is filled out correctly")


def load_backend_credentials(self):
'''
Load S3 backend credentials from automated.tfvars, to set Terraform S3 connection credentials
'''

# Load tfvars file
tfvars_data = TFVARS.safe_load_tfvars(self.get_path_tfvars())

# Set environment variables for S3 backend
os.environ["AWS_ACCESS_KEY_ID"] = tfvars_data.get("terraform_backend_s3_access_key", "")
os.environ["AWS_SECRET_ACCESS_KEY"] = tfvars_data.get("terraform_backend_s3_secret_key", "")

if os.environ["AWS_ACCESS_KEY_ID"] == "" or os.environ["AWS_SECRET_ACCESS_KEY"] == "":
Logger.error("S3 backend credentials not found in automated.tfvars. Please fill out terraform_backend_s3_access_key and terraform_backend_s3_secret_key as they are required to run the Terraform components.")
exit(1)

Logger.info(f"S3 backend credentials loaded")

def cluster_deploy(self):
Logger.info("Deploying the cluster")

@@ -1265,10 +1283,6 @@ def challenges_destroy(self):
class CLI:
def run(self):
Logger.info("Starting CTF-Pilot CLI")
Logger.info("Checking availability of requried tools")
self.platform_check()
self.tool_check()
Logger.success("Required Tools are available")

args = Args()
if args.parser is None:
@@ -1298,6 +1312,11 @@ def run(self):
args.print_help()
exit(1)

Logger.info("Checking availability of required tools")
self.platform_check()
self.tool_check()
Logger.success("Required Tools are available")

# Run the subcommand
try:
namespace.func(namespace)
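Taken together, the init_terraform and load_backend_credentials changes mean the CLI now exports the S3 state credentials from the automated tfvars file before invoking Terraform, since Terraform's S3 backend reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment. A condensed sketch of that flow is below; the helper name, the sample values and the working directory are assumptions for illustration.

```python
import os
import subprocess
import sys


def export_backend_credentials(tfvars: dict) -> None:
    """Export S3 backend credentials so `terraform init` can reach the remote state."""
    access_key = tfvars.get("terraform_backend_s3_access_key", "")
    secret_key = tfvars.get("terraform_backend_s3_secret_key", "")
    if not access_key or not secret_key:
        print("S3 backend credentials missing from the automated tfvars file", file=sys.stderr)
        sys.exit(1)
    os.environ["AWS_ACCESS_KEY_ID"] = access_key
    os.environ["AWS_SECRET_ACCESS_KEY"] = secret_key


# Example usage; the values would normally come from automated.<environment>.tfvars.
export_backend_credentials({
    "terraform_backend_s3_access_key": "example-access-key",   # placeholder value
    "terraform_backend_s3_secret_key": "example-secret-key",   # placeholder value
})
subprocess.run(["terraform", "init"], check=True, cwd="cluster")  # assumed working directory
```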
166 changes: 166 additions & 0 deletions docs/attachments/architecture/challenge-deployment.drawio

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions docs/attachments/architecture/challenge-deployment.svg