This guide outlines the necessary requirements and steps for setting up a development environment for CTFILT.
If you'd rather skip the explanations, here is the full list of commands to run:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Arch
sudo pacman -S caddy nodejs pnpm docker docker-compose docker-buildx

# Fedora
sudo dnf install -y caddy nodejs tailscale dnf-plugins-core
wget -qO- https://get.pnpm.io/install.sh | sh -
# Installing Docker
sudo dnf-3 config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Clone the repository
git clone https://git.agin.rocks/CTFILT-2/ctfilt.git
cd ctfilt
# Switch to the development branch
git checkout dev
# Database
sudo docker compose -f compose.dev.yaml up -d
# Run the server
cd server
cargo run
cd ../web
pnpm install
sudo tailscale up --login-server=https://vpn.agin.rocks
cd ../
caddy run
- Rust: Latest stable version (use rustup for installation)
- Docker: Latest stable version
- Docker Compose: Latest stable version
- Node.js: v20 or later (LTS recommended)
- Caddy: v2 or later
- Tailscale: Latest version for VPN access to the cluster
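After installing, you can sanity-check that the expected tools are on your PATH with a short shell loop. This only checks that the commands exist, not that they meet the version requirements above:

```shell
# Report which of the required tools are missing from PATH.
# Presence check only; versions are not verified here.
missing=""
for tool in rustc cargo docker node pnpm caddy tailscale; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
    echo "all prerequisites found"
else
    echo "missing:$missing"
fi
```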
# Clone the repository
git clone <repository-url>
cd ctfilt
# Switch to the development branch
git checkout dev

Always work on the development branch unless instructed otherwise. The master branch is used for stable releases.
The project requires MongoDB and Valkey (Redis-compatible database) for local development. These are configured in the compose.dev.yaml file.
Start these services using:
docker compose -f compose.dev.yaml up -d

This will start:
- MongoDB (accessible at localhost:27017)
- Valkey (accessible at localhost:6379)
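As a quick check that both services are actually listening, you can probe the two ports above. This sketch uses bash's `/dev/tcp` feature, so it requires bash:

```shell
# Probe the MongoDB and Valkey ports. "closed" usually means the
# compose stack is not running (or still starting up).
db_status=""
for port in 27017 6379; do
    if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
        db_status="$db_status $port:open"
    else
        db_status="$db_status $port:closed"
    fi
done
echo "db ports:$db_status"
```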
The project consists of multiple Rust crates. Navigate to the specific subfolder you want to build and run:
# Build a specific crate
cd server
cargo build
# Or from the workspace root, specify the package
cargo build -p server
cargo build -p headscale
cargo build -p pod_assassin
# etc.

cd web
pnpm install

The project uses a config.toml file for configuration. When you first run the server without a configuration file, an example config.toml will be automatically created in the project root directory.
# Run the server once to generate the default config.toml
cd server
cargo run

After the config file is generated, customize the configuration as needed for your environment.
You need a valid kubeconfig file to interact with the Kubernetes cluster:
- Obtain the kubeconfig file from your team or organization
- Place it in your ~/.kube/config file or specify it using the KUBECONFIG environment variable:

export KUBECONFIG=/path/to/your/kubeconfig

- Install Tailscale for your platform
- Authenticate using:
sudo tailscale up --login-server=https://vpn.agin.rocks
You need to use https://vpn.agin.rocks as the login server.
vpn.ctf.agin.rocks is another VPN instance, but it's for end users of the platform, not for developers.
For access to challenges via Headscale:
- Request a Headscale API key
- Configure it in your config.toml file under the headscale section:
[headscale]
url = "http://headscale"
public_url = "https://headscale.example.com"
api_key = "your_api_key_here"

Note: All configuration is done through config.toml files. You can store your secrets in config.toml as it's not tracked by Git.
The included Caddyfile is configured to proxy API requests to the backend server and other requests to the frontend server.
Start Caddy:
caddy run

cd server
cargo run
# Or from the project root
cargo run -p server

cd web
pnpm dev

The application will be accessible at http://localhost:3030
API documentation will be accessible at http://localhost:3030/apidoc/scalar (or other documentation UI of your choice)
Always create feature branches from the dev branch, not from master:
# Ensure you're on dev branch and up to date
git checkout dev
git pull
# Create a new feature branch
git checkout -b feat/your-feature-name

When your feature is complete, create a pull request to merge back into the dev branch.
To export WebSocket type bindings for the frontend, run:
cargo test -p server export_bindings

This generates TypeScript definitions for the WebSocket API that can be used by the frontend. You should run this command whenever you make changes to the WebSocket protocol in the backend.
The frontend is built with Next.js. To generate TypeScript definitions from the API:
cd web
pnpm typegen

The backend is composed of multiple Rust crates. Make changes to the relevant crate and run:
# Navigate to the specific crate directory
cd <crate-name>
cargo run
# Or from the project root
cargo run -p <crate-name>

Ensure you have:
- A valid kubeconfig
- VPN access via Tailscale
Configure Kubernetes mode in your config.toml:
[kubernetes]
mode = "kubeconfig" # or "incluster" for production deployments

If you cannot connect to MongoDB or Valkey:
# Check if containers are running
docker compose -f compose.dev.yaml ps
# Restart containers if needed
docker compose -f compose.dev.yaml restart

If you encounter configuration-related errors:
- Remember that the first time you run the server, it will automatically create an example config.toml file
- Check that you've customized the automatically generated config.toml with your specific settings
- For development, you can create a config-local.toml file that won't be tracked by git
- Environment variables can override config values using the format: CTFILT__SECTION__KEY
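For example, overriding a nested config value from the shell might look like this. The double underscore separates nesting levels; the key names below are illustrative, so use the actual keys from your generated config.toml:

```shell
# CTFILT__SECTION__KEY maps to "key" inside [section] of config.toml.
# Section and key names here are examples, not the project's real keys.
export CTFILT__DB__CONNECTION_STRING="mongodb://localhost:27017"
export CTFILT__HEADSCALE__API_KEY="your_api_key_here"
```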
If you're having trouble connecting to the Kubernetes cluster:
- Verify Tailscale is connected: tailscale status
- Verify your kubeconfig is valid: kubectl get nodes
- Check that the Headscale API key is correctly configured in your config.toml
The application uses the config crate to manage configuration. When you first run the server, an example config.toml file will be automatically created in the project root directory with default values. You should then customize this file according to your development environment.
Environment-specific configurations can be created in files like config-development.toml or config-production.toml. Local overrides can be placed in config-local.toml.
The configuration system will automatically look for these files in the following order:
- config.toml (base configuration)
- config-{environment}.toml (environment-specific settings)
- config-local.toml (local developer settings)
- Environment variables with the prefix CTFILT__ (e.g., CTFILT__DB__CONNECTION_STRING)
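Because later sources override earlier ones, an override file only needs to contain the keys it changes. As an illustration (the section and key names below are hypothetical, not the project's real schema), a config-local.toml might look like:

```toml
# config-local.toml -- only the overridden keys; everything else
# falls through to config.toml. Names below are examples only.
[db]
connection_string = "mongodb://localhost:27017"
```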