external-secrets-inc/audit-poc-backend

audit_poc

Poetry

This project uses poetry. It's a modern dependency management tool.

To run the project use this set of commands:

poetry install
poetry run python -m audit_poc

This will start the server on the configured host.

You can find swagger documentation at /api/docs.

You can read more about poetry here: https://python-poetry.org/

Docker

You can start the project with docker using this command:

docker-compose up --build

If you want to develop in docker with autoreload and exposed ports, add -f deploy/docker-compose.dev.yml to your docker command, like this:

docker-compose -f docker-compose.yml -f deploy/docker-compose.dev.yml --project-directory . up --build

This command exposes the web application on port 8000, mounts the current directory, and enables autoreload.

Note that you have to rebuild the image every time you modify poetry.lock or pyproject.toml, using this command:

docker-compose build

Project structure

$ tree "audit_poc"
audit_poc
├── __main__.py  # Startup script. Starts uvicorn.
├── db.py  # Database configuration.
├── domain  # Sample domain package.
│   ├── api.py  # Core of each module, with all API endpoints.
│   ├── schemas.py  # Pydantic models: request payloads and response definitions.
│   ├── models.py  # DB models (automatically loaded into the beanie database).
│   ├── dependencies.py  # API router dependencies.
│   ├── constants.py  # Module-specific constants and error codes.
│   ├── exceptions.py  # Module-specific exceptions.
│   └── service.py  # Module-specific business logic.
├── settings.py  # Main configuration settings for the project.
├── grpc  # gRPC-related code.
├── static  # Static content.
├── tests  # Tests for the project.
│   └── conftest.py  # Fixtures for all tests.
└── web  # Package containing the web server.
    ├── application.py  # FastAPI application configuration.
    └── lifespan.py  # Actions to perform on startup and shutdown.

Configuration

This application can be configured with environment variables.

You can create a .env file in the root directory and place all environment variables there.

All environment variables should start with the "AUDIT_" prefix.

For example, if audit_poc/settings.py contains a variable named random_parameter, set the AUDIT_RANDOM_PARAMETER environment variable to configure its value. This behaviour can be changed by overriding the env_prefix property in audit_poc.settings.Settings.Config.
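The naming rule can be illustrated with a small helper (env_name is hypothetical, purely for illustration; the real mapping is done by pydantic's settings machinery):

```python
def env_name(field: str, prefix: str = "AUDIT_") -> str:
    """Map a settings field name to the environment variable that configures it."""
    return prefix + field.upper()

# env_name("random_parameter") -> "AUDIT_RANDOM_PARAMETER"
```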

An example of .env file:

AUDIT_RELOAD="True"
AUDIT_PORT="8000"
AUDIT_ENVIRONMENT="dev"
AUDIT_TENANT_MANAGER_URL="http://localhost:8080"
AUDIT_OPA_BASE_URL="http://localhost:8181"
AUDIT_ADMIN_USERNAME="admin"
AUDIT_ADMIN_PASSWORD="admin"

You can read more about BaseSettings class here: https://pydantic-docs.helpmanual.io/usage/settings/

OpenTelemetry

If you want to start the project with an OpenTelemetry collector, you can add -f ./deploy/docker-compose.otlp.yml to your docker command.

Like this:

docker-compose -f docker-compose.yml -f deploy/docker-compose.otlp.yml --project-directory . up

This command will start the OpenTelemetry collector and Jaeger. After sending requests, you can see traces in Jaeger's UI at http://localhost:16686/.

This docker configuration is not meant for production use. It's only for demo purposes.

You can read more about OpenTelemetry here: https://opentelemetry.io/

Pre-commit

To install the pre-commit git hooks, simply run inside the shell:

pre-commit install

pre-commit is very useful for checking your code before publishing it. It's configured via the .pre-commit-config.yaml file.

By default it runs:

  • black (formats your code);
  • mypy (validates types);
  • ruff (spots possible bugs).

You can read more about pre-commit here: https://pre-commit.com/

Running tests

If you want to run the tests in docker, simply run:

docker-compose run --build --rm api pytest -vv .
docker-compose down

Migrations

This project uses the mongodb-migrations library to manage MongoDB schema and data migrations. Migrations are applied automatically each time the app starts, ensuring the database is up to date. The migration files are stored in the audit_poc/migrations/ directory.

Creating a Migration File

To create a migration file, use the create-migration-file Makefile target. The MIGRATION_FILE_NAME argument is required and should describe the purpose of the migration.

Command:

make create-migration-file MIGRATION_FILE_NAME=<migration_description>

Example:

make create-migration-file MIGRATION_FILE_NAME=add_provider_secret_id

This will create a migration file named add_provider_secret_id with a timestamp prefix (for example, 20250123144903_add_provider_secret_id.py) in the audit_poc/migrations/ directory.

Adding Migration Logic

  1. Open the new migration file in audit_poc/migrations/.
  2. Add your migration logic to the upgrade function for applying changes.
  3. Add logic to the downgrade function for reverting the changes.

An example of migration logic code can be found at audit_poc/migrations/20250123144903_add_provider_secret_id.py

Important: Migration functions must be synchronous!
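As a hedged sketch of the kind of logic such functions contain, here the provider_secret_id field from the example migration above is added and removed, using a plain list of dicts in place of a real MongoDB collection:

```python
def upgrade(documents):
    """Apply: add the provider_secret_id field to documents that lack it."""
    for doc in documents:
        doc.setdefault("provider_secret_id", None)
    return documents


def downgrade(documents):
    """Revert: remove the provider_secret_id field again."""
    for doc in documents:
        doc.pop("provider_secret_id", None)
    return documents
```

In a real migration file these functions operate on the MongoDB database through the mongodb-migrations API and, as noted above, must stay synchronous.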

Running Migrations

Automatically

Migrations are run automatically when the application starts:

  • Default Behavior: The upgrade functions are executed.
  • Downgrade Automatically: Set the environment variable AUDIT_DB_DOWNGRADE=true to run the downgrade functions instead.

Manually

You can manually run migrations using the CLI.

Upgrade:

poetry run mongodb-migrate --url mongodb://user:pass@host:port/db --migrations audit_poc/migrations/

Downgrade: Add the --downgrade flag to run downgrades:

poetry run mongodb-migrate --url mongodb://user:pass@host:port/db --migrations audit_poc/migrations/ --downgrade

Specifying a Migration Deadline

To set a deadline for migrations (both upgrades and downgrades), you can:

  • Use the environment variable AUDIT_DB_MIGRATION_TO_DATETIME, providing the value as a string in the format YYYYMMDDHHmmss.
  • Use the --to_datetime flag with the CLI.
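The YYYYMMDDHHmmss format corresponds to the standard strptime pattern shown below, e.g. for the timestamp of the example migration above:

```python
from datetime import datetime

# Parse a migration deadline given in the YYYYMMDDHHmmss format.
deadline = datetime.strptime("20250123144903", "%Y%m%d%H%M%S")
# -> 2025-01-23 14:49:03
```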

One-off Report

Usage

Command-Line Arguments

The script accepts the following arguments:

Argument                Required  Default                               Description
--secrets_path          ✅ Yes    None                                  Path to the JSON file containing secrets data.
--config_path           ✅ Yes    None                                  Path to the configuration file (YAML).
--password_strength_id  ❌ No     56a0b04a-43d5-438d-af10-c428cf2d2edd  ID of the password strength policy to apply.
--output_mode           ❌ No     csv                                   Output format of the generated report (csv or json).
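A hedged sketch of how this argument set could be declared with argparse (the actual scripts/oneoff_report.py may differ):

```python
import argparse

parser = argparse.ArgumentParser(description="One-off audit report")
parser.add_argument("--secrets_path", required=True,
                    help="Path to the JSON file containing secrets data.")
parser.add_argument("--config_path", required=True,
                    help="Path to the configuration file (YAML).")
parser.add_argument("--password_strength_id",
                    default="56a0b04a-43d5-438d-af10-c428cf2d2edd",
                    help="ID of the password strength policy to apply.")
parser.add_argument("--output_mode", choices=["csv", "json"], default="csv",
                    help="Output format of the generated report.")

# Only the two required arguments need to be supplied; the rest use defaults.
args = parser.parse_args(["--secrets_path", "secrets.json",
                          "--config_path", "config.yaml"])
```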

Running the Script Manually

To execute the script, run:

poetry run python3 scripts/oneoff_report.py \
    --secrets_path secrets.json \
    --config_path config.yaml \
    --password_strength_id 56a0b04a-43d5-438d-af10-c428cf2d2edd \
    --output_mode csv

Running with Makefile

A Makefile is provided to simplify execution. Run the following command to generate the report:

make create-report

By default, it uses:

  • secrets.json as the secrets file
  • config.yaml as the configuration file

To override defaults, run:

make create-report SECRETS_PATH=my_secrets.json CONFIG_PATH=my_config.yaml

You can also set PASSWORD_STRENGTH_ID and OUTPUT_MODE to define the optional parameters.

Output

summary.csv

A flattened summary of key metrics, including:

  • Total Events
  • Total Secrets Analysed
  • Total Accessors Processed
  • Total Providers Analysed
  • Total Duplicates
  • Cross Environments Duplicates
  • Total Secrets Not Rotated Within Month
  • Total Access
  • Total Not Access or Only by Tool
  • Total Access by Individuals
  • Total Lineages Trees Generated
  • Largest Lineage
  • Longest Sync Time for Lineage
  • Oldest Rotated Secret
  • Newly Rotated Secret
  • Total Strong Secrets
  • Total Weak Secrets
  • Most Accessed Secret
  • Secret with Most Accessors
  • Biggest Accessor

Structure Example:

field                               value
total_events                        50
total_secrets_analysed              10
oldest_rotated_secret.secret        projects/862611859558/secrets/test
newly_rotated_secret.last_rotation  2025-02-17 19:00:04.138150
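The two-column layout above can be loaded with a small helper (read_summary is hypothetical, not part of the project; it assumes a plain field,value CSV with a header row):

```python
import csv

def read_summary(path: str) -> dict:
    """Read the two-column summary.csv into a field -> value mapping."""
    with open(path, newline="") as f:
        rows = csv.reader(f)
        next(rows)  # skip the "field,value" header row
        return {field: value for field, value in rows}
```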

total_secrets_rotation_per_interval.csv

Displays rotation intervals for secrets, indicating how recently secrets have been rotated.

Structure Example:

interval  total_secrets_rotation
0-10      7
10-20     0
>90       1

secrets_accessors_per_interval.csv

Summarizes access patterns per secret, broken down by human vs tool accessors.

Structure Example:

interval  human_accessor  tool_accessor
1-2       7               0
5-10      0               1

password_strength_by_provider.csv

Evaluates the password strength per provider, identifying weak and strong secrets.

Structure Example:

provider           strong_password  weak_password
provider-name-gcp  3                0

output.json

A JSON file containing all the generated metrics mentioned in the CSV files above.

Standalone

The standalone version of the one-off report is compiled using Nuitka. To build the binary, you need patchelf and ccache installed. It can be generated by running:

make build-report-bin

And the report can be generated using:

make create-report-docker SECRETS_PATH=./vault_secrets.json CONFIG_PATH=./config.yaml OUTPUT_FOLDER=./oneoff_report_output
