# Behat Contexts Testing Infrastructure

This directory contains a BATS-based testing infrastructure for validating Behat context outputs and behaviors.

## Table of Contents

- [Overview](#overview)
- [Prerequisites](#prerequisites)
- [Running tests locally](#running-tests-locally)
- [CI/CD Testing](#cicd-testing)
- [Environment Variables](#environment-variables)
- [Test Structure](#test-structure)
- [How to Add New Tests](#how-to-add-new-tests)
- [Troubleshooting](#troubleshooting)
- [Contributing](#contributing)

## Overview

This testing infrastructure uses:

- **BATS (Bash Automated Testing System)**: test framework for Bash scripts
- **DDEV**: local development environment
- **Aljibe**: DDEV add-on that provides automated Drupal + Behat setup

The tests validate that Behat contexts produce expected outputs and behaviors through integration testing.

## Prerequisites

### Local development

To run tests locally, ensure you have the following installed:

- **DDEV**: see the official DDEV installation instructions
- **BATS**: install via your package manager:

  ```shell
  # macOS
  brew install bats-core

  # Ubuntu/Debian
  sudo apt-get install bats
  ```

- **BATS libraries** (`bats-support`, `bats-assert`, `bats-file`): required for assertions. Install via your package manager; for example, on Ubuntu/Debian:

  ```shell
  sudo apt-get install bats-support bats-assert bats-file
  ```

- **GitHub token**: used for Composer authentication (avoids API rate limiting)
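If Composer still hits GitHub rate limits inside the container, one way to make the token available is Composer's global auth config. This is a sketch of an optional step, not part of the test runner itself:

```shell
# Make the token available to the test runner
export GITHUB_TOKEN=ghp_your_token_here

# Register it with Composer inside the DDEV project so that
# `composer` calls authenticate against the GitHub API
ddev composer config -g github-oauth.github.com "$GITHUB_TOKEN"
```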

### CI/CD

The GitHub Actions workflow installs all prerequisites automatically.


## Running tests locally

```shell
# Set required environment variables
export BEHAT_CONTEXTS_SOURCE_PATH=$(pwd)  # when run from the behat-contexts repo root
export GITHUB_TOKEN=your_github_token
export TEMP_DIR=/path/to/temp/dir         # optional; defaults to /tmp

# Run tests
./tests/run_tests_locally.sh
```

### Running specific tests

```shell
bats tests/contexts/cookie-compliance-context.bats
```

## CI/CD Testing

Tests run automatically in GitHub Actions on:

- Push to `main` or branches starting with `tests`
- Pull requests
- Manual workflow dispatch
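Those triggers map to an `on:` block along these lines (a sketch; the actual definition lives in `.github/workflows/test-contexts.yml`):

```yaml
on:
  push:
    branches:
      - main
      - 'tests**'    # any branch whose name starts with "tests"
  pull_request:
  workflow_dispatch:
```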

This document covers the testing infrastructure in detail, including:

- Prerequisites and setup
- Test structure and organization
- How to add new tests
- A troubleshooting guide
## Environment Variables

| Variable | Required | Description | Default |
|----------|----------|-------------|---------|
| `BEHAT_CONTEXTS_SOURCE_PATH` | Yes | Absolute path to the behat-contexts source | - |
| `GITHUB_TOKEN` | Yes | GitHub personal access token | - |
| `TEMP_DIR` | No | Base directory for temporary test files | `/tmp` |
| `BATS_LIB_PATH` | No | Path to BATS libraries | `/usr/lib` |
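The defaults for the optional variables follow standard shell parameter expansion. A minimal sketch of the pattern (not the runner's exact code):

```shell
# Clear the optional variables to demonstrate the fallbacks
unset TEMP_DIR BATS_LIB_PATH

# Fall back to the documented defaults when unset
TEMP_DIR="${TEMP_DIR:-/tmp}"
BATS_LIB_PATH="${BATS_LIB_PATH:-/usr/lib}"

echo "TEMP_DIR=$TEMP_DIR"
echo "BATS_LIB_PATH=$BATS_LIB_PATH"
```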

## Test Structure

```
tests/
├── contexts/                  # BATS test files
│   ├── setup_suite.bash       # Suite-level setup functions
│   ├── cookie-compliance-context.bats
│   ├── debug-context.bats
│   └── logs-context.bats
├── features/                  # Test fixture feature files
│   ├── cookies-test.feature
│   ├── debug-screenshot.feature
│   ├── debug-error.feature
│   └── logs-test.feature
├── test_helper/               # Shared test utilities
│   └── common-setup.bash      # Common setup/teardown functions
└── run_tests_locally.sh       # Local test runner script
```

### Test Files

Each `.bats` file contains tests for a specific Behat context:

- `cookie-compliance-context.bats`: tests `CookieComplianceContext`
- `debug-context.bats`: tests `DebugContext` (screenshots, error reports)
- `logs-context.bats`: tests `LogsContext` (watchdog logs, CSV reports)

### Feature Files

Minimal Behat feature files designed to exercise specific context behaviors:

- `cookies-test.feature`: cookie acceptance/rejection scenarios
- `debug-screenshot.feature`: screenshot generation
- `debug-error.feature`: error report generation
- `logs-test.feature`: log collection and reporting

## How to Add New Tests

### 1. Create a Feature File

Add a minimal `.feature` file in `tests/features/`:

```gherkin
Feature: MyContext - Test Description
  As a tester
  I want to verify MyContext behavior
  So that I can validate functionality

  @api
  Scenario: Test scenario name
    Given I am on the homepage
    When I perform some action
    Then I should see expected output
```

### 2. Create a BATS Test File

Add a `.bats` file in `tests/contexts/`:

```shell
#!/usr/bin/env bats

# Load test helpers
load '../test_helper/common-setup'
load 'setup_suite'

# Setup - runs before each test
setup() {
  setup_test_environment
}

# Teardown - runs after each test
teardown() {
  teardown_test_environment
}

@test "MyContext: test description" {
  local start='        - Metadrop\Behat\Context\PreviousContext:'
  local end='        - Metadrop\Behat\Context\NextContext:'
  local replacement='            my_param: value'

  prepare_test "my-feature.feature" "$start" "$end" "$replacement"

  # Run Behat
  run ddev exec behat --config="$BEHAT_YML_CURRENT_TEST_PATH" "$FEATURE_UNDER_TEST_PATH"

  # Assertions (note: bats-assert's assert_success takes no arguments)
  assert_success
  assert_output --partial "Expected output"
}
```

### 3. Update CI Workflow (Optional)

For this proof of concept, tests run individually. To add your test to CI, edit `.github/workflows/test-contexts.yml`:

```yaml
- name: Run BATS tests
  run: |
    bats tests/contexts/my-context.bats
```

## Troubleshooting

### BATS Library Not Found

Error: `Could not find library 'bats-assert'`

Solution: ensure the BATS libraries are installed and `BATS_LIB_PATH` is set:

```shell
export BATS_LIB_PATH=/usr/lib
ls -la /usr/lib/bats-*
```
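A quick way to check that all three libraries resolve under `BATS_LIB_PATH` (a generic sketch, not part of the repo's tooling):

```shell
# Report which of the expected BATS libraries are present
BATS_LIB_PATH="${BATS_LIB_PATH:-/usr/lib}"
for lib in bats-support bats-assert bats-file; do
  if [ -d "$BATS_LIB_PATH/$lib" ]; then
    echo "found:   $lib"
  else
    echo "missing: $lib"
  fi
done
```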

### BEHAT_CONTEXTS_SOURCE_PATH Not Set

Error: `BEHAT_CONTEXTS_SOURCE_PATH environment variable must be set`

Solution: set the variable to your local repository path:

```shell
export BEHAT_CONTEXTS_SOURCE_PATH=$(pwd)
```

### DDEV Not Running

Error: `DDEV is not running`

Solution: tests set up their own DDEV environment; just ensure DDEV is installed:

```shell
ddev version
```

### GitHub Token Rate Limiting

Error: Composer rate limit errors

Solution: set a valid GitHub token:

```shell
export GITHUB_TOKEN=ghp_your_token_here
```

Create a token at https://github.com/settings/tokens.

### Test Isolation Issues

Each test runs in a fresh temporary directory with its own DDEV environment. If tests interfere with each other:

1. Check that the `setup_suite.bash` functions are called properly
2. Verify cleanup happens in `teardown_test_environment()`
3. Ensure no shared state between tests
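The isolation principle can be demonstrated with a generic shell pattern (a sketch; the repo's actual helpers live in `test_helper/common-setup.bash`): each test creates its own scratch directory and removes it unconditionally.

```shell
# Create a unique scratch directory for this test run
test_tmp_dir="$(mktemp -d)"

# Remove it even if the test fails partway through
trap 'rm -rf "$test_tmp_dir"' EXIT

# Work only inside the scratch directory, never in a shared location
touch "$test_tmp_dir/marker"
[ -f "$test_tmp_dir/marker" ] && echo "isolated workspace ready"
```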

### Debugging Tests

Run tests with verbose output:

```shell
bats --verbose-run tests/contexts/my-context.bats
```

Enable trace mode:

```shell
bats --trace tests/contexts/my-context.bats
```

Check the DDEV logs:

```shell
ddev logs
```

## Contributing

When adding new tests:

1. Follow existing test patterns
2. Use descriptive test names
3. Include comments explaining complex logic
4. Ensure tests are isolated (no shared state)
5. Test both success and failure scenarios
6. Update this documentation if adding new patterns
