This guide explains how to use the automated test runner to verify all examples work correctly.
```bash
# From the project root, run all examples:
./test-examples.sh --continue

# Or from the examples directory:
cd examples
python3 test_all_examples.py --continue

# Test a specific module:
./test-examples.sh --module module1_fundamentals

# Quick test with shorter timeout:
./test-examples.sh --quick --continue
```

`examples/test_all_examples.py` is the main test runner script. It:
- ✅ Automatically discovers all example scripts across all modules
- ✅ Runs each script with configurable timeout
- ✅ Captures stdout and stderr for each test
- ✅ Shows colored progress indicators (✓ for pass, ✗ for fail)
- ✅ Reports execution time for each script
- ✅ Generates detailed failure logs
- ✅ Saves machine-readable JSON results
- ✅ Calculates pass rates and statistics
- ✅ Identifies fastest and slowest tests
`test-examples.sh` is a convenient wrapper script that runs the tests from the project root directory.
Comprehensive documentation with:
- All command-line options explained
- Common use cases and examples
- Troubleshooting guide
- CI/CD integration examples
- Tips and best practices
The runner also writes `test_results.json`, an automatically generated JSON file containing:
- List of all passed tests with execution times
- List of all failed tests with log file paths
- Overall statistics (pass rate, total time, etc.)
- Timestamps for tracking test runs
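Under that schema, failed tests and their log paths can be pulled out for custom tooling. The field names below (`failed`, `script`, `log`) mirror the summary output but are a guess at the JSON shape, not a published contract:

```python
def summarize_failures(results: dict) -> list[str]:
    """Format each failed test with its log path (field names are assumptions)."""
    lines = []
    for test in results.get("failed", []):
        lines.append(f"✗ {test['script']}  (log: {test['log']})")
    return lines

# Example with a hand-built results dict in the assumed shape:
sample = {
    "failed": [
        {"script": "module7_hardware/01_ibm_quantum_access.py",
         "log": "test_logs/module7_hardware_01_ibm_quantum_access.log"},
    ],
}
for line in summarize_failures(sample):
    print(line)
```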
`test_logs/` is a directory containing detailed logs for any failed tests:
- Full stdout output
- Full stderr output
- Timestamps
- Named `{module}_{script}.log`
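Given that naming scheme, the log path for any failure can be reconstructed programmatically. This helper is my own small sketch, not part of the runner:

```python
from pathlib import Path

def log_path(module: str, script: str, log_dir: str = "test_logs") -> str:
    """Map a module and script to its failure log using the {module}_{script}.log scheme."""
    return str(Path(log_dir) / f"{module}_{Path(script).stem}.log")

print(log_path("module7_hardware", "01_ibm_quantum_access.py"))
# → test_logs/module7_hardware_01_ibm_quantum_access.log
```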
```bash
./test-examples.sh --continue --timeout 120
```

This runs all tests with a generous timeout and shows you all failures at once.

```bash
./test-examples.sh --module module3_programming --verbose
```

This shows detailed output for debugging.

```bash
./test-examples.sh --quick --continue
```

This runs all tests with a shorter 30s timeout for rapid feedback.
```bash
# Run with verbose output
./test-examples.sh --module module5_error_correction --verbose

# Or check the log file
cat examples/test_logs/module5_error_correction_01_quantum_noise_models.log
```

After running tests, you'll see a summary like:
```text
================================================================================
Test Summary
================================================================================
Total tests run: 54
Passed: 52
Failed: 2
Pass rate: 96.3%
Total execution time: 145.23s

Fastest test: 07_no_cloning_theorem.py (0.31s)
Slowest test: 05_first_quantum_algorithm.py (13.40s)

Failed Tests:
  ✗ module7_hardware/01_ibm_quantum_access.py
    Log: test_logs/module7_hardware_01_ibm_quantum_access.log
  ✗ module7_hardware/02_aws_braket_integration.py
    Log: test_logs/module7_hardware_02_aws_braket_integration.log
```
The script's exit code reflects the result:

- `0` = All tests passed ✅
- `1` = One or more tests failed ❌
This makes the script perfect for CI/CD pipelines:
```bash
./test-examples.sh --continue && echo "Ready to deploy!" || echo "Fix tests first!"
```

Output is color-coded:

- 🟢 Green ✓ = Test passed
- 🔴 Red ✗ = Test failed
- 🔵 Blue = Module headers
- 🟡 Yellow = Warnings and error snippets
- 🟣 Purple = Section headers
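The colored indicators are plain ANSI escape sequences. Here is a sketch of how the pass/fail marker could be rendered; the exact codes the runner uses are an assumption:

```python
# Standard ANSI escape codes: bright green, bright red, and reset
GREEN, RED, RESET = "\033[92m", "\033[91m", "\033[0m"

def status_mark(passed: bool) -> str:
    """Color the pass/fail marker the way the legend above describes."""
    return f"{GREEN}✓{RESET}" if passed else f"{RED}✗{RESET}"

print(status_mark(True), "01_classical_vs_quantum_bits.py")
```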
Each test shows execution time, and the summary includes:
- Total execution time across all tests
- Fastest test (good for finding lightweight examples)
- Slowest test (good for identifying intensive computations)
The test runner automatically tests all examples in:
- ✅ module1_fundamentals (8 examples): Classical vs quantum bits, gates, superposition, entanglement, etc.
- ✅ module2_mathematics (5 examples): Complex numbers, linear algebra, tensor products, etc.
- ✅ module3_programming (6 examples): Qiskit programming, framework comparison, circuit patterns, etc.
- ✅ module4_algorithms (5 examples): Deutsch-Jozsa, Grover's, QFT, Shor's, VQE
- ✅ module5_error_correction (8 examples): Noise models, Steane code, error mitigation, fault tolerance, etc.
- ✅ module6_machine_learning (5 examples): Feature maps, VQC, quantum neural networks, PCA, etc.
- ✅ module7_hardware (5 examples): IBM Quantum, AWS Braket, hardware optimization, etc.
- ✅ module8_applications (6 examples): Chemistry, finance, logistics, materials, cryptography, etc.
Total: 54 example scripts (utils excluded)
Make sure the dependencies are installed:

```bash
pip install -r examples/requirements.txt
```

Tests that access real quantum hardware (IBM, AWS) may fail without credentials. This is expected. You can:
- Skip those modules
- Set up API keys (see individual example documentation)
- Accept that those tests will fail (they're marked clearly)
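If you prefer to skip the hardware modules automatically when no credentials are configured, a wrapper check along these lines works. The environment-variable names below are assumptions about a typical setup, not something the runner itself reads:

```python
import os

# Modules that talk to real cloud backends (per the module list above)
HARDWARE_MODULES = {"module7_hardware"}

def needs_skip(module: str) -> bool:
    """True when a hardware module has no cloud credentials available (env var names assumed)."""
    if module not in HARDWARE_MODULES:
        return False
    has_creds = os.environ.get("QISKIT_IBM_TOKEN") or os.environ.get("AWS_ACCESS_KEY_ID")
    return not has_creds
```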
Timeout guidance (`--timeout`):

- Default 60s: Good for most examples
- 90-120s: Better for algorithm-heavy modules
- 30s (--quick): Fast feedback during development
- 180s+: Comprehensive testing including slow examples
Some quantum simulations are memory-intensive:
- Watch your RAM usage
- Close unnecessary applications
- Consider testing modules separately if you have limited RAM
Run tests:
- ✅ Before committing changes
- ✅ After updating dependencies
- ✅ When debugging issues
- ✅ After adding new examples
The test runner is designed for easy CI/CD integration:
```yaml
name: Test Examples

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: |
          pip install --upgrade pip
          pip install -r examples/requirements.txt

      - name: Run example tests
        run: |
          ./test-examples.sh --continue --timeout 120

      - name: Upload test logs
        if: failure()
        uses: actions/upload-artifact@v3
        with:
          name: test-logs
          path: examples/test_logs/

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: test-results
          path: examples/test_results.json
```

Make sure the script is executable:
```bash
chmod +x test-examples.sh
chmod +x examples/test_all_examples.py
```

If tests time out:

- Increase the timeout: `./test-examples.sh --timeout 300`
- Check system load with `top` or `htop`
- Test modules individually to isolate issues
Install or update dependencies:

```bash
pip install --upgrade -r examples/requirements.txt
```

If you run into memory problems:

- Close other applications
- Test modules separately
- Use `--quick` mode to cut long-running examples short
Your terminal might not support ANSI colors. The functionality works the same, just less colorful!
Here's what a successful test run looks like:
```text
$ ./test-examples.sh --module module1_fundamentals
================================================================================
Quantum Computing 101 - Example Test Runner
================================================================================
Discovering examples...
Found 8 examples across 1 modules
Timeout per script: 60s

Testing module1_fundamentals (8 examples)
--------------------------------------------------------------------------------
[1/8] ✓ 01_classical_vs_quantum_bits.py (2.90s)
[2/8] ✓ 02_quantum_gates_circuits.py (3.16s)
[3/8] ✓ 03_superposition_measurement.py (1.93s)
[4/8] ✓ 04_quantum_entanglement.py (1.47s)
[5/8] ✓ 05_first_quantum_algorithm.py (13.40s)
[6/8] ✓ 06_quantum_teleportation.py (1.83s)
[7/8] ✓ 07_no_cloning_theorem.py (0.60s)
[8/8] ✓ 08_hardware_reality_check.py (0.77s)

================================================================================
Test Summary
================================================================================
Total tests run: 8
Passed: 8
Failed: 0
Pass rate: 100.0%
Total execution time: 26.06s

Fastest test: 07_no_cloning_theorem.py (0.60s)
Slowest test: 05_first_quantum_algorithm.py (13.40s)

Results saved to: /home/ysha/quantum-computing-101/examples/test_results.json

All tests passed! 🎉
```

Parse `test_results.json` for custom reporting:
```python
import json

with open('examples/test_results.json', 'r') as f:
    results = json.load(f)

print(f"Pass rate: {results['pass_rate']}%")
print("Slowest tests:")
for test in sorted(results['passed'], key=lambda x: x['time'], reverse=True)[:5]:
    print(f"  {test['script']}: {test['time']:.2f}s")
```

Run tests periodically:
```bash
# Test every hour
watch -n 3600 './test-examples.sh --quick --continue'
```

Add to your Makefile:
```makefile
.PHONY: test
test:
	./test-examples.sh --continue

.PHONY: test-quick
test-quick:
	./test-examples.sh --quick --continue

.PHONY: test-module
test-module:
	./test-examples.sh --module $(MODULE) --verbose
```

Then use:
```bash
make test
make test-quick
make test-module MODULE=module1_fundamentals
```

When adding new examples to the course:
1. Place it in the correct module directory: `examples/moduleX_name/NN_example_name.py`
2. Test that it works standalone: `python3 examples/moduleX_name/NN_example_name.py`
3. Run the module's tests: `./test-examples.sh --module moduleX_name`
4. Run the full suite before committing: `./test-examples.sh --continue`
The test runner automatically discovers new examples - no configuration needed!
The test runner provides:
- 🚀 Automation - Test all 54 examples with one command
- 📊 Reporting - Clear pass/fail status with statistics
- 🔍 Debugging - Detailed logs for failures
- ⚡ Speed - Quick mode for rapid iteration
- 🎯 Focus - Test specific modules during development
- 📈 Tracking - JSON results for analysis and trends
- 🌈 UX - Color-coded output for easy scanning
- 🔧 Flexibility - Many options for different use cases
Happy testing! 🎉