Adding tests #12
Open: brookemosby wants to merge 12 commits into main from brooke/addingTests
Changes from all commits (12):
- c3589da adding tests (brookemosby)
- ef95f40 adding tests (brookemosby)
- 02e2383 adding tests (brookemosby)
- 408e60c adding tests (brookemosby)
- 6db777f adding tests (brookemosby)
- 24e351f adding tests (brookemosby)
- 17f5b5c adding tests (brookemosby)
- 92344aa Update run_e2e_tests.py (brookemosby)
- cd1a86e adding tests (brookemosby)
- 20cfc1b adding tests (brookemosby)
- d14ee8a adding tests (brookemosby)
- d2c40a8 adding tests (brookemosby)
```
@@ -124,3 +124,5 @@ venv.bak/
**/*.env
uv.lock
.vercel
.env*.local
```
run_e2e_tests.py (new file, @@ -0,0 +1,247 @@):

```python
#!/usr/bin/env python3
"""
E2E Test Runner for Vercel Python SDK

This script runs end-to-end tests for the Vercel Python SDK,
checking all major workflows and integrations.
"""

import sys
import subprocess
import argparse
from pathlib import Path
from typing import Optional

# Add the project root to the Python path before importing project modules
project_root = Path(__file__).parent
sys.path.insert(0, str(project_root))

from tests.e2e.config import E2ETestConfig  # noqa: E402


class E2ETestRunner:
    """Runner for E2E tests."""

    def __init__(self):
        self.config = E2ETestConfig()
        self.test_results = {}

    def check_environment(self) -> bool:
        """Check if the test environment is properly configured."""
        print("Checking E2E test environment...")
        self.config.print_env_status()

        # Check if at least one service is available
        services_available = [
            self.config.is_blob_enabled(),
            self.config.is_vercel_api_enabled(),
            self.config.is_oidc_enabled(),
        ]

        if not any(services_available):
            print("❌ No services available for testing!")
            print("Please set at least one of the following environment variables:")
            print(f"  - {self.config.BLOB_TOKEN_ENV}")
            print(f"  - {self.config.VERCEL_TOKEN_ENV}")
            print(f"  - {self.config.OIDC_TOKEN_ENV}")
            return False

        print("✅ Environment check passed!")
        return True

    def run_unit_tests(self) -> bool:
        """Run unit tests first."""
        print("\n🧪 Running unit tests...")
        try:
            result = subprocess.run(
                [sys.executable, "-m", "pytest", "tests/", "-v", "--tb=short"],
                capture_output=True,
                text=True,
                timeout=300,
            )

            if result.returncode == 0:
                print("✅ Unit tests passed!")
                return True
            else:
                print("❌ Unit tests failed!")
                print("STDOUT:", result.stdout)
                print("STDERR:", result.stderr)
                return False
        except subprocess.TimeoutExpired:
            print("❌ Unit tests timed out!")
            return False
        except Exception as e:
            print(f"❌ Error running unit tests: {e}")
            return False

    def run_e2e_tests(self, test_pattern: Optional[str] = None) -> bool:
        """Run E2E tests."""
        print("\n🚀 Running E2E tests...")

        cmd = [sys.executable, "-m", "pytest", "tests/e2e/", "-v", "--tb=short"]

        if test_pattern:
            cmd.extend(["-k", test_pattern])

        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=600)

            if result.returncode == 0:
                print("✅ E2E tests passed!")
                return True
            else:
                print("❌ E2E tests failed!")
                print("STDOUT:", result.stdout)
                print("STDERR:", result.stderr)
                return False
        except subprocess.TimeoutExpired:
            print("❌ E2E tests timed out!")
            return False
        except Exception as e:
            print(f"❌ Error running E2E tests: {e}")
            return False

    def run_integration_tests(self) -> bool:
        """Run integration tests."""
        print("\n🔗 Running integration tests...")

        try:
            result = subprocess.run(
                [sys.executable, "-m", "pytest", "tests/integration/", "-v", "--tb=short"],
                capture_output=True,
                text=True,
                timeout=600,
            )

            if result.returncode == 0:
                print("✅ Integration tests passed!")
                return True
            else:
                print("❌ Integration tests failed!")
                print("STDOUT:", result.stdout)
                print("STDERR:", result.stderr)
                return False
        except subprocess.TimeoutExpired:
            print("❌ Integration tests timed out!")
            return False
        except Exception as e:
            print(f"❌ Error running integration tests: {e}")
            return False

    def run_examples(self) -> bool:
        """Run example scripts as smoke tests."""
        print("\n📚 Running example scripts...")

        examples_dir = Path(__file__).parent / "examples"
        if not examples_dir.exists():
            print("❌ Examples directory not found!")
            return False

        example_files = list(examples_dir.glob("*.py"))
        if not example_files:
            print("❌ No example files found!")
            return False

        success_count = 0
        for example_file in example_files:
            print(f"  Running {example_file.name}...")
            try:
                result = subprocess.run(
                    [sys.executable, str(example_file)], capture_output=True, text=True, timeout=60
                )

                if result.returncode == 0:
                    print(f"  ✅ {example_file.name} passed!")
                    success_count += 1
                else:
                    print(f"  ❌ {example_file.name} failed!")
                    print(f"  STDOUT: {result.stdout}")
                    print(f"  STDERR: {result.stderr}")
            except subprocess.TimeoutExpired:
                print(f"  ❌ {example_file.name} timed out!")
            except Exception as e:
                print(f"  ❌ Error running {example_file.name}: {e}")

        if success_count == len(example_files):
            print("✅ All example scripts passed!")
            return True
        else:
            print(f"❌ {len(example_files) - success_count} example scripts failed!")
            return False

    def run_all_tests(self, test_pattern: Optional[str] = None) -> bool:
        """Run all tests."""
        print("🧪 Starting comprehensive E2E test suite...")
        print("=" * 60)

        # Check environment
        if not self.check_environment():
            return False

        # Run unit tests
        if not self.run_unit_tests():
            return False

        # Run E2E tests
        if not self.run_e2e_tests(test_pattern):
            return False

        # Run integration tests
        if not self.run_integration_tests():
            return False

        # Run examples
        if not self.run_examples():
            return False

        print("\n" + "=" * 60)
        print("🎉 All tests passed! E2E test suite completed successfully.")
        return True

    def run_specific_tests(self, test_type: str, test_pattern: Optional[str] = None) -> bool:
        """Run a specific type of tests."""
        print(f"🧪 Running {test_type} tests...")

        if test_type == "unit":
            return self.run_unit_tests()
        elif test_type == "e2e":
            return self.run_e2e_tests(test_pattern)
        elif test_type == "integration":
            return self.run_integration_tests()
        elif test_type == "examples":
            return self.run_examples()
        else:
            print(f"❌ Unknown test type: {test_type}")
            return False


def main():
    """Main entry point."""
    parser = argparse.ArgumentParser(description="E2E Test Runner for Vercel Python SDK")
    parser.add_argument(
        "--test-type",
        choices=["all", "unit", "e2e", "integration", "examples"],
        default="all",
        help="Type of tests to run",
    )
    parser.add_argument("--pattern", help="Test pattern to match (for e2e tests)")
    parser.add_argument(
        "--check-env", action="store_true", help="Only check environment configuration"
    )

    args = parser.parse_args()

    runner = E2ETestRunner()

    if args.check_env:
        success = runner.check_environment()
    elif args.test_type == "all":
        success = runner.run_all_tests(args.pattern)
    else:
        success = runner.run_specific_tests(args.test_type, args.pattern)

    sys.exit(0 if success else 1)


if __name__ == "__main__":
    main()
```
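For reference, the command-line interface defined above accepts a test type, an optional pytest `-k` pattern, and an environment-check flag. A minimal standalone sketch reproducing just the parser (same arguments as in the script; the parse call uses an illustrative argument list):

```python
import argparse

# Rebuild the runner's CLI in isolation to show how arguments resolve.
parser = argparse.ArgumentParser(description="E2E Test Runner for Vercel Python SDK")
parser.add_argument(
    "--test-type",
    choices=["all", "unit", "e2e", "integration", "examples"],
    default="all",
    help="Type of tests to run",
)
parser.add_argument("--pattern", help="Test pattern to match (for e2e tests)")
parser.add_argument(
    "--check-env", action="store_true", help="Only check environment configuration"
)

# Parse an example invocation instead of sys.argv for demonstration.
args = parser.parse_args(["--test-type", "e2e", "--pattern", "blob"])
print(args.test_type, args.pattern, args.check_env)  # e2e blob False
```

Passing an unknown `--test-type` fails at parse time thanks to `choices`, before the runner's own "Unknown test type" branch is ever reached.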
The E2E test step will silently succeed even if actual tests fail, due to the `||` (OR) operator: failed tests are masked as "skipped" in the CI output, making the build appear green when it should fail.

Analysis: E2E test failures masked as skipped due to shell OR operator

What fails: The CI workflow at line 86 uses `|| echo "E2E tests skipped..."`, which masks test failures as successes. When the E2E tests fail with exit code 1, the shell OR operator (`||`) runs the `echo` command, so the step's final exit code is 0 (success) instead of 1 (failure).

How to reproduce:
Result: Failed E2E tests are reported as "E2E tests skipped (secrets not available)" in the CI output, causing the build to pass when tests actually failed.

Expected:

Fix: Moved environment checking into `run_specific_tests()` so the Python script handles the distinction between "no secrets available" (exit 0) and "tests failed" (exit 1), eliminating the need for the shell OR operator. This ensures that actual test failures are properly reported to CI.
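A minimal sketch of the fix described above (hypothetical helper name and simplified signature; the real `run_specific_tests()` takes a test type and pattern): the script itself distinguishes the skip case from real failures, so the CI step can call it directly with no `||` fallback.

```python
def resolve_exit_code(secrets_available: bool, tests_passed: bool) -> int:
    """Exit 0 when tests pass or are legitimately skipped; exit 1 on real failures."""
    if not secrets_available:
        print("E2E tests skipped (secrets not available)")
        return 0  # a skip is not a failure
    return 0 if tests_passed else 1  # failures now reach CI unmasked


if __name__ == "__main__":
    # CI invokes the script directly, without `|| echo ...`:
    print(resolve_exit_code(secrets_available=False, tests_passed=False))  # 0: skipped
    print(resolve_exit_code(secrets_available=True, tests_passed=False))   # 1: failure surfaces
```

With this shape, the only way the step exits 0 on a failing run is if secrets were genuinely absent, which is exactly the "skipped" semantics the original `echo` was trying to express.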