Merged
281 changes: 139 additions & 142 deletions .github/workflows/dotnetCi.yml
@@ -41,152 +41,149 @@ jobs:
DISPLAY: :99

steps:
- name: Checkout
uses: actions/checkout@v4

# Install build dependencies

- name: Add .NET global tools location to PATH
run: echo "$HOME/.dotnet/tools" >> "$GITHUB_PATH"
- name: Install .NET
uses: actions/setup-dotnet@v4
with:
dotnet-version: ${{ env.DotnetVersion }}
- name: Install SonarScanner
run: dotnet tool install --global dotnet-sonarscanner
- name: Install Coverlet console
run: dotnet tool install --global coverlet.console
- name: Install DocFX
run: dotnet tool install --global docfx
- name: Install Node.js for building JSON-to-HTML report converter
uses: actions/setup-node@v6.2.0
- name: Install Java JDK for SonarScanner
uses: actions/setup-java@v5.1.0
with:
java-version: 21
distribution: 'zulu'
- name: Install GUI packages so Chrome may run
run: |
sudo apt-get update
sudo apt install -y xorg xvfb gtk2-engines-pixbuf dbus-x11 xfonts-base xfonts-100dpi xfonts-75dpi xfonts-cyrillic xfonts-scalable psmisc

# Environment setup pre-build

- name: Setup Selenium Manager config
run: |
mkdir ~/.cache/selenium
cp Tools/se-config.toml ~/.cache/selenium
- name: Start an Xvfb display so Chrome may run
run: Xvfb -ac $DISPLAY -screen 0 1280x1024x16 &
- name: Restore .NET packages
run: dotnet restore
- name: Restore Node modules
run: |
cd CSF.Screenplay.JsonToHtmlReport.Template/src
npm ci
cd ../..

# Build and test the solution

- name: Start SonarScanner
run: >
dotnet sonarscanner begin
/k:${{ env.SonarCloudProject }}
/v:GitHub_build_${{ env.RunNumber }}
/o:${{ env.SonarCloudUsername }}
/d:sonar.host.url=${{ env.SonarCloudUrl }}
/d:sonar.token=${{ env.SonarCloudSecretKey }}
/d:${{ env.BranchParam }}=${{ env.BranchName }} ${{ env.PullRequestParam }}
/d:sonar.javascript.lcov.reportPaths=CSF.Screenplay.JsonToHtmlReport.Template/src/TestResults/lcov.info
/s:$PWD/.sonarqube-analysisproperties.xml
- name: Build the solution
run: dotnet build -c ${{ env.Configuration }} --no-incremental
- name: Run .NET tests with coverage
id: dotnet_tests
continue-on-error: true
run: |
for proj in Tests/*.Tests
do
projNameArray=(${proj//// })
projName=${projNameArray[1]}
assemblyPath=$proj/bin/$Configuration/$Tfm/$projName.dll
coverlet "$assemblyPath" --target "dotnet" --targetargs "test $proj -c $Configuration --no-build --logger:nunit --test-adapter-path:." -f=opencover -o="TestResults/$projName.opencover.xml"
if [ $? -ne 0 ]
then
echo "failures=true" >> $GITHUB_OUTPUT
fi
done
- name: Run JavaScript tests with coverage
id: js_tests
continue-on-error: true
run: |
cd CSF.Screenplay.JsonToHtmlReport.Template/src
npm test
if [ $? -ne 0 ]
then
echo "failures=true" >> $GITHUB_OUTPUT
fi
cd ../..

# Post-test tasks (artifacts, overall status)

- name: Stop SonarScanner
run:
dotnet sonarscanner end /d:sonar.token=${{ env.SonarCloudSecretKey }}
- name: Gracefully stop Xvfb
run: killall Xvfb
continue-on-error: true
- name: Upload test results artifacts
uses: actions/upload-artifact@v4
with:
name: NUnit test results
path: Tests/*.Tests/**/TestResults.xml
- name: Upload Screenplay JSON report artifact
uses: actions/upload-artifact@v4
with:
name: Screenplay JSON reports
path: Tests/**/ScreenplayReport_*.json
- name: Convert Screenplay reports to HTML
continue-on-error: true
run: |
for report in $(find Tests/ -type f -name "ScreenplayReport_*.json")
do
reportDir=$(dirname "$report")
outputFile="$reportDir/ScreenplayReport.html"
dotnet run --no-build --framework $Tfm --project CSF.Screenplay.JsonToHtmlReport --ReportPath "$report" --OutputPath "$outputFile"
done
- name: Upload Screenplay HTML report artifact
uses: actions/upload-artifact@v4
with:
name: Screenplay HTML reports
path: Tests/**/ScreenplayReport.html
- name: Fail the build if any test failures
if: ${{ steps.dotnet_tests.outputs.failures == 'true' || steps.js_tests.outputs.failures == 'true' }}
run: |
echo "Failing the build due to test failures"
exit 1

# Build the apps in release mode and publish artifacts

- name: Clean the solution ahead of building in release config
run: dotnet clean
- name: Build, in release configuration
run: dotnet pack -p:VersionSuffix=$VersionSuffix -o packages
- name: Upload build result artifacts
uses: actions/upload-artifact@v4
with:
name: Build results (NuGet)
path: packages/*.nupkg
- name: Build docs website
run: dotnet build -c Docs
- name: Upload docs website artifact
uses: actions/upload-artifact@v4
with:
name: Docs website
path: docs/**/*

# publishDocs:
# TODO: Take the docco site artifact and publish it to master

# runBrowserTests:
# TODO: Use build-results artifacts and run tests on matrix of browsers
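The test steps in this workflow lean on two bash idioms worth noting: `${proj//// }` splits a path like `Tests/Foo.Tests` into a word array by replacing every `/` with a space, and a failure is signalled to later steps by appending `failures=true` to the `$GITHUB_OUTPUT` file, which the final `Fail the build if any test failures` step reads back. A standalone sketch of both, outside of GitHub Actions (the sample project path is made up for illustration, and `$GITHUB_OUTPUT` is stubbed with a temp file because Actions normally provides it):

```shell
#!/usr/bin/env bash
# Simulate the project-name extraction used in the ".NET tests" step.
proj="Tests/Example.Tests"      # hypothetical project path under Tests/
projNameArray=(${proj//// })    # replace every "/" with a space, then word-split
projName=${projNameArray[1]}    # second element: the project directory name
echo "project name: $projName"  # -> project name: Example.Tests

# Simulate the failure-flag pattern; in a real run, GitHub Actions
# provides the $GITHUB_OUTPUT file path itself.
GITHUB_OUTPUT=$(mktemp)
false   # stand-in for a failing test command
if [ $? -ne 0 ]
then
  echo "failures=true" >> "$GITHUB_OUTPUT"
fi
cat "$GITHUB_OUTPUT"            # -> failures=true
```

Because the test steps set `continue-on-error: true`, the run continues so that report and result artifacts are still uploaded; the flag is what ultimately fails the build.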
38 changes: 38 additions & 0 deletions .github/workflows/supportVerification.yml
@@ -0,0 +1,38 @@
name: Third party support verification

on:
schedule:
- cron: "25 19 * * 0"

jobs:

# Summary:
#
# Runs a subset of the unit tests for Reqnroll, across a matrix
# of versions of this 3rd party library.
# This verifies that I support the version range which I claim to support.
#
# If this begins failing then it likely means that Reqnroll have released
# a version which includes breaking changes, and so the Screenplay support
# policy/version range must be updated accordingly.

test_reqnroll_versions:
name: Verify Reqnroll support across versions
runs-on: ubuntu-slim

strategy:
matrix:
reqnroll_version: ['2.0.0', '2.*', '3.0.0', '3.*', '*']

env:
DotnetVersion: 8.0.x

steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install .NET
uses: actions/setup-dotnet@v4
with:
dotnet-version: ${{ env.DotnetVersion }}
- name: Test Reqnroll (only)
run: dotnet test Tests/CSF.Screenplay.Reqnroll.Tests -p:ReqnrollVersion=${{ matrix.reqnroll_version }}
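For the `-p:ReqnrollVersion=...` override above to take effect, the test project presumably declares an MSBuild property with a default and feeds it into the package reference. A minimal sketch of what such a csproj fragment might look like (the property wiring and default version are assumptions for illustration, not taken from the repository):

```xml
<!-- Hypothetical fragment of the Reqnroll test project file -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- A command-line -p:ReqnrollVersion=... overrides this default. -->
    <ReqnrollVersion Condition="'$(ReqnrollVersion)' == ''">2.*</ReqnrollVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Reqnroll" Version="$(ReqnrollVersion)" />
  </ItemGroup>
</Project>
```

Floating versions such as `2.*` and `*` in the matrix resolve to the latest matching package at restore time, which is what lets the scheduled run catch a newly released breaking version.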
5 changes: 4 additions & 1 deletion .vscode/settings.json
@@ -1,4 +1,7 @@
{
"cucumberautocomplete.steps": [ "Tests/CSF.Screenplay.SpecFlow.Tests/StepDefinitions/**/*.cs" ],
"cucumberautocomplete.steps": [
"Tests/CSF.Screenplay.SpecFlow.Tests/StepDefinitions/**/*.cs",
"Tests/CSF.Screenplay.Reqnroll.Tests/StepDefinitions/**/*.cs"
],
"cucumberautocomplete.strictGherkinCompletion": true,
}
@@ -1,15 +1,15 @@
# Where Screenplay is suitable for testing

The Screenplay pattern is a recommended tool for writing automated tests for application software.
Unlike [NUnit] or [SpecFlow] (or many others) Screenplay is not a complete testing framework.
Unlike [NUnit] or [Reqnroll] (or many others) Screenplay is not a complete testing framework.
Rather, Screenplay integrates with your chosen testing framework to assist in the writing of test logic.

_Screenplay is not a silver bullet_; some kinds of tests could benefit from Screenplay and others will not.
Some testing scenarios are listed below, along with a brief consideration as to whether Screenplay is likely to be relevant.
Terminology can differ between developers, so each type of test begins with a short definition.

[NUnit]: https://nunit.org/
[SpecFlow]: https://specflow.org/
[Reqnroll]: https://reqnroll.net/

## Ideal: System tests

@@ -65,6 +65,6 @@ This is particularly true if the test/sample scenarios would be recognisable to
## Recommended: Use BDD-style tests

Screenplay is a great tool when used alongside [Behaviour Driven Development] (BDD).
Whilst the use of a BDD framework such as [SpecFlow] is not at all mandatory, those familiar with BDD will quickly see the synergies with Screenplay.
Whilst the use of a BDD framework such as [Reqnroll] is not at all mandatory, those familiar with BDD will quickly see the synergies with Screenplay.

[Behaviour Driven Development]: https://en.wikipedia.org/wiki/Behavior-driven_development
2 changes: 1 addition & 1 deletion CSF.Screenplay.Docs/docs/bestPractice/WritingTests.md
@@ -11,7 +11,7 @@ Rather than using many performables at the top-level of your tests, create [high
When using a method-driven testing framework, such as NUnit, five performables in a single test method is a reasonable number.
More than approximately ten performables is too many.

When using a binding-driven testing framework like SpecFlow, each binding should ideally correspond to at-most one performable.
When using a binding-driven testing framework like Reqnroll, each binding should ideally correspond to at-most one performable.

[performables]: ../../glossary/Performable.md
[high-level tasks]: ../../glossary/Task.md
@@ -17,13 +17,13 @@ Which of these depends upon the nature and paradigm of the test framework.
For frameworks which are based on **test methods** such as [NUnit], services are typically injected via _method parameter injection_ into the test methods.
[If Screenplay were to be extended] to work with frameworks such as xUnit or MSTest then this is likely to be the technique used.

For frameworks which are based on **binding classes** such as [SpecFlow], services are constructor-injected into binding classes.
For frameworks which are based on **binding classes** such as [Reqnroll], services are constructor-injected into binding classes.

Use dependencies injected in this way to get access to [commonly-used Screenplay services] and anything else required at the root level of your test logic.

[NUnit]: https://nunit.org/
[If Screenplay were to be extended]: ../extendingScreenplay/TestIntegrations.md
[SpecFlow]: https://specflow.org/
[Reqnroll]: https://reqnroll.net/
[commonly-used Screenplay services]: InjectableServices.md

## Into standalone performance logic