This guide shows you how to build a custom test reporter using the
ReportBuilder class.
When to use this: Your test framework isn't supported by our existing reporters (Mocha, Playwright, Web Test Runner).
```js
const { ReportBuilder } = require('../helpers/report-builder.cjs');

// 1. Create a logger
const logger = {
  info: (msg) => console.log(msg),
  warning: (msg) => console.warn(msg),
  error: (msg) => console.error(msg),
  location: (msg, loc) => console.log(`${msg}: ${loc}`)
};

// 2. Initialize the builder (uses defaults: './d2l-test-report.json')
const report = new ReportBuilder('your-framework-name', logger);

// 3. Build your report (see example below)
```

Options (all optional):

- `reportPath` - Output path (default: `'./d2l-test-report.json'`)
- `reportConfigurationPath` - Path to config file for taxonomy mapping (type/tool/experience) and ignore patterns (default: `'./d2l-test-reporting.config.json'`). See schema for format.
- `verbose` - Show validation warnings (default: `false`)
You'll build a report with two parts:

- **Summary** - Overall test run info (total duration, pass/fail counts). Access with `report.getSummary()`.
- **Details** - Individual test results (one per test). Access with `report.getDetail(testId)`.

All methods return `this` for chaining:

```js
report.getSummary().setStarted(time).setPassed();
```

Here's a minimal custom reporter showing the typical test lifecycle:
> **Note:** The hook names (`onRunStart`, `onTestEnd`, etc.) vary by framework. This example uses generic names; consult your test framework's documentation for actual hook names and parameters.
```js
const { ReportBuilder } = require('../helpers/report-builder.cjs');

class CustomReporter {
  constructor(options = {}) {
    const logger = { /* see Quick Start */ };
    try {
      this._report = new ReportBuilder('custom-framework', logger, options);
    } catch (error) {
      logger.error('Failed to initialize D2L test report builder');
      logger.error(error.message);
      return;
    }
  }

  onRunStart(stats) {
    if (!this._report) return; // Constructor failed; nothing to record
    this._report.getSummary()
      .addContext() // Adds GitHub/Git info automatically
      .setStarted(stats.startTime);
  }

  onTestStart(test) {
    if (!this._report) return;
    if (this._report.ignoreFilePath(test.file)) return;
    const testId = `${test.file}[${test.name}]`;
    this._report.getDetail(testId)
      .setName(test.name)
      .setLocationFile(test.file)
      .setStarted(new Date().toISOString())
      .setTimeout(test.timeout);
  }

  onTestRetry(test) {
    if (!this._report) return;
    if (this._report.ignoreFilePath(test.file)) return;
    const testId = `${test.file}[${test.name}]`;
    this._report.getDetail(testId)
      .incrementRetries()
      .addDuration(test.duration);
  }

  onTestEnd(test, result) {
    if (!this._report) return;
    if (this._report.ignoreFilePath(test.file)) return;
    const testId = `${test.file}[${test.name}]`;
    const detail = this._report.getDetail(testId);
    detail.addDuration(result.duration);
    if (result.status === 'passed') detail.setPassed();
    else if (result.status === 'skipped') detail.setSkipped();
    else detail.setFailed();
  }

  onRunEnd(stats) {
    if (!this._report) return;
    const summary = this._report.getSummary()
      .setDurationTotal(stats.duration);
    if (stats.failures === 0) {
      summary.setPassed();
    } else {
      summary.setFailed();
    }
    this._report.finalize().save(); // Must call finalize() before save()
  }
}
```

Read this first - these patterns answer the most common questions:
**Creating Test IDs**

Each test needs a unique ID. Combine file path + test name:

```js
const testId = `${test.file}[${test.fullName}]`;
// Example: "test/unit/component.test.js[MyComponent > should render]"
```

**Checking Ignored Files**

Always check before processing a test:
```js
if (this._report.ignoreFilePath(test.file)) {
  return; // Skip this test
}
```

**Handling Retries**

On each retry attempt, increment the counter and add duration:
```js
detail.incrementRetries().addDuration(attemptDuration);
```

Only set pass/fail status after all retries complete.
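To make that rule concrete, here is a small self-contained sketch of folding a test's attempts into the values you'd feed to `incrementRetries()`, `addDuration()`, and the final status setter. The `summarizeAttempts` helper is hypothetical, not part of the ReportBuilder API:

```js
// Hypothetical helper: fold a list of attempt results into retry count,
// total duration, and the status to report once all retries are done.
function summarizeAttempts(attempts) {
  return {
    retries: Math.max(0, attempts.length - 1),         // the first run is not a retry
    duration: attempts.reduce((sum, a) => sum + a.duration, 0),
    finalStatus: attempts[attempts.length - 1].status  // status after all retries
  };
}

const result = summarizeAttempts([
  { status: 'failed', duration: 1200 },
  { status: 'failed', duration: 1100 },
  { status: 'passed', duration: 900 }
]);
console.log(result); // { retries: 2, duration: 3200, finalStatus: 'passed' }
```

A test that fails twice and then passes reports two retries and the summed duration, with `passed` as its final status; the flaky classification itself happens later, in `finalize()`.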
**Setting Browser Names**

Validate against the supported list:

```js
const supportedBrowsers = ReportBuilder.SupportedBrowsers;
// ['chromium', 'chrome', 'firefox', 'webkit', 'safari', 'edge']
if (browser && supportedBrowsers.includes(browser.toLowerCase())) {
  detail.setBrowser(browser.toLowerCase());
}
```

**Summary Methods**

```js
const summary = report.getSummary();

summary.addContext();             // Add GitHub/Git context (if available)
summary.setStarted(isoTimestamp); // Set start time
summary.setDurationTotal(ms);     // Set total duration in milliseconds
summary.setPassed();              // Set overall status (or setFailed())
```

**Detail Methods**

```js
const detail = report.getDetail(testId);

detail.setName(name);            // Test name
detail.setLocationFile(path);    // File path (auto-applies taxonomy if configured)
detail.setLocationLine(n);       // Line number (optional)
detail.setStarted(isoTimestamp); // Start time
detail.setTimeout(ms);           // Timeout in milliseconds
detail.setBrowser(browser);      // Browser name (must be in SupportedBrowsers)
detail.addDuration(ms);          // Add to duration (use for retries)
detail.incrementRetries();       // Add 1 to retry count
detail.setPassed();              // Set test status (or setFailed() / setSkipped())
```

**Report Methods**

```js
report.ignoreFilePath(path); // Returns true if file should be skipped
report.finalize();           // Aggregate counts from details (required before save)
report.save();               // Write JSON report to disk
```

Call `finalize()` before `save()` to calculate summary counts:

```js
report.finalize().save();
```

What it does:
- Loops through all test details
- Counts by status: passed, failed, skipped
- Detects flaky tests: `status === 'passed'` AND `retries > 0` → counted as flaky (not passed)
- Updates summary count fields

Without calling `finalize()`, your summary counts will be incorrect.
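The counting pass described above can be sketched in a few lines. This is a simplified stand-in to show the flaky rule, not the library's actual implementation:

```js
// Simplified sketch of finalize()'s counting pass: tally details by status,
// reclassifying a passed test with retries as flaky rather than passed.
function aggregateCounts(details) {
  const counts = { passed: 0, failed: 0, skipped: 0, flaky: 0 };
  for (const d of details) {
    if (d.status === 'passed' && d.retries > 0) {
      counts.flaky++; // passed only after retrying: flaky, not passed
    } else {
      counts[d.status]++;
    }
  }
  return counts;
}

const counts = aggregateCounts([
  { status: 'passed', retries: 0 },
  { status: 'passed', retries: 2 },  // flaky
  { status: 'failed', retries: 1 },
  { status: 'skipped', retries: 0 }
]);
console.log(counts); // { passed: 1, failed: 1, skipped: 1, flaky: 1 }
```

Note that a failed test with retries stays failed; only a pass that needed retries moves to the flaky bucket.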
Before you ship your reporter:

- ✅ Call `finalize()` before `save()`
- ✅ Check `ignoreFilePath()` before processing each test
- ✅ Use consistent test ID format throughout
- ✅ Use ISO 8601 timestamps: `new Date().toISOString()`
- ✅ Wrap `new ReportBuilder()` in try-catch
- ✅ Test with retries enabled to verify flaky detection works
**See Also**

- Mocha reporter for a complete reference
- Playwright reporter for another example
- Report format for output schema details