Add Live Server Integration Testing Beyond Existing Integration Tests
Problem
The OGC-Client-CSAPI library currently has limited live server integration testing despite having excellent unit test coverage (832+ tests, 94% coverage - Issue #19) and comprehensive endpoint integration tests (10 tests, 100% coverage - Issue #18). The existing integration tests validate internal library behavior (URL building, parsing, validation) but do NOT test actual interoperability with real OGC API - Connected Systems servers.
Current Testing Landscape:
What IS Tested ✅:
- Unit tests: 832+ tests verify individual components in isolation
- Endpoint integration: 10 tests verify URL building and collection parsing (Issue #18)
- Internal integration: CSAPINavigator → TypedCSAPINavigator → Parsers → Validators flow
- Mock data: All tests use generated/mocked data, not real server responses
- OGC compliance: Theoretical compliance verified through code review (98% - Issue #20)
What is NOT Tested ❌:
- Live server connectivity: No tests against real OGC CSAPI servers
- Real-world responses: No validation of actual server response formats
- Interoperability: No testing with multiple server implementations (OpenSensorHub, istSOS, custom servers)
- Server-specific quirks: No handling of vendor extensions, edge cases, non-standard behaviors
- Network conditions: No testing of timeouts, retries, error responses, rate limiting
- Authentication patterns: No testing of OAuth2, API keys, bearer tokens with real servers
- Content negotiation: No testing of Accept headers, format preference with real servers
- End-to-end workflows: No testing of complete CRUD operations on live data
- Performance with real servers: No latency measurements, pagination behavior, large response handling
- Standards conformance: No verification that library works with OGC-conformant servers in practice
Current Integration Test Limitations (from Issue #18):
The existing endpoint.integration.spec.ts (10 tests, 100% coverage) only tests:
- Loading collection metadata from JSON
- Parsing collection information
- Detecting available resources
- Creating CSAPINavigator instances
- Building URLs from collections
Missing from existing tests:
- No HTTP requests to actual servers
- No testing of response parsing from real data
- No error handling for server-side failures
- No testing of API authentication
- No testing of server conformance claims
Risk Assessment:
Without live server integration testing:
- Undiscovered incompatibilities: Library may fail with real servers despite 98% theoretical compliance
- Unexpected response formats: Real servers may return valid but unexpected data structures
- Authentication failures: OAuth2/API key flows may not work as expected
- Performance issues: Real server latency, pagination, rate limiting may expose problems
- Vendor lock-in: Library may only work with mock data, not diverse server implementations
- Standards interpretation: OGC specs have ambiguities; real servers may implement differently
- Error handling gaps: Real server errors (404, 500, timeouts) may not be handled correctly
- Production failures: Issues discovered in production deployments rather than testing
Real-World Impact:
From Issue #20 validation findings:
"Test Against Reference Implementation - Test against OGC reference implementation (if available), Test against OpenSensorHub, Document interoperability issues, Verify standards compliance in practice"
Key quote from Issue #20:
"The implementation is production-ready and exceeds the documented compliance claims."
However, this verdict is based on code review only, not live server testing. The library achieves 98% theoretical compliance but has zero empirical validation against real OGC CSAPI servers.
Context
This issue was identified during the comprehensive validation conducted January 27-28, 2026.
Related Validation Issues:
- Issue #20: OGC Standards Compliance - Noted need for "Test Against Reference Implementation" and "Verify standards compliance in practice"
Work Item ID: 46 from Remaining Work Items
Repository: https://github.com/OS4CSAPI/ogc-client-CSAPI
Validated Commit: a71706b9592cad7a5ad06e6cf8ddc41fa5387732
Note: This is the FINAL work item (46 of 46) - completes the comprehensive validation project!
Detailed Findings
From Issue #20 (OGC Standards Compliance)
The OGC compliance validation achieved 98% compliance but explicitly identified the need for live server testing:
Validation Tasks section (quoted from Issue #20):
Task #6: Test Against Reference Implementation
- Test against OGC reference implementation (if available)
- Test against OpenSensorHub
- Document interoperability issues
- Verify standards compliance in practice
Status: ❌ NOT COMPLETED - No live server testing was conducted during validation
Why This Matters:
The validation report states:
"The implementation is production-ready and exceeds the documented compliance claims."
However, this assessment is based on:
- ✅ Static code analysis: Direct examination of navigator.ts (2,091 lines)
- ✅ Unit test verification: 471 CSAPI-specific tests (97.2% coverage)
- ✅ Spec cross-reference: All methods reference OGC 23-001r2 and 23-002r1
- ❌ Live server testing: Zero tests against real servers
Theoretical vs. Practical Compliance:
Theoretical Compliance (Verified ✅):
- All Part 1 core resource types implemented (Systems, Deployments, Procedures, SamplingFeatures, Properties)
- All 3 Part 2 resource types implemented (Datastreams, ControlStreams, SystemEvents)
- All CRUD operations implemented (GET, POST, PUT, PATCH, DELETE)
- All query parameters supported (bbox, datetime, q, id, geom, foi, parent, recursive, etc.)
- All endpoint patterns match OGC specs
- Proper HTTP methods and URL encoding
Practical Compliance (Not Verified ❌):
- Does the library successfully fetch systems from OpenSensorHub?
- Can it parse real GeoJSON responses from istSOS?
- Does authentication work with OAuth2 servers?
- Can it handle server-specific extensions?
- Does pagination work with real server limits?
- Can it handle malformed but "valid" server responses?
- Does it work with all OGC-conformant servers?
Existing Integration Tests (Issue #18)
Current Coverage (100% of internal integration):
```ts
// endpoint.integration.spec.ts - 10 tests
describe('CSAPINavigator Integration', () => {
  test('Load collection metadata from JSON');
  test('Parse collection information');
  test('Detect available resources (systems, datastreams, etc.)');
  test('Create CSAPINavigator from collection');
  test('Build system URLs');
  test('Build deployment URLs');
  test('Build datastream URLs');
  test('Build observation URLs');
  test('Query parameter serialization');
  test('Format negotiation (Accept headers)');
});
```
What these tests do: Verify URL building and collection parsing with mock data
What these tests DON'T do:
- ❌ Make HTTP requests to real servers
- ❌ Parse real server responses
- ❌ Handle network errors (timeouts, connection refused, DNS failures)
- ❌ Handle server errors (404, 500, 503, rate limiting)
- ❌ Test authentication (API keys, OAuth2, bearer tokens)
- ❌ Test content negotiation (Accept header preference)
- ❌ Test CORS (cross-origin requests from browsers)
- ❌ Test pagination (next/prev links, limit enforcement)
- ❌ Test server conformance (does server match OGC spec?)
Interoperability Concerns from Issue #20
Known OGC CSAPI Server Implementations:
1. OpenSensorHub - Open-source OGC CSAPI server
   - Most mature CSAPI implementation
   - Java-based with extensive features
   - Supports CSAPI Part 1 and Part 2
   - Potential issues: Vendor extensions, non-standard query parameters, custom authentication
2. istSOS - Open-source SOS server with CSAPI support
   - Python-based sensor observation service
   - Converting from SOS 2.0 to CSAPI
   - Potential issues: Incomplete CSAPI implementation, SOS legacy behaviors
3. Custom implementations - Organizations building their own servers
   - Various languages (Node.js, Python, Go, .NET)
   - Different interpretations of OGC specs
   - Potential issues: Ambiguities in spec, missing optional features, non-standard extensions
Interoperability Risks:
Without testing against multiple servers:
- Assumption-driven development: Code assumes spec interpretation matches all servers
- Single-server bias: If only tested against one server, may not work with others
- Breaking changes: Server updates may break client without notice
- Vendor lock-in: Library may inadvertently become tied to specific server quirks
OGC Conformance Class Verification
From Issue #20, the library checks conformance classes:
```ts
// endpoint.ts, Lines 296-302
get hasConnectedSystems(): Promise<boolean> {
  return this.conformance.then(checkHasConnectedSystems);
}

// info.ts, Lines 105-117
export function checkHasConnectedSystems([conformance]: [ConformanceClass[]]): boolean {
  return (
    conformance.indexOf('http://www.opengis.net/spec/ogcapi-connected-systems-1/1.0/conf/core') > -1 ||
    conformance.indexOf('http://www.opengis.net/spec/ogcapi-cs-part1/1.0/conf/core') > -1
  );
}
```
What this checks: Server declares CSAPI conformance classes
What it DOESN'T verify:
- Does server actually implement declared conformance classes?
- Are server responses valid per the spec?
- Does client correctly parse server responses?
- Can client complete end-to-end workflows?
Need: Live testing to verify conformance claims match reality
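As an illustration, a live check could pair each declared conformance class with a probe request and flag mismatches. The sketch below is an assumption about how such a test might look; the `conformanceProbes` map and `verifyConformanceClaims` helper are hypothetical names, not part of the library, and only the two core conformance URIs quoted above are used:
```ts
// Sketch: verify declared conformance classes against live behavior.
// Uses plain fetch; probe paths and helper names are our assumptions.
const conformanceProbes: Record<string, string> = {
  'http://www.opengis.net/spec/ogcapi-connected-systems-1/1.0/conf/core': '/systems',
  'http://www.opengis.net/spec/ogcapi-cs-part1/1.0/conf/core': '/systems',
};

async function verifyConformanceClaims(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/conformance`, {
    headers: { Accept: 'application/json' },
  });
  const { conformsTo } = (await res.json()) as { conformsTo: string[] };
  const failures: string[] = [];
  for (const cls of conformsTo) {
    const probePath = conformanceProbes[cls];
    if (!probePath) continue; // no probe defined for this class
    const probe = await fetch(`${baseUrl}${probePath}`);
    if (!probe.ok) {
      failures.push(`${cls} declared but ${probePath} returned ${probe.status}`);
    }
  }
  return failures;
}
```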
Real-World Use Cases Requiring Live Testing
1. IoT Dashboard Scenario:
```ts
// User wants to display all temperature sensors on a map
const navigator = new TypedCSAPINavigator(
  'https://opensensorhub.example.com/csapi'
);
// Does this work with real OpenSensorHub?
const systems = await navigator.getSystems({
  observedProperty: 'http://www.opengis.net/def/property/OGC/0/Temperature',
  bbox: [-122.5, 37.7, -122.3, 37.9],
});
// Can we parse real GeoJSON responses?
systems.data.forEach(system => {
  displayMarker(system.geometry.coordinates, system.properties.name);
});
```
Unknowns without live testing:
- Does OpenSensorHub support observedProperty filtering?
- Is the bbox parameter interpreted correctly?
- Are coordinate systems handled properly (WGS84)?
- Can we parse OpenSensorHub's GeoJSON format?
2. Data Ingestion Scenario:
```ts
// User wants to fetch latest observations for analysis
const datastream = await navigator.getDatastream('temp-sensor-1');
const observations = await navigator.getDatastreamObservations('temp-sensor-1', {
  phenomenonTime: 'latest',
  limit: 1000,
});
// Process observations
const values = observations.data.map(obs => obs.properties.result.value);
```
Unknowns without live testing:
- Does the server support the phenomenonTime: 'latest' special value?
- Is pagination enforced (what if the server ignores limit)?
- Can we parse real observation result formats (SWE Common)?
- Are timestamps in correct ISO 8601 format?
3. Command & Control Scenario:
```ts
// User wants to issue a command to a sensor
const command = await navigator.issueCommand('sensor-ctrl-1', {
  type: 'Feature',
  geometry: null,
  properties: {
    commandType: 'SET_SAMPLING_RATE',
    parameters: { rate: 10 },
  },
});
// Check command status
const status = await navigator.getCommandStatus(command.id);
```
Unknowns without live testing:
- Does server accept command format?
- Is command execution tracked correctly?
- Can we parse command result responses?
- Does authentication work for control operations?
Performance and Network Considerations
Real-world network conditions not tested:
- Latency: How does library perform with 200ms+ server latency?
- Timeouts: Does library handle slow servers gracefully?
- Large responses: Can library parse 100MB responses (100k observations)?
- Pagination: Does library follow next links correctly?
- Rate limiting: Does library respect Retry-After headers?
- Connection failures: Does library retry on network errors?
- CORS: Does library work from browser with CORS restrictions?
(A timeout- and rate-limit-aware fetch sketch follows the assumptions list below.)
Current assumptions (not verified):
- Assumes fast, reliable network connections
- Assumes servers respond within default timeout
- Assumes all data fits in memory
- Assumes no rate limiting
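Neither timeouts nor Retry-After handling exist in the library today; a test-suite helper could layer them on top of plain fetch. A minimal sketch, assuming standard Web APIs only (the `fetchWithTimeout` name and retry policy are ours, not the library's):
```ts
// Sketch: fetch with a timeout and basic Retry-After support for tests.
// Uses only standard Web APIs (AbortController, fetch); all names are ours.
async function fetchWithTimeout(
  url: string,
  timeoutMs = 10_000,
  maxRetries = 2,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, { signal: controller.signal });
      if (res.status === 429 && attempt < maxRetries) {
        // Honor Retry-After (seconds) if the server rate-limits us.
        const wait = Number(res.headers.get('Retry-After') ?? '1') * 1000;
        await new Promise(resolve => setTimeout(resolve, wait));
        continue;
      }
      return res;
    } finally {
      clearTimeout(timer);
    }
  }
}
```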
Authentication and Authorization
Supported authentication patterns (theoretical):
From Issue #20 validation, the library accepts arbitrary headers:
```ts
// navigator.ts - _fetch() accepts headers
private async _fetch(url: string, options?: RequestInit): Promise<Response> {
  return fetch(url, {
    ...options,
    headers: {
      'Accept': 'application/geo+json, application/sml+json, application/json',
      ...options?.headers,
    },
  });
}
```
What this enables:
- Custom headers (API keys, bearer tokens, etc.)
- Users can pass an Authorization header
What is NOT tested:
- ❌ OAuth2 flows (authorization code, client credentials)
- ❌ API key authentication with real servers
- ❌ Bearer token refresh on expiration
- ❌ Certificate-based authentication (mTLS)
- ❌ SAML/SSO integration
- ❌ Server-specific auth requirements
Risk: Authentication may not work with real servers despite header support
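For reference, the header-merging behavior above means a caller can attach credentials per request. The snippet below sketches what a bearer-token call might look like with plain fetch; the URL and environment variable are placeholders, and no library auth API is assumed:
```ts
// Sketch: attaching a bearer token the way _fetch()'s header merge allows.
// The endpoint URL and CSAPI_TOKEN env var are placeholders only.
async function fetchSystemsWithBearer(): Promise<Response> {
  return fetch('https://server.example.com/csapi/systems', {
    headers: {
      Accept: 'application/geo+json',
      Authorization: `Bearer ${process.env.CSAPI_TOKEN}`, // assumed env var
    },
  });
}
```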
Proposed Solution
Implement a comprehensive live server integration testing suite that validates interoperability with real OGC API - Connected Systems servers, covering authentication, CRUD operations, error handling, and performance.
1. Test Environment Setup
Target Servers for Testing:
```ts
// tests/live-integration/config.ts
export const TEST_SERVERS = {
  opensensorhub: {
    baseUrl: process.env.OSH_BASE_URL || 'https://opensensorhub.example.com/csapi',
    auth: {
      type: 'bearer',
      token: process.env.OSH_TOKEN,
    },
    supports: ['part1', 'part2'],
  },
  istsos: {
    baseUrl: process.env.ISTSOS_BASE_URL || 'https://istsos.example.com/csapi',
    auth: {
      type: 'apikey',
      key: process.env.ISTSOS_API_KEY,
    },
    supports: ['part1'],
  },
  reference: {
    baseUrl: process.env.OGC_REFERENCE_URL || 'https://ogc-reference.example.com/csapi',
    auth: { type: 'none' },
    supports: ['part1', 'part2'],
  },
};
```
Environment Variables:
```bash
# .env.test
OSH_BASE_URL=https://opensensorhub.example.com/csapi
OSH_TOKEN=your_bearer_token_here
ISTSOS_BASE_URL=https://istsos.example.com/csapi
ISTSOS_API_KEY=your_api_key_here
OGC_REFERENCE_URL=https://ogc-reference.example.com/csapi
```
Docker Compose for Local Testing:
```yaml
# docker-compose.test.yml
version: '3.8'
services:
  opensensorhub:
    image: opensensorhub/osh-core:latest
    ports:
      - "8181:8181"
    environment:
      - OSH_HOME=/data
    volumes:
      - ./test-data/osh:/data
  istsos:
    image: istsos/istsos:latest
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=postgres
      - DB_NAME=istsos
    depends_on:
      - postgres
  postgres:
    image: postgres:14
    environment:
      - POSTGRES_DB=istsos
      - POSTGRES_USER=istsos
      - POSTGRES_PASSWORD=istsos
```
2. Server Discovery and Conformance Testing
Test that library correctly discovers server capabilities:
```ts
// tests/live-integration/discovery.spec.ts
describe('Live Server Discovery', () => {
  Object.entries(TEST_SERVERS).forEach(([serverName, config]) => {
    describe(`${serverName} server`, () => {
      let endpoint: OgcApiEndpoint;

      beforeAll(async () => {
        endpoint = await OgcApiEndpoint.fromUrl(config.baseUrl);
      });

      test('loads landing page', async () => {
        const landingPage = await endpoint.getLandingPage();
        expect(landingPage).toHaveProperty('title');
        expect(landingPage).toHaveProperty('links');
        expect(landingPage.links).toBeInstanceOf(Array);
        console.log(`${serverName} title:`, landingPage.title);
      });

      test('loads conformance classes', async () => {
        const conformance = await endpoint.getConformance();
        expect(conformance).toHaveProperty('conformsTo');
        expect(conformance.conformsTo).toBeInstanceOf(Array);
        // Check for CSAPI core conformance
        const hasPart1 = conformance.conformsTo.some(c =>
          c.includes('ogcapi-connected-systems-1') || c.includes('ogcapi-cs-part1')
        );
        expect(hasPart1).toBe(true);
        console.log(`${serverName} conformance classes:`, conformance.conformsTo);
      });

      test('detects CSAPI support', async () => {
        const hasCSAPI = await endpoint.hasConnectedSystems;
        expect(hasCSAPI).toBe(true);
      });

      test('loads CSAPI collections', async () => {
        const collections = await endpoint.getCollections();
        expect(collections).toBeInstanceOf(Array);
        expect(collections.length).toBeGreaterThan(0);
        // Should have at least 'systems' collection
        const systemsCollection = collections.find(c =>
          c.id === 'systems' || c.id.includes('systems')
        );
        expect(systemsCollection).toBeDefined();
        console.log(`${serverName} collections:`, collections.map(c => c.id));
      });

      test('detects available resource types', () => {
        const availableResources = endpoint.csapi.availableResources;
        expect(availableResources).toContain('systems');
        if (config.supports.includes('part2')) {
          expect(availableResources).toContain('datastreams');
        }
        console.log(`${serverName} resources:`, Array.from(availableResources));
      });
    });
  });
});
```
3. CRUD Operations Testing
Test complete create-read-update-delete workflows:
```ts
// tests/live-integration/crud.spec.ts
describe('Live Server CRUD Operations', () => {
  Object.entries(TEST_SERVERS).forEach(([serverName, config]) => {
    describe(`${serverName} server`, () => {
      let navigator: TypedCSAPINavigator;
      let createdSystemId: string;

      beforeAll(async () => {
        const endpoint = await OgcApiEndpoint.fromUrl(config.baseUrl);
        navigator = endpoint.csapi.typed();
        // Setup authentication
        if (config.auth.type === 'bearer') {
          navigator.setAuthHeaders({
            'Authorization': `Bearer ${config.auth.token}`,
          });
        } else if (config.auth.type === 'apikey') {
          navigator.setAuthHeaders({
            'X-API-Key': config.auth.key,
          });
        }
      });

      test('CREATE: Post new system', async () => {
        const newSystem = {
          type: 'Feature',
          geometry: {
            type: 'Point',
            coordinates: [-122.419, 37.775],
          },
          properties: {
            name: `Test System ${Date.now()}`,
            description: 'Created by live integration test',
            systemKind: 'sensor',
            validTime: ['2026-01-28T00:00:00Z', '..'],
          },
        };
        const createUrl = navigator.navigator.createSystemUrl();
        const response = await navigator.fetch(createUrl, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(newSystem),
        });
        expect(response.status).toBe(201); // Created
        expect(response.headers.get('Location')).toBeTruthy();
        const created = await response.json();
        expect(created).toHaveProperty('id');
        createdSystemId = created.id;
        console.log(`${serverName} created system:`, createdSystemId);
      });

      test('READ: Get created system', async () => {
        const result = await navigator.getSystem(createdSystemId);
        expect(result.data).toBeDefined();
        expect(result.data.id).toBe(createdSystemId);
        expect(result.data.properties.name).toContain('Test System');
        console.log(`${serverName} read system:`, result.data.properties.name);
      });

      test('UPDATE: Patch system properties', async () => {
        const updateUrl = navigator.navigator.patchSystemUrl(createdSystemId);
        const patch = {
          properties: {
            description: 'Updated by live integration test',
          },
        };
        const response = await navigator.fetch(updateUrl, {
          method: 'PATCH',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(patch),
        });
        expect(response.status).toBe(200);
        // Verify update
        const updated = await navigator.getSystem(createdSystemId);
        expect(updated.data.properties.description).toBe('Updated by live integration test');
      });

      test('DELETE: Remove created system', async () => {
        const deleteUrl = navigator.navigator.deleteSystemUrl(createdSystemId);
        const response = await navigator.fetch(deleteUrl, {
          method: 'DELETE',
        });
        expect(response.status).toBe(204); // No Content
        // Verify deletion
        await expect(navigator.getSystem(createdSystemId)).rejects.toThrow();
      });
    });
  });
});
```
4. Query Parameter Testing
Test all query parameters with real servers:
```ts
// tests/live-integration/queries.spec.ts
describe('Live Server Query Parameters', () => {
  Object.entries(TEST_SERVERS).forEach(([serverName, config]) => {
    describe(`${serverName} server`, () => {
      let navigator: TypedCSAPINavigator;

      beforeAll(async () => {
        const endpoint = await OgcApiEndpoint.fromUrl(config.baseUrl);
        navigator = endpoint.csapi.typed();
      });

      test('Pagination: limit parameter', async () => {
        const result = await navigator.getSystems({ limit: 5 });
        expect(result.data.length).toBeLessThanOrEqual(5);
        console.log(`${serverName} returned ${result.data.length} systems (limit: 5)`);
      });

      test('Spatial: bbox parameter', async () => {
        const result = await navigator.getSystems({
          bbox: [-123, 37, -121, 39], // San Francisco Bay Area
        });
        expect(result.data).toBeInstanceOf(Array);
        // Verify all results within bbox
        result.data.forEach(system => {
          if (system.geometry && system.geometry.type === 'Point') {
            const [lon, lat] = system.geometry.coordinates;
            expect(lon).toBeGreaterThanOrEqual(-123);
            expect(lon).toBeLessThanOrEqual(-121);
            expect(lat).toBeGreaterThanOrEqual(37);
            expect(lat).toBeLessThanOrEqual(39);
          }
        });
      });

      test('Temporal: datetime parameter', async () => {
        const result = await navigator.getSystems({
          datetime: {
            start: '2025-01-01T00:00:00Z',
            end: '2026-01-01T00:00:00Z',
          },
        });
        expect(result.data).toBeInstanceOf(Array);
        console.log(`${serverName} returned ${result.data.length} systems for 2025`);
      });

      test('Full-text search: q parameter', async () => {
        const result = await navigator.getSystems({ q: 'temperature' });
        expect(result.data).toBeInstanceOf(Array);
        // Verify results contain search term
        result.data.forEach(system => {
          const text = JSON.stringify(system).toLowerCase();
          expect(text).toMatch(/temperature|temp/);
        });
      });

      test('Property filtering: observedProperty parameter', async () => {
        const result = await navigator.getSystems({
          observedProperty: 'http://www.opengis.net/def/property/OGC/0/Temperature',
        });
        expect(result.data).toBeInstanceOf(Array);
        console.log(`${serverName} returned ${result.data.length} temperature systems`);
      });

      test('Hierarchical: parent + recursive parameters', async () => {
        // First, get a system with subsystems
        const allSystems = await navigator.getSystems({ limit: 100 });
        const parentSystem = allSystems.data.find(s =>
          s.properties.subsystems && s.properties.subsystems.length > 0
        );
        if (parentSystem) {
          const result = await navigator.getSystems({
            parent: parentSystem.id,
            recursive: true,
          });
          expect(result.data).toBeInstanceOf(Array);
          expect(result.data.length).toBeGreaterThan(0);
        }
      });

      test('Property path: select parameter', async () => {
        const result = await navigator.getSystems({
          limit: 5,
          select: 'id,properties.name,geometry',
        });
        expect(result.data).toBeInstanceOf(Array);
        // Verify only requested properties returned (server-dependent)
        result.data.forEach(system => {
          expect(system).toHaveProperty('id');
          expect(system).toHaveProperty('properties');
          expect(system.properties).toHaveProperty('name');
        });
      });
    });
  });
});
```
5. Format Negotiation Testing
Test content type negotiation with Accept headers:
```ts
// tests/live-integration/formats.spec.ts
describe('Live Server Format Negotiation', () => {
  Object.entries(TEST_SERVERS).forEach(([serverName, config]) => {
    describe(`${serverName} server`, () => {
      let navigator: TypedCSAPINavigator;

      beforeAll(async () => {
        const endpoint = await OgcApiEndpoint.fromUrl(config.baseUrl);
        navigator = endpoint.csapi.typed();
      });

      test('GeoJSON format (application/geo+json)', async () => {
        const url = navigator.navigator.getSystemsUrl({ limit: 1 });
        const response = await fetch(url, {
          headers: { 'Accept': 'application/geo+json' },
        });
        expect(response.headers.get('Content-Type')).toMatch(/geo\+json/);
        const data = await response.json();
        expect(data.type).toBe('FeatureCollection');
      });

      test('SensorML format (application/sml+json)', async () => {
        // Get a system ID first
        const systems = await navigator.getSystems({ limit: 1 });
        if (systems.data.length === 0) return;
        const systemId = systems.data[0].id;
        const url = navigator.navigator.getSystemUrl(systemId, 'application/sml+json');
        const response = await fetch(url, {
          headers: { 'Accept': 'application/sml+json' },
        });
        // Server may not support SensorML format
        if (response.ok) {
          expect(response.headers.get('Content-Type')).toMatch(/sml\+json/);
          const data = await response.json();
          expect(data).toHaveProperty('type'); // SensorML type
        }
      });

      test('Plain JSON format (application/json)', async () => {
        const url = navigator.navigator.getSystemsUrl({ limit: 1 });
        const response = await fetch(url, {
          headers: { 'Accept': 'application/json' },
        });
        expect(response.headers.get('Content-Type')).toMatch(/json/);
        expect(response.ok).toBe(true);
      });
    });
  });
});
```
6. Error Handling Testing
Test error responses and edge cases:
```ts
// tests/live-integration/errors.spec.ts
describe('Live Server Error Handling', () => {
  Object.entries(TEST_SERVERS).forEach(([serverName, config]) => {
    describe(`${serverName} server`, () => {
      let navigator: TypedCSAPINavigator;

      beforeAll(async () => {
        const endpoint = await OgcApiEndpoint.fromUrl(config.baseUrl);
        navigator = endpoint.csapi.typed();
      });

      test('404 Not Found: Invalid system ID', async () => {
        await expect(
          navigator.getSystem('nonexistent-system-id-12345')
        ).rejects.toThrow(/404|not found/i);
      });

      test('400 Bad Request: Invalid bbox', async () => {
        await expect(
          navigator.getSystems({
            bbox: [180, 90, -180, -90], // Invalid (min > max)
          })
        ).rejects.toThrow(/400|bad request|invalid/i);
      });

      test('401 Unauthorized: Missing authentication', async () => {
        // Create new navigator without auth
        const endpoint = await OgcApiEndpoint.fromUrl(config.baseUrl);
        const unauthNav = endpoint.csapi.typed();
        // Try to create resource (should fail if auth required)
        const createUrl = unauthNav.navigator.createSystemUrl();
        const response = await fetch(createUrl, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ type: 'Feature', properties: {} }),
        });
        if (config.auth.type !== 'none') {
          expect([401, 403]).toContain(response.status);
        }
      });

      test('500 Internal Server Error: Handling', async () => {
        // This depends on server behavior, just verify graceful handling
        try {
          await navigator.getSystems({ limit: 1 });
        } catch (error) {
          expect(error).toBeInstanceOf(Error);
          expect((error as Error).message).toBeTruthy();
        }
      });
    });
  });
});
```
7. Performance and Load Testing
Test library performance with real server latency:
```ts
// tests/live-integration/performance.spec.ts
describe('Live Server Performance', () => {
  Object.entries(TEST_SERVERS).forEach(([serverName, config]) => {
    describe(`${serverName} server`, () => {
      let navigator: TypedCSAPINavigator;

      beforeAll(async () => {
        const endpoint = await OgcApiEndpoint.fromUrl(config.baseUrl);
        navigator = endpoint.csapi.typed();
      });

      test('Response time: Small query (<100 items)', async () => {
        const startTime = performance.now();
        await navigator.getSystems({ limit: 10 });
        const endTime = performance.now();
        const duration = endTime - startTime;
        console.log(`${serverName} small query: ${duration.toFixed(2)}ms`);
        // Should complete in reasonable time (accounting for network latency)
        expect(duration).toBeLessThan(5000); // <5 seconds
      });

      test('Response time: Large query (1000+ items)', async () => {
        const startTime = performance.now();
        const result = await navigator.getSystems({ limit: 1000 });
        const endTime = performance.now();
        const duration = endTime - startTime;
        console.log(`${serverName} large query: ${duration.toFixed(2)}ms (${result.data.length} items)`);
        // Should complete in reasonable time
        expect(duration).toBeLessThan(30000); // <30 seconds
      });

      test('Pagination: Following next links', async () => {
        let totalItems = 0;
        let nextUrl: string | null = navigator.navigator.getSystemsUrl({ limit: 10 });
        let pages = 0;
        while (nextUrl && pages < 10) { // Limit to 10 pages for test
          const response = await fetch(nextUrl);
          const data = await response.json();
          totalItems += data.features.length;
          pages++;
          // Find next link
          const nextLink = data.links?.find((l: any) => l.rel === 'next');
          nextUrl = nextLink?.href || null;
        }
        console.log(`${serverName} pagination: ${totalItems} items across ${pages} pages`);
        expect(totalItems).toBeGreaterThan(0);
      });

      test('Concurrent requests: 10 simultaneous', async () => {
        const startTime = performance.now();
        const promises = Array.from({ length: 10 }, () =>
          navigator.getSystems({ limit: 10 })
        );
        const results = await Promise.all(promises);
        const endTime = performance.now();
        const duration = endTime - startTime;
        console.log(`${serverName} concurrent: ${duration.toFixed(2)}ms for 10 requests`);
        expect(results.length).toBe(10);
        expect(duration).toBeLessThan(10000); // <10 seconds
      });
    });
  });
});
```
8. Part 2 (Advanced) Features Testing
Test datastreams, observations, commands:
```ts
// tests/live-integration/part2.spec.ts
describe('Live Server Part 2 Features', () => {
  Object.entries(TEST_SERVERS).forEach(([serverName, config]) => {
    if (!config.supports.includes('part2')) {
      return; // Skip if server doesn't support Part 2
    }
    describe(`${serverName} server`, () => {
      let navigator: TypedCSAPINavigator;

      beforeAll(async () => {
        const endpoint = await OgcApiEndpoint.fromUrl(config.baseUrl);
        navigator = endpoint.csapi.typed();
      });

      test('Get datastreams', async () => {
        const result = await navigator.getDatastreams({ limit: 10 });
        expect(result.data).toBeInstanceOf(Array);
        console.log(`${serverName} has ${result.data.length} datastreams`);
      });

      test('Get observations for datastream', async () => {
        const datastreams = await navigator.getDatastreams({ limit: 1 });
        if (datastreams.data.length === 0) return;
        const datastreamId = datastreams.data[0].id;
        const observations = await navigator.getDatastreamObservations(datastreamId, {
          limit: 10,
        });
        expect(observations.data).toBeInstanceOf(Array);
        // Verify observation structure
        if (observations.data.length > 0) {
          const obs = observations.data[0];
          expect(obs).toHaveProperty('properties');
          expect(obs.properties).toHaveProperty('phenomenonTime');
          expect(obs.properties).toHaveProperty('result');
        }
      });

      test('Get latest observation (phenomenonTime: latest)', async () => {
        const datastreams = await navigator.getDatastreams({ limit: 1 });
        if (datastreams.data.length === 0) return;
        const datastreamId = datastreams.data[0].id;
        const observations = await navigator.getDatastreamObservations(datastreamId, {
          phenomenonTime: 'latest',
        });
        expect(observations.data).toBeInstanceOf(Array);
        if (observations.data.length > 0) {
          console.log(`${serverName} latest observation:`, observations.data[0].properties.phenomenonTime);
        }
      });

      test('Get control streams', async () => {
        const result = await navigator.getControlStreams({ limit: 10 });
        expect(result.data).toBeInstanceOf(Array);
        console.log(`${serverName} has ${result.data.length} control streams`);
      });

      test('Get system events', async () => {
        const result = await navigator.getSystemEvents({ limit: 10 });
        expect(result.data).toBeInstanceOf(Array);
        console.log(`${serverName} has ${result.data.length} system events`);
      });
    });
  });
});
```
9. CI/CD Integration
Add live testing to GitHub Actions:
```yaml
# .github/workflows/live-integration.yml
name: Live Integration Tests

on:
  schedule:
    - cron: '0 6 * * *' # Daily at 6 AM UTC
  workflow_dispatch: # Manual trigger

jobs:
  live-tests:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        server: [opensensorhub, istsos, reference]
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - name: Install dependencies
        run: npm ci
      - name: Start test servers (if local)
        if: matrix.server == 'opensensorhub'
        run: docker-compose -f docker-compose.test.yml up -d
      - name: Wait for servers
        if: matrix.server == 'opensensorhub'
        run: |
          timeout 60 bash -c 'until curl -f http://localhost:8181/csapi; do sleep 1; done'
      - name: Run live integration tests
        env:
          TEST_SERVER: ${{ matrix.server }}
          OSH_BASE_URL: ${{ secrets.OSH_BASE_URL }}
          OSH_TOKEN: ${{ secrets.OSH_TOKEN }}
          ISTSOS_BASE_URL: ${{ secrets.ISTSOS_BASE_URL }}
          ISTSOS_API_KEY: ${{ secrets.ISTSOS_API_KEY }}
          OGC_REFERENCE_URL: ${{ secrets.OGC_REFERENCE_URL }}
        run: npm run test:live -- --server=${{ matrix.server }}
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: live-test-results-${{ matrix.server }}
          path: test-results/live-integration/
      - name: Stop test servers
        if: always() && matrix.server == 'opensensorhub'
        run: docker-compose -f docker-compose.test.yml down
```
10. Documentation
Document live testing setup and results:
# docs/LIVE-INTEGRATION-TESTING.md
# Live Server Integration Testing
This document describes how to run live integration tests against real OGC API - Connected Systems servers.
## Tested Servers
- **OpenSensorHub** - Open-source CSAPI server (Part 1 + Part 2)
- **istSOS** - SOS to CSAPI server (Part 1)
- **OGC Reference** - Reference implementation (Part 1 + Part 2)
## Setup
### 1. Configure Test Servers
Create `.env.test` file:
```bash
OSH_BASE_URL=https://opensensorhub.example.com/csapi
OSH_TOKEN=your_bearer_token
ISTSOS_BASE_URL=https://istsos.example.com/csapi
ISTSOS_API_KEY=your_api_key
OGC_REFERENCE_URL=https://ogc-reference.example.com/csapi
```
### 2. Run Local Test Servers (Optional)
```bash
docker-compose -f docker-compose.test.yml up -d
```
### 3. Run Tests
```bash
# All servers
npm run test:live

# Specific server
npm run test:live -- --server=opensensorhub

# Specific test file
npm run test:live -- tests/live-integration/crud.spec.ts
```
## Test Coverage
- Discovery: Landing page, conformance, collections
- CRUD: Create, read, update, delete operations
- Queries: All query parameters (bbox, datetime, q, etc.)
- Formats: GeoJSON, SensorML, JSON negotiation
- Errors: 404, 400, 401, 500 handling
- Performance: Response times, pagination, concurrency
- Part 2: Datastreams, observations, commands, events
## Results
Test results are stored in `test-results/live-integration/`:
- `{server}-discovery.json` - Server discovery results
- `{server}-crud.json` - CRUD operation results
- `{server}-queries.json` - Query parameter results
- `{server}-performance.json` - Performance metrics
## Known Issues
- OpenSensorHub: Custom query parameters not in OGC spec
- istSOS: Incomplete Part 2 support
- All servers: Rate limiting may cause test failures
## Interoperability Matrix
| Feature | OpenSensorHub | istSOS | OGC Reference |
|---|---|---|---|
| Part 1 Core | ✅ | ✅ | ✅ |
| Part 2 Advanced | ✅ | ⚠️ partial | ✅ |
| OAuth2 Auth | ✅ | ❌ | ✅ |
| API Key Auth | ✅ | ✅ | ❌ |
| SensorML Format | ✅ | ✅ | TBD |
| Binary Encoding | ❌ | ❌ | ✅ |
---
## Acceptance Criteria
### Test Infrastructure (15 criteria)
**Test Environment Setup (8 criteria):**
- [ ] Configure test servers (OpenSensorHub, istSOS, OGC reference)
- [ ] Set up environment variables for server URLs and credentials
- [ ] Create Docker Compose configuration for local test servers
- [ ] Implement test server configuration file (`config.ts`)
- [ ] Support multiple authentication methods (bearer token, API key, none)
- [ ] Implement test server health checks
- [ ] Document test server setup in `LIVE-INTEGRATION-TESTING.md`
- [ ] Add `.env.test.example` template file
**Test Utilities (7 criteria):**
- [ ] Create reusable test helpers (authentication, assertions, error handling)
- [ ] Implement server availability checks (skip tests if server unavailable; see the sketch after this list)
- [ ] Create performance measurement utilities
- [ ] Implement test data cleanup (delete created resources after tests)
- [ ] Add test result reporting (JSON output for CI/CD)
- [ ] Implement retry logic for flaky network requests
- [ ] Add test timeout configuration (accommodate slow servers)
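The availability-check criterion above could be served by one small helper. A sketch under the assumption that Jest is the test runner (`describe.skip` exists in Jest; `isServerAvailable` and the wiring are our own naming, and `AbortSignal.timeout` requires Node 17.3+):
```ts
// Sketch: skip a server's suite when it is unreachable (Jest assumed).
async function isServerAvailable(baseUrl: string): Promise<boolean> {
  try {
    const res = await fetch(baseUrl, {
      signal: AbortSignal.timeout(5_000), // don't hang CI on a dead server
    });
    return res.ok;
  } catch {
    return false;
  }
}

// One possible wiring inside the per-server loop (requires top-level await
// or a beforeAll guard, depending on the Jest configuration):
// const available = await isServerAvailable(config.baseUrl);
// (available ? describe : describe.skip)(`${serverName} server`, () => { ... });
```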
### Discovery and Conformance (8 criteria)
**Server Discovery (8 criteria):**
- [ ] Test loading landing page from all servers
- [ ] Test loading conformance classes from all servers
- [ ] Test detecting CSAPI support (`hasConnectedSystems`)
- [ ] Test loading CSAPI collections
- [ ] Test detecting available resource types
- [ ] Verify conformance claims match actual functionality
- [ ] Log server metadata (title, conformance classes, resources)
- [ ] Handle servers with non-standard landing pages
### CRUD Operations (15 criteria)
**Create Operations (4 criteria):**
- [ ] Test creating systems via POST
- [ ] Test creating deployments via POST
- [ ] Test creating datastreams via POST (Part 2)
- [ ] Verify created resources return 201 status and Location header
**Read Operations (4 criteria):**
- [ ] Test reading individual resources by ID
- [ ] Test reading resource collections
- [ ] Test reading related resources (system → datastreams)
- [ ] Verify response data matches expected format
**Update Operations (4 criteria):**
- [ ] Test full update via PUT
- [ ] Test partial update via PATCH
- [ ] Verify updated resources reflect changes
- [ ] Test update validation (invalid data should fail)
**Delete Operations (3 criteria):**
- [ ] Test deleting resources via DELETE
- [ ] Test cascade delete (with `cascade=true` parameter; see the sketch after this list)
- [ ] Verify deleted resources return 404 on subsequent GET
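The cascade case is not covered by the CRUD suite above. A sketch of how it might be added, reusing that suite's `navigator` and `createdSystemId` setup; appending `?cascade=true` to the delete URL is our assumption, and whether a server honors the parameter is implementation-dependent:
```ts
// Sketch: cascade delete (server support for ?cascade=true varies).
test('DELETE: Cascade delete system with subsystems', async () => {
  const deleteUrl = navigator.navigator.deleteSystemUrl(createdSystemId);
  const response = await navigator.fetch(`${deleteUrl}?cascade=true`, {
    method: 'DELETE',
  });
  // Servers without cascade support may return 409 Conflict instead.
  expect([204, 409]).toContain(response.status);
});
```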
### Query Parameters (12 criteria)
**Standard Query Parameters (12 criteria):**
- [ ] Test `limit` parameter (pagination limit)
- [ ] Test `bbox` parameter (spatial bounding box)
- [ ] Test `datetime` parameter (temporal filtering)
- [ ] Test `q` parameter (full-text search)
- [ ] Test `id` parameter (ID filtering)
- [ ] Test `geom` parameter (WKT geometry filtering)
- [ ] Test `foi` parameter (feature of interest filtering)
- [ ] Test `parent` parameter (hierarchical filtering)
- [ ] Test `recursive` parameter (recursive hierarchy)
- [ ] Test `observedProperty` parameter (property filtering)
- [ ] Test `select` parameter (property path selection)
- [ ] Test complex query combinations (bbox + datetime + limit)
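The combined-parameter case can reuse the queries suite's setup; a sketch (the assertion bounds are illustrative, and the parameter shapes match the examples above):
```ts
// Sketch: combining bbox + datetime + limit in one request.
test('Combined: bbox + datetime + limit', async () => {
  const result = await navigator.getSystems({
    bbox: [-123, 37, -121, 39],
    datetime: { start: '2025-01-01T00:00:00Z', end: '2026-01-01T00:00:00Z' },
    limit: 5,
  });
  expect(result.data).toBeInstanceOf(Array);
  expect(result.data.length).toBeLessThanOrEqual(5);
});
```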
### Format Negotiation (6 criteria)
**Content Type Support (6 criteria):**
- [ ] Test GeoJSON format (`application/geo+json`)
- [ ] Test SensorML format (`application/sml+json`)
- [ ] Test plain JSON format (`application/json`)
- [ ] Test Accept header negotiation
- [ ] Verify Content-Type response headers
- [ ] Test format parameter in URLs (`?f=geojson`)
### Error Handling (8 criteria)
**HTTP Error Responses (8 criteria):**
- [ ] Test 404 Not Found (invalid resource ID)
- [ ] Test 400 Bad Request (invalid query parameters)
- [ ] Test 401 Unauthorized (missing authentication)
- [ ] Test 403 Forbidden (insufficient permissions)
- [ ] Test 500 Internal Server Error (graceful handling)
- [ ] Test 503 Service Unavailable (retry logic)
- [ ] Test network timeouts (slow servers)
- [ ] Test malformed responses (invalid JSON, missing fields)
### Performance (10 criteria)
**Response Time Measurements (5 criteria):**
- [ ] Measure response time for small queries (<100 items, <5 seconds)
- [ ] Measure response time for large queries (1000+ items, <30 seconds)
- [ ] Measure response time for complex queries (multiple parameters)
- [ ] Test concurrent requests (10 simultaneous, <10 seconds total)
- [ ] Log all performance metrics for baseline establishment
**Pagination and Load (5 criteria):**
- [ ] Test pagination (following `next` links)
- [ ] Test limit enforcement (server respects `limit` parameter)
- [ ] Test large result sets (10+ pages)
- [ ] Measure memory usage for large responses (see the sketch after this list)
- [ ] Test sustained load (100 requests over time)
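For the memory criterion above, Node's `process.memoryUsage()` gives a rough heap delta. A sketch reusing the performance suite's `navigator` setup (Node-only; the numbers are indicative rather than precise, and the ceiling is a placeholder to tune against baselines):
```ts
// Sketch: rough heap-growth measurement around a large query (Node only).
test('Memory: large response heap growth', async () => {
  if (typeof process === 'undefined') return; // browser runs: skip
  globalThis.gc?.(); // only populated when Node runs with --expose-gc
  const before = process.memoryUsage().heapUsed;
  const result = await navigator.getSystems({ limit: 1000 });
  const after = process.memoryUsage().heapUsed;
  const deltaMb = (after - before) / 1024 / 1024;
  console.log(`Heap growth: ${deltaMb.toFixed(1)} MB for ${result.data.length} items`);
  expect(deltaMb).toBeLessThan(500); // generous ceiling; tune per baseline
});
```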
### Part 2 Features (10 criteria)
**Datastreams and Observations (5 criteria):**
- [ ] Test getting datastreams collection
- [ ] Test getting individual datastream
- [ ] Test getting datastream observations
- [ ] Test `phenomenonTime: 'latest'` special value
- [ ] Test temporal filtering on observations
**Commands and Control (3 criteria):**
- [ ] Test getting control streams
- [ ] Test issuing commands
- [ ] Test getting command status and results
**System Events (2 criteria):**
- [ ] Test getting system events
- [ ] Test creating system events
### Interoperability (8 criteria)
**Multi-Server Testing (8 criteria):**
- [ ] Test against OpenSensorHub (full Part 1 + Part 2)
- [ ] Test against istSOS (Part 1)
- [ ] Test against OGC reference implementation
- [ ] Document server-specific quirks and workarounds
- [ ] Create interoperability matrix (feature × server)
- [ ] Identify common patterns across servers
- [ ] Identify vendor-specific extensions
- [ ] Report interoperability issues to server maintainers
### CI/CD Integration (6 criteria)
**Continuous Testing (6 criteria):**
- [ ] Add GitHub Actions workflow for live integration tests
- [ ] Run tests daily (scheduled)
- [ ] Run tests on demand (manual trigger)
- [ ] Test against multiple servers in parallel
- [ ] Upload test results as artifacts
- [ ] Report test failures to issue tracker
### Documentation (8 criteria)
**Live Testing Documentation (8 criteria):**
- [ ] Document test server setup in `LIVE-INTEGRATION-TESTING.md`
- [ ] Document authentication configuration
- [ ] Document how to run tests locally
- [ ] Document how to add new test servers
- [ ] Document interoperability findings
- [ ] Document known server issues and workarounds
- [ ] Update README.md with live testing badge
- [ ] Document performance baselines from live servers
---
## Implementation Notes
### File Structure
```text
tests/
  live-integration/
    config.ts              (~150 lines) - Server configuration
    discovery.spec.ts      (~200 lines) - Discovery and conformance
    crud.spec.ts           (~400 lines) - CRUD operations
    queries.spec.ts        (~350 lines) - Query parameter testing
    formats.spec.ts        (~150 lines) - Format negotiation
    errors.spec.ts         (~200 lines) - Error handling
    performance.spec.ts    (~250 lines) - Performance testing
    part2.spec.ts          (~300 lines) - Part 2 features
    helpers/
      test-utils.ts        (~200 lines) - Test utilities
      auth.ts              (~100 lines) - Authentication helpers
      cleanup.ts           (~100 lines) - Resource cleanup
docker-compose.test.yml    (~100 lines) - Local test servers
.env.test.example          (~20 lines)  - Environment template
.github/
  workflows/
    live-integration.yml   (~150 lines) - CI/CD workflow
docs/
  LIVE-INTEGRATION-TESTING.md (~500 lines) - Documentation
```
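One possible shape for the `helpers/auth.ts` utility listed above, mapping the config's auth variants to request headers. A sketch: the `AuthConfig` type and `authHeaders` function are our assumptions, mirroring the shapes used in `tests/live-integration/config.ts`:
```ts
// helpers/auth.ts (sketch) - map server auth config to request headers.
type AuthConfig =
  | { type: 'none' }
  | { type: 'bearer'; token?: string }
  | { type: 'apikey'; key?: string };

export function authHeaders(auth: AuthConfig): Record<string, string> {
  switch (auth.type) {
    case 'bearer':
      if (!auth.token) throw new Error('Missing bearer token (set OSH_TOKEN)');
      return { Authorization: `Bearer ${auth.token}` };
    case 'apikey':
      if (!auth.key) throw new Error('Missing API key (set ISTSOS_API_KEY)');
      return { 'X-API-Key': auth.key };
    default:
      return {};
  }
}
```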
### Dependencies
```bash
npm install --save-dev dotenv
```
### Package.json Scripts
```json
{
  "scripts": {
    "test:live": "jest --testMatch='**/live-integration/**/*.spec.ts' --runInBand",
    "test:live:docker": "docker-compose -f docker-compose.test.yml up -d && npm run test:live && docker-compose -f docker-compose.test.yml down"
  }
}
```
### Implementation Phases
Phase 1: Infrastructure (8-12 hours)
- Set up test server configuration
- Create Docker Compose for local servers
- Implement authentication helpers
- Create test utilities
Phase 2: Discovery Tests (4-6 hours)
- Implement landing page tests
- Implement conformance tests
- Implement collection discovery tests
Phase 3: CRUD Tests (10-14 hours)
- Implement create tests (POST)
- Implement read tests (GET)
- Implement update tests (PUT, PATCH)
- Implement delete tests (DELETE)
Phase 4: Query Tests (8-12 hours)
- Implement pagination tests
- Implement spatial filtering tests (bbox, geom)
- Implement temporal filtering tests (datetime)
- Implement property filtering tests
Phase 5: Format Tests (4-6 hours)
- Implement GeoJSON negotiation tests
- Implement SensorML negotiation tests
- Implement Accept header tests
Phase 6: Error Tests (6-8 hours)
- Implement HTTP error handling tests (404, 400, 401, 500)
- Implement network timeout tests
- Implement malformed response tests
Phase 7: Performance Tests (8-10 hours)
- Implement response time measurements
- Implement pagination performance tests
- Implement concurrent request tests
Phase 8: Part 2 Tests (8-12 hours)
- Implement datastream/observation tests
- Implement control stream/command tests
- Implement system event tests
Phase 9: CI/CD Integration (6-8 hours)
- Create GitHub Actions workflow
- Configure secrets for server credentials
- Set up test result reporting
Phase 10: Documentation (8-12 hours)
- Write LIVE-INTEGRATION-TESTING.md
- Document interoperability findings
- Create interoperability matrix
- Update README.md
Total Estimated Effort: 70-100 hours (1.75-2.5 weeks)
Known Challenges
1. Server Availability:
- Public test servers may be unavailable or rate-limited
- Solution: Use local Docker containers for reliable testing
2. Authentication:
- Different servers use different auth methods
- Solution: Support multiple auth patterns (bearer, API key, none)
3. Data Persistence:
- Test data may persist across runs
- Solution: Implement cleanup in afterAll() hooks (see the sketch after this list)
4. Server Differences:
- Servers may interpret OGC specs differently
- Solution: Document differences, implement workarounds
5. Network Flakiness:
- Tests may fail due to network issues
- Solution: Implement retry logic, increase timeouts
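For challenge 3, the `helpers/cleanup.ts` file from the structure above could be a simple registry of created resource URLs, drained in `afterAll()`. A sketch (all names here are our assumptions):
```ts
// helpers/cleanup.ts (sketch) - track created resources, delete after tests.
const createdUrls: string[] = [];

export function trackForCleanup(resourceUrl: string): void {
  createdUrls.push(resourceUrl);
}

export async function cleanupAll(headers: Record<string, string> = {}): Promise<void> {
  // Delete in reverse order so children go before parents.
  for (const url of createdUrls.reverse()) {
    try {
      await fetch(url, { method: 'DELETE', headers });
    } catch {
      // Best effort: a failed delete should not fail the suite.
    }
  }
  createdUrls.length = 0;
}

// In a spec file: afterAll(() => cleanupAll(authHeaders(config.auth)));
```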
Priority Justification
Priority: Low
Justification:
Why Low Priority:
- Theoretical compliance verified: Library achieves 98% OGC compliance through code review (Issue #20)
- Comprehensive unit tests: 832+ tests with 94% coverage (Issue #19)
- Internal integration tested: 10 integration tests verify library flow (Issue #18)
- No reported interoperability issues: No evidence of problems with real servers
- Large time investment: 70-100 hours required for comprehensive live testing
- External dependencies: Requires access to multiple test servers
Why Still Important:
- Empirical validation: Only way to verify library works with real servers
- Interoperability assurance: Confirms compatibility with multiple server implementations
- User confidence: Demonstrates library works in production scenarios
- Standards conformance: Validates OGC spec interpretation matches reality
- Issue prevention: Discovers problems before production deployments
- Competitive advantage: Live testing demonstrates maturity and reliability
Impact if Not Addressed:
- Unknown interoperability: Library may fail with certain servers despite theoretical compliance
- Production surprises: First real-world usage may reveal unexpected issues
- Vendor lock-in risk: May only work with specific server implementations
- Authentication problems: OAuth2/API key flows may not work as expected
- Performance issues: Real server latency may expose problems
- User frustration: Issues discovered by users rather than developers
- Reputation risk: Failures in production may damage library credibility
When to Prioritize:
- User reports interoperability issues: Prioritize immediately if real server problems arise
- Before 1.0 release: Include live testing results in 1.0 documentation
- Enterprise customers: Required for enterprise adoption (must work with their servers)
- Production deployments: Before deploying to production with real servers
- OGC certification: If seeking official OGC compliance certification
ROI Assessment:
- High for production users: Prevents costly production incidents
- High for library credibility: Demonstrates real-world validation
- Medium for open source: Shows commitment to quality and standards
- Low for prototypes: Overkill for proof-of-concept projects
- Best for: Production deployments, enterprise customers, OGC certification
Quick Win Opportunities:
- Start with Phase 1-2 (infrastructure + discovery) for 12-18 hours
- Provides immediate value: confirms library connects to real servers
- Can expand to other phases incrementally as needed
- Use Docker Compose for reliable local testing (no external dependencies)
Recommended Approach:
- Implement Phases 1-2 (infrastructure + discovery) now for quick wins (12-18 hours)
- Defer Phases 3-8 (comprehensive tests) until user demand or production needs
- Implement Phase 9 (CI/CD) when running tests regularly
- Implement Phase 10 (documentation) when results are stable
Final Note:
This is the FINAL work item (46 of 46) in the comprehensive validation project. Completing this item would provide 100% coverage of all identified validation work items, establishing the OGC-Client-CSAPI library as a fully validated, production-ready, enterprise-grade implementation of the OGC API - Connected Systems standard.