Conversation
    private static final String TEST_DATA_PATH = "serde-tests/json-rpc-1-0/input/json_1_0.json";

    @Param({
        "awsJson1_0_GetItemInput_Baseline",
This is a limitation of JMH: we can't dynamically load the JSON set of test cases and define a JMH parameter for it. @Param values must be constant expressions at compile time because of how JMH handles the lifecycle of a benchmark. To get the best measurements, JMH acts as a code generator that, during the build, generates raw classes/methods for all of these parameters.

This PR needs some better release notes about the various benchmarks introduced (high level). It would also be ideal if the PR notes had the actual results from running these benchmarks before we merge.

Not a blocker, but when profiling DDB and the JSON serializer I found that there was quite a big variance even with 5 warmups and 5 iterations. When I run the benchmarks on the Linux host I usually pass in extra options. I think it's worth running these at least 2 times and seeing if there's any significant noise before committing.
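As a hedged sketch of how one might bump warmup/measurement iterations and forks from the JMH command line to check for noise (the jar path, benchmark pattern, and values here are illustrative assumptions, not taken from this PR):

```shell
# Illustrative JMH invocation: 10 warmup iterations (-wi), 10 measurement
# iterations (-i), and 2 forks (-f), filtered to one benchmark class.
# The jar name and benchmark pattern are assumptions for this sketch.
java -jar target/benchmarks.jar JsonRpc10MarshallBenchmark -wi 10 -i 10 -f 2
```

Running with more than one fork (`-f 2`) is usually the quickest way to see whether variance comes from the JVM instance rather than the code under test.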
RanVaknin left a comment
Some test case IDs appear multiple times in the results with different values (e.g. awsQuery_GetMetricDataResponse_S). Can you separate the results by protocol so it's clear which number came from which marshaller?

Good idea - Done!
Add Standard (cross-sdk) Benchmarks
Motivation and Context
This change adds the sdk-standard-benchmarks module, which contains JMH microbenchmarks for the AWS SDK for Java v2, covering endpoint resolution and serialization/deserialization (serde) across all major AWS protocols.
Modifications
Adds a new sdk-standard-benchmarks module with the standard (performance part-2) benchmarks for endpoint resolution and serde.
The SERDE tests are based on the standard performance test models and use our existing protocol test loader to load the c2j protocol tests defined on each of the protocol models.
Endpoint Resolution Benchmarks
Benchmarks for the standard endpoint resolution pipeline (ruleParams() → resolveEndpoint()) for S3 and Lambda. These exercise the same code path that runs during a real SDK API call.

- S3EndpointResolverBenchmark
- LambdaEndpointResolverBenchmark

Serde Benchmarks
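As a rough illustration of the two-step flow these benchmarks measure, here is a minimal sketch; the types, values, and URL format are hypothetical stand-ins, not the SDK's actual resolver API:

```java
// Hypothetical sketch of the ruleParams() -> resolveEndpoint() pipeline
// measured by the endpoint resolution benchmarks. The real SDK resolvers
// evaluate generated endpoint rules; this only mimics the shape of the flow.
public class EndpointPipelineSketch {
    static class RuleParams {
        final String region;
        final String bucket;
        RuleParams(String region, String bucket) {
            this.region = region;
            this.bucket = bucket;
        }
    }

    // Step 1: build the rule parameters for the call (assumed inputs).
    static RuleParams ruleParams() {
        return new RuleParams("us-east-1", "my-bucket");
    }

    // Step 2: resolve an endpoint URL from the parameters.
    static String resolveEndpoint(RuleParams p) {
        return "https://" + p.bucket + ".s3." + p.region + ".amazonaws.com";
    }

    public static void main(String[] args) {
        System.out.println(resolveEndpoint(ruleParams()));
    }
}
```

Benchmarking both steps together matters because a real API call pays for parameter construction as well as rule evaluation.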
Benchmarks for serialization (marshalling) and deserialization (unmarshalling) across five AWS protocol types. Each protocol has a pair of benchmark classes parameterized by test case ID via JMH @Param.

| Protocol | Marshall Class | Unmarshall Class |
|---|---|---|
| JSON RPC 1.0 | JsonRpc10MarshallBenchmark | JsonRpc10UnmarshallBenchmark |
| AWS Query | QueryMarshallBenchmark | QueryUnmarshallBenchmark |
| REST JSON | RestJsonMarshallBenchmark | RestJsonUnmarshallBenchmark |
| REST XML | RestXmlMarshallBenchmark | RestXmlUnmarshallBenchmark |
| RPC v2 CBOR | RpcV2CborMarshallBenchmark | RpcV2CborUnmarshallBenchmark |

Serde benchmarks use custom, cross-SDK models that are loosely based on existing services but designed to test serde performance across SDKs. The models are maintained internally and copied into the codegen resources of this module. In addition, the test cases are defined using the protocol test format; for the Java SDK v2 we use the c2j v1 format. Those test cases are copied into test/sdk-standard-benchmarks/src/main/resources/serde-tests by protocol and by input/output. We mostly use the existing protocol test loading/runner utilities to set up the cases, but we do no assertions. The logic for loading test cases is in BenchmarkTestCaseLoader, which also defines patchMemberNames for handling fluent setter names like SS => ss.
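As an illustration of what such member-name patching might look like (the class name, method, and map contents are hypothetical; only the SS => ss mapping comes from the description above):

```java
import java.util.Map;

// Hypothetical sketch of member-name patching: a c2j member name such as "SS"
// doesn't match the generated fluent setter name ("ss"), so the loader maps it
// before looking up the setter. Only the SS => ss entry is from the PR text.
public class PatchMemberNamesSketch {
    static final Map<String, String> PATCHES = Map.of("SS", "ss");

    static String patch(String memberName) {
        // Fall back to the original name when no patch entry exists.
        return PATCHES.getOrDefault(memberName, memberName);
    }

    public static void main(String[] args) {
        System.out.println(patch("SS"));        // patched
        System.out.println(patch("TableName")); // unchanged
    }
}
```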
Run all benchmarks
Results:
Screenshots (if appropriate)
Types of changes
Checklist
mvn install succeeds
I have added a changelog entry by running the scripts/new-change script and following the instructions. Commit the new file created by the script in .changes/next-release with your changes.
License