
Usage

SeBS has three basic commands: benchmark, experiment, and local. For each command, you can pass the --verbose flag to increase the verbosity of the output. By default, all scripts create a cache in the directory cache to store code with dependencies and information on allocated cloud resources. Benchmarks are rebuilt after a change in the source code is detected. To force redeployment of code, regeneration of benchmark inputs, or container deployment (supported on AWS), use the flags --update-code, --update-storage, and --container-deployment, respectively.

Note: The cache does not support updating the cloud region. If you want to deploy benchmarks to a new cloud region, then use a new cache directory.

Warning

We use libcurl to make HTTP requests. During installation, pycurl builds its bindings and needs the libcurl headers for that, so make sure you have all development packages installed. If you see an error like src/pycurl.h:206:13: fatal error: gnutls/gnutls.h: No such file or directory, you are missing some of the dependencies.

Benchmark

Package

If you only want to build a function deployment, either a full code package or a container, use the command below.

sebs benchmark build 110.dynamic-html --config config/example.json --deployment aws

It creates a code package locally, or builds and pushes a container when the --container-deployment flag is used (AWS only). The resulting deployment can be inspected and used for deployment and invocations on unsupported platforms.

Invoke

This command builds, deploys, and executes serverless benchmarks in the cloud. The example below invokes the benchmark 110.dynamic-html on AWS via the standard HTTP trigger.

sebs benchmark invoke 110.dynamic-html test --config config/example.json --deployment aws --verbose

The results will be stored in experiment.json. To configure your benchmark, change settings in the config file or use command-line options. The full list is available by running sebs benchmark invoke --help.

Process

To download cloud metrics and process the invocations, run:

sebs benchmark process --output-dir results

This will read invocations from experiment.json and write the processed data to results.json.

Statistics

To summarize executions, run:

sebs benchmark statistics results.json 

Regression

Additionally, we provide a regression option that executes all benchmarks on a given platform. The example below demonstrates how to run the regression suite with the test input size on AWS.

sebs benchmark regression test --config config/example.json --deployment aws

The regression can be executed on a single benchmark as well:

sebs benchmark regression test --config config/example.json --deployment aws --benchmark-name 120.uploader

Experiment

This command executes the experiments described in the paper. The example below runs the experiment perf-cost:

sebs experiment invoke perf-cost --config config/example.json --deployment aws

The configuration specifies that benchmark 110.dynamic-html is executed 50 times, with 50 concurrent invocations, and both cold and warm invocations are recorded.

"perf-cost": {
    "benchmark": "110.dynamic-html",
    "experiments": ["cold", "warm"],
    "input-size": "test",
    "repetitions": 50,
    "concurrent-invocations": 50,
    "memory-sizes": [128, 256]
}

To download cloud metrics and process the invocations into a .csv file with data, use the process command:

sebs experiment process perf-cost --config example.json --deployment aws

You can find more details on running experiments and analyzing results in the separate documentation.

Clean

You can remove all allocated cloud resources with the following command:

sebs resource clean --config config/example.json

This option is currently supported only on AWS, where it removes Lambda functions and associated HTTP APIs and CloudWatch logs, S3 buckets, DynamoDB tables, and ECR repositories.

Local

In addition to the cloud deployment, SeBS can launch benchmarks locally with the help of minio storage. This supports debugging and local characterization of the benchmarks.

First, launch a storage instance. The command below deploys a Docker container, maps the container's port to the port defined in the configuration on the host network, and writes the storage instance configuration to the file out_storage.json:

sebs storage start all config/storage.json --output-json out_storage.json

Then, we need to update the configuration of the local deployment with information on the storage instance. The .deployment.local object in the configuration JSON must contain a new object storage, with the data provided in the out_storage.json file. Fortunately, we can achieve this automatically with a single jq command:

jq '.deployment.local.storage = input' config/example.json out_storage.json > config/local_deployment.json
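If jq is not available, the same merge can be sketched in Python. The dictionaries below are in-memory stand-ins for the contents of config/example.json and out_storage.json; in practice you would load them with json.load and write the result back out:

```python
import json

def merge_storage(config: dict, storage: dict) -> dict:
    """Mirror of the jq one-liner: set .deployment.local.storage = storage."""
    config.setdefault("deployment", {}).setdefault("local", {})["storage"] = storage
    return config

# Stand-ins for config/example.json and out_storage.json.
config = {"deployment": {"name": "local", "local": {}}}
storage = {"object": {"type": "minio"}}

merged = merge_storage(config, storage)
print(json.dumps(merged, indent=2))
```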

The output file will contain a JSON object that should look similar to this one:

{
  "deployment": {
    "name": "local",
    "local": {
      "storage": {
        "object": {
          "type": "minio",
          "minio": {
            "address": "172.17.0.3:9000",
            "mapped_port": 9011,
            "access_key": "xxx",
            "secret_key": "xxx",
            "instance_id": "xxx",
            "output_buckets": [],
            "input_buckets": [],
            "version": "xxx",
            "data_volume": "minio-volume",
            "type": "minio"
          }
        },
        "nosql": {
          "type": "scylladb",
          "scylladb": {
            "address": "172.17.0.4:8000",
            "mapped_port": 9012,
            "alternator_port": 8000,
            "access_key": "xxx",
            "secret_key": "xxx",
            "instance_id": "xxx",
            "region": "xxx",
            "cpus": 1,
            "memory": "xxx",
            "version": "xxx",
            "data_volume": "scylladb-volume"
          }
        }
      }
    },
  }
}

To launch Docker containers, use the following command - this example launches benchmark 110.dynamic-html with size test:

sebs local start 110.dynamic-html test out_benchmark.json --config config/local_deployment.json --deployments 1 --remove-containers --architecture=x64

The output file out_benchmark.json will contain the information on containers deployed and the endpoints that can be used to invoke functions:

{
  "functions": [
    {
      "benchmark": "110.dynamic-html",
      "hash": "5ff0657337d17b0cf6156f712f697610",
      "instance_id": "e4797ae01c52ac54bfc22aece1e413130806165eea58c544b2a15c740ec7d75f",
      "name": "110.dynamic-html-python-128",
      "port": 9000,
      "triggers": [],
      "url": "172.17.0.3:9000"
    }
  ],
  "inputs": [
    {
      "random_len": 10,
      "username": "testname"
    }
  ],
  "storage: {
    ...
  }
}

In our example, we can use curl to invoke the function with the provided input:

curl "$(jq -rc ".functions[0].url" out_benchmark.json)" \
    --request POST \
    --data "$(jq -rc ".inputs[0]" out_benchmark.json)" \
    --header 'Content-Type: application/json'
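The same invocation can be sketched in Python with the standard library; the out_benchmark dictionary below is a stand-in for the contents of out_benchmark.json shown above. The actual POST is left commented out because it needs a running local deployment:

```python
import json
import urllib.request

def build_invocation(out_benchmark: dict):
    """Build the invocation URL and JSON payload from out_benchmark.json,
    matching the curl example above; the http:// scheme is added explicitly."""
    function = out_benchmark["functions"][0]
    payload = json.dumps(out_benchmark["inputs"][0]).encode()
    return "http://" + function["url"], payload

# Stand-in for the contents of out_benchmark.json.
out_benchmark = {
    "functions": [{"url": "172.17.0.3:9000"}],
    "inputs": [{"random_len": 10, "username": "testname"}],
}

url, payload = build_invocation(out_benchmark)
request = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)
# urllib.request.urlopen(request) would send the POST to the local container.
```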

To stop containers, you can use the following command:

sebs local stop out_benchmark.json
sebs storage stop all out_storage.json

Note: The stopped benchmark containers won't be automatically removed unless the option --remove-containers has been passed to the local start command.

Memory Measurements

The local backend additionally allows continuous measurement of function containers. At the moment, we support memory measurements. To enable them, pass the following flag to sebs local start:

--measure-interval <val>

The value specifies the time between two consecutive measurements. Measurements will be aggregated and written to a file when calling sebs local stop <file>. By default, the data is written to memory_stats.json.
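A summary of the collected samples can be sketched as below. Note that the exact layout of memory_stats.json is not documented here; this sketch assumes a mapping from container name to a list of memory samples in bytes, which may differ from the real format:

```python
import statistics

def summarize(stats: dict) -> dict:
    """Summarize per-container memory samples (assumed layout:
    container name -> list of samples in bytes)."""
    return {
        name: {
            "samples": len(samples),
            "mean_mb": statistics.mean(samples) / 1024 ** 2,
            "max_mb": max(samples) / 1024 ** 2,
        }
        for name, samples in stats.items()
    }

# Stand-in for data aggregated by `sebs local stop <file>`.
stats = {"110.dynamic-html-python-128": [64 * 1024 ** 2, 96 * 1024 ** 2]}
print(summarize(stats))
```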