This is a collection of LTL and GR(1) specifications, as well as tools for analysing them.
Each test case is in the following JSON format:
```
{
  "name": string,          // benchmark name
  "type": "GR(1)" | "LTL", // type of specification
  "ins": string[],         // input/environment variables
  "outs": string[],        // output/system variables
  "domains": string[],     // domain assumption formulae in the OWL format (https://gitlab.lrz.de/i7/owl/-/blob/main/doc/FORMATS.md)
  "goals": string[]        // goal formulae in the OWL format
}
```
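For illustration, a minimal test case might look like the following (the name and formulae here are invented, not taken from the benchmark set):

```json
{
  "name": "simple-arbiter",
  "type": "GR(1)",
  "ins": ["request"],
  "outs": ["grant"],
  "domains": ["G (F (!request))"],
  "goals": ["G (request -> (F grant))"]
}
```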
`run_analysis.py` - analysis tool that extracts statistics from specification files and outputs them to a CSV; part of the CI pipeline for the repo.

Usage:

```
python run_analysis.py [directory] [output.csv]
```

- Analyses all valid specification files in the directory (recursively)
- Extracts multiple metrics for each specification:
  - Basic counts: domains, goals, environment/system variables
  - Realizability: tests both `domains & goals` and `domains -> goals` with Strix
  - Formula complexity: total and maximum variable/operator counts per formula
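The two realizability queries could be sketched as below. This is a rough sketch only: the conjunction/implication construction mirrors the description above, but the Strix command-line flags and output parsing are assumptions, not necessarily what `run_analysis.py` actually does.

```python
import subprocess

def build_formulas(spec):
    """Build the two formulas tested for realizability:
    'domains & goals' and 'domains -> goals'.
    Parenthesisation here is illustrative."""
    domains = " & ".join(f"({d})" for d in spec["domains"]) or "true"
    goals = " & ".join(f"({g})" for g in spec["goals"]) or "true"
    return f"({domains}) & ({goals})", f"({domains}) -> ({goals})"

def is_realizable(formula, ins, outs):
    """Call Strix on a single formula. Assumes a Strix CLI that accepts
    -f/--ins/--outs and reports (UN)REALIZABLE on stdout."""
    result = subprocess.run(
        ["strix", "-f", formula,
         "--ins", ",".join(ins), "--outs", ",".join(outs)],
        capture_output=True, text=True,
    )
    return "UNREALIZABLE" not in result.stdout
```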
`pipeline.py` - runs a (U)BC detection pipeline on a given specification file.

Usage:

```
python pipeline.py spec.json [-v, --verbose]
```

- Checks if the specification is realizable; if so, exits
- Computes the unrealizable cores for the specification
- Uses pre-defined BC patterns to detect boundary conditions against the unrealizable cores
- Converts the specification to Spectra, converts that to a boolean spec with the Interpolation Repair repo, then runs interpolation repair to generate assumption refinements for the spec
- Tests whether the refinements can be negated to form unavoidable boundary conditions (UBCs) for the unrealizable cores of the nodes for which they were generated
- Options:
  - `-v, --verbose`: enable verbose output
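The steps above can be sketched as a single control flow. Every helper callable here is a hypothetical stand-in (injected as a parameter, not the pipeline's real functions), and the negation syntax is illustrative:

```python
def run_pipeline(spec, is_realizable, find_unrealizable_cores,
                 match_bc_patterns, refine_assumptions, is_ubc):
    """Sketch of pipeline.py's control flow; all helpers are hypothetical."""
    # Step 1: realizable specs need no BC analysis
    if is_realizable(spec):
        return {"status": "realizable"}

    # Step 2: compute the unrealizable cores
    cores = find_unrealizable_cores(spec)

    # Step 3: pattern-based BC detection against each core
    bcs = [bc for core in cores for bc in match_bc_patterns(core)]

    # Steps 4-5: negate interpolation-based refinements and
    # test each negation as a UBC candidate for its core
    ubcs = []
    for core in cores:
        for refinement in refine_assumptions(core):
            candidate = f"!({refinement})"
            if is_ubc(candidate, core):
                ubcs.append(candidate)

    return {"status": "unrealizable", "bcs": bcs, "ubcs": ubcs}
```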
`batch_runner.py` - runs the pipeline on all specifications in a given directory and summarises some stats.

Usage:

```
python batch_runner.py [directory] [-v, --verbose] [--folder-prefix, -f FOLDER_PREFIX]
```

- Runs the pipeline on all valid specification files in the directory (recursively)
- Options:
  - `directory`: directory to search for specification files (default: current directory)
  - `-v, --verbose`: enable verbose output
  - `--folder-prefix, -f FOLDER_PREFIX`: optional prefix for the output folder name

> [!TIP]
> You can use `python -u batch_runner.py specs -v | tee out.txt` to log the output to a file while still seeing it in the terminal in real time.
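Both `run_analysis.py` and `batch_runner.py` walk a directory recursively and keep only valid specification files. That discovery step could be sketched as follows; the validity check (JSON that parses and has the schema's keys) is a guess at what "valid" means here, not the scripts' actual logic:

```python
import json
from pathlib import Path

# Keys from the specification schema above
REQUIRED_KEYS = {"name", "type", "ins", "outs", "domains", "goals"}

def find_spec_files(directory):
    """Yield paths of JSON files that look like valid specifications."""
    for path in sorted(Path(directory).rglob("*.json")):
        try:
            spec = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # unreadable or not JSON: skip
        if isinstance(spec, dict) and REQUIRED_KEYS <= spec.keys():
            yield path
```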
The Interpolation Repair tool generates assumption refinements for unrealizable GR(1) specifications; the pipeline uses these refinements to generate potential UBCs.
- Clone the repo

  ```
  git clone https://github.com/Noobcoder64/interpolation-repair
  ```

- Change into the directory

  ```
  cd interpolation-repair
  ```

- Apply the patch for compatibility with the analysis scripts

  ```
  git apply ../interpolation-repair.patch
  ```

- Build the Docker image

  ```
  docker build -t interpolation-repair -f Dockerfile .
  ```
Syfco is used to convert from the TLSF format. This repo has a Dockerfile to allow Syfco to run on any machine.

Build the Syfco container:

```
docker build -t syfco-container -f syfco.Dockerfile .
```

Run Syfco (with access to the working directory):

```
docker run --platform=linux/amd64 --rm -v "$PWD":/data syfco-container [args]
```

Create an alias for Syfco:

```
alias syfco='docker run --platform=linux/amd64 --rm -v "$PWD":/data syfco-container'
```

With the above alias, you'll be able to just run e.g. `syfco --help` in the terminal.
Prerequisites:
- Docker
- Python
- Conda
- Strix (accessible in PATH)
- Clone the repo

  ```
  git clone https://github.com/alexanderbira/benchmark-specs.git
  ```

- Change into the directory

  ```
  cd benchmark-specs
  ```

- Install Spot with Conda

  ```
  conda install conda-forge::spot
  ```

- Activate the conda environment

  ```
  conda activate [your_env_name]
  ```

- Install the required Python packages

  ```
  pip install -r requirements.txt
  ```

- Set up Interpolation Repair (see above)