For evaluation purposes, library functions for micro-benchmarks are needed.
(Reference project: Google Benchmark https://github.com/google/benchmark)

The following features should be provided:

- [x] Benchmark macro to define and execute benchmarks
- [ ] Platform independence (implement platform-dependent parts, e.g. timers, for each platform)
- [x] Measure execution latency (CPU ticks and wall-clock time)
- [ ] Benchmark repetition (configurable by parameter)
- [ ] Vector of configurations as a parameter for repetitive execution
- [ ] Print results in a human-readable way (as a table: benchmark name, repetitions, configuration, min, max, median, average)