Choice of Parameter values for benchmarks #136

@dschult

Description

The recent algorithms that show improvement with parallel over sequential implementations make me start to rethink our benchmark parameters. We are timing for density values 0.2, 0.4, 0.6, 0.8, 1.0, which are spread evenly between 0 and 1. But networks are almost always sparse (otherwise we wouldn't call them networks -- we would just track everyone-to-everyone contact). Also, our heatmaps use relatively small graphs in terms of numbers of nodes (<= 1600).

Perhaps we should be looking at larger graphs in terms of nodes and sparser graphs in terms of density. How should we decide which values to use for the number of nodes and the density?

Perhaps we should use logarithmic spacing, something like: p in [1e-6, 1e-4, 1e-2, 1e-1, 0.2, 0.4]. And for the number of nodes, what do people think of making the values depend on the density we are testing? Something like: since the expected number of edges is roughly m = n^2 * density, choose n = sqrt(m / p) for m in [1e1, 1e2, 1e4, 1e6].
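To make the proposal concrete, here is a minimal sketch of what that benchmark grid could look like. The variable names (`densities`, `edge_targets`, `grid`) are my own illustration, not existing benchmark code; it just derives n = sqrt(m / p) for each (m, p) pair and skips degenerate sizes.

```python
import math

# Proposed log-spaced densities and target edge counts
densities = [1e-6, 1e-4, 1e-2, 1e-1, 0.2, 0.4]
edge_targets = [1e1, 1e2, 1e4, 1e6]

grid = []
for m in edge_targets:
    for p in densities:
        # m ~ n^2 * p  =>  n = sqrt(m / p)
        n = round(math.sqrt(m / p))
        if n >= 2:  # skip graphs too small to be meaningful
            grid.append((n, p))

for n, p in grid:
    print(f"n = {n:>8}, density = {p}")
```

Note that this makes n range from a handful of nodes (dense, few edges) up to around a million (very sparse, many edges), which is a much wider spread than the current <= 1600-node heatmaps cover.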

Thoughts?
