While the need for facility with evaluations that are
capable of addressing these challenges is much discussed
[4, 11], ascertaining an approach that can improve upon our
existing practices has remained elusive. For experimenters,
part of the problem is the vast number of evaluation
methodologies in use. Our community draws from disciplines
such as psychophysics, the social sciences, statistics,
and computer science, using methodologies as diverse as
laboratory-based factorial design studies, field evaluations,
statistical data analysis, and automatic image evaluation.
The vastness and diversity of evaluation methodologies
make it difficult for visualization researchers and practi-
tioners to find the most appropriate approaches to achieve
their evaluation goals. A further difficulty is the lack of
guidance in the literature: while guidelines exist both for
creating and analyzing visualization systems and for
evaluating visualizations, these two bodies of literature are
disparate, as discussions of evaluation are mostly "structured
as an enumeration of methods with focus on how to carry them
out, without prescriptive advice for when to choose between
them" [51, p. 1] (author's own emphasis).
We extend this work by taking a different tack: we offer
advice on how to choose between evaluation approaches.