This article examines the concept of credible evidence in Extension evaluations with specific attention to the measures and measurement strategies used to collect and create data. Credibility depends on multiple factors, including data quality and methodological rigor, characteristics of the stakeholder audience, stakeholder beliefs about the information source, and the evaluation context. Measurement planning involves a process of making thoughtful decisions about choosing study variables, measurement strategies, and specific measures that adequately reflect the content and goals of the program being evaluated. The use of specific measures may also entail implicit assumptions, e.g., that the respondent is being truthful and accurate, which must be accepted if resulting data are to be viewed as credible. The article discusses aspects of measurement quality, including reliability and validity, for both quantitative and qualitative forms of data. Program stakeholders should be encouraged to be attentive, reflective, and critical in their analysis of evaluation evidence, and their views on what makes data credible must be understood and considered. The use of common measures in evaluating multi-site programs can be valuable, but only if the measures are fully appropriate for local sites. The article concludes with a summary of implications and recommendations for Extension evaluation practice.


