
This book focuses on the assumptions underlying methods choice in program evaluation. Credible program evaluation extends beyond the accuracy of research designs to include arguments justifying the appropriateness of methods. An important part of this justification is explaining the assumptions made about the validity of methods. This book provides a framework for understanding methodological assumptions, identifying the decisions made at each stage of the evaluation process, the major forms of validity affected by those decisions, and the preconditions for and assumptions about those validities. Though the selection of appropriate research methodology is not a new topic within social development research, previous publications describe only the advantages and disadvantages of various methods and when to use them. This book goes further, analyzing the assumptions underlying actual methodological choices in evaluation studies and how these ultimately influence evaluation quality. The analysis offered is supported by a collation of assumptions collected from a case study of 34 evaluations. With its in-depth analysis, strong theoretical basis, and practical examples, Credibility, Validity and Assumptions in Program Evaluation Methodology is a must-have resource for researchers, students, university professors, and practitioners in program evaluation. Importantly, it provides tools for the application of appropriate research methods in program evaluation.
A major reason complex programs are so difficult to evaluate is that the assumptions that inspire them are poorly articulated. Stakeholders of such programs are often unclear about how the change process will unfold, making it difficult to reasonably anticipate the early and midterm changes that need to happen in order for a longer-term goal to be reached. This lack of clarity about the "mini-steps" that must be taken to reach a long-term outcome not only makes the task of evaluating a complex initiative challenging, but also reduces the likelihood that all of the important factors related to the long-term goal will be addressed. Most of the resources that have attempted to address this dilemma have been popularized as theory of change or, sometimes, program theory approaches. Although these approaches emphasize and elaborate the sequence of changes or mini-steps that lead to the long-term goal of interest, and the connections between program activities and outcomes at each step of the way, they do not do enough to clarify how program managers or evaluators should deal with assumptions. Assumptions, the glue that holds all the pieces together, remain abstract and far from applicable. In this book the author tackles this important theme head-on, covering a breadth of ground from the epistemology of development assumptions to the art of making logical assumptions, as well as recognizing, explicating, and testing assumptions within an elaborate program theory, from program design through implementation, monitoring, and evaluation.
This book focuses on methods choice in program evaluation. Credible methods choice rests on the assumptions we make about the appropriateness and validity of selected methods, and on the validity of those assumptions. As evaluators make methodological decisions at various stages of the evaluation process, a number of validity questions arise, and unexamined assumptions are a risk to useful evaluation. The first edition of this book discussed the formulation of credible methodological arguments and methods for examining validity assumptions. Whereas previous publications describe the advantages and disadvantages of various methods and when to use them, this book analyzes the assumptions underlying actual methodological choices in evaluation studies and how these influence evaluation quality. This analysis is the basis of the suggested tools. The second edition extends the review of methodological assumptions to the evaluation of humanitarian assistance. While evaluators of humanitarian action apply conventional research methods and standards, they have to adapt these methods to the challenges and constraints of crisis contexts. For example, the urgency and chaos of humanitarian emergencies make it hard to obtain program documentation; objectives may be unclear, and early plans may quickly become outdated as the context changes or is clarified. The lack of up-to-date baseline data is not uncommon, nor is staff turnover, and differences in perspective may intensify and undermine trust. These deviations from ideal circumstances challenge evaluation and call for methodological innovation. How do evaluators work with assumptions in non-ideal settings? What tools are most relevant and effective? This revised edition reviews major evaluations of humanitarian action and discusses strategies for working with evaluation assumptions in both crisis and stable program settings.
Foundations of Program Evaluation heralds a thorough exploration of the field of program evaluation, looking back on its origins. By summarizing, comparing, and contrasting the work of seven major theorists of program evaluation, this book provides an important perspective on the current state of evaluation theory and offers suggestions for improving its practice. Beginning in Chapter Two, the authors develop a conceptual framework and use it to analyze how successfully each theory meets the framework's criteria. Each subsequent chapter is devoted to the theoretical and practical advice of a significant theorist: Michael Scriven, Donald Campbell, Carol Weiss, Joseph Wholey, Robert Stake, Lee Cronbach, and Peter Rossi.
This text provides a solid foundation in program evaluation, covering the main components of evaluating agencies and their programs, how best to address those components, and the procedures to follow when conducting evaluations. Different models and approaches are paired with practical techniques, such as how to plan an interview to collect qualitative data and how to use statistical analyses to report results. In every chapter, case studies provide real world examples of evaluations broken down into the main elements of program evaluation: the needs that led to the program, the implementation of program plans, the people connected to the program, unexpected side effects, the role of evaluators in improving programs, the results, and the factors behind the results. In addition, the story of one of the evaluators involved in each case study is presented to show the human side of evaluation. This new edition also offers enhanced and expanded case studies, making them a central organizing theme, and adds more international examples. New online resources for this edition include a table of evaluation models, examples of program evaluation reports, sample handouts for presentations to stakeholders, links to YouTube videos and additional annotated resources. All resources are available for download under the tab eResources at www.routledge.com/9781138103962.
This engaging text takes an evenhanded approach to major theoretical paradigms in evaluation and builds a bridge from them to evaluation practice. Featuring helpful checklists, procedural steps, provocative questions that invite readers to explore their own theoretical assumptions, and practical exercises, the book provides concrete guidance for conducting large- and small-scale evaluations. Numerous sample studies—many with reflective commentary from the evaluators—reveal the process through which an evaluator incorporates a paradigm into an actual research project. The book shows how theory informs methodological choices (the specifics of planning, implementing, and using evaluations). It offers balanced coverage of quantitative, qualitative, and mixed methods approaches. Useful pedagogical features include:
*Examples of large- and small-scale evaluations from multiple disciplines.
*Beginning-of-chapter reflection questions that set the stage for the material covered.
*"Extending your thinking" questions and practical activities that help readers apply particular theoretical paradigms in their own evaluation projects.
*Relevant Web links, including pathways to more details about sampling, data collection, and analysis.
*Boxes offering a closer look at key evaluation concepts and additional studies.
*Checklists for readers to determine if they have followed recommended practice.
*A companion website with resources for further learning.
Exploring the influence and application of the Campbellian validity typology in the theory and practice of outcome evaluation, this volume addresses the strengths and weaknesses of this often controversial evaluation method and presents new perspectives for its use. Editors Huey T. Chen, Stewart I. Donaldson and Melvin M. Mark provide a historical overview of the Campbellian typology's adoption, contributions, and criticism. Contributing authors propose strategies for developing a new perspective on validity typology to advance validity in program evaluation, including enhancing external validity, enhancing precision by reclassifying the Campbellian typology, and expanding the scope of the typology. The volume concludes with William R. Shadish's spirited rebuttal to earlier chapters. A collaborator with Don Campbell, Shadish provides balance to the discussion with a clarification and defense of Campbell's work. This is the 129th volume of the Jossey-Bass quarterly report series New Directions for Evaluation, an official publication of the American Evaluation Association.
The leading program evaluation reference, updated with the latest tools and techniques. The Handbook of Practical Program Evaluation provides tools for managers and evaluators to address questions about the performance of public and nonprofit programs. Neatly integrating authoritative, high-level information with practicality and readability, this guide gives you the tools and processes you need to analyze your program's operations and outcomes more accurately. This new fourth edition has been thoroughly updated and revised, with new coverage of the latest evaluation methods, including: culturally responsive evaluation; adopting designs and tools to evaluate multi-service community change programs; using role playing to collect data; using cognitive interviewing to pre-test surveys; and coding qualitative data. You'll discover robust analysis methods that produce a more accurate picture of program results, and learn how to trace causality back to the source to see how much of the outcome can be directly attributed to the program. Written by award-winning experts at the top of the field, this book also contains contributions from the leading evaluation authorities among academics and practitioners to provide the most comprehensive, up-to-date reference on the topic. Valid and reliable data constitute the bedrock of accurate analysis, and since funding relies more heavily on program analysis than ever before, you cannot afford to rely on weak or outdated methods.
This book gives you expert insight and leading-edge tools that help you paint a more accurate picture of your program's processes and results, including: obtaining valid, reliable, and credible performance data; engaging and working with stakeholders to design valuable evaluations and performance monitoring systems; assessing program outcomes and tracing desired outcomes to program activities; and providing robust analyses of both quantitative and qualitative data. Governmental bodies, foundations, individual donors, and other funding bodies are increasingly demanding information on the use of program funds and program results. The Handbook of Practical Program Evaluation shows you how to collect and present valid and reliable data about programs.
Praise for the third edition of the Handbook of Practical Program Evaluation:
"Mix three of the most highly regarded evaluators with a team of talented contributors, and you end up with an exceedingly practical and useful handbook that belongs on the reference shelf of every evaluator as well as program and policy officials."
Jonathan D. Breul, executive director, IBM Center for The Business of Government
"Joe Wholey and his colleagues have done it again: a remarkably comprehensive, thoughtful, and interesting guide to the evaluation process and its context that should be useful to sponsors, users, and practitioners alike."
Eleanor Chelimsky, former U.S. Assistant Comptroller General for Program Evaluation and Methodology
"Students and practitioners of public policy and administration are fortunate that the leading scholars on evaluation have updated their outstanding book. This third edition of the Handbook of Practical Program Evaluation will prove once again to be an invaluable resource in the classroom and on the front lines for a public service under increasing pressure to do more with less."
Paul L. Posner, director, public administration, George Mason University, and immediate former president, the American Society of Public Administration
"The third edition of the Handbook of Practical Program Evaluation reflects the evolving nature of the field, while maintaining its value as a guide to the foundational skills needed for evaluation."
Leslie J. Cooksy, current president, the American Evaluation Association
"This third edition is even more of a must-have book than its earlier incarnations: for academics to give their students a comprehensive overview of the field, for practitioners to use as a reference to the best minds on each topic, and for evaluation funders and consumers to learn what is possible and what they should expect. I've been in evaluation for 35 years, and I used the first and second editions all the time."
Michael Hendricks, Ph.D., independent evaluation consultant
Including a new section on evaluation accountability, this Third Edition details 30 standards that give advice to those interested in planning, implementing, and using program evaluations.