
Foundations of Program Evaluation heralds a thorough exploration of the field of program evaluation, looking back on its origins. By summarizing, comparing, and contrasting the work of seven major theorists of program evaluation, this book provides an important perspective on the current state of evaluation theory and suggests ways of improving its practice. Beginning in Chapter Two, the authors develop a conceptual framework and use it to analyze how well each theory meets the framework's specific criteria. Each subsequent chapter is devoted to the theoretical and practical advice of a significant theorist: Michael Scriven, Donald Campbell, Carol Weiss, Joseph Wholey, Robert Stake, Lee Cronbach, and Peter Rossi.
This engaging text takes an evenhanded approach to major theoretical paradigms in evaluation and builds a bridge from them to evaluation practice. Featuring helpful checklists, procedural steps, provocative questions that invite readers to explore their own theoretical assumptions, and practical exercises, the book provides concrete guidance for conducting large- and small-scale evaluations. Numerous sample studies, many with reflective commentary from the evaluators, reveal the process through which an evaluator incorporates a paradigm into an actual research project. The book shows how theory informs methodological choices (the specifics of planning, implementing, and using evaluations). It offers balanced coverage of quantitative, qualitative, and mixed methods approaches. Useful pedagogical features include:

* Examples of large- and small-scale evaluations from multiple disciplines.
* Beginning-of-chapter reflection questions that set the stage for the material covered.
* "Extending your thinking" questions and practical activities that help readers apply particular theoretical paradigms in their own evaluation projects.
* Relevant Web links, including pathways to more details about sampling, data collection, and analysis.
* Boxes offering a closer look at key evaluation concepts and additional studies.
* Checklists for readers to determine if they have followed recommended practice.
* A companion website with resources for further learning.
First published in 1994, this text is designed to be used by graduate-level social work students in courses on evaluation and program design. Over the course of 20 years and 6 editions, the goals of the book have remained the same: to prepare students to participate in evaluative activities within their organizations; to prepare students to become critical producers and consumers of professional evaluative literature; and to prepare students for more advanced evaluation courses and texts. Grinnell, Gabor, and Unrau aim to meet these objectives by presenting a unique approach that is realistic, practical, applied, and user-friendly. While a majority of textbooks focus on program-level evaluation, some recent books present case-level evaluation methods but rely on inferentially powerful, yet difficult-to-implement, experimental baseline designs. This text assumes that neither of these approaches adequately reflects the realities of the field or the needs of students and beginning practitioners. Instead, Program Evaluation for Social Workers offers a blend of the two that demonstrates how they can complement one another. The integration of case-level and program-level approaches provides an accessible, adaptable, and realistic framework that students can more easily grasp and implement in the real world.
Evaluation examines policies and programs across every arena of human endeavor, from efforts to stop the spread of HIV/AIDS to programs that drive national science policy. Relying on a vast array of methods, from qualitative interviewing to econometrics, it is a "transdiscipline," as opposed to a formal area of academic study. Accounting for these challenges, Evaluation Foundations Revisited offers an introduction for those seeking to better understand evaluation as a professional field. While acquiring methods and methodologies to meet the needs of particular projects is important, the foundation of evaluative practice rests on understanding the complex issues that evaluators must balance. Evaluation Foundations Revisited is an invitation to examine the intellectual, practical, and philosophical nexus that lies at the heart of evaluation. Thomas A. Schwandt shows how to critically engage with the assumptions that underlie how evaluators define and position their work, as well as how they argue for the usefulness of evaluation in society. He looks at issues such as the role of theory, how notions of value and valuing are understood, how evidence is used, how evaluation is related to politics, and what comprises scientific integrity. By coming to better understand the foundations of evaluation, readers will develop what Schwandt terms "a life of the mind of practice," which enables evaluators to draw on a more holistic view to develop reasoned arguments and well-fitted techniques.
" This timely addition to a new genre of evaluation methodology eschews the objectivity of an external evaluation in favor of internal value-driven assessments that advance the goal of self-improvement through self-determination. Fetterman offers down-to-earth, clearly written descriptions and explanations of an approach that reconciles the contingencies of organizational practice with the standards and principles of evaluation accountability. He adroitly bridges the gap between the subjectivity of self-evaluation and the objectivity of external evaluation by showing with case examples and detailed methods, forms, and narrative why empowerment evaluation extends the reach of standard evaluation practice." --Dennis Mithaug, Teacher's College, Columbia UniversityWhat is empowerment evaluation? When is it the most appropriate approach to use in an evaluation? How can it best be implemented? Aimed at demystifying empowerment evaluation, the book shows readers when to use this form of evaluation and how to more effectively use its three steps (developing a mission statement; taking stock by identifying and prioritizing the most significant program activities; and, charting a course for future strategies to accomplish program goals). Fetterman also illustrates the steps with four case examples, ranging from hospital to educational settings. In addition, he covers: how to use empowerment evaluation to meet the standards developed by the Joint Committee on Standards for Educational Evaluation; the caveats and concerns about the use of empowerment evaluation; the relationship between collaborative, participatory, stakeholder, and utilization-focused evaluation with empowermentevaluation; the role of the Internet in disseminating empowerment evaluation; and, an analysis of the strengths, weaknesses, and conditions of empowerment evaluation. This book will guide evaluators exploration of their roles
Program Theory-Driven Evaluation Science fills the gap between 21st-century literature on evaluation and what is happening in practice. It features detailed examples of how evaluations actually unfold in practice to develop people, programs, and organizations, and outlines commonly accepted strategies for practicing evaluation.
Including a new section on evaluation accountability, this Third Edition details 30 standards that advise those interested in planning, implementing, and using program evaluations.
Using an original framework, this practical introduction to evaluation shows how to identify appropriate forms and approaches, involve stakeholders in the planning process, and disseminate the evaluation findings.
Program Evaluation and Performance Measurement: An Introduction to Practice, Second Edition offers an accessible, practical introduction to program evaluation and performance measurement for public and non-profit organizations, and has been extensively updated since the first edition. Using examples, it covers topics in a detailed fashion, making it a useful guide for students as well as practitioners who are participating in program evaluations or constructing and implementing performance measurement systems. Authors James C. McDavid, Irene Huse, and Laura R. L. Hawthorn guide readers through conducting quantitative and qualitative program evaluations, needs assessments, cost-benefit and cost-effectiveness analyses, as well as constructing, implementing and using performance measurement systems. The importance of professional judgment is highlighted throughout the book as an intrinsic feature of evaluation practice.
The second edition of the Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policy makers and development practitioners. First published in 2011, it has been used widely across the development and academic communities. The book incorporates real-world examples to present practical guidelines for designing and implementing impact evaluations. Readers will gain an understanding of impact evaluations and the best ways to use them to design evidence-based policies and programs. The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and partnerships to conduct impact evaluation. The handbook is divided into four sections: Part One discusses what to evaluate and why; Part Two presents the main impact evaluation methods; Part Three addresses how to manage impact evaluations; Part Four reviews impact evaluation sampling and data collection. Case studies illustrate different applications of impact evaluations. The book links to complementary instructional material available online, including an applied case as well as questions and answers. The updated second edition will be a valuable resource for the international development community, universities, and policy makers looking to build better evidence around what works in development.