
The second edition of the Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policy makers and development practitioners. First published in 2011, it has been used widely across the development and academic communities. The book incorporates real-world examples to present practical guidelines for designing and implementing impact evaluations. Readers will gain an understanding of impact evaluations and the best ways to use them to design evidence-based policies and programs. The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and on partnerships for conducting impact evaluations. The handbook is divided into four sections: Part One discusses what to evaluate and why; Part Two presents the main impact evaluation methods; Part Three addresses how to manage impact evaluations; Part Four reviews impact evaluation sampling and data collection. Case studies illustrate different applications of impact evaluations. The book links to complementary instructional material available online, including an applied case as well as questions and answers. The updated second edition will be a valuable resource for the international development community, universities, and policy makers looking to build better evidence around what works in development.
Rev. ed. of: Evaluating in practice. 1996.
This engaging text takes an evenhanded approach to major theoretical paradigms in evaluation and builds a bridge from them to evaluation practice. Featuring helpful checklists, procedural steps, provocative questions that invite readers to explore their own theoretical assumptions, and practical exercises, the book provides concrete guidance for conducting large- and small-scale evaluations. Numerous sample studies—many with reflective commentary from the evaluators—reveal the process through which an evaluator incorporates a paradigm into an actual research project. The book shows how theory informs methodological choices (the specifics of planning, implementing, and using evaluations). It offers balanced coverage of quantitative, qualitative, and mixed methods approaches. Useful pedagogical features include:
*Examples of large- and small-scale evaluations from multiple disciplines.
*Beginning-of-chapter reflection questions that set the stage for the material covered.
*"Extending your thinking" questions and practical activities that help readers apply particular theoretical paradigms in their own evaluation projects.
*Relevant Web links, including pathways to more details about sampling, data collection, and analysis.
*Boxes offering a closer look at key evaluation concepts and additional studies.
*Checklists for readers to determine if they have followed recommended practice.
*A companion website with resources for further learning.
"Thyer and Myers have written an easy-to-read primer on the topic of empirically evaluating the outcomes of social work practice. This resource, for social work students--graduate and undergraduate--and for social work practitioners, presents outcome studies using both group-research and single-case designs. Unlike other books on evaluating practice, which rely on theoretical cases, Thyer and Myers use real-life examples of evaluating social work practice, ranging from those fairly low on the scale of internal validity to those that are quite rigorous. The book begins with a refresher on evaluation research, provides a balanced approach to both single-system and group-evaluation designs, and closes with a discussion of ethical issues, myths, misconceptions, and practical considerations in evaluation"--Back cover.
Evaluating Practice comes with a free CD-ROM featuring numerous programs, including the unique and innovative SINGWIN program for analyzing single-system design data (created by Charles Auerbach, David Schnall, and Heidi Heft Laporte of Yeshiva University); the CASS and CAAP programs for managing cases and scoring scales (created by Walter Hudson); and a new set of Microsoft Excel workbooks and interactive exercises.
This straightforward and original text sets out best practice for designing, conducting and analysing research on work with young people. A creative and practical guide to evaluation, it provides the tools needed to bridge the gap between theoretical knowledge and applied practice. Written by an experienced, erudite team of authors, this book provides clear, pragmatic advice that can be taken into the classroom and the field. The book:
*Provides strategies for involving young people in research and evaluation
*Showcases creative and participatory methods
*Weaves a real-world project through each chapter, highlighting the challenges and opportunities at each stage of an evaluation, so readers can compare approaches
*Is accompanied by a website with downloadable worksheets, templates and videos from the authors
This is the ideal text for postgraduate students and practitioners who work with young people in the statutory and voluntary sectors.
How can novice e-learning researchers and postgraduate learners develop rigorous plans to study the effectiveness of technology-enhanced learning environments? How can practitioners gather and portray evidence of the impact of e-learning? How can the average educator who teaches online, without experience in evaluating emerging technologies, build on what is successful and modify what is not? By unpacking the e-learning lifecycle and focusing on learning, not technology, Evaluating e-Learning attempts to resolve some of the complexity inherent in evaluating the effectiveness of e-learning. The book presents practical advice in the form of an evaluation framework and a scaffolded approach to an e-learning research study, using divide-and-conquer techniques to reduce complexity in both design and delivery. It adapts and builds on familiar research methodology to offer a robust and accessible approach that can ensure effective evaluation of a wide range of innovative initiatives, including those covered in other books in the Connecting with e-Learning series. Readers will find this jargon-free guide is a must-have resource that provides the proper tools for evaluating e-learning practices with ease.
Reflecting the latest developments in the field, the Second Edition provides readers with effective methods for evaluating health programs, policies, and health care systems, offering expert guidance for collaborating with stakeholders involved in the process. Author David Grembowski explores evaluation as a three-act play: Act I shows evaluators how to work with decision makers and other groups to identify the questions they want answered; Act II covers selecting appropriate evaluation designs and methods to answer the questions and reveal insights about the program’s impacts, cost-effectiveness, and implementation; and Act III discusses making use of the findings. Packed with relevant examples and detailed explanations, the book offers a step-by-step approach that fully prepares readers to apply research methods in the practice of health program evaluation.
Focusing on ethical challenges in program evaluation, this innovative book features six case-study scenarios that end at a point where the evaluator faces a significant decision about how to proceed. For each case, two distinguished evaluators offer insights on the best course of action to choose, and why. "What If?" boxes modify the details of the scenarios, inviting readers to reflect on whether these changes alter the ethical implications of the case. Six additional cases are presented with questions that guide readers to develop their own ethical analyses. The book is organized to follow the progress of an evaluation, from the entry/contracting phase through the utilization of results.