
Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings. The research evaluation market is booming. “Ranking,” “metrics,” “h-index,” and “impact factors” are reigning buzzwords. Governments and research administrators want to evaluate everything—teachers, professors, training programs, universities—using quantitative indicators. Among the tools used to measure “research excellence,” bibliometrics—aggregate data on publications and citations—has become dominant. Bibliometrics is hailed as an “objective” measure of research quality, a quantitative measure more useful than “subjective” and intuitive evaluation methods such as peer review, which has been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they purport to measure. Although the study of publication and citation patterns, at the proper scales, can yield insights into the global dynamics of science over time, ill-defined quantitative indicators often have perverse and unintended effects on the direction of research. Moreover, bibliometrics is abused when data are manipulated to boost rankings. Gingras examines the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.
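For readers unfamiliar with the indicators named above, the two most prominent ones have compact standard definitions. These are the conventional textbook formulations, not Gingras's own notation:

```latex
% Two-year journal impact factor of journal J in year y: citations received
% in year y by the items J published in the two preceding years, divided by
% the number of citable items J published in those years.
\mathrm{JIF}_J(y) = \frac{c_y(P_{y-1}) + c_y(P_{y-2})}{|P_{y-1}| + |P_{y-2}|}

% h-index: with an author's papers ranked by descending citation counts
% c_1 \ge c_2 \ge \dots \ge c_n, it is the largest rank i whose paper
% still has at least i citations.
h = \max \{\, i : c_i \ge i \,\}
```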
"Bibliometrics and altmetrics are increasingly becoming the focus of interest in the context of research evaluation. The Handbook Bibliometrics provides a comprehensive introduction to quantifying scientific output in addition to a historical derivation, individual indicators, institutions, application perspectives and data bases. Furthermore, application scenarios, training and qualification on bibliometrics and their implications are considered"--Publisher's website.
This book is written for members of the scholarly research community and for persons involved in research evaluation and research policy. More specifically, it is directed towards four main groups of readers:

– All scientists and scholars who have been or will be subjected to a quantitative assessment of research performance using citation analysis.
– Research policy makers and managers who wish to become conversant with the basic features of citation analysis, and with its potentialities and limitations.
– Members of peer review committees and other evaluators who consider using citation analysis as a tool in their assessments.
– Practitioners and students in the fields of quantitative science and technology studies, informetrics, and library and information science.

Citation analysis involves the construction and application of a series of indicators of the 'impact', 'influence' or 'quality' of scholarly work, derived from citation data, i.e. data on the references cited in the footnotes or bibliographies of scholarly research publications. Such indicators are applied both in the study of scholarly communication and in the assessment of research performance. The term 'scholarly' comprises all domains of science and scholarship, including not only those fields that are normally denoted as science – the natural and life sciences, mathematical and technical sciences – but also the social sciences and humanities.
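As a concrete illustration of the kind of citation-based indicator described above, here is a minimal sketch of two simple measures: mean citations per publication and a field-normalized ratio. The function names and data are invented for the example and do not reproduce the book's own indicators or notation.

```python
# Minimal sketch of two simple citation-based indicators: mean citations
# per publication (CPP) and a field-normalized ratio. Illustrative only.

def citations_per_publication(citation_counts):
    """Mean number of citations per publication for a set of papers."""
    if not citation_counts:
        return 0.0
    return sum(citation_counts) / len(citation_counts)

def field_normalized_ratio(citation_counts, field_average):
    """CPP of a unit divided by the average CPP of its field.

    Values above 1.0 indicate above-average citation impact for that field;
    comparing raw counts across fields would ignore differing citation cultures.
    """
    if field_average <= 0:
        raise ValueError("field_average must be positive")
    return citations_per_publication(citation_counts) / field_average

# Hypothetical research group with five papers, in a field whose average
# paper collects 8.0 citations over the same citation window.
group = [12, 3, 25, 0, 7]
print(citations_per_publication(group))    # 9.4
print(field_normalized_ratio(group, 8.0))  # 1.175
```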
At last, the first systematic guide to the growing jungle of citation indices and other bibliometric indicators. Written with the aim of providing a complete and unbiased overview of all available statistical measures for scientific productivity, the core of this reference is an alphabetical dictionary of indices and other algorithms used to evaluate the importance and impact of researchers and their institutions. In 150 major articles, the authors describe all indices in strictly mathematical terms without passing judgement on their relative merit. From widely used measures, such as the journal impact factor or the h-index, to highly specialized indices, all indicators currently in use in the sciences and humanities are described, and their application explained. The introductory section and the appendix contain a wealth of valuable supporting information on data sources, tools and techniques for bibliometric and scientometric analysis - for individual researchers as well as their funders and publishers.
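To show how such an index reduces an entire citation record to a single number, here is a minimal sketch of the widely used h-index, one of the measures the dictionary covers: the largest h such that the author has h papers each cited at least h times. The citation counts below are hypothetical.

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: sorted counts are 33, 30, 20, 15, 7, 6, 5, 4;
# six papers have at least 6 citations, but not seven with at least 7.
print(h_index([4, 15, 6, 30, 7, 33, 20, 5]))  # 6
```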
A comprehensive, state-of-the-art examination of the changing ways we measure scholarly performance and research impact.
This book analyses and discusses recent developments in assessing research quality in the humanities and related fields in the social sciences. Research assessment in the humanities is highly controversial and delicate: while citation-based research performance indicators are widely used in the natural and life sciences, quantitative measures of research performance meet strong opposition in the humanities. This volume combines presentations of state-of-the-art projects on research assessment in the humanities, written by humanities scholars themselves, with accounts from research funders of how humanities research is evaluated in practice. Chapters on bibliometric issues specific to humanities research complete the analysis. The selection of authors is well balanced between humanities scholars, research funders, and researchers on higher education; the edited volume thus succeeds in painting a comprehensive picture of research evaluation in the humanities. This book is valuable to university and science policy makers, university administrators, research evaluators, and bibliometricians, as well as humanities scholars who seek expert knowledge of research evaluation in the humanities.
Can the methods of science be directed toward science itself? How did it happen that scientists, scientific documents, and their bibliographic links came to be regarded as mathematical variables in abstract models of scientific communication? What is the role of quantitative analyses of scientific and technical documentation in current science policy and management? Bibliometrics and Citation Analysis: From the Science Citation Index to Cybermetrics answers these questions through a comprehensive overview of theories, techniques, concepts, and applications in the interdisciplinary and steadily growing field of bibliometrics. Since citation indexes came into the limelight during the mid-1960s, citation networks have become increasingly important for many different research fields. The book begins by investigating the empirical, philosophical, and mathematical foundations of bibliometrics, including its beginnings with the Science Citation Index, the theoretical framework behind it, and its mathematical underpinnings. It then examines the application of bibliometrics and citation analysis in the sciences and science studies, especially the sociology of science and science policy. Finally, it provides a view of the future of bibliometrics, exploring in detail the ongoing extension of bibliometric methods to the structure and dynamics of the World Wide Web. This book gives newcomers to the field of bibliometrics an accessible entry point to an entire research tradition otherwise scattered through a vast amount of journal literature. At the same time, it brings to the forefront the cross-disciplinary linkages between the various fields (sociology, philosophy, mathematics, politics) that intersect at the crossroads of citation analysis. Because of its discursive and interdisciplinary approach, the book is useful not only to those in every area of scholarship involved in the quantitative analysis of information exchanges, but also to science historians and general readers who simply wish to familiarize themselves with the field.
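To make the notion of a citation network concrete, the following sketch represents papers as nodes and citations as directed edges, counting in-degree (times cited) as the crudest node-level impact measure; link-based rankings of the web generalize this same idea. All paper identifiers are invented.

```python
from collections import defaultdict

# A directed citation network: the pair (a, b) records that paper a
# cites paper b. Paper identifiers are hypothetical.
citations = [
    ("paper_A", "paper_C"),
    ("paper_B", "paper_C"),
    ("paper_B", "paper_D"),
    ("paper_C", "paper_D"),
]

# In-degree of each node, i.e. how often each paper has been cited.
times_cited = defaultdict(int)
for citing, cited in citations:
    times_cited[cited] += 1

for paper in sorted(times_cited):
    print(paper, times_cited[paper])  # paper_C: 2, paper_D: 2
```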
'The economic crisis has simultaneously placed a strong emphasis on the role of R&D as an engine of economic growth and a demand that limited public resources are demonstrated to have had the maximum possible impact. Rigorous evaluation is the key to meeting these needs. This Handbook brings together highly experienced leaders in the field to provide a comprehensive and well-organised state-of-the-art overview of the range of methods available. It will prove invaluable to experienced practitioners, students in the field and, more widely, to those who want to increase their understanding of the complex and pervasive ways in which technological advance contributes to economic and social progress.' – Luke Georghiou, University of Manchester, UK

'Theoretical and empirical research on program evaluation has advanced rapidly in scope and quality. A concomitant trend is increasing pressure on policymakers to show that programs are "effective". Now is the time for a comprehensive status report on state-of-the-art research and methods by leading scholars in a variety of disciplines on program evaluation. This outstanding collection of contributions will serve as a valuable reference tool for academics, policymakers, and practitioners for many years to come.' – Donald S. Siegel, University at Albany, SUNY, US

There has been a dramatic increase in expenditures on public goods over the past thirty years, particularly in the area of research and development. As governments explore the many opportunities for growth in this area, they – and the general public – are becoming increasingly concerned with the transparency, accountability and performance of public programs. This pioneering Handbook offers a collection of critical essays on the theory and practice of program evaluation, written by some of the best-known experts in the field. As this volume demonstrates, a wide variety of methodologies exist for evaluating the objectives and outcomes of research and development programs in particular. These include surveys, statistical and econometric estimation, patent analyses, bibliometrics, scientometrics, network analyses, case studies, and historical tracings. The contributors divide these and other methods and applications into four categories – economic, non-economic, hybrid and data-driven – in order to discuss the many factors that affect the utility of each technique and how each bears on the technological, economic and societal forecasts for the programs in question. Scholars, practitioners and students with an interest in economics and innovation will all find this Handbook an invaluable resource.
Policy makers, academic administrators, scholars, and members of the public are clamoring for indicators of the value and reach of research. The question of how to quantify the impact and importance of research and scholarly output, from the publication of books and journal articles to the indexing of citations and tweets, is a critical one in predicting innovation and in deciding what sorts of research are supported and who is hired to carry them out. A wide set of data and tools is available for measuring research, but these are often used in crude ways, and each has its own limitations and internal logic. Measuring Research: What Everyone Needs to Know® provides, for the first time, an accessible account of the methods used to gather and analyze data on research output and impact. Following a brief history of scholarly communication and its measurement, from traditional peer review to crowdsourced review on the social web, the book looks at the classification of knowledge and academic disciplines, the differences between citations and references, the role of peer review, national research evaluation exercises, the tools used to measure research, the many different types of measurement indicators, and how to measure interdisciplinarity. The book also addresses emerging issues within scholarly communication, including whether measurement promotes a "publish or perish" culture, fraud in research, and "citation cartels." Finally, it looks at the stakeholders behind these analytical tools, the adverse effects of such quantification, and the future of research measurement.