
We intend to edit a Festschrift for Henk Moed, combining a “best of” collection of his papers with new contributions (original research papers) by authors who have worked and collaborated with him. This combination aims to provide an overview of advances at the intersection of bibliometrics, informetrics, science studies and research assessment.
This book presents an introduction to the field of applied evaluative informetrics, dealing with the use of bibliometric or informetric indicators in research assessment. It sketches the field’s history, recent achievements, and its potential and limits. The book dedicates special attention to the application context of quantitative research assessment. It describes research assessment as an evaluation science, and distinguishes various assessment models in which the domain of informetrics and the policy sphere are disentangled analytically. It illustrates how external, non-informetric factors influence indicator development, and how the policy context shapes the setup of an assessment process. It also clarifies common misunderstandings in the interpretation of several frequently used statistics. Addressing the way forward, the book expresses the author’s critical views on a series of fundamental problems in the current use of research performance indicators in research assessment. Highlighting the potential of informetric techniques, it proposes a series of new features that could be implemented in future assessment processes. It sketches a perspective on altmetrics and proposes new lines of longer-term, strategic indicator research. It is written for interested scholars from all domains of science and scholarship, and especially for those subjected to research assessment, research students at advanced master’s and PhD level, research managers, funders and science policy officials, and practitioners and students in the field.
This Handbook provides a comprehensive overview of current developments, issues and good practices regarding assessment in social science research. It pays particular attention to the challenges of evaluation policy in the social sciences, as well as to the specific characteristics of publishing in the field.
This book is an authoritative handbook of current topics, technologies and methodological approaches that may be used for the study of scholarly impact. The included methods cover a range of fields such as statistical sciences, scientific visualization, network analysis, text mining, and information retrieval. The techniques and tools enable researchers to investigate metric phenomena and to assess scholarly impact in new ways. Each chapter offers an introduction to the selected topic and outlines how the topic, technology or methodological approach may be applied to metrics-related research. Comprehensive and up-to-date, Measuring Scholarly Impact: Methods and Practice is designed for researchers and scholars interested in informetrics, scientometrics, and text mining. The hands-on perspective is also beneficial to advanced-level students in fields from computer science and statistics to information science.
‘Represents the culmination of an 18-month-long project that aims to be the definitive review of this important topic. Accompanied by a scholarly literature review, some new analysis, and a wealth of evidence and insight... the report is a tour de force; a once-in-a-generation opportunity to take stock.’ – Dr Steven Hill, Head of Policy, HEFCE, LSE Impact of Social Sciences Blog

‘A must-read if you are interested in having a deeper understanding of research culture, management issues and the range of information we have on this field. It should be disseminated and discussed within institutions, disciplines and other sites of research collaboration.’ – Dr Meera Sabaratnam, Lecturer in International Relations at the School of Oriental and African Studies, University of London, LSE Impact of Social Sciences Blog

Metrics evoke a mixed reaction from the research community. A commitment to using data and evidence to inform decisions makes many of us sympathetic, even enthusiastic, about the prospect of granular, real-time analysis of our own activities. Yet we only have to look around us at the blunt use of metrics to be reminded of the pitfalls. Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to positive ends is the focus of this book. Using extensive evidence-gathering, analysis and consultation, the authors take a thorough look at potential uses and limitations of research metrics and indicators. They explore the use of metrics across different disciplines, assess their potential contribution to the development of research excellence and impact, and consider the changing ways in which universities are using quantitative indicators in their management systems. Finally, they consider the negative or unintended effects of metrics on various aspects of research culture. Including an updated introduction from James Wilsdon, the book proposes a framework for responsible metrics and makes a series of targeted recommendations to show how responsible metrics can be applied in research management, by funders, and in the next cycle of the Research Excellence Framework. The metric tide is certainly rising. Unlike King Canute, we have the agency and opportunity – and in this book, a serious body of evidence – to influence how it washes through higher education and research.
From the Introduction: “Research evaluation is currently being rethought worldwide. In some cases research work produces very good results, in most cases the results are mediocre, and in some cases they are negative. For all these reasons, the evaluation of research results becomes a sine qua non. When researchers were few, it was their own professional colleagues who evaluated research. Over time the number of researchers grew, research areas proliferated, and research outputs multiplied. The trend continued, and after the Second World War research began to grow exponentially. Today, even by a conservative estimate, there are more than a million researchers producing more than two million research papers and other documents per year. In this context, research evaluation is a matter of the first importance. For any promotion, accreditation, award or grant there may be tens or hundreds of nominees, and selecting the best candidate from among them is difficult. Peer evaluations are in many cases proving to be subjective. In 1963 the Science Citation Index (SCI) was created, covering the scientific literature from 1961 onwards. A few years later Eugene Garfield, founder of the SCI, prepared a list of the 50 most cited scientific authors, based on the citations an author’s work received from the work of other researchers. The paper, entitled ‘Can Nobel Prize winners be predicted?’, was published in 1968 (Garfield and Malin, 1968). The following year, 1969, two scientists on the list, Derek H. R. Barton and Murray Gell-Mann, received the coveted prize. This vindicated the usefulness of citation analysis. Every year several scientists in the fields of Physics, Chemistry, and Physiology or Medicine receive the Nobel Prize. In this way citation analysis became a useful tool. Citation analysis, however, has always drawn criticism and has multiple flaws. Even Garfield remarked: ‘Using citation analysis to evaluate papers is a tricky business. There are many possibilities for error’ (Garfield, 1983). For research evaluation, other indicators were needed. Citation analysis, together with peer review, guarantees the best judgment in countless cases, but something more exact is needed. The arrival of the World Wide Web (WWW) provided the opportunity, as a good number of indicators are now being generated from data available on the WWW.” (Trans. Julio Alonso Arévalo, Univ. Salamanca.)
Scientometrics has become an essential element in the practice and evaluation of science and research, including both the evaluation of individuals and national assessment exercises. Yet researchers and practitioners in this field have lacked clear theories to guide their work. As early as 1981, then-doctoral student Blaise Cronin published “The need for a theory of citing” – a call to arms for the fledgling scientometric community to produce foundational theories upon which the work of the field could be based. More than three decades later, the time has come to reach out to the field again and ask how it has responded to this call. This book compiles the foundational theories that guide informetrics and scholarly communication research. It is a much-needed compilation by leading scholars in the field, gathering together the theories that guide our understanding of authorship, citing, and impact.
This book is written for members of the scholarly research community, and for persons involved in research evaluation and research policy. More specifically, it is directed towards the following four main groups of readers:
– All scientists and scholars who have been or will be subjected to a quantitative assessment of research performance using citation analysis.
– Research policy makers and managers who wish to become conversant with the basic features of citation analysis, and with its potentialities and limitations.
– Members of peer review committees and other evaluators who consider using citation analysis as a tool in their assessments.
– Practitioners and students in the field of quantitative science and technology studies, informetrics, and library and information science.
Citation analysis involves the construction and application of a series of indicators of the ‘impact’, ‘influence’ or ‘quality’ of scholarly work, derived from citation data, i.e. data on references cited in footnotes or bibliographies of scholarly research publications. Such indicators are applied both in the study of scholarly communication and in the assessment of research performance. The term ‘scholarly’ comprises all domains of science and scholarship, including not only those fields normally denoted as science – the natural and life sciences, and the mathematical and technical sciences – but also the social sciences and humanities.
New methods in bibliometrics and alternative metrics provide us with information about research impact at both increasingly granular and global levels. Here, editor Elaine Lasda and a cast of expert contributors present a variety of case studies that demonstrate the practical utilization of these new scholarly metrics.