Web Indicators for Research Evaluation

In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment, and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard scholarly outputs. The book describes strategies to analyse web indicators for individual publications as well as to compare the impacts of groups of publications. The practical part of the book includes descriptions of how to use the free software Webometric Analyst to gather and analyse web data. This book is written for information science undergraduate and Master's students who are learning about alternative indicators or scientometrics, as well as Ph.D. students and other researchers and practitioners using indicators to help assess research impact or to study scholarly communication.
Partial translation of the Introduction: "Research evaluation is currently being rethought around the world. In some cases research produces very good results, in most cases the results are mediocre, and in some cases they are negative. For all these reasons, evaluating research results becomes a sine qua non. When researchers were fewer, it was their own professional colleagues who evaluated research. Over time, the number of researchers grew, research areas proliferated, and research outputs multiplied. The trend continued, and after the Second World War research began to grow exponentially. Today, even by a conservative estimate, there are more than a million researchers, producing more than two million research papers and other documents per year. In this context, research evaluation is a matter of prime importance. For any promotion, accreditation, award or grant there may be tens or hundreds of nominees, and selecting the best candidate among them is difficult. Peer evaluations have in many cases proven to be subjective. In 1963 the Science Citation Index (SCI) was created, covering the scientific literature from 1961 onwards. A few years later, Eugene Garfield, founder of the SCI, prepared a list of the 50 most cited scientific authors, based on the citations an author's work received from the work of other researchers. The paper, entitled "Can Nobel Prize winners be predicted?", was published in 1968 (Garfield and Malin, 1968). The following year, 1969, two scientists on the list, Derek H. R. Barton and Murray Gell-Mann, received the coveted prize.
This vindicated the usefulness of citation analysis. Every year, several scientists in the fields of Physics, Chemistry, and Physiology or Medicine receive the Nobel Prize, and in this way citation analysis became a useful tool. Nevertheless, citation analysis has always drawn criticism and has multiple flaws. Even Garfield remarked: "The use of citation analysis in evaluating work is a difficult task. There are many possibilities for error" (Garfield, 1983). For research evaluation, some other indicators were needed. Citation analysis, together with peer review, guarantees the best judgment in countless cases, but something more exact is needed. The arrival of the World Wide Web (WWW) provided the opportunity, as a good number of indicators are being generated from the data available on the WWW." (Trans. Julio Alonso Arévalo, Univ. Salamanca).
Aimed at academics, academic managers and administrators, professionals in scientometrics, information scientists and science policy makers at all levels. This book reviews the principles, methods and indicators of scientometric evaluation of information processes in science and assessment of the publication activity of individuals, teams, institutes and countries. It provides scientists, science officers, librarians and students with basic and advanced knowledge on evaluative scientometrics. Particular emphasis is placed on methods applicable in practice and on clarifying the quantitative aspects of the impact of scientific publications as measured by citation indicators. - Written by a highly knowledgeable and well-respected scientist in the field - Provides practical and realistic quantitative methods for evaluating the scientific publication activities of individuals, teams, countries and journals - Gives standardized descriptions and classification of the main categories of evaluative scientometrics
This book constitutes the refereed proceedings of the International Workshop on Altmetrics for Research Outputs Measurements and Scholarly Information Management, AROSIM 2018, held in Singapore in January 2018. The 7 revised full papers, presented together with two keynote papers and one introduction paper, were carefully reviewed and selected from 20 submissions. The workshop investigated how social media based metrics, along with traditional and non-traditional metrics, can advance the state of the art in measuring research outputs.
We intend to edit a Festschrift for Henk Moed combining a "best of" collection of his papers with new contributions (original research papers) by authors who have worked and collaborated with him. The outcome of this original combination aims to provide an overview of the advancement of the field at the intersection of bibliometrics, informetrics, science studies and research assessment.
This handbook presents the state of the art of quantitative methods and models to understand and assess the science and technology system. Focusing on various aspects of the development and application of indicators derived from data on scholarly publications, patents and electronic communications, the individual chapters, written by leading experts, discuss theoretical and methodological issues, illustrate applications, highlight their policy context and relevance, and point to future research directions. A substantial portion of the book is dedicated to detailed descriptions and analyses of data sources, presenting both traditional and advanced approaches. It addresses the main bibliographic metrics and indexes, such as the journal impact factor and the h-index, as well as altmetric and webometric indicators and science mapping techniques on different levels of aggregation and in the context of their value for the assessment of research performance as well as their impact on research policy and society. It also presents and critically discusses various national research evaluation systems. Complementing the sections reflecting on the science system, the technology section includes multiple chapters that explain different aspects of patent statistics, patent classification and database search methods to retrieve patent-related information. In addition, it examines the relevance of trademarks and standards as additional technological indicators. The Springer Handbook of Science and Technology Indicators is an invaluable resource for practitioners, scientists and policy makers wanting a systematic and thorough analysis of the potential and limitations of the various approaches to assess research and research performance.
This book is written for members of the scholarly research community, and for persons involved in research evaluation and research policy. More specifically, it is directed towards the following four main groups of readers: – All scientists and scholars who have been or will be subjected to a quantitative assessment of research performance using citation analysis. – Research policy makers and managers who wish to become conversant with the basic features of citation analysis, and about its potentialities and limitations. – Members of peer review committees and other evaluators, who consider the use of citation analysis as a tool in their assessments. – Practitioners and students in the field of quantitative science and technology studies, informetrics, and library and information science. Citation analysis involves the construction and application of a series of indicators of the ‘impact’, ‘influence’ or ‘quality’ of scholarly work, derived from citation data, i.e. data on references cited in footnotes or bibliographies of scholarly research publications. Such indicators are applied both in the study of scholarly communication and in the assessment of research performance. The term ‘scholarly’ comprises all domains of science and scholarship, including not only those fields that are normally denoted as science – the natural and life sciences, mathematical and technical sciences – but also social sciences and humanities.
This book is written for anyone who is interested in how a field of research evolves and in the fundamental role of understanding the uncertainties involved at different levels of analysis, ranging from macroscopic views to meso- and microscopic ones. We introduce a series of computational and visual analytic techniques from research areas such as text mining, deep learning, information visualization and science mapping, so that readers can apply these tools to the study of a subject matter of their choice. In addition, we set this diverse set of methods in an integrative context that draws upon insights from philosophical, sociological and evolutionary theories of what drives the advances of science, so that readers can guide their own research with enriched theoretical foundations. Scientific knowledge is complex. A subject matter is typically built on its own set of concepts, theories, methodologies and findings, discovered by generations of researchers and practitioners. Scientific knowledge, as known to the scientific community as a whole, experiences constant change. Some changes are long-lasting, whereas others may be short-lived. How can we keep abreast of the state of the art as science advances? How can we effectively and precisely convey the status of current science to the general public as well as to scientists across different disciplines? The study of scientific knowledge has been overwhelmingly focused on scientific knowledge per se. In contrast, the status of scientific knowledge at various levels of granularity has been largely overlooked. This book aims to highlight the role of uncertainties in developing a better understanding of the status of scientific knowledge at a particular time, and of how that status evolves over the course of the development of research. Furthermore, we demonstrate how knowledge of the types of uncertainties associated with scientific claims serves as an integral and critical part of our domain expertise.
‘Represents the culmination of an 18-month-long project that aims to be the definitive review of this important topic. Accompanied by a scholarly literature review, some new analysis, and a wealth of evidence and insight... the report is a tour de force; a once-in-a-generation opportunity to take stock.’ – Dr Steven Hill, Head of Policy, HEFCE, LSE Impact of Social Sciences Blog ‘A must-read if you are interested in having a deeper understanding of research culture, management issues and the range of information we have on this field. It should be disseminated and discussed within institutions, disciplines and other sites of research collaboration.’ – Dr Meera Sabaratnam, Lecturer in International Relations at the School of Oriental and African Studies, University of London, LSE Impact of Social Sciences Blog Metrics evoke a mixed reaction from the research community. A commitment to using data and evidence to inform decisions makes many of us sympathetic, even enthusiastic, about the prospect of granular, real-time analysis of our own activities. Yet we only have to look around us at the blunt use of metrics to be reminded of the pitfalls. Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to positive ends is the focus of this book. Drawing on extensive evidence-gathering, analysis and consultation, the authors take a thorough look at the potential uses and limitations of research metrics and indicators. They explore the use of metrics across different disciplines, assess their potential contribution to the development of research excellence and impact, and consider the changing ways in which universities are using quantitative indicators in their management systems. Finally, they consider the negative or unintended effects of metrics on various aspects of research culture.
Including an updated introduction from James Wilsdon, the book proposes a framework for responsible metrics and makes a series of targeted recommendations to show how responsible metrics can be applied in research management, by funders, and in the next cycle of the Research Excellence Framework. The metric tide is certainly rising. Unlike King Canute, we have the agency and opportunity – and in this book, a serious body of evidence – to influence how it washes through higher education and research.