"In Applied Evaluative Informetricsthe author adopts a didactic approach and a pragmatic perspective, drawing on his extensive knowledge and experience. He has written the book for a general audience of non-experts and only secondarily for the scientometric or informetric community." (David A. Pendlebury, Scientometrics, Vol. 119, 2019)
Preface
Structure of the book
Acknowledgements
Executive Summary
PART 1: General Introduction and Synopsis
1. General introduction
2. Base notions and general conclusions
3. Synopsis (incl. summary tables)
PART 2: Historical overview lectures
4. De Solla Price’s Networks of Scientific Papers
5. The Citation Cycle and the structure of science
6. Science maps and mapping software
7. Eugene Garfield’s Science Citation Index
8. Web of Science, Scopus and Google Scholar
9. Informetrics as a big data science
10. Science-Technology linkages: Francis Narin’s patent citation analysis
11. Models of the relationship between science and technology
12. Position of firms in bibliometric rankings
13. Journal impact factor and SNIP
14. Eigenfactor and SJR
15. Full text downloads
16. Relative citation rates
17. H-index
18. Altmetrics (including downloads)
PART 3: Studies on hot topics
19. World university rankings
20. The effects of Open Access publishing
21. Models of scientific development
22. Google Scholar
PART 4: Perspective articles
23. The potential of altmetrics
24. The need to study the quality of the manuscript peer review process
25. The need to develop author self-evaluation tools
26. On the measurement of societal impact
27. Informetric applications in other disciplines: sport science and musicology
References
Henk F. Moed is a former senior staff member and full professor of research assessment methodologies at the Centre for Science and Technology Studies (CWTS) at Leiden University. He obtained a Ph.D. in Science Studies at Leiden University in 1989. He has been active in numerous research areas, including: the creation of bibliometric databases from the raw data of Thomson Scientific’s Web of Science and Elsevier’s Scopus; the analysis of inaccuracies in citation matching; the assessment of the potential and pitfalls of journal impact factors; the development and application of science indicators for measuring research performance in the basic natural and life sciences; the use of bibliometric indicators as a tool to assess peer review procedures; the development and application of performance indicators in the social sciences and humanities; studies of the effects of ‘Open Access’ publishing upon research impact; studies of patterns in the ‘usage’ (downloading) behaviour of users of electronic scientific publication warehouses; and studies of the effects of the use of bibliometric indicators upon scientific authors and journal publishers.
He has published numerous research articles and is an editor of several journals in his field. He won the Derek de Solla Price Award in 1999. He edited, jointly with W. Glänzel and U. Schmoch, the Handbook of Quantitative Science and Technology Research (Kluwer, 2004), and published Citation Analysis in Research Evaluation (Springer, 2005), one of the very few textbooks in the field.
He developed a new indicator of journal impact, SNIP (Source Normalized Impact per Paper), a so-called “rolling year” journal metric. He is a member of the Board of the International Society for Scientometrics and Informetrics (ISSI). He was a Senior Scientific Advisor at Elsevier for four years, a founder of the Elsevier Bibliometric Research Program (EBRP), which ran until August 2013, and a founder of the Elsevier Metrics Development Program (from 2014). He was also Director of the Informetric Research Group (2012-2014).
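To give a rough idea of how SNIP works, the following is a minimal sketch based on Moed’s original 2010 definition (the computation currently used in Scopus has since been refined, and the abbreviations RIP, DCP and RDCP follow that paper):

\[
\mathrm{SNIP} \;=\; \frac{\mathrm{RIP}}{\mathrm{RDCP}},
\qquad
\mathrm{RDCP} \;=\; \frac{\mathrm{DCP}}{\operatorname{median}_{j}\,\mathrm{DCP}_{j}}
\]

Here RIP (raw impact per paper) is the number of citations a journal receives in the year of analysis to its papers from the three preceding years, divided by the number of those papers (hence a “rolling year” metric, as opposed to a fixed publication-year window). DCP (database citation potential) measures how frequently papers in the journal’s subject field cite recent literature covered by the database; dividing a journal’s DCP by the median DCP over all journals j in the database gives the relative value RDCP. Dividing RIP by RDCP thus corrects raw impact for field-dependent differences in citation practices.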
This book presents an introduction to the field of applied evaluative informetrics: the use of bibliometric and informetric indicators in research assessment. It sketches the field’s history, recent achievements, and its potential and limits. The book pays special attention to the application context of quantitative research assessment. It describes research assessment as an evaluation science and distinguishes various assessment models in which the domain of informetrics and the policy sphere are analytically disentangled. It illustrates how external, non-informetric factors influence indicator development, and how the policy context shapes the setup of an assessment process. It also clarifies common misunderstandings in the interpretation of some frequently used statistics.
Addressing the way forward, the book presents the author’s critical views on a series of fundamental problems in the current use of research performance indicators in research assessment. Highlighting the potential of informetric techniques, it proposes a series of new features that could be implemented in future assessment processes, sketches a perspective on altmetrics, and proposes new lines of longer-term, strategic indicator research.
It is written for interested scholars from all domains of science and scholarship, and especially for all those subjected to research assessment; for research students at advanced master’s and PhD level; for research managers, funders and science policy officials; and for practitioners and students in the field.