Measuring scientists

Julie’s introduction:

All of us need to get used to the idea of being measured. We get measured every time we ask for funding, go for a job, look for a promotion, apply to be on a committee, compete for an award, or seek an appointment. This makes it important to understand how the measurement systems work. As a step in that direction, we're very happy at the ICECReam this week to bring you a piece from Julie Birkholz. Julie is working her way through a PhD at the VU Amsterdam, and her research investigates modern knowledge production behaviors in science, from collaboration to the emerging use of the Social Web in scientific practice. You can find her online at: http://juliembirkholz.wordpress.com/

The most common measures used to evaluate science are based on bibliometric data: publication records that include citation counts, journal impact classifications, and so on (see [1] for a complete description of the quantitative indicators used in science). This data is commonly kept by private companies, usually former publishing houses that have become "business database" companies, which classify and analyze the data to understand trends. These companies also maintain the databases where you go to search for publications: Thomson Reuters, who run Web of Knowledge/Web of Science; the National Center for Biotechnology Information, the organization behind PubMed; and Google Scholar. Each of these services differs in the breadth and depth of its coverage across disciplines; they index journal publications, conference proceedings, book publication data, and so forth. They also collect citation records: the number of times a record is cited by other publications in the database. Citation records are the core data used in computing the traditional measures of reputation and scientific performance, and of the value of both the contributed knowledge and its author(s).

Most academic research institutions evaluate researchers on one of two measures (in addition to the number of publications): the citation score and/or the h-index. (Disclaimer: the measures described here are only the most commonly used; others are also used depending on your field and institution.) The pure citation score, the number of times a publication has been cited by others, allows one to evaluate the value of a publication in comparison to its field. The same measure can also serve as a performance measure for a scientist or institution, providing insight into how that scientist's or institution's publications are valued within the field.
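To make the arithmetic concrete, here is a minimal Python sketch of how a citation record might be summarized and set against a field baseline; all the numbers are invented for illustration.

    # Minimal sketch: summarizing a citation record and setting it against
    # a field baseline. All numbers here are invented for illustration.
    my_citations = [45, 12, 3]      # citations per publication (hypothetical)
    field_average_per_paper = 15.0  # hypothetical field-wide average

    total = sum(my_citations)
    per_paper = total / len(my_citations)
    print("Total citations:", total)
    print("Citations per paper: %.1f (field average: %.1f)"
          % (per_paper, field_average_per_paper))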

In addition to evaluating scientists or individual publications, citation data is also used to compute the impact factor of a journal [2]. The impact factor (IF) measures the average number of citations per article published in a specific journal, typically over the preceding two years. The more citations a journal's articles receive, the higher the journal's impact/importance in the field. The IF can be found in common databases and is frequently used by academic institutions to specify which journals will be considered in evaluations of researchers.
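As a concrete illustration (again with invented numbers), the standard two-year IF for a given year is the number of citations received that year to articles the journal published in the previous two years, divided by the number of citable items it published in those two years:

    # Sketch of the standard two-year journal impact factor, using
    # invented numbers.
    def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
        # IF for year Y = citations in Y to items from Y-1 and Y-2,
        # divided by the number of citable items from Y-1 and Y-2.
        return citations_to_prior_two_years / citable_items_prior_two_years

    # Hypothetical journal: 480 citations this year to articles from the
    # previous two years, during which it published 200 citable items.
    print(impact_factor(480, 200))  # -> 2.4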

Another common measure is the h-index, developed by Jorge E. Hirsch [3], which attempts to provide a single metric describing both the quantity (number of publications) and the quality/impact (number of citations) of a scientist's work. In practice, publications are arranged in descending order by citation score. You then go down this ordered list and find the largest rank h at which the publication still has at least h citations; that rank is the h-index. In the example below, I have three publications arranged in descending order by citation score. My h-index is 2, because the publication ranked second has at least two citations, while the publication ranked third has fewer than three. But you don't need to worry about doing the dirty work of calculating your h-index manually: most bibliometric databases and publication services do this for you automatically (or check out the Publish or Perish tool developed by Prof. Anne-Wil Harzing: http://www.harzing.com/).

[Figure: h-index example, three publications ranked by citation score]
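For readers who like to see the procedure spelled out, here is a short Python sketch of the h-index calculation described above; the citation counts are invented to reproduce the three-publication example (h = 2).

    def h_index(citations):
        # Rank publications in descending order of citation count, then
        # find the largest rank h whose publication has at least h citations.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Three hypothetical publications with 10, 5 and 2 citations:
    # ranks 1 and 2 have at least 1 and 2 citations respectively, but
    # rank 3 has fewer than 3 citations, so the h-index is 2.
    print(h_index([10, 5, 2]))  # -> 2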

Although the h-index is commonly used, it has a number of drawbacks: it does not control for the length of a scientist's career (it can never go down, nor does it indicate how long a researcher has been active in a field); self-citations are included (some argue that counting your own citations gives a false indication of a paper's value); and it does not take into account the number of authors of a publication. Despite these drawbacks (among others examined in detail by Prof. Harzing), this measure remains among the most used in national and institutional evaluations.

As you can imagine, these scores can be interpreted and implemented in a number of ways, and they depend heavily on appropriate comparisons to be interpreted correctly, since every field has its own dynamics and communication practices. The indexes are also highly influenced by the reliability of the publication data used to compute them. Furthermore, most institutions are, unfortunately, uninformed about the proper application of these measures and use them as universal evaluation tools, comparing disciplines with each other.

Keep these biases in mind when discussing your own work with your institution or a potential employer. Being aware of these issues helps you defend the value of your own work to your field and your contribution to science. When it comes to how we as researchers are measured, these indexes can potentially make or break a research career.

[1] Moed, H.F., Glänzel, W., & Schmoch, U. (Eds.) (2004). Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems. Kluwer Academic Publishers: Dordrecht, The Netherlands.

[2] Glänzel, W., & Moed, H.F. (2002). "Journal impact measures in bibliometric research". Scientometrics, Volume 53, Number 2, 171–193.

[3] Hirsch, J.E. (2005). "An index to quantify an individual's scientific research output". Proceedings of the National Academy of Sciences, Volume 102, Number 46, 16569–16572.