Basic science consumes vast amounts of public funding. Therefore, it is necessary to identify promising research, research that can provide good returns.
The gains from applied research are evident but the benefits of basic research are less obvious. The most common way to measure the productivity of a researcher working in basic science is by counting the number of “papers” that he/she publishes. Papers are scientific articles published in specialized journals. Not only the quantity but also the “quality” of the papers is considered important. Quality is mainly assessed by the prestige or impact factor of a journal. Under this system the best scientists are those with the greatest number of papers published in the highest impact factor journals.
For this reason, papers have become an obsession for some scientists and no-one is surprised when a very senior researcher addresses a junior audience in the following terms:
“Your job is to produce papers; that’s what you are paid for and that’s what the funding bodies expect from you.”
I will not contend that this assertion is wrong or bad, although I suspect it goes against the views of the people who fund science with their taxes and charitable donations. They probably did not give money for scientists to perform experiments with the ultimate goal of publishing an article in a journal that they (if not involved in science and technology) have probably never heard of.
We could quiz the promoters of the idea expressed above (i.e. that the job of a junior scientist is to produce papers) and try to understand what it really means:
First, it will be argued that the statement should not be taken literally; but it is: a poor crop of papers often means scientific ostracism, eviction from the university or research council, and loss of funding. If you want to survive as a scientist, you need to publish high impact factor papers; whether this is your job or just a way of keeping your job is difficult to say.
It will be argued that papers are useful because they are picked up by “translational scientists” who use their basic concepts to develop new technologies. This is a good point, perhaps the strongest one in support of papers as a measure of productivity. Unfortunately, another argument (which cannot be derived from it) is often implicitly attached to this solid one: that high impact factor papers are the most useful. This is clearly not true: research that is very attractive to scientists and generates high impact factor papers sometimes produces no technological advances at all, whereas research that is extremely useful from a technological point of view may not be attractive to prestigious journals. This is described by L. Bornmann in EMBO Reports (1). The main problem is that the number of scientists reading a journal and citing its articles is what makes the journal prestigious in the first place. Like fashion magazines, top journals show a certain degree of “trendiness” in the articles they publish, as if an editor decided one day that we should all dress in pale blue and everyone rushed to buy pale blue jackets. Research that is highly cited or published in top journals may be good for an academic discipline but not for society or business. An editorial in the journal Nature Materials (2) illustrates this problem: it describes how, after the top 100 chemists working in materials science were ranked by the number and quality of their publications, 78 of them turned out to work in just one field (nanotechnology). The editorial comments that, in their view, important scientists working in less “trendy” subjects were left off the top 100 list.
Many scientists will argue that there is no way to measure scientific productivity other than papers. Perhaps there are other, better ways, but they may be less straightforward. In addition, some top scientists would like to shelter behind a wall of papers, shouting: “I produce top-notch science!”. It is very likely that they do produce high-quality research, but since they do it with public funding we may want something more in addition to that.
What else can be used to measure scientific productivity? The h-index (a scientist has an h-index of h if h of his/her papers have each been cited at least h times by other scientists, regardless of the journals in which they were published) has been put forward as a good estimate. This is an improvement on the impact factor-based system, but it is not without flaws: a person’s h-index can reflect longevity as well as quality, and it cannot decrease even if a scientist’s output does.
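To make the definition concrete, here is a minimal sketch of how an h-index could be computed from a list of per-paper citation counts (the citation data itself would have to come from a bibliographic database; the function name and example numbers are illustrative only):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    h = 0
    # Rank papers from most to least cited; h is the last rank at
    # which the citation count still meets or exceeds the rank.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A researcher whose papers were cited 10, 8, 5, 4 and 3 times has
# h = 4: four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Note how the sketch shows the two flaws mentioned above: adding new, uncited papers never lowers the result, and a long career with many moderately cited papers can outscore a short brilliant one.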
An interesting concept, supported by the editors of Nature Immunology (3), a journal with an extremely high impact factor, is to analyse the number of times a research paper is accessed or downloaded online. This is more up-to-date than counting citations by other researchers and reflects a wider audience than just scientists.
I once heard a business person complain: “Scientists are full of themselves. They talk loudly about how useful their research is, but it is difficult for me to figure out whether they are being honest about the value of what they do.” Patents and technology transfer should be prized and weighted more heavily: they prove that scientists are genuinely trying to make their research useful.
The training of students and mentoring should also be measured and better appreciated. Industry could not work without appropriately trained BScs and MScs, yet their training by young scientists goes unrewarded. Mentoring is extremely important: disciples who go on to become senior scientists ensure that work will be carried forward long after an individual scientist retires and his/her papers become outdated.
(1) EMBO Reports vol. 13 p673 2012
(2) Nature Materials vol. 10 p477 2011
(3) Nature Immunology vol. 11 p873 2010