The purpose of these notes is to present an objective view of the evaluation of Portuguese research units currently taking place, which, if not halted, will destroy a large part of a system that has taken years to build.
The Portuguese system
Funding systems may differ greatly between countries. In Portugal, research units are to a large extent independent from other structures such as departments and universities. The funding of these research units comes mostly from the Foundation for Science and Technology (FCT), the only public national funding agency for science in Portugal. Thus, unlike in countries such as France, where there is more than one source of funding (the ANR and the CNRS, for instance), in Portugal practically all public funding for science is channeled through FCT. Universities do not play a role in research funding: from a financial perspective, their purpose is to provide salaries for staff members and to administer the research funds coming from FCT.
This means that a research unit that did not make it to the second stage of the ongoing evaluation (with site visits) will inevitably cease to exist as such. Indeed, not even in the least financially demanding of areas can a unit of 30 researchers survive on a total budget of 5,000 Euros per year, as has been announced; this possibility cannot be taken seriously. For comparison, the same unit with the same classification received, after the 2007 evaluation, 2,750 Euros per researcher per year, that is, 82,500 Euros per year. Many researchers agree that per-head funding is not a good system, since it gives units an incentive to increase their numbers irrespective of quality. There are, however, other ways of addressing this issue while still ensuring that the amounts involved allow the active researchers in the units to continue their work, which will not be possible with the funding (or rather, the lack of it) proposed in the present evaluation.
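For concreteness, the following short Python sketch reproduces the arithmetic behind the comparison above; the 30-researcher unit and the amounts are those quoted in the text.

```python
# Back-of-the-envelope comparison of the two funding regimes described above
# for a hypothetical 30-researcher unit with the same classification.
researchers = 30

# After the 2007 evaluation: 2,750 EUR per researcher per year.
funding_2007 = researchers * 2_750
print(f"2007 regime:      {funding_2007:,} EUR/year")   # 82,500 EUR/year

# Announced for the current evaluation: a flat 5,000 EUR/year for the whole unit.
funding_now = 5_000
print(f"Current proposal:  {funding_now:,} EUR/year")

print(f"Reduction factor: {funding_2007 / funding_now:.1f}x")   # about 16.5x
```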
Previous FCT evaluations
No researcher in Portugal objects to ranking units into tiers that determine their funding, with the best being allocated more funds than the others. That is how science funding should work and, in fact, that is how things have worked in Portugal until now. And although there have always been some (minor and isolated) complaints after each of the previous evaluations, nothing on this scale had been witnessed before.
In previous evaluations there were 5 grades: Poor, Fair, Good, Very Good and Excellent. Units below the Very Good mark were not entitled to strategic funding. However, units rated Good still received funding that allowed them to work to improve their standing. In particular, between the 2003 and the 2007 evaluations, while the percentage of units rated Excellent remained at 21%, the percentage rated Very Good increased from 31% to 38% (these figures exclude the so-called Associate Labs).
It should also be made clear that the panels which carried out the previous evaluations were, as they are now, international, and that nearly all of the units being evaluated agree that it should remain that way. In fact, the only voices to have disagreed publicly come from fields where the Portuguese language plays a strong role, such as parts of the Humanities, and even these accept that involving Brazilian researchers or other specialists in Portuguese language and culture abroad would solve the problem.
The current evaluation
In this evaluation there are now 6 grades: Poor, Fair, Good, Very Good, Excellent and Outstanding. The lowest three levels are not entitled to any strategic funding, while, as described above, units rated Good will still receive some funding, but an amount so small that it is not enough to carry on research at any level.
Those in the three highest levels will be visited by the panels and will be entitled to strategic funding. However, it has not been stated how this strategic funding is to be allocated as a function of the classification, nor how much of what a unit requested will be granted; in particular, there were no rules on how much each unit could request.
Regarding panels, the main difference between this and the 2007 evaluation is that, before, there were 25 panels divided by scientific area, containing a total of 256 experts, and, with some very specific exceptions, each panel had at least 8 members. This allowed for broad coverage of the different areas, and all the experts had a global view of the area they were evaluating.
In the current evaluation there were only 6 panels plus one multidisciplinary panel, with a total of 73 experts. A list of the panels and the names of the researchers involved in the 2007 evaluation may be found here, while the corresponding lists for the current evaluation, summarised in Table 1, may be found here.
[Table 1, not reproduced in full here, listed the members of the six panels (Exact Sciences, Engineering Sciences, Health and Life Sciences, Natural and Environmental Sciences, Social Sciences, Humanities), their home institutions and each panel's chair.]
Table 1. Panels responsible for the final decisions as to whether research units made it to the second stage or not. Not making it to the second stage implies either no funding whatsoever or only a very limited amount, not enough to keep any research going. 48% of units are in this situation.
Each of these 6 panels brought together several different areas, such as Chemistry and Mathematics (Exact Sciences), Chemical and Civil Engineering (Engineering Sciences), Public Health and Experimental Biology (Health and Life Sciences), Forestry Sciences and Marine Sciences (Natural and Environmental Sciences), Educational Sciences and Economics (Social Sciences), and History and Psychology (Humanities). With an average of about 12 members per panel, and numbers actually ranging from 8 to 17, it is not clear how these panels could cover all areas appropriately, let alone avoid introducing significant and serious biases. Although FCT claims that using 585 remote reviewers allowed for proper coverage of all areas, it was in fact the reduced panels who made the decisions, in many cases overriding the external experts' opinions – see point 5 in the section entitled What are researchers objecting to? below. In fact, while FCT's President claims that the evaluation was “robust”, he also stated that a 19 given by a reviewer could really just be worth a 13 or 14 (sic), although it is not clear who is supposed to carry out this re-interpretation of the grades.
How did we reach this situation?
FCT's proposal laying out the rules of this evaluation was put out for public discussion in the first quarter of 2013. From the start it was clear that this would be a complete departure from previous evaluations. Several serious criticisms were made by the Council of Rectors of Portuguese Universities (CRUP), the Council of Associate Labs (CLA), many universities and several individual researchers. The main issues raised were ignored by FCT, even though the concerns raised at the time are now proving to have drastic consequences.
We thus started an evaluation with rules that most, if not all, units objected to.
On June 27 a public presentation of the general results was made by the head of FCT and the head of the Health and Life Sciences panel, Professor William Cushley.
When the actual results were made public later that day, researchers were at first taken aback by what was happening. However, it soon became clear that something was not right. Far too many units which had been considered Very Good or even Excellent in previous evaluations had now been relegated to the status of Good or even Fair. In some cases, even units which had received positive reports from all referees prior to the rebuttal phase were eliminated.
Using statistics and some reverse engineering, a group of researchers put forth a very strong case that quotas had been used. FCT denied this, stating that there had only been some standardization of grades between areas.
However, under (legal) pressure, on July 18 FCT eventually had to make public the contract made with ESF, which several entities had been demanding for many months. There, the number of centres to pass to the 2nd stage is indicated explicitly. In spite of this, to this day FCT still denies the existence of quotas and claims that this number was given only as an estimate, based on what had happened in the 2007 evaluation, for the purpose of calculating the budget. It should be stressed that even this number is incorrect, as in its calculations of the 2007 results FCT did not take into account the Associate Labs, which are now being evaluated simultaneously with all other units. The actual percentage of centres above the threshold in 2007 is in fact not 50% but 58%, and this does not take into account that many units have since merged or ceased to exist (the total in 2007 was 404, while only 322 units applied this time), nor the natural improvement in quality of the existing units.
Furthermore, the “magic” figure of 50% exclusions also appears in another part of the contract (Work Plan, General Principles – Stage 1 Assessment), a part which mentions nothing related to the budget and instead gives the panels instructions on how the process should be handled. There it is stated explicitly that
“Stage 1 will result in a shortlist of half of the research units that will be selected to proceed to stage 2.”
FCT's official position is still that there were no quotas and that it had no influence whatsoever on the evaluation results. This position amounts to saying that all the decisions were made independently by the panels, who are thus made responsible for everything that is happening.
What are researchers objecting to?
Apart from feeling cheated by the hidden quota rule, researchers are complaining about the following issues.
1. The document that FCT put up for discussion in 2013, which, with some modest changes, laid out the whole evaluation process, was updated at the end of April 2014, that is, with the evaluation already in full swing, by another document euphemistically referred to as Additional Information. In spite of the name, this document completely changed at least two fundamental aspects of the process, to wit:
i) while the first document stated that at most 5 reports would be produced, the new document reduced this number to only two external (anonymous) reports plus one internal report;
ii) in the first document, the referee picked from among those proposed by the unit would take part in the discussion leading to the consensus report; this aspect is absent from the procedure describing the elaboration of the final report and, as far as we can tell, the referee suggested by the unit was never consulted regarding the consensus report.
Apart from the fact that it is completely unacceptable to change the rules once the process has started, it is clear that having 5 reports provides a more robust basis from which to proceed; this is definitely not guaranteed with only three. In particular, the data a group of analysts has gathered so far (covering about 1/3 of all units) indicate that the difference between the maximum and minimum marks given by the three referees is at least 5 points (out of 17) in 50% of the cases; a sketch of this calculation is given below.
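The following is a minimal Python sketch, not the analysts' actual code, of how that spread statistic can be computed; the referee marks listed are purely illustrative placeholders.

```python
# For each unit, compute the gap between the highest and lowest of the three
# referee marks, then the share of units where that gap is at least 5 points.
# The triples below are illustrative placeholders, not real evaluation data.
referee_marks = [
    (17, 12, 10),   # hypothetical unit A
    (15, 14, 13),   # hypothetical unit B
    (19, 13, 11),   # hypothetical unit C
]

gaps = [max(marks) - min(marks) for marks in referee_marks]
share_wide = sum(gap >= 5 for gap in gaps) / len(gaps)
print(f"units with a spread of at least 5 points: {share_wide:.0%}")
```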
2. Many of the units which did not make it to the 2nd stage feel that this happened because of the quotas and because of the quotas alone. In other words, had the panels been left to their own devices to decide whether or not a given unit satisfied the stated criteria, some of the units now staying behind would have made it to the second stage. This implies that there are units of very similar quality where one made it to the second stage and the other did not. In particular, many draw this conclusion from the consensus reports themselves, where the wording pointed to a classification of at least Very Good but the result was only Good.
In short, if this evaluation is allowed to go ahead, it will introduce unacceptable discontinuities between the quality of research units and their funding, effectively killing off nearly half of the research in the country. This was one of the many points already raised during the public discussion of the evaluation rules, but one to which FCT paid no attention.
3. Many experimental units have equipment that requires funds not only for operation but also for maintenance; the funding provided to units classified as Good is derisory, and the implication is that the corresponding equipment will either be lost or, at best, left unused. This amounts to a terrible loss, both scientifically and financially. Indeed, highly relevant and expensive scientific equipment will, if nothing changes, be sold or simply abandoned. Scientific output will, no doubt, be affected.
In any case, what should be made very clear is that this distinction between two similar units implies that one will get funding while the other, very close to it in quality, will cease to exist, its researchers being forced to leave research. It is also not clear that the best researchers in the units that disappear will be able to join other units, both for geographical reasons and because it is not clear that units will be willing to take on more researchers without an increase, however modest, in their funding; indeed, these extra researchers were not counted in the strategic plans of the receiving units. Concerning the geographical issue, it is also feared that certain fields will disappear completely from certain parts of the country, as in the case of Physics, which will be concentrated mainly in the capital.
4. A detailed statistical analysis has shown that the size of a unit was the main determinant of whether it made it to the second stage or not. Whether or not this was intentional, it had the effect of eliminating small units on non-scientific grounds. A second statistical analysis has also shown that there is no evidence that any of the indicators improve with scale; that is, larger units do not produce better outputs simply by virtue of being large. A sketch of the kind of checks involved is given below.
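As an illustration only, the following Python sketch shows the kind of checks described in the two analyses above; it is not the analysts' actual code, and the numbers and variable names (sizes, passed, outputs_per_researcher) are assumptions made up for the example.

```python
# Minimal sketch of two checks: (1) does unit size separate units that passed
# stage 1 from those that did not, and (2) does per-researcher output improve
# with unit size? All numbers below are illustrative placeholders.
import numpy as np

sizes = np.array([12, 25, 40, 55, 80, 120])                  # researchers per unit
passed = np.array([0, 0, 1, 1, 1, 1])                        # 1 = reached stage 2
outputs_per_researcher = np.array([2.1, 1.8, 1.9, 2.0, 1.7, 1.9])

# Check 1: compare typical sizes of passed and eliminated units.
print("median size, passed units:    ", np.median(sizes[passed == 1]))
print("median size, eliminated units:", np.median(sizes[passed == 0]))

# Check 2: correlation between size and output per researcher
# (no benefit from scale would show up as a correlation near zero).
r = np.corrcoef(sizes, outputs_per_researcher)[0, 1]
print(f"correlation between size and output per researcher: {r:.2f}")
```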
5. In many cases, the external referees' reports were ignored by the panel in the elaboration of the final consensus report, sometimes in completely unexplained ways. With the data currently available to the analysts group (which has collected data on about 1/3 of all units), it is estimated that in over 20% of the cases the average of the external reports points in the opposite direction to the internal referee's assessment, yet the latter's opinion prevailed in the consensus report. Only about 10% of the cases fall in the reverse situation. A sketch of how such cases can be tallied follows.
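Again purely as an illustration, and not the analysts' actual code, the following Python sketch shows one way such cases could be tallied, assuming that for each unit the two external marks, the internal mark and the consensus outcome are available; all numbers are made up.

```python
# Tally units where the consensus mark sides with the internal referee against
# the average of the two external referees, and vice versa.
# Each record is (external_1, external_2, internal, consensus); values are
# illustrative placeholders, not real evaluation data.
records = [
    (16, 15, 11, 11),   # externals favourable, internal mark prevailed
    (12, 11, 16, 12),   # externals prevailed despite a higher internal mark
    (15, 15, 15, 15),   # no disagreement
]

internal_prevails = external_prevails = 0
for ext1, ext2, internal, consensus in records:
    ext_avg = (ext1 + ext2) / 2
    if ext_avg == internal:
        continue  # no disagreement to resolve
    if abs(consensus - internal) < abs(consensus - ext_avg):
        internal_prevails += 1
    else:
        external_prevails += 1

print(f"internal referee prevailed: {internal_prevails}/{len(records)} cases")
print(f"external referees prevailed: {external_prevails}/{len(records)} cases")
```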
6. In all previous evaluations, all units were visited by the panel, giving them the opportunity to present their work, show the evaluators their labs, and answer any questions the evaluators might have. Now this will only be the case for the units that made it to the second stage.
7. A complaint that has been voiced for some time now is that it is not clear how the funding will be distributed among the units that made it to the second stage. This was raised, for instance, by the Council of Associate Laboratories as early as March 2013. We recall that the mechanism by which FCT will carry this out has never been made public, which researchers find unacceptable: rules should be well defined from the outset and adhered to.
Who has made strong criticisms of the evaluation process publicly?
Here is a list, by no means complete:
1. The Council of Associate Laboratories (CLA)
Associate Laboratories are among the major research units in the country, representing over 2500 researchers and spanning almost all areas. There are 20 Associate Laboratories, of which 19 made it to the second stage. Their statement, which may be read here, covers many of the main complaints of Portuguese researchers, namely the points referred to in the previous section.
In fact, the CLA had already alerted FCT and the research community as early as March 2013 to the many dire consequences that would ensue should this evaluation go ahead with the regulations proposed by FCT.
2. The Council of Rectors of Portuguese Universities (CRUP) has, after meeting with the Secretary of State for Science and Technology, issued a statement in which it considers that units which received a mark of 14 in the current evaluation, or which were rated Excellent or Very Good in the previous evaluation, should also make it to the next round. This proposal has been turned down by FCT, while the Minister for Education and Science claims that the evaluation should go ahead as it is and that rules should not be changed midway – we recall, however, that FCT did just that in April this year (see point 1 in the previous section).
3. Following CRUP's statement, the Rector of the University of Lisbon (by far the largest in Portugal, and with 73% of its units approved for the second stage) distanced himself from that position and requested that the Prime Minister suspend the evaluation. In a public interview, in which he made severe criticisms of the FCT-ESF methodology, he stated that this type of policy will destroy an important part of the national scientific system. He also criticized the discontinuities being introduced in the system, whereby units with many active researchers with international careers will suddenly cease to exist.
4. The Portuguese Societies of Chemistry, Physics and Philosophy all produced statements, which may be found here, here and here. The National Mathematics Commission, which includes representatives from all of the mathematics research units, the Portuguese Statistical Society and the Portuguese Mathematical Society, has also criticized the process in a statement which may be found here.
5. FCT's own Scientific Councils have made a public statement, mentioned here, in which they criticize many aspects of the evaluation.
6. A group from the Social Sciences, including researchers both from units that made it to the next stage and from units that did not, wrote an open letter to the president of FCT, which may be found here.
7. Many centres which did not make it to the second stage have made their opinions known; they have also made their evaluation reports public so that their peers can judge for themselves. In some instances, this also includes a statement by the centre's accompanying panel, an international body recognized by FCT and made up, in general, of top scientists.
8. Several researchers who have seen the whole system grow and develop over the last 30 years have also spoken out, such as retired Professor Maria de Sousa, an eminent immunologist, in an article in one of the most important Portuguese daily newspapers (articles may be found here, here, here, here, here, here and here).
Some reports have also started to appear in the international specialised press (Nature, Physics World, The Conversation, etc.). Note that some of these were written before issues such as the quotas and the changing of the rules became known. Physics Today, the magazine of the American Institute of Physics, is in the process of writing an article on the subject, giving international voice to Portuguese physicists.
Who has defended the evaluation process publicly?
Except for FCT (and the ruling coalition MPs, when they voted against the opposition's proposal to review the evaluation), only one voice has so far been raised in defence of the evaluation. It belongs to a retired, though well-known, researcher (António Coutinho), who stated that mediocrity should not be funded, only excellence, implying that there is nothing in between. This position echoes the statements made by Vince Cable in the UK in 2010, in particular that only about half of research should be funded. We recall that that position was heavily criticised at the time by both researchers and commentators.
What will happen?
Most likely the whole process will end up in court, with many units and institutions having already pledged to take it there. Some researchers' unions are preparing a large-scale court action aimed at stopping the evaluation in its tracks. Since the general impression is that FCT's leadership is not listening to anyone, it is feared this might be the only way out, with all the wear and tear it will cause, not to mention the precious research time wasted.
But isn't this just a local issue?
We do not see it like that for several reasons:
1. As soon as ESF became involved, this became a European issue. Although it is FCT's evaluation, ESF did have to agree to several issues, such as the reduced panels, the imposition of quotas, and the changing of the rules after the process was already well under way.
2. This type of funding, in which everything is given to a very few, might become the norm; we recall that the current head of FCT has been elected President of Science Europe, taking office on 1 September.
3. This is also the philosophy behind the statements made by the British Secretary of State for Business, Innovation and Skills, Vince Cable, back in 2010. At the time, this caused quite an uproar in the UK, and many scientists and commentators spoke out against it. Since we believe the situation is similar, we record here three of the many reactions. Robert M. May (former Chief Scientific Adviser to the UK Government and President of the Royal Society) declared that:
“He [Vince Cable] was clearly badly briefed, and it's a shame he didn't care to get all the facts beforehand. In particular, his claim that public money should not be made available to research that 'is neither commercially useful nor theoretically outstanding' is just plain stupid."
while the President of the Academy for Medical Sciences, John Bell, stated that
"A long term commitment to publicly funded research is vital if we are to harness the competitive advantage previous investment has generated."
Finally, Mark Henderson, science editor of the Times at the time, pointed out that
“His [VC's] claim that 45 per cent of research fails to pass muster is as credible as Blair’s claim that Iraq could launch WMDs in 45 minutes.”
4. What is at stake here is science, which is universal. Can science progress if only what a controversial process deems excellent or exceptional at a given moment is funded? Recall that there are no other national sources of funding for almost all of the researchers involved.
5. Finally, we recall that, although the outcome and most of the problems were caused by FCT's demand that 50% of the units be eliminated, FCT's official position is that it has had nothing to do with the evaluation so far and that the outcome is the work of ESF and the expert panels alone. It is thus also ESF's and the panel members' reputations that are at stake here.
Carlos Fiolhais
(on behalf of a group of researchers who have been analyzing the ESF/FCT evaluation process)