Research assessment: How can you evaluate a science team?
Where possible, Ghent University's vision on research evaluation is also applied to the evaluation of science teams, for example to the Interdisciplinary Research Consortia aimed at societal impact (IDCs). Other teams are not formally evaluated but want to assess their work on a voluntary basis. Here they can find suggestions on how such an evaluation can be approached.
What is a 'science team'?
A 'science team' is any group of researchers who conduct research together. These can be permanent organisational units (e.g. research partnerships, research groups, IDCs, faculties) or temporary (formal or informal) partnerships (e.g. researchers who pool their basic funding to collaborate on a specific research question).
How do you evaluate a team?
There is no fixed template or method for evaluating a team. When the evaluation of a team is not mandatory – and the framework is therefore not imposed – this offers the freedom to tailor the evaluation as closely as possible to the needs and requirements of the group itself. What does a team want to learn from the evaluation? What information is relevant to the team? Addressing this increases support for the evaluation and contributes to the evaluation being followed up afterwards (e.g. leading to action points or development programmes). Naturally, every evaluation must take into account Ghent University's vision on responsible evaluation.
What makes a team evaluation different from a researcher evaluation?
A team evaluation is more than the sum of the evaluations of each individual member. The place, role and contribution of the individual members are, of course, part of the evaluation, but the focus shifts from the individual researcher to the framework in which team members work together, how they contribute to the team, and how the team as a whole contributes to the growth of its members. The emphasis is on the research culture and on the structures, systems and processes that shape the research. The research itself is also discussed, but in a team evaluation it is less necessary (and sometimes not necessary at all) to evaluate the quality of the research carried out by the individual team members. Carefully considering which elements from an individual-level evaluation are included in a team-level evaluation keeps the team evaluation manageable.
Why conduct a team-level evaluation?
Possible reasons for evaluation are (see SCOPE):
- Analysing: gaining insight into the composition, functioning and/or results (and their impact) of the team
- Monitoring: following up on whether the team is achieving its set objectives
- Adjusting: improving the composition, functioning and/or results of the team
- Profiling: highlighting the strengths and successes of the team
- Comparing: comparing the team with other teams within or outside Ghent University
- Rewarding: evaluating in order to award a prize or funding
What to evaluate?
What questions should an evaluation answer? What insights should it yield? An evaluation can, for example, assess the shared research culture, the creation and implementation of a joint research agenda, the functioning of a team, the long-term viability of a group, and so on. This obviously depends on the purpose of the evaluation.
Time frame
An evaluation can be a snapshot, set up as a comparison between two points in time, or reflect an evolution over several years.
Looking back or looking ahead
An evaluation can focus on what the team has achieved (“past performances”), on the future perspective (the further development of the team), or on a combination of both.
Indicators
What information is needed to answer the central questions of the evaluation? It will usually be necessary to combine multiple sources of information. This can include both quantitative and qualitative information. The larger the group being evaluated, the more time-consuming it is to collect, interpret and evaluate qualitative data. The responsible use of quantitative indicators may then be an option.
Some suggestions for evaluating the development, organisation and functioning of the team:
- the strategic plan and the team's (mono-, inter- and/or transdisciplinary) objectives (ambitions)
- the impact strategy
- each team member's contribution to the team and to achieving the objectives
- the way in which each team member experiences the team (and its functioning)
- partnerships and interactions that would not have been possible without the team
- the way in which leadership is organised
- the way in which good research practices and scientific integrity are implemented
- internal consultation structures
- agreements regarding the supervision of doctoral students
- available financial resources and infrastructure, and so on
For evaluating academic output and impact: (Open Access) (co-)publications by team members, (reused) (open) datasets, (interdisciplinary) partnerships with other teams at home and abroad, defended PhDs, teaching assignments, peer reviews by team members, research funding, etc. For each of these criteria, the achievements that have been realised in collaboration with other members of the team and the (individual) achievements that would not have been possible without the team can be assessed.
For evaluating social (including economic) output and impact: social media posts, involvement of external stakeholders in the research, bilateral research collaborations and services with third parties (companies, governments, etc.), lectures for the general public, textbooks for teachers, patents, curatorship of exhibitions, membership of advisory committees outside the university, etc.
Here too, the emphasis can be on how the (individual) research contributes to the team's achievements.
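As a very small illustration of the responsible use of quantitative indicators mentioned above, some team-level output indicators can be computed from a list of publication records. The sketch below is purely illustrative: the record layout (year, Open Access flag, team authors involved) and the author names are assumptions, not a real Biblio or Web of Science export format.

```python
# Illustrative sketch: aggregating a few team-level publication
# indicators from a hypothetical list of publication records.
# The data layout is an assumption for demonstration purposes.
from collections import Counter

# Each record: (year, is_open_access, team members among the authors)
publications = [
    (2022, True,  ["Avery", "Bakker"]),
    (2022, False, ["Avery"]),
    (2023, True,  ["Bakker", "Claes", "Devos"]),
    (2023, True,  ["Claes"]),
]

def team_indicators(records):
    total = len(records)
    open_access = sum(1 for _, oa, _ in records if oa)
    # Co-publications: records involving two or more team members,
    # a simple proxy for collaboration within the team.
    co_publications = sum(1 for _, _, authors in records if len(authors) >= 2)
    per_year = Counter(year for year, _, _ in records)
    return {
        "total": total,
        "open_access_share": open_access / total if total else 0.0,
        "co_publication_share": co_publications / total if total else 0.0,
        "per_year": dict(per_year),
    }

print(team_indicators(publications))
```

Such counts only become meaningful when interpreted in context (discipline, team size, evaluation period) and combined with qualitative information, as described above.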
Methods
How is the information collected, analysed and evaluated? Depending on the indicators chosen, qualitative or quantitative methods can be used. It will usually be necessary to combine several methods.
Some possibilities: bibliometric or scientometric analysis, surveys, focus groups, interviews, 360° evaluation, self-evaluation report, drafting of impact case studies, SWOT analysis, etc.
The evaluators
The information collected is preferably assessed by a panel of experts (peer review). The composition of the panel will depend on the nature of the team and the purpose of the evaluation (whether or not to involve international panel members, stakeholders, etc.). It is advisable that at least some panel members have experience in evaluating “science teams”. The panel members must receive clear instructions about the design and purpose of the evaluation, and about what is expected of them (a comparison with other teams, an internal report with points for improvement, a report intended for external communication, etc.).
What happens after the evaluation?
What happens to the results of an evaluation – and who is responsible for follow-up – should preferably be agreed before the start of the evaluation and in consultation with the team being evaluated. The follow-up required depends on the objective of the evaluation. For example, if the aim is to adjust the functioning of the team, this will require a different (and longer) follow-up than if the aim is to highlight success stories. In addition, it may be useful to evaluate the evaluation itself – how did the process go and what lessons can be learned from it? This is certainly the case if the intention is to repeat the evaluation periodically or to repeat the evaluation process for another team.
Want to get started yourself?
More indicators can be found in the Dutch Strategy Evaluation Protocol 2021-2027.
Groups looking for a methodology to design a responsible evaluation can use the SCOPE methodology.
Ghent University webpages with useful information:
- intranet page on impact
- intranet page on Altmetric and accompanying research tip
Need help?
The Function Domain Research is happy to assist groups that want to set up an evaluation. Support can be provided in the following areas:
- Developing an action plan for the evaluation
- Supporting the organisation of a SCOPE workshop
- Providing (valorisation) information from databases: Biblio, VABB, WoS, Altmetric, GISMO/Research Explorer, UGI (or successor), Oasis, SAP-HR, SAP-FIN, UGI-One Office, TTRM, MyTT/UGI
- Performing or supporting bibliometric analyses
- Organising (tailor-made) training courses on Altmetric, InCites
Contact: Nele.Bracke@UGent.be
More tips
- Research assessment: guideline for responsible evaluation (Research integrity & ethics)
- Research Assessment: guidelines for evaluators (Research integrity & ethics)
- Research assessment: guidelines when using quantitative indicators (for evaluation organisers) (Research integrity & ethics)
- Research assessment: tips and tricks for new evaluators (Research integrity & ethics)
- Research assessment: tips for researchers who want to use quantitative indicators in their CV, project application, etc. (Research integrity & ethics)
Last modified Dec. 2, 2025, 12:04 p.m.