
GenAI: Translating Research Integrity into Responsible (Gen)AI Use

As a member of Ghent University, you are expected to demonstrate (scientific) integrity. (Gen)AI may not be used to infringe upon or to justify any violation of scientific integrity. Researchers must also take all necessary precautions to prevent unintentional violations. 

The four fundamental principles for (Gen)AI use in accordance with research integrity

According to The European Code of Conduct for Research Integrity (also known as the ALLEA Code), research integrity rests on four fundamental principles: accountability, reliability, honesty and respect. The use of generative AI can compromise some of these principles. Below, we "translate" the fundamental principles for research discussed in the ALLEA Code into the ethical use of (gen)AI by researchers.

1.   Accountability for the research from idea to publication, for its management and organisation, for training, supervision, and mentoring, and for its wider societal impacts.

You remain responsible, and can be held accountable, for any use of (gen)AI: for the (quality of the) generated output and for what you do with that output.
 
2.   Reliability in ensuring the quality of research, reflected in the design, methodology, analysis, and use of resources.

You take the necessary precautions to use (gen)AI tools correctly and to check the generated results, so that the quality of your research is guaranteed. This requires a good understanding of the (technical) possibilities and limitations of (gen)AI tools and the (ethical) implications of their use. For example, it is important not to simply accept output at face value, given the risk of errors and factual inaccuracies (cf. the risk of unreliability). So be sure to check all claims that are made. You can also use a (gen)AI tool to help you prepare your analysis. Here too, it is important to check whether the generated outcome is even possible, taking your research design into account.
  
3.   Honesty in developing, undertaking, reviewing, reporting, and communicating research in a transparent, fair, full, and unbiased way.

With regard to (gen)AI, this means that researchers are transparent about substantial use of (gen)AI. What constitutes "substantial" use depends on the context and the standards of the specific discipline. In such cases, you should indicate the use of the (gen)AI tool, just as researchers are expected to do for other software, applications and methodologies within their discipline. This good practice was also explicitly included in the ALLEA Code, which states that...

"Researchers report their results and methods, including the use of external services or AI and automated tools, in a way that is compatible with the
accepted norms of the discipline and facilitates verification or replication, where applicable." (ALLEA Code, 2023, p. 7, emphasis added) 

Concealing the use of AI in the creation of content or the drafting of publications is considered unacceptable misconduct in research (ALLEA Code, 2023, p. 10).

 
4.   Respect for colleagues, research participants, research subjects, society, ecosystems, cultural heritage, and the environment.

In the context of (gen)AI, this means that, as a researcher using the tools, you continue to treat research objects and subjects with respect; you acknowledge the work of colleagues (even if the tools do not always do this automatically); and you treat information obtained from research subjects or others as confidential (and therefore do not simply enter it into a tool). This also concerns privacy and confidentiality (cf. the risk of the violation of privacy and confidentiality) and respect for intellectual property rights (cf. the risk of the violation of copyright / plagiarism).
 
If you do not comply with the above principles, including in your use of (gen)AI, you may be violating scientific integrity. One example is plagiarism. If you know how the tools work (see the concise intranet page AI or GenAI: what is it and how does it work? or the in-depth Ufora module "How does generative AI work?") and where the risks of their use lie (see "What are the risks associated with generative AI use?"), you realise that the generated texts, images, etc. contain ideas from others. Some tools will display the correct source (e.g. tools used in academic research), but others, such as ChatGPT, do not display sources or sometimes invent them. It is your responsibility to always check the sources and to search for the original source when one is missing or incorrect.

Consider generative AI tools as instruments that can assist you in your work. Do not view them as machines that will take over all your responsibilities.

For more information regarding research integrity, click here.

For more information regarding the responsible use of (Gen)AI, please consult:  

 

Would you like more information about (Gen)AI at Ghent University?

Ghent University already offers a wealth of information about dealing with (generative) AI from various perspectives and for various purposes (e.g. about its functioning, risks, responsible use, tools and applications, exercises, training, peer review and evaluation, transcription, research proposals, etc.). An overview of all (Gen)AI-related pages can be found at “GenAI: Overview of information about (Gen)AI at Ghent University”, such as the research and education tips, the intranet, the general webpage, the Ufora infosites, etc.

 Be sure to take a look at our general webpage “Generative AI at Ghent University”! Here you will find the official framework for the responsible use of AI at Ghent University, an overview of basic information, and the range of information and training courses offered at Ghent University.

 


Last modified Oct. 29, 2025, 1:54 p.m.