Use of Generative AI

Policy on the Use of Generative AI in Academic Publishing

1. Scope and Purpose

This policy outlines the ethical use, disclosure, and oversight of Generative AI (GenAI) in the submission, peer review, and editorial processes of this journal. It aligns with the principles and best practices established by leading organizations in academic publishing, including the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME), the International Committee of Medical Journal Editors (ICMJE), the Open Access Scholarly Publishers Association (OASPA), and the Directory of Open Access Journals (DOAJ).

2. Definition

Generative AI refers to artificial intelligence technologies capable of producing text, images, or other content in response to user inputs. Tools such as ChatGPT, DALL-E, and similar models fall under this category.

3. Principles and Standards

3.1. Ethical Use (COPE Guidelines)

3.2. Transparency (ICMJE and DOAJ)

3.3. Authorship and Accountability (ICMJE)

3.4. Peer Review Integrity (WAME and OASPA)

4. Acceptable Uses of Generative AI

5. Prohibited Uses of Generative AI

6. Disclosure Requirements (Aligned with COPE and ICMJE)

All authors must provide a clear disclosure statement regarding the use of GenAI. The statement should include:

Example Disclosure Statement: "Generative AI tools, such as ChatGPT, were used for language editing and for improving the clarity of the manuscript. All intellectual content, research design, and data analysis were carried out solely by the authors."

7. Editorial Oversight (WAME and OASPA)

8. Consequences of Misuse

In alignment with COPE’s misconduct policies, violations of this policy may result in: