1. Purpose and scope
This policy sets out clear guidance for authors, reviewers, and editors regarding the use of generative artificial intelligence (generative AI) and AI-assisted tools, with the aim of safeguarding transparency, traceability, originality, confidentiality, and accountability in research and scholarly publishing.
This policy applies to all submissions to the journal, the peer-review process, editorial handling, and post-publication corrections.
2. Definitions
(1) Generative AI / AI tools: Systems capable of automatically generating text, images, audio, video, computer code, or synthetic data (e.g., large language models, image generation models).
(2) AI-assisted copy editing: The use of AI tools solely to improve grammar, spelling, punctuation, tone, or readability of text written by humans, without generating new content or substantively altering the scientific claims or interpretations.
3. Core principles
(1) Responsibility for the accuracy, originality, integrity, appropriate citation, and ethical compliance of all manuscript content rests entirely with the authors, reviewers, and editors.
(2) Any use of AI that may influence the development of content, methods, analyses, conclusions, or presentation must be transparently disclosed in a reproducible manner, including the name and version of the tool, its purpose, scope of use, and the extent of human oversight.
(3) Unpublished manuscripts, related correspondence, peer-review reports, and editorial decision letters must not be uploaded to external generative AI tools, in order to avoid breaches of confidentiality, copyright, or personal data protection.
For Authors
1. Responsible use of AI tools
(1) Authors may use generative AI tools prior to submission for language editing, proofreading, or improving readability. Authors remain fully responsible for the accuracy, originality, and completeness of citations and references.
(2) Generative AI must not be used to generate or distort research conclusions, fabricate data, or invent citations or references.
(3) Authors must not input unpublished data, identifiable personal data, interview transcripts, or other confidential or restricted materials into third-party generative AI platforms, unless lawful authorization has been obtained and appropriate de-identification and security measures have been applied.
2. Disclosure requirements
(1) If generative AI has been used in manuscript preparation (e.g., text rewriting, abstract polishing, translation, or formatting), authors must include a “Generative AI Use Statement” in the manuscript, specifying the name and version of the tool used, the purpose and scope of use, and how the output was reviewed and verified by the authors.
(2) Disclosure is not required when AI tools are used solely for basic AI-assisted copy editing limited to grammar, spelling, or punctuation.
3. Authorship
Generative AI tools may not be listed as authors or co-authors, nor cited as having authorship responsibility. Authorship and accountability must be attributed exclusively to human contributors.
4. Images and figures
(1) The use of generative AI to create, modify, or manipulate images or figures in manuscripts is not permitted. Only adjustments that do not alter the underlying information (e.g., brightness, contrast, or color balance) are allowed.
(2) Where AI-generated images or AI-based image processing form part of the research design or methodology, authors must fully disclose the tools, versions, workflows, and verifiability in the Methods section and provide original data upon editorial request.
For Reviewers and Editors
1. No replacement of human judgment
Peer review and editorial decisions must be made by qualified humans and must not be delegated to generative AI.
2. Confidentiality
Reviewers and editors must not upload submitted manuscripts or peer-review materials to third-party generative AI services, due to confidentiality, intellectual property, and privacy risks.