Acta Materia Medica adheres to the principles of publication ethics and authorship set out by the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE). In light of the increasing use of artificial intelligence (AI) and automated tools in research and publishing, the journal has established the following policy to ensure transparency, accountability, and patient safety.
Use of AI and Automated Tools by Authors
Authors may use automated tools, including generative artificial intelligence (AI) systems such as large language models (LLMs), during manuscript preparation only in accordance with the following requirements:
- Disclosure and Transparency
Consistent with COPE and ICMJE guidance, authors must disclose any use of generative AI or automated tools that contributed to the preparation of the manuscript, beyond routine language editing, copy-editing, or formatting. The disclosure must identify the tool used and describe how it was applied.
- Author Accountability
In line with ICMJE authorship criteria, authors are fully responsible for the content of their manuscript, including all text, data interpretation, analyses, and conclusions generated or assisted by automated tools. Authors must verify the accuracy, validity, and originality of all content, with particular attention to clinical statements, patient-related information, and treatment recommendations.
- Patient Confidentiality and Data Protection
Authors must not upload identifiable patient information or confidential clinical data to generative AI systems or third-party tools that do not comply with data protection and confidentiality requirements.
- Authorship Criteria
Automated tools and AI systems do not meet ICMJE criteria for authorship and must not be listed as authors or co-authors.
- Citation of Sources
Generative AI tools must not be cited as sources. All references must be to verifiable, authoritative, and peer-reviewed scientific literature.
Failure to comply with these requirements may result in rejection, correction, or retraction of the article in accordance with COPE guidelines.
Use of AI and Automated Tools by Peer Reviewers and Editors
- Peer Review and Editorial Judgement
In accordance with COPE guidance, peer reviewers and editors must not use generative AI tools to produce peer review reports, editorial assessments, or publication decisions. Such use carries risks including breaches of manuscript confidentiality, biased or superficial feedback, fabricated references, and clinically unsafe conclusions.
- Limited Acceptable Uses
Limited use of automated tools for language editing or rewriting may be acceptable if confidentiality is preserved and such use is disclosed to the journal when relevant. - Confidentiality Obligations
Reviewers and editors must not upload unpublished manuscripts or associated materials to external AI systems that may store, reuse, or disclose content.
Use of AI and Automated Tools by the Journal and Publisher
- Transparency and Testing
The journal may employ automated tools for routine editorial and publishing processes, such as plagiarism detection, image integrity checks, or research integrity screening. Any routine use of such tools will be disclosed, and tools will be appropriately tested prior to implementation, in line with COPE recommendations.
- Human Oversight (Human-in-the-Loop)
All automated tools are used with human oversight. Editorial staff review and interpret automated outputs before any editorial decisions are made. Automated tools do not replace editorial responsibility or judgement.
- Verification of Integrity Concerns
Editors or trained staff will verify automated alerts relating to text similarity, image manipulation or duplication, undeclared use of generative AI, or automated reviewer suggestions before taking any action.
Policy Review
Recognising the rapid evolution of AI technologies, this policy will be reviewed regularly and updated as necessary to remain consistent with COPE guidance, ICMJE recommendations, and best practices in STM publishing.