Artificial Intelligence

Embedded support, never autonomous control
Every manuscript that passes through Impact Journals is parsed by a bespoke large language model trained on biomedical writing. The tool flags inconsistencies in structure, grammar, style and reference format; checks readability; screens for plagiarism; and suggests layout tweaks that speed typesetting. It can also identify missing items from CONSORT, PRISMA and other reporting checklists. Importantly, the model does not make editorial decisions, accept or reject papers, or draft peer-review reports; those judgements are reserved for qualified editors and reviewers. We recognise that papers written or heavily edited with a large language model are less likely to be accepted and more likely to contain significant errors, particularly when AI has been used for drafting rather than selective editing.

Human-in-the-loop governance
Editors review AI-generated comments and decide which to accept, modify or ignore. No change is applied to an author’s text unless a human approves it, ensuring that nuance, tone and disciplinary conventions are respected. Data uploaded to the journal are processed on secure, UK-based servers and are not used to train external models.

Author transparency – the GAIT guideline
We recognise that authors are increasingly using AI, especially for writing. We therefore require a Generative-AI Transparency (GAIT) statement in every submission that used AI. GAIT was developed by the GAIT 2024 Collaborative Group and aligns with recent ICMJE, COPE and WAME recommendations. The statement should cover five domains:

  1. Tool (name, provider, version)

  2. Purpose (e.g. language polishing, code drafting, figure generation)

  3. Section(s) affected

  4. Prompts (summarised or provided in a supplement)

  5. Human oversight (confirmation that authors take full responsibility)

An example GAIT statement, which can be included alongside the conflicts of interest and acknowledgements:

“ChatGPT (GPT-4, OpenAI, May 2025) was used to improve grammar and clarity in the Introduction and Discussion. Prompts are listed in Supplementary Table S1. All outputs were reviewed and edited by the authors, who accept full responsibility for the final content.”

GAIT reference:

GAIT 2024 Collaborative Group. Generative Artificial Intelligence Transparency in scientific writing: the GAIT 2024 guidance. Impact Surgery, 2(1), 6–11. https://doi.org/10.62463/surgery.134

Alignment with emerging standards
Our policy is consistent with:

  • ICMJE recommendations on AI use in scholarly publishing (2023);

  • COPE discussion paper “Artificial Intelligence in Decision-making” (2023);

  • WAME guidance “ChatGPT and Chatbots in Scholarly Manuscripts” (2023).

We do not list AI systems as authors and we forbid the generation of patient-identifiable text or images.

Continuous review
The field is moving quickly. Our AI steering group reviews new models, regulatory guidance and community feedback every six months and updates our processes accordingly. Readers can trust that AI is used to enhance quality and speed, never to replace scholarly judgement.