Regulations for the Use of Artificial Intelligence in the Scientific Publication Cycle

(Authors, Reviewers, Editors, and Journals)

In accordance with the "Guidelines for the Use of Artificial Intelligence in Research" issued by the Ministry of Science, Research and Technology (Aban 1404 / October-November 2025)

Introduction

To guide technological advancement intelligently and preserve the integrity of the national scientific system, the Ministry of Science, Research and Technology has issued the "Guidelines for the Use of Artificial Intelligence in Research" as a mandatory directive. The guideline, comprising 13 main sections and numerous executive clauses, establishes the legal and ethical framework for the use of generative artificial intelligence. By clearly delineating the responsibilities of authors, reviewers, and publishing officials, it emphasizes process transparency and stakeholder accountability, and it will serve as the legal basis for addressing research misconduct in this domain.

Section One: Regulations and Responsibilities of Authors

  1. Principle of Transparency and Full Disclosure of Tool Usage

According to Clause 7-1 of the Guidelines, authors are obligated to transparently declare any use of generative AI tools at various research stages, including ideation, design, data collection and analysis, writing, editing, or translation. This declaration must be included in designated sections such as "Methodology" or a "Statement on AI Use." Concealing or failing to mention the tool's name and its manner of use constitutes research misconduct and may lead to article rejection or retraction.

  2. Prohibition of Authorship Attribution to AI

Based on Clause 7-3, AI tools lack legal personality and the ethical competence to assume responsibility for scientific content; therefore, listing them as authors is strictly prohibited. These tools may only be mentioned as "assistants" or "research tools." The ultimate responsibility for the accuracy of all content, data, and results lies exclusively with the human authors.

  3. Standard Citation of AI Outputs

According to Clause 7-2, if specific content (text, idea, image, or analysis) is directly generated by AI, the author must cite it following standard citation styles. This citation should include the tool's name, version, date of use, and, if possible, a link to the conversation or prompt.

  4. Prohibition of Generating Fake or Fabricated Data

Under Clauses 6-1 and 6-2, using AI to generate fabricated raw data or to alter real results is strictly prohibited. While the use of synthetic data in specific fields may be permitted with a transparent methodology, fabricating sources or citing non-existent articles generated by AI constitutes a clear violation.

Section Two: Regulations and Responsibilities of Reviewers

  1. Prohibition of Using AI in Review Decision-Making

According to Clause 9-1, reviewers are not permitted to delegate their primary reviewing duties—critical evaluation and the final decision (accept or reject)—to AI. Scientific judgment requires deep understanding, human intuition, and ethical accountability, which machine tools lack.

  2. Obligation to Maintain Confidentiality and Prohibition of Uploading Manuscripts

Based on Clause 9-2, manuscripts submitted for review are confidential. Uploading the full text, abstract, or article data to AI platforms for summarization or editing constitutes a clear breach of confidentiality, as these platforms may use the input data to train their models, potentially disclosing author information prior to publication.

  3. Full Reviewer Responsibility for Limited Instrumental Use

According to Clause 9-3, limited use of AI solely to improve the linguistic quality of the review report or to better understand certain specialized concepts (without uploading manuscript data) may be permissible. However, the reviewer bears full and ultimate responsibility for the accuracy of their report. A reviewer cannot attribute errors in their report to AI mistakes.

Section Three: Regulations and Responsibilities of Editors and Editors-in-Chief

  1. Transparency in the Evaluation and Editing Process

According to Clause 10-1, editors and editorial board members must also adhere to the principle of transparency if they use AI for initial screening, reviewer selection, or linguistic editing of correspondence. Any systematic use of intelligent tools in the manuscript management process must be communicated to the authors.

  2. Non-Delegation of Final Decision-Making to Machines

Based on Clause 10-2, decisions regarding article acceptance or rejection, the setting of journal policies, and final adjudication should not be delegated to AI. Intelligent tools play only a supportive role (e.g., checking for plagiarism or format compliance) and cannot replace human editorial judgment.

  3. Protection of Author Confidentiality

According to Clause 10-3, editors are responsible for safeguarding the information in received manuscripts. Uploading manuscript files or unpublished data to public AI tools for editing or analysis breaches trust. Editors must use secure tools approved by the publisher.

  4. Accountability and Responsibility

According to Clause 10-4, editors bear full responsibility for the ethical, legal, and scientific consequences of using AI tools in the publication process. In the event of errors or data leaks, they cannot shift responsibility to the AI tool developer.

Section Four: Requirements for Journals and Scientific Publishers

  1. Development and Communication of Transparent Policies

Scientific journals are required to develop and publish clear, explicit policies regarding the permissible limits of AI use in their "Instructions for Authors." These policies must specify which types of use are allowed and which are prohibited.

  2. Establishment of Oversight Mechanisms

Scientific journals must establish processes to review authors' and reviewers' compliance with AI regulations. This includes requesting a "disclosure statement" from authors and monitoring the quality of review reports to ensure they are not machine-generated.

  3. Training and Awareness Building

Scientific journals should familiarize researchers with the ethical and technical aspects of AI use through workshops and supplementary guides to prevent unintentional violations.

 

This directive is mandatory for all authors, reviewers, editors, and journals of the University of Tehran, and strict adherence to it is required at all stages of publication.