Generative AI Policies
EIUSC supports the responsible use of AI to improve research efficiency and quality. However, AI must never replace human judgment, expertise, or accountability. Authors remain fully responsible for all aspects of their work.
For Authors
1. Authorship and Responsibility
– AI tools (including generative AI, large language models, or similar systems) cannot be listed as authors.
– All authors are accountable for the accuracy, originality, integrity, and ethical compliance of their submission, including any sections supported by AI tools.
2. Permissible Uses of AI
– AI tools may be used for language editing, grammar correction, readability improvements, data analytics, coding assistance, and statistical analyses, provided that the generated outputs are validated by the authors.
– Authors must critically review and adapt AI outputs to ensure the manuscript reflects their own authentic contribution, insights, and interpretation.
3. Mandatory Disclosure
– Any use of AI tools must be clearly disclosed in the submission.
– Basic spelling and grammar checks do not require disclosure. AI used as part of the research methodology must be described in detail in the methods section.
– A disclosure statement, placed before the reference list, should name the tool, its version and provider, and the purpose of use, and affirm the authors’ responsibility.
Example:
“The authors employed [AI tool name, provider, version] to assist in grammar editing. All outputs were critically assessed, verified, and finalized by the authors, who take full responsibility for the manuscript.”
4. Prohibited Uses of AI
– Entirely or primarily AI-generated manuscripts are not acceptable.
– AI-generated references, fabricated data, unverifiable content, or misleading outputs are strictly prohibited.
– Generative AI or AI-assisted tools must not be used to create or alter figures or images, except when such use is explicitly part of the research design. In that case, it must be fully described in the methods section, including tool details.
5. Compliance and Consequences
– Failure to disclose the use of AI, or misuse of AI tools, constitutes a violation of research ethics.
– The EIUSC Secretariat and Editorial Board may use in-house or licensed AI-assisted tools (e.g., Turnitin) to screen submissions for plagiarism and indicators of AI-generated writing.
– Based on reports and corroborating evidence, the EIUSC Organizing Committee may issue a desk rejection, request clarifications or additional documentation, retract published proceedings, or impose future submission bans.
For Reviewers
1. Strict Confidentiality
– Manuscripts, data, and author identities are confidential. Do not upload any part of a submission into generative or AI-assisted tools; doing so may breach confidentiality, intellectual property rights, and data privacy.
2. Confidentiality of Review Reports
– Do not upload review reports to AI tools, even for language polishing. A report may contain confidential information about the manuscript and/or the authors.
3. Responsibility in Peer Review
– Peer review requires original human judgment. Do not use AI to evaluate merit, interpret results, or recommend decisions. The reviewer is fully responsible for the report.
4. Author Transparency Check
– Authors may use AI with appropriate oversight and disclosure. Reviewers should check that any AI use is disclosed in a statement placed before the reference list.