Policy on the Use of Artificial Intelligence Tools

Cakrawala Pendidikan

1. Introduction

Cakrawala Pendidikan recognizes the increasing use of Artificial Intelligence (AI), including generative AI and AI-assisted technologies, in academic writing, research, peer review, and scholarly publishing. AI tools may assist authors in improving language clarity, organizing ideas, formatting references, translating text, supporting data analysis, and enhancing technical aspects of manuscript preparation.

However, the use of AI must be carefully governed to protect the integrity of scholarly work. The journal emphasizes that AI tools must not replace human intellectual responsibility, critical analysis, methodological rigor, ethical judgment, or scholarly originality.

This policy applies to the use of AI tools by authors, reviewers, editors, and editorial staff throughout the processes of manuscript preparation, submission, peer review, editorial handling, revision, publication, and post-publication assessment.

This policy is grounded in internationally recognized publication ethics principles, particularly those concerning authorship accountability, transparency, confidentiality, originality, privacy, intellectual property, and responsible editorial decision-making.


2. Definition of AI Tools

For the purposes of this policy, AI tools refer to digital systems, platforms, applications, or software that use artificial intelligence techniques, including machine learning, natural language processing, deep learning, or large language models, to generate, analyze, translate, summarize, classify, edit, visualize, or modify scholarly content.

Examples of AI tools include, but are not limited to:

  1. Generative AI tools such as ChatGPT, Gemini, Claude, Copilot, and other large language models;
  2. AI-assisted writing, grammar, paraphrasing, and translation tools such as Grammarly, DeepL Write, QuillBot, and similar platforms;
  3. AI-supported tools for literature discovery, reference management, coding, data cleaning, statistical analysis, visualization, or modeling;
  4. AI-assisted tools used to generate, modify, interpret, or enhance textual, numerical, visual, audio, or multimodal data.

3. Acceptable Use of AI Tools by Authors

Authors may use AI tools only when they retain full responsibility for the accuracy, originality, integrity, and scholarly validity of the manuscript. The use of AI tools must be disclosed where this policy requires it and must not compromise the ethical standards of academic publishing.

3.1 Permissible Uses Without Disclosure

Disclosure is not required when AI tools are used only for basic language and technical assistance that does not alter the scholarly meaning, argument, analysis, or interpretation of the manuscript.

Examples include:

  1. Grammar, spelling, and punctuation checking;
  2. Minor improvements to language fluency and readability;
  3. Formatting references according to the journal’s citation style;
  4. Minor editing of sentence structure without changing the meaning, claims, findings, or interpretations.

3.2 Permissible Uses Requiring Disclosure

Disclosure is required when AI tools are used beyond basic proofreading or formatting. This includes, but is not limited to:

  1. Substantive rewriting, restructuring, summarizing, or reorganizing manuscript sections;
  2. Translation support that goes beyond minor language polishing;
  3. Assistance in coding, data cleaning, statistical analysis, qualitative data analysis, or data visualization;
  4. Literature mapping, idea generation, keyword generation, or conceptual framing;
  5. Support in developing research instruments, rubrics, prompts, analytical categories, or thematic codes;
  6. Any AI-assisted process that materially influences the manuscript’s argument, analysis, interpretation, or presentation.

In all such cases, authors must independently verify the accuracy, validity, and reproducibility of AI-assisted outputs.


4. Restricted and Prohibited Uses

Authors must not use AI tools to:

  1. Generate an entire manuscript or substantial portions of scholarly content in place of the authors’ own intellectual work;
  2. Fabricate, falsify, manipulate, or distort data, findings, quotations, images, tables, results, or interpretations;
  3. Generate citations, references, or bibliographic information without verifying that each source exists and is accurately represented;
  4. Summarize, paraphrase, or rewrite published works in ways that constitute plagiarism or obscure the original source;
  5. Create false claims of novelty, originality, methodological rigor, or empirical evidence;
  6. Produce content that infringes copyright, privacy, confidentiality, intellectual property, or third-party rights;
  7. Replace the authors’ own critical thinking, theoretical interpretation, pedagogical judgment, methodological reasoning, or educational analysis.

5. Use of AI in Images, Figures, Tables, and Artwork

Cakrawala Pendidikan does not permit the use of generative AI or AI-assisted tools to create, alter, manipulate, or fabricate images, figures, visual data, or research evidence in submitted manuscripts.

This includes the use of AI tools to:

  1. Enhance, obscure, remove, move, or introduce specific features in an image;
  2. Generate artificial visual data presented as research evidence;
  3. Alter classroom images, participant documentation, screenshots, observational evidence, or visual research materials in ways that affect interpretation;
  4. Produce graphical abstracts or visual artwork without proper permission and attribution.

Basic adjustments such as brightness, contrast, or color balance may be acceptable only if they do not obscure, remove, or misrepresent information present in the original image.

Exception for Methods-Based AI Use

If AI-assisted image generation, image recognition, automated classification, multimodal analysis, or visual interpretation is part of the research method, authors must describe it clearly and reproducibly in the Methods section. The description must include:

  1. The name of the AI tool or model;
  2. The version, developer, or provider where available;
  3. The purpose and procedure of use;
  4. The type of input and output generated;
  5. The verification process conducted by the authors.

The journal may request raw data, original images, screenshots, logs, prompts, or additional documentation for editorial and ethical assessment.


6. AI Writing Detection and Editorial Screening

Cakrawala Pendidikan may use AI-writing detection tools, similarity-checking software, and other screening systems to support editorial assessment. However, the journal will not rely solely on automated tools to determine whether a manuscript complies with this policy.

All AI-related concerns will be assessed through editorial review, contextual evaluation, and, where necessary, direct communication with the authors.

The journal may request clarification, revision, disclosure, supporting materials, or methodological explanation if inappropriate or undisclosed AI use is suspected.


7. Responsibilities of Authors

Authors are fully responsible for all content submitted to Cakrawala Pendidikan, including any content generated, revised, translated, analyzed, or formatted with the assistance of AI tools.

Authors must:

  1. Verify the accuracy, originality, and reliability of all AI-assisted content;
  2. Check all references and citations to ensure that they are real, relevant, and accurately represented;
  3. Ensure that the manuscript is free from plagiarism, factual errors, bias, hallucinated information, or fabricated evidence;
  4. Ensure that all external sources, data, quotations, images, instruments, and identifiable materials are properly cited and ethically used;
  5. Retain intellectual ownership of the manuscript’s argument, analysis, interpretation, and conclusions;
  6. Be able to explain how AI tools were used when disclosure is required;
  7. Accept full responsibility for any ethical breach, error, omission, or misrepresentation resulting from AI-assisted work.

8. Authorship and AI

AI tools cannot be listed as authors or co-authors. Authorship is limited to human contributors who meet recognized authorship criteria, including substantial intellectual contribution, approval of the final manuscript, and accountability for the integrity of the work.

AI tools must not be included in:

  1. The list of authors;
  2. Author notes;
  3. Author contribution statements;
  4. Corresponding author information;
  5. Acknowledgement sections, in any manner that implies authorship or intellectual responsibility.

Any attempt to attribute authorship to AI tools may result in editorial rejection, correction, or retraction, depending on the stage of publication.


9. Privacy, Confidentiality, and Intellectual Property

Authors must ensure that the use of AI tools does not violate privacy, confidentiality, data protection, intellectual property, or research ethics requirements.

Authors must not upload or enter the following into third-party AI tools unless there is a clear legal basis and adequate data protection safeguards are in place:

  1. Unpublished manuscripts belonging to other authors;
  2. Confidential research data;
  3. Personally identifiable information of participants, teachers, students, schools, reviewers, or institutions;
  4. Interview transcripts, classroom observation data, or student work containing identifiable information;
  5. Proprietary, restricted, or sensitive institutional documents;
  6. Materials protected by copyright or confidentiality agreements.

For research involving human participants, the use of AI tools must be consistent with informed consent, institutional ethics approval, and applicable data protection regulations.


10. Disclosure Requirements for Authors

Authors must disclose the use of AI tools when such use goes beyond basic grammar, spelling, punctuation, or reference formatting.

The disclosure must include:

  1. The name of the AI tool;
  2. The version, developer, or provider where available;
  3. The purpose and extent of use;
  4. The manuscript section or research process affected by AI assistance;
  5. A statement confirming that the authors reviewed, verified, edited, and take full responsibility for all AI-assisted content.

Recommended Locations for Disclosure

Authors should place the disclosure in one of the following sections, depending on the nature of AI use:

  1. Methods section: if AI tools were used for data analysis, coding, modeling, instrument development, visualization, or research procedures;
  2. Acknowledgements section: if AI tools were used for substantive language support, translation, restructuring, or editing beyond basic proofreading;
  3. Declaration of AI Tool Usage: a dedicated statement before the References section is strongly recommended when AI use is substantial.

Suggested Disclosure Statement

Declaration of AI Tool Usage

During the preparation of this manuscript, the authors used [insert AI tool name, version, and provider] for [describe the purpose and extent of use]. All AI-assisted outputs were critically reviewed, verified, and edited by the authors to ensure factual accuracy, methodological soundness, clarity, originality, and compliance with academic standards. The authors take full responsibility for the integrity and content of this manuscript.


11. Reviewer Policy: Confidentiality and Integrity

Peer review in Cakrawala Pendidikan is based on human expertise, scholarly judgment, confidentiality, and ethical responsibility.

Reviewers must:

  1. Treat all manuscripts under review as confidential documents;
  2. Not upload submitted manuscripts, manuscript sections, tables, figures, data, supplementary files, or review materials into generative AI tools;
  3. Not upload review reports, editorial correspondence, reviewer forms, or confidential comments into AI tools, even for language improvement;
  4. Not use generative AI tools to conduct the scholarly assessment, formulate review conclusions, or make recommendations;
  5. Ensure that all review comments are based on their own expertise, critical reading, and professional judgment.

Reviewers remain fully responsible and accountable for the content, accuracy, fairness, and ethical integrity of their review reports.


12. Editor and Editorial Staff Policy

Editors and editorial staff must preserve the confidentiality, independence, and integrity of the editorial process.

Editors and editorial staff must:

  1. Treat submitted manuscripts, review reports, editorial notes, decision letters, author correspondence, and reviewer identities as confidential;
  2. Not upload manuscripts, review reports, confidential correspondence, decision letters, or editorial files into generative AI tools;
  3. Not use generative AI tools to perform editorial evaluation, recommend decisions, or determine manuscript acceptance or rejection;
  4. Not use AI tools in ways that compromise reviewer anonymity, author confidentiality, or editorial independence;
  5. Ensure that all editorial decisions are made through human scholarly judgment and in accordance with journal policy.

Editors remain fully responsible and accountable for editorial decisions, communications, and the handling of publication ethics matters.


13. Editorial and Peer Review Oversight

Cakrawala Pendidikan will evaluate AI disclosure and AI-related concerns as part of its ethical, methodological, and editorial assessment.

If undisclosed, inappropriate, or unethical AI use is suspected, the editorial office may:

  1. Request clarification from the authors;
  2. Require a revised AI disclosure statement;
  3. Request supporting materials, raw data, prompts, outputs, analysis logs, or methodological documentation;
  4. Require manuscript revision;
  5. Reject the manuscript;
  6. Refer the case to the relevant institution if research or publication misconduct is suspected;
  7. Initiate correction, expression of concern, or retraction procedures after publication where necessary.

The journal will consider the severity, intent, impact, and stage of publication when determining the appropriate action.


14. Consequences of Non-Compliance

Failure to comply with this policy may result in:

  1. Rejection of the manuscript at any stage of editorial evaluation or peer review;
  2. Withdrawal of the manuscript from the editorial process;
  3. Publication of a correction, expression of concern, or retraction after publication;
  4. Notification to the authors’ institution, funder, or ethics committee in cases of suspected misconduct;
  5. Temporary or permanent restriction on future submissions if misuse is severe, repeated, or intentional.

15. Appeals and Dispute Resolution

Authors who disagree with an editorial decision related to AI tool usage may submit a formal written appeal to the Editor-in-Chief.

The appeal must include:

  1. A clear explanation of the disagreement;
  2. Relevant evidence or documentation;
  3. A description of how AI tools were used, if applicable;
  4. Any supporting materials requested by the editorial office.

Appeals will be reviewed according to the journal’s ethical and editorial procedures. The decision following appeal review will be communicated to the authors in writing.


16. Policy Updates and Author Guidance

As AI technologies and publication-ethics standards continue to evolve, this policy may be reviewed and updated periodically.

Authors, reviewers, editors, and editorial staff are expected to consult this policy before participating in manuscript preparation, review, or editorial handling. Anyone uncertain about whether a particular AI use is acceptable should contact the editorial office before submission or before taking action in the review or editorial process.


17. Ethical Framework and References

This policy is informed by internationally recognized guidance on AI use, authorship, peer review, publication ethics, and responsible scholarly communication, including:

Committee on Publication Ethics. (2023). Authorship and AI tools. COPE.

Committee on Publication Ethics. (2023). Artificial intelligence and peer review. COPE.

Committee on Publication Ethics. (2023). Artificial intelligence in editorial decision-making. COPE.

Council of Science Editors. (2023). Recommendations on machine learning and artificial intelligence tools in scholarly publishing. CSE.

Elsevier. (2024). Generative AI policies for journals. Elsevier.

International Committee of Medical Journal Editors. (2024). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. ICMJE.

World Association of Medical Editors. (2023). Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on ChatGPT and chatbots in relation to scholarly publications. WAME.