Generative AI Policy

Journal Natapraja acknowledges the rapid development of Generative AI (GenAI) and AI-assisted technologies (e.g., ChatGPT, GrammarlyGO, DeepL). To maintain the integrity of the scholarly record and ensure transparency, we have established the following policy for authors, editors, and peer reviewers.


Authorship and Accountability

  1. AI cannot be an author. Generative AI tools do not meet the criteria for authorship. They cannot share responsibility for the work, cannot consent to publication, and cannot be held legally accountable for the content. Therefore, AI tools must not be listed as authors or co-authors.

  2. Human responsibility. The human author(s) are ultimately responsible and accountable for the content of the work. This includes ensuring the accuracy, validity, and originality of the manuscript, as well as the absence of plagiarism. Authors must carefully review and edit any output generated by AI tools to ensure it is accurate and free from bias.

Permitted and Prohibited Uses

Permitted Uses

Authors may use Generative AI tools for the following specific purposes, provided they exercise human oversight:

  1. Improving the readability and quality of language (e.g., grammar checking, paraphrasing for clarity).

  2. Brainstorming or structuring ideas during the initial drafting phase.

  3. Translating text, provided the final translation is verified for accuracy by the author.


Prohibited Uses

  1. Data generation. AI tools must not be used to generate or manipulate raw data, results, or scientific conclusions.

  2. Image generation. The use of AI to create, alter, or manipulate images or figures is generally prohibited unless the AI use is part of the research methodology itself (which must be clearly explained).

  3. Reviewing literature. Authors should not rely solely on AI to summarize literature, as these tools can hallucinate (fabricate) citations. All references must be verified against the original sources.


Disclosure Requirements

Transparency is mandatory. If an author has used Generative AI or AI-assisted technologies in any part of the manuscript preparation (writing, editing, data processing, or coding), they must declare it.

How to Declare: Authors must include a statement at the end of their manuscript (before the References section) titled "Declaration of Generative AI and AI-Assisted Technologies in the Writing Process". The statement should clearly describe how AI was used, including the tools involved, the sections affected, and the way the tools were applied. Where AI use is part of the research methodology, it may also need to be explained in the methodology section.

If no AI tools were used, this section may be omitted or the author may state: "No generative AI tools were used in the writing of this manuscript."


Policy for Editors and Reviewers

To protect the confidentiality of the authors' work:

  1. Confidentiality. Editors and reviewers are strictly prohibited from uploading submitted manuscripts (or parts of them) into Generative AI tools (e.g., ChatGPT) for the purpose of generating reviews, summaries, or decision letters. Uploading unpublished work to public AI platforms may violate the authors' confidentiality and proprietary rights.

  2. Review generation. Reviewers must not use AI tools to generate their peer review reports. The assessment of scientific merit requires critical human thinking and expertise.