KODISA Generative AI Usage Policy

 


1. Introduction

 

KODISA recognizes the potential of Generative AI tools (such as large language models and multimodal models) to enhance the research and publication process.

This policy provides guidance to authors, editors, and reviewers on the responsible use of such technologies while maintaining the highest standards of academic integrity.

 

Examples of Generative AI tools include ChatGPT, Claude, Gemini, Copilot, Bard, Jasper AI, DALL-E, and Midjourney.

 


2. Policy for Authors

 

2.1 Permitted Uses

 

Authors may use Generative AI tools for the following purposes:

Language improvement and translation: Enhancing readability, grammar, and expression, particularly for authors writing in a language other than their native language
 

2.2 Prohibited Uses

 

Authors must NOT use Generative AI tools for:

Generating core research content without rigorous human revision and validation

Creating original research data or substituting missing data

Generating images, figures, charts, or data tables for publication

Producing abstracts or substantial sections of manuscripts without thorough human oversight

Replacing fundamental researcher responsibilities including data analysis, results interpretation, and conclusion drawing

2.3 Authorship

 

Generative AI tools cannot be listed as authors or co-authors. Only humans who meet KODISA's authorship criteria can be designated as authors. Authorship requires accountability, responsibility for content accuracy, and the ability to respond to questions about the research—capabilities that AI tools do not possess.

 


2.4 Disclosure Requirements

 

All use of Generative AI tools must be disclosed transparently. Authors must include a statement in the Methods or Acknowledgments section that specifies:

The full name of the AI tool(s) used (including version number if applicable)

How the tool was used (e.g., "ChatGPT-4 was used to improve the English language and readability of this manuscript")

The specific purpose and extent of use

Confirmation that all AI-generated content has been carefully reviewed and verified for accuracy

Example disclosure statement:

"The authors used ChatGPT-4 (OpenAI) to improve the language and readability of sections of this manuscript. All content was subsequently reviewed and edited by the authors, who take full responsibility for the accuracy and integrity of the final text."

 

2.5 Author Responsibilities

 

Authors who use Generative AI tools must:

Accept full responsibility for the accuracy, validity, and originality of all content

Carefully review and verify all AI-generated output for factual accuracy, proper citations, and absence of bias

Ensure compliance with copyright and intellectual property standards

Use AI tools that provide adequate data security, confidentiality, and privacy protections

Be aware that AI tools may produce inaccurate information ("hallucinations") or fail to properly attribute sources

 

 

2.6 Images and Figures

 

KODISA does not permit the use of Generative AI for creating or manipulating images, figures, charts, data tables, or other visual research outputs. This prohibition stems from unresolved legal and ethical issues regarding intellectual property and copyright. Traditional image editing for improved clarity (brightness, contrast, color balance) remains acceptable when it does not obscure or alter original information.

 


3. Policy for Editors

 


3.1 Confidentiality Requirements

 

Editors must maintain strict confidentiality of submitted manuscripts. Editors must NOT upload manuscripts, reviewer reports, decision letters, or any related confidential information into Generative AI tools, as this may violate:

Author confidentiality

Intellectual property rights

Data privacy regulations (especially for personally identifiable information)

 

 

3.2 Editorial Responsibilities

 

Editors are responsible for evaluating author disclosures of AI use

Editors may request additional information if AI use appears excessive or inappropriate

Editors retain discretion to reject manuscripts where AI has been used improperly

Editors should contact KODISA if they suspect policy violations

 

 

3.3 Permitted Editor Use

 

Editors may use Generative AI only for language improvement in their own editorial communications, provided no confidential manuscript information is included.

 


4. Policy for Reviewers

 


4.1 Confidentiality Requirements

 

Reviewers must NOT upload submitted manuscripts or any portion thereof into Generative AI tools. This includes:

The manuscript text or data

Author information

Their own review reports (even for language improvement purposes)

 

 

Uploading manuscripts to AI tools violates confidentiality agreements and may breach intellectual property and privacy rights.

 


4.2 Review Integrity

 

Peer review requires expert judgment, critical thinking, and original assessment—responsibilities that only humans can fulfill

Generative AI should not be used to analyze manuscripts, summarize content, or generate review comments

Reviewers are fully responsible and accountable for the content of their reviews

Reviewers may use AI only for improving the language of their own original review text, but must ensure no confidential information is shared

 

 

5. Consequences of Policy Violations

 

Violations of this policy may result in:

Manuscript rejection or retraction

Investigation by KODISA's editorial board

Potential sanctions in accordance with KODISA's research ethics policies

Reporting to relevant institutional authorities in cases of serious misconduct

 

 

6. Policy Updates

 

This policy will be reviewed and updated regularly as Generative AI technologies evolve and as academic standards and legal frameworks develop. Authors, editors, and reviewers should check for the most current version before submission or review.

 


7. Questions and Guidance

 

For questions about this policy or specific cases, please contact: 

KODISA Editorial Office: kodisajournals@gmail.com

