Original article
Large language model usage guidelines in Korean medical journals: a survey using human-artificial intelligence collaboration
Sangzin Ahn1,2
Journal of Yeungnam Medical Science 2025;42:14.
DOI: https://doi.org/10.12701/jyms.2024.00794
Published online: December 11, 2024

1Department of Pharmacology and PharmacoGenomics Research Center, Inje University College of Medicine, Busan, Korea

2Center for Personalized Precision Medicine of Tuberculosis, Inje University College of Medicine, Busan, Korea

Corresponding author: Sangzin Ahn, MD, PhD, Department of Pharmacology, Inje University College of Medicine, 75 Bokji-ro, Busanjin-gu, Busan 47392, Korea. Tel: +82-51-890-5909 • E-mail: sangzinahn@inje.ac.kr
• Received: July 31, 2024   • Revised: October 31, 2024   • Accepted: November 21, 2024

© 2025 Yeungnam University College of Medicine, Yeungnam University Institute of Medical Science

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

  • Background
    Large language models (LLMs), the most recent advancements in artificial intelligence (AI), have profoundly affected academic publishing and raised important ethical and practical concerns. This study examined the prevalence and content of AI guidelines in Korean medical journals to assess the current landscape and inform future policy implementation.
  • Methods
    The top 100 Korean medical journals, determined by Hirsch index, were surveyed. Author guidelines were collected and screened by a human researcher and an AI chatbot to identify AI-related content. The key components of LLM policies were extracted and compared across journals. The journal characteristics associated with the adoption of AI guidelines were also analyzed.
  • Results
    Only 18% of the surveyed journals had LLM guidelines, which is much lower than previously reported in international journals. However, the adoption rates increased over time, reaching 57.1% in the first quarter of 2024. High-impact journals were more likely to have AI guidelines. All journals with LLM guidelines required authors to declare LLM tool use and 94.4% prohibited AI authorship. The key policy components included emphasizing human responsibility (72.2%), discouraging AI-generated content (44.4%), and exempting basic AI tools (38.9%).
  • Conclusion
    While the adoption of LLM guidelines among Korean medical journals is lower than the global trend, there has been a clear increase in implementation over time. The key components of these guidelines align with international standards, but greater standardization and collaboration are needed to ensure the responsible and ethical use of LLMs in medical research and writing.
Introduction

Large language models (LLMs) have emerged as a groundbreaking artificial intelligence (AI) technology with the potential to revolutionize various domains, including medical scientific publishing [1]. LLM tools such as ChatGPT (OpenAI, San Francisco, CA, USA) and Claude (Anthropic, San Francisco, CA, USA) have been rapidly adopted and have gained popularity because of their ability to generate human-like text from a wide range of user prompts [2]. In the context of medical research and academic writing, LLMs have numerous applications, such as assisting with literature reviews, data analysis, and manuscript preparation [3,4].
For medical researchers and authors, LLM tools present several opportunities including improving grammar and language quality, facilitating translation, generating novel research ideas, synthesizing large amounts of data, and streamlining the overall research process [5,6]. However, the use of LLM tools in medical scientific writing poses significant challenges such as the risk of inaccuracy, bias, plagiarism, and lack of accountability [7,8]. These concerns are particularly pressing in the medical domain because the misinformation generated by LLMs can have severe consequences for patient care and public health [9].
As the use of LLMs in academic medical writing has become more prevalent, there is a growing need for clear guidance and policies to ensure responsible and ethical practices. Medical journals and publishers have responded to this challenge using various approaches ranging from outright prohibition to cautious acceptance with strict disclosure requirements [10]. The Committee on Publication Ethics (COPE) has issued a position statement on AI tools in research publications emphasizing the importance of human authorship and responsibility while suggesting ways to disclose AI use [11]. However, a study conducted in October 2023 that surveyed the top 100 scientific journals revealed substantial heterogeneity, with many specific guidelines not fully aligned with COPE recommendations [12].
Among Korean medical journals, the Korean Journal of Radiology has actively shared its position on AI and revised its policies accordingly [13]. Other publications have discussed the current status of AI and its influence on academic medical writing [14,15]. However, limited research has been conducted on the prevalence and content of AI policies in Korean academic medical publications. This knowledge gap underscores the importance of examining the current landscape of AI-related author guidelines in Korean medical journals to inform future policy development and implementation.
This study aims to address this knowledge gap by conducting a comprehensive survey of the top 100 Korean medical journals to determine the prevalence and content of LLM-related policies in the journal author guidelines. Notably, this study employs a novel methodology that combines human and AI collaboration during the analysis process. By analyzing adoption rates and key policy components, and comparing findings with global trends, this study seeks to inform future policy development in Korea, fostering alignment with international standards. Ultimately, the goal is to contribute to the discourse on the ethical and responsible use of LLM tools in medical research, ensuring that their benefits are harnessed while mitigating the associated risks.
Methods

1. Data collection
The Korea Citation Index (KCI) Journal search page (https://www.kci.go.kr/kciportal/po/search/poSereSearList.kci) was used to identify Korean medical journals. Journals categorized under “Medicine” and registered with the KCI were selected. A total of 538 journal entries were downloaded, resulting in 311 unique journals after duplicates were removed. Journal metrics, including the Hirsch index (h-index), were obtained from Scimago (https://www.scimagojr.com/journalrank.php). The final sample consisted of the top 100 journals based on h-index, with values ranging from 3 to 104 (Supplementary Material 1).
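For transparency, this selection step can be expressed in a few lines of pandas. The sketch below is minimal and assumes illustrative file names and column labels, not the actual formats of the KCI and Scimago exports.

```python
import pandas as pd

# Hypothetical file names and column labels; the real KCI and Scimago
# exports may be structured differently.
kci = pd.read_csv("kci_medicine_journals.csv")        # 538 downloaded entries
kci = kci.drop_duplicates(subset="journal_title")     # 311 unique journals

scimago = pd.read_csv("scimagojr.csv", sep=";")       # Scimago journal rankings
merged = kci.merge(
    scimago[["Title", "H index"]],
    left_on="journal_title", right_on="Title", how="inner",
)

# Keep the 100 journals with the highest h-index (3-104 in this study).
top100 = merged.nlargest(100, "H index").reset_index(drop=True)
```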
The official websites of the selected journals were manually searched, and the author guidelines were downloaded. Guidelines were available in portable document format (PDF) for 84 journals and as webpages for 16 journals. The webpage guidelines were converted into text file (TXT) format for further analysis. To ensure an accurate snapshot of the available guidelines, the data collection process was completed within 24 hours on March 5, 2024.
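The conversion method for the 16 webpage guidelines is not specified; one plausible approach, sketched here with requests and BeautifulSoup, is to strip markup and save only the visible text.

```python
import requests
from bs4 import BeautifulSoup

def webpage_to_txt(url: str, out_path: str) -> None:
    """Fetch an author-guidelines webpage and save its visible text as TXT."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):  # drop non-content elements
        tag.decompose()
    text = soup.get_text(separator="\n", strip=True)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(text)

# Hypothetical example call:
# webpage_to_txt("https://example-journal.org/authors", "guidelines.txt")
```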
2. Identification of guidelines containing artificial intelligence or large language model policy
The human researcher initially screened all 100 guidelines to identify those that contained instructions related to AI or LLMs. This process yielded 18 guidelines containing AI-related content. The guideline files (PDF or TXT) were uploaded to an AI chatbot (Claude 3 Opus, Anthropic) to verify the presence of AI-related content. The results of the human and AI identification processes were compared, and no discrepancies were observed.
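The actual prompts are provided in Supplementary Material 3; the sketch below merely illustrates how such a screening query could be sent to Claude 3 Opus through the Anthropic API. The prompt wording and plain-text input are assumptions (the study uploaded PDF/TXT files to the chatbot interface).

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def screen_guideline(guideline_text: str) -> str:
    """Ask the model whether a guideline contains AI/LLM-related policy."""
    response = client.messages.create(
        model="claude-3-opus-20240229",  # Claude 3 Opus, as used in the study
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": (
                "Does the following author guideline contain any policy on "
                "artificial intelligence or large language models? "
                "Answer YES or NO, then quote the relevant passage.\n\n"
                + guideline_text
            ),
        }],
    )
    return response.content[0].text
```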
The last revision date of each guideline was recorded during the screening process. If the revision date was not available, it was marked as “Unknown.” Seventy guidelines had revision dates, whereas 30 did not. The analytical process is illustrated in Fig. 1.
3. Content analysis of artificial intelligence-related items in guidelines
The AI chatbot was used to identify and extract the AI-related content from each of the 18 guidelines that had adopted an AI policy. The extracted content was then uploaded to the AI chatbot, which suggested key items for comparison across journals. A human researcher examined the AI-suggested items and revised them into 11 key items for comparison across journals.
The human researcher manually assessed the presence of the 11 key items in each of the 18 AI-related guidelines. The AI chatbot was also provided with each of the 18 guidelines and asked to check for the presence of the 11 key items, answering “YES,” “NO,” or “UNSURE” for each item. Any discrepancies between the human researcher and AI chatbot assessments, as well as items marked “UNSURE” by the chatbot, were re-examined by the human researcher. Five discrepancies were observed: two in “Emphasize human responsibility,” two in “Prohibition of AI usage other than language improvement,” and one in “Declare in manuscript.” All other items tagged “UNSURE” were determined to be “NO” by the human researcher (Supplementary Material 2).
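A minimal sketch of this reconciliation step follows, assuming the two assessments are stored as 18 x 11 tables of "YES"/"NO" (human) and "YES"/"NO"/"UNSURE" (chatbot) answers; any disagreement or "UNSURE" cell is flagged for human re-examination.

```python
import pandas as pd

# Hypothetical files: rows = 18 journals, columns = 11 key items.
human = pd.read_csv("human_assessment.csv", index_col="journal")
chatbot = pd.read_csv("chatbot_assessment.csv", index_col="journal")

# Flag direct disagreements plus every "UNSURE" chatbot answer.
needs_review = (human != chatbot) | (chatbot == "UNSURE")

rows, cols = needs_review.to_numpy().nonzero()
for r, c in zip(rows, cols):
    print(f"Re-examine: {needs_review.index[r]} - {needs_review.columns[c]}")
```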
4. Analysis of journal characteristics and artificial intelligence guideline adoption
The journals were divided into quartiles based on their h-index values. The percentage of journals with AI guidelines in each quartile was calculated. The journals were also grouped according to the last revision date of their guidelines as follows: unknown, up to 2022, the first quarter of 2023 (2023 1Q), 2023 2Q, 2023 3Q, 2023 4Q, and 2024 1Q. The percentage of journals with AI guidelines in each revision-date group was calculated.
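Both groupings can be derived directly from the journal table in pandas; the column names in the sketch below are assumptions, not the study's actual variable names.

```python
import pandas as pd

df = pd.read_csv("top100_journals.csv", parse_dates=["revision_date"])

# h-index quartiles (Q1 = lowest impact, Q4 = highest) and the percentage
# of journals with AI guidelines in each quartile.
df["quartile"] = pd.qcut(df["h_index"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
by_quartile = df.groupby("quartile")["has_ai_guideline"].mean() * 100

# Revision-date groups: "Unknown", "Up to 2022", then calendar quarters.
def revision_group(d: pd.Timestamp) -> str:
    if pd.isna(d):
        return "Unknown"
    if d.year <= 2022:
        return "Up to 2022"
    return f"{d.year} {d.quarter}Q"

groups = df["revision_date"].map(revision_group)
by_revision = df.groupby(groups)["has_ai_guideline"].mean() * 100
```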
5. Artificial intelligence chatbot usage
This study employed Claude 3 Opus, described by Anthropic as its most intelligent model as of March 2024. The model, part of the Claude 3 series that also includes Sonnet and Haiku, is distinguished by its ability to navigate open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding [16]. The AI chatbot was used to identify and extract AI-related content from the guidelines, suggest key items for analysis, and check for the inclusion of each item in the author guidelines. All the prompts used in this study are provided in Supplementary Material 3. Additionally, the AI chatbot was used during manuscript writing to suggest ideas and improve linguistic quality.
6. Statistical analysis and visualization
Percentages were calculated to summarize the prevalence and characteristics of AI guidelines in the selected Korean medical journals. The adoption of AI policies was calculated overall, by h-index quartile, and by revision date. All calculations and visualizations were initially performed using Data Analyst, an AI-powered data analysis tool within ChatGPT (OpenAI). To ensure the accuracy and reliability of the results, the human researcher subsequently confirmed the findings using the Matplotlib package (ver. 3.7.1) in Python (ver. 3.10.12) in a Google Colaboratory environment (Google LLC, CA, USA).
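As an illustration of this verification step, the short Matplotlib snippet below re-plots the quartile panel of Fig. 2B from the reported percentages; the figure styling is an arbitrary choice.

```python
import matplotlib.pyplot as plt

# Adoption rates by h-index quartile, as reported in Fig. 2B.
quartiles = ["Q1", "Q2", "Q3", "Q4"]
rates = [8.0, 15.4, 20.8, 28.0]

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(quartiles, rates)
ax.set_xlabel("h-index quartile")
ax.set_ylabel("Journals with AI guidelines (%)")
ax.set_title("AI guideline adoption by journal impact")
fig.tight_layout()
fig.savefig("fig2b_check.png", dpi=300)
```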
Results

1. Prevalence and trends in artificial intelligence guideline adoption in Korean medical journals
The survey of 100 Korean medical journals revealed that only 18% had guidelines addressing the use of AI tools in research and writing processes (Fig. 2A). Most journals (82%) lacked formal policies or guidelines on this topic.
Further analysis by journal h-index quartile showed a trend toward higher AI guideline adoption rates among journals with greater impact (Fig. 2B). The adoption rate was lowest in the first quartile (Q1) at 8.0%, increasing to 15.4% in Q2, 20.8% in Q3, and 28.0% in the highest impact quartile (Q4).
Examining temporal trends in the implementation of AI guidelines, a clear increase was observed over time (Fig. 2C). Among the journals with unknown revision dates, 13.3% had AI guidelines. This proportion was only 3.9% for journals with guidelines last revised prior to 2023. However, the adoption rate increased to 10.0%, 28.6%, 33.3%, and 27.3% in the first, second, third, and fourth quarters of 2023, respectively. Notably, 57.1% of the journals that revised their guidelines in the first quarter of 2024 incorporated AI policies.
2. Key components of artificial intelligence policies in Korean medical journal guidelines
Analysis of the 18 journals with AI guidelines revealed the following key components (Fig. 3):
1) No AI author: The majority of journals (94.4%) explicitly stated that AI tools could not be listed as authors because they did not qualify for authorship.
Example: “Generative AI, including language models, chatbots, image creators, machine learning, or similar technologies do not qualify for authorship.”
2) Human responsibility for content: Nearly three-quarters of the journals (72.2%) emphasized the responsibility of human authors for the scientific integrity and content of the manuscript.
Example: “Authors bear the responsibility for the scientific integrity of the content generated by AI.”
3) Only for language improvement: A small proportion of the journals (16.7%) limited the use of AI tools to translation and language improvement.
Example: “Journal of Korean Neurosurgical Society’s author’s policy states that authors are allowed to use AI-assisted technologies (such as LLMs, chatbots, image creator, or other) only for English correction or translation to improve the language and readability of their paper.”
4) Declaring AI tool use: All journals (100%) required authors to declare the use of AI tools in some form.
Example: “At submission, authors must disclose whether they used AI-assisted technologies (such as LLMs, chatbots, or image creators) in the production of submitted work.”
5) Disclosing technical details: Some journals (27.8%) required authors to provide specific information on AI technology, such as name, version, and manufacturer.
Example: “As the field of AI is rapidly evolving, authors using AI should declare this fact and provide specific technical details about the AI model used, including its name, version, source, and the method of application in the manuscript.”
6) Declaration template: A minority of the journals (16.7%) provided templates for AI-usage disclosure statements.
Example: “During the preparation of this work the author(s) used [NAME TOOL/SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.”
7) Discouraging AI content generation: Nearly half (44.4%) of the guidelines included phrases that discouraged the use of AI tools for content generation such as text, images, and figures.
Example: “Using AI technologies in creating or altering figures, images, and artwork is discouraged unless such use is part of the research design or methods.”
8) Consequences of non-disclosure: Some guidelines (22.2%) included statements on the consequences of non-disclosure, such as rejection or retraction.
Example: “Manuscripts that fail to include necessary information regarding content generated with the assistance of AI may be rejected from review, and previously published papers may be retracted.”
9) Basic tools/linguistic use exempted: Over one-third of the journals (38.9%) exempted the use of basic tools, or of AI solely for enhancing linguistic quality, from the declaration requirement. Five journals specified that “traditional” basic tools for grammar, spelling, and references do not need to be declared, and two journals stated that “modern” tools can be used for language improvement without declaration.
Example: “The use of AI tools to enhance the linguistic quality of a submission is considered acceptable and does not require specific disclosure.”
10) Declare in cover letter: One-third of the guidelines (33.3%) instructed authors to include a statement in the cover letter.
Example: “If generative AI and AI-assisted technologies have been used in the manuscript preparation, it is essential for the authors to disclose how it was used in both the cover letter and the manuscript itself.”
11) Declare in manuscript section: Regarding the location of the AI declaration, some guidelines specified the section in which to state it: four journals in the Materials and Methods section, three in the Acknowledgments section, three in a dedicated section, and one in either the Acknowledgments or Materials and Methods section.
Example: “Authors must disclose the use of generative AI and AI-assisted technologies in the writing process by adding a statement at the end of their manuscript in the core manuscript file, before the References list. The statement should be placed in a new section entitled Declaration of generative AI and AI-assisted technologies in the writing process.”
Discussion

This study investigated the prevalence and content of AI guidelines in the top 100 Korean medical journals. The findings revealed that only 18% of these journals had implemented policies addressing the use of AI tools in research and writing. Although this adoption rate is lower than the 87% prevalence reported among top international scientific journals by Ganjavi et al. [12], the trend in AI guideline adoption among Korean journals is clearly upward, particularly in the first quarter of 2024. These results suggest that Korean medical journals are at an earlier stage of AI policy adoption than their global counterparts.
A comparison of the key components of AI guidelines in Korean journals with common themes identified in global studies by Ganjavi et al. [12] and Inam et al. [17] revealed several similarities. The requirement for authors to disclose AI use and the universal prohibition on listing AI as an author are consistent across Korean and international journals. Additionally, the shared emphasis on human author responsibility for manuscript content aligns with the recommendations set forth by the International Committee of Medical Journal Editors and COPE [11,18]. These core principles underscore the importance of maintaining human oversight and accountability when using AI tools in medical research and writing.
Moreover, the proactive approach of journals such as the Journal of Educational Evaluation for Health Professions demonstrates a commitment to address the ethical challenges posed by AI in academic publishing [19]. Their detailed guidelines, which include optional disclosure of AI use and an emphasis on limiting AI-generated text to ensure original thought, align closely with international standards and could serve as a model for other Korean journals seeking to develop or enhance their own policies. This example highlights that, while some Korean journals are making notable progress in establishing comprehensive AI guidelines, there remains a need for greater standardization across the board.
Despite similarities in key principles, there are notable differences in the specificity and structure of AI guidelines between Korean and global journals. Korean journals tend to have less detailed requirements regarding the information that should be disclosed when using AI tools, whereas international journals often provide more comprehensive guidance. Furthermore, there is significant variability among Korean journals in terms of where and how AI use should be disclosed in manuscripts. The rates of requiring technical details, providing disclosure templates, and specifying consequences for noncompliance also vary. These findings highlight the need for greater standardization of the guideline content and structure among Korean medical journals, a challenge that is also present globally.
Although not noted in the results, some author guidelines (three of 18 journals) mentioned that AI tools must not be used in the peer review process. This aligns with the findings of Inam et al. [17], who highlighted the strict prohibition of AI use in peer review among the top 25 journals in Cardiology and Cardiovascular Medicine. The prohibition of AI in peer review is due to concerns regarding confidentiality and the need for expert human insight in evaluating scientific work. However, it is important to note that the guidelines for reviewers were not assessed in this study, and the policies regarding the use of AI tools during the review process may warrant further investigation in future research.
The results of this study underscore the need for Korean medical journals to continue expanding their AI policy adoption, particularly among lower-impact journals. To address heterogeneity in guideline content and structure, Korean journals should collaborate with global initiatives to develop cohesive cross-disciplinary standards for AI use in medical research and writing. Additionally, author education and editorial oversight are crucial for ensuring compliance with these guidelines. As the capabilities of LLM-based tools continue to evolve rapidly, regular updates to the guidelines will be necessary to keep pace with technological advancements and emerging ethical concerns.
It is important to acknowledge the limitations of this study, such as its focus on a limited number of high-impact Korean medical journals and the lack of assessment of reviewer guidelines. Future research should examine AI policy implementation, author compliance, the impact of these policies on the peer review process, and the use of AI tools in peer review. As LLM tools become more sophisticated and widely adopted, it will be essential to investigate their influence on the quality, integrity, and efficiency of medical research and publishing [20,21].
In conclusion, this study provides valuable insights into the status of AI guidelines in Korean medical journals. Although the adoption rate is lower than global trends, there has been a clear increase in the implementation of AI policies over time. The key components of these guidelines align with international standards; however, greater standardization and collaboration are needed to ensure the responsible and ethical use of AI in medical research and writing. As AI tools continue to advance and transform the medical research landscape, it is crucial to establish clear, comprehensive, and cohesive policies that promote transparency and integrity in the pursuit of scientific knowledge.
Supplementary materials

Supplementary Materials 1 to 3 can be found at https://doi.org/10.12701/jyms.2024.00794.
Supplementary Material 1.
jyms-2024-00794-Supplementary-Material-1.xlsx
Supplementary Material 2.
Detailed analysis of 11 key items of guidelines related to AI use
jyms-2024-00794-Supplementary-Material-2.pdf
Supplementary Material 3.
Prompts used in the study
jyms-2024-00794-Supplementary-Material-3.pdf

Conflicts of interest

No potential conflict of interest relevant to this article was reported.

Funding

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (grant No. 2018R1A5A2021242).

Fig. 1.
Evaluation workflow for artificial intelligence (AI) policy in author guidelines. Initially, both a human researcher and an AI chatbot verified the existence of AI policies within 100 guidelines. From the 18 guidelines identified as having AI policies, the AI chatbot extracted AI-related content and proposed key items for evaluation. After the key items were revised by the human researcher, the inclusion of these items in the 18 guidelines was checked by both the researcher and the AI. Final confirmation of item inclusion was performed by the human researcher.
Fig. 2.
Prevalence of and trends in artificial intelligence (AI) guideline adoption in Korean medical journals. (A) Proportion of journals with (18%) and without (82%) AI guidelines. (B) Proportion of journals with AI guidelines by Hirsch index quartile (Q1–Q4). Results show higher adoption rates in journals with higher impact (Q1, 8.0%; Q2, 15.4%; Q3, 20.8%; Q4, 28.0%). (C) Proportion of journals with AI guidelines by revision date. Results show increasing adoption of AI guidelines over time, with 57.1% of journals implementing such policies in the first quarter of 2024, compared to only 10.0% in the first quarter of 2023.
Fig. 3.
Key components of artificial intelligence (AI) policies in Korean medical journal guidelines. All journals require AI use declaration, and most (94.4%) prohibit listing AI as an author. Many (72.2%) emphasize human responsibility for content. Fewer journals discourage AI content generation (44.4%), exempt basic/linguistic AI tools (38.9%), require a cover-letter declaration (33.3%), require technical disclosure (27.8%), specify non-disclosure consequences (22.2%), or provide disclosure templates (16.7%).
References

1. Lund BD, Wang T, Mannuru NR, Nie B, Shimray S, Wang Z. ChatGPT and a new academic reality: artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. J Assoc Inf Sci Technol 2023;74:570–81.
2. Kocoń J, Cichecki I, Kaszyca O, Kochanek M, Szydło D, Baran J, et al. ChatGPT: Jack of all trades, master of none. Inf Fusion 2023;99:101861.
3. Dergaa I, Chamari K, Zmijewski P, Ben Saad H. From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing. Biol Sport 2023;40:615–22.
4. Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus 2023;15:e35179.
5. Liu Y, Han T, Ma S, Zhang J, Yang Y, Tian J, et al. Summary of ChatGPT-related research and perspective towards the future of large language models. Meta Radiol 2023;1:100017.
6. Ghim JL, Ahn S. Transforming clinical trials: the emerging roles of large language models. Transl Clin Pharmacol 2023;31:131–8.
7. Khlaif ZN, Mousa A, Hattab MK, Itmazi J, Hassan AA, Sanmugam M, et al. The potential and concerns of using AI in scientific research: ChatGPT performance evaluation. JMIR Med Educ 2023;9:e47049.
8. Gravel J, D’Amours-Gravel M, Osmanlliu E. Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clin Proc Digit Health 2023;1:226–34.
9. Thirunavukarasu AJ, Ting DS, Elangovan K, Gutierrez L, Tan TF, Ting DS. Large language models in medicine. Nat Med 2023;29:1930–40.
10. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature 2023;613:620–1.
11. Committee on Publication Ethics (COPE). Authorship and AI tools [Internet]. London: COPE; 2024 [cited 2024 Mar 8]. https://publicationethics.org/cope-position-statements/ai-author.
12. Ganjavi C, Eppler MB, Pekcan A, Biedermann B, Abreu A, Collins GS, et al. Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. BMJ 2024;384:e077192.
13. Park SH. Use of generative artificial intelligence, including large language models such as ChatGPT, in scientific publications: policies of KJR and prominent authorities. Korean J Radiol 2023;24:715–8.
14. Kim JK, Chua M, Rickard M, Lorenzo A. ChatGPT and large language model (LLM) chatbots: the current state of acceptability and a proposal for guidelines on utilization in academic medicine. J Pediatr Urol 2023;19:598–604.
15. Bom HS. Exploring the opportunities and challenges of ChatGPT in academic writing: a roundtable discussion. Nucl Med Mol Imaging 2023;57:165–7.
16. Anthropic. Introducing the next generation of Claude [Internet]. San Francisco: Anthropic; 2024 [cited 2024 Mar 8]. https://www.anthropic.com/news/claude-3-family.
17. Inam M, Sheikh S, Minhas AM, Vaughan EM, Krittanawong C, Samad Z, et al. A review of top cardiology and cardiovascular medicine journal guidelines regarding the use of generative artificial intelligence tools in scientific writing. Curr Probl Cardiol 2024;49:102387.
18. International Committee of Medical Journal Editors (ICMJE). Recommendations. Defining the role of authors and contributors [Internet]. Philadelphia: ICMJE; 2024 [cited 2024 Mar 8]. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html.
19. Huh S. Editorial policies of Journal of Educational Evaluation for Health Professions on the use of generative artificial intelligence in article writing and peer review. J Educ Eval Health Prof 2023;20:40.
20. Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF Jr, et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. Colomb Med (Cali) 2023;54:e1015868.
21. Kaebnick GE, Magnus DC, Kao A, Hosseini M, Resnik D, Dubljević V, et al. Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. Med Health Care Philos 2023;26:499–503.
