Background Medical service robots (MSRs) have seen a range of recent developments, yet few studies have examined the perceptions of those who use them. The purpose of this study was to identify user perceptions of MSRs.
Methods We surveyed 320 patients, doctors, and nurses. The survey covered external appearance, perceptions, expected utilization, possible safety accidents, and awareness of responsibility for such accidents. Statistical analyses were performed using the t-test, the chi-square test, and analysis of variance.
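The abstract does not include the analysis itself; as an illustration only, the following is a minimal sketch of how the three named tests could be run on hypothetical survey data. All values, group sizes, and category counts below are assumptions, not data from the study.

```python
# Minimal sketch, not the authors' actual analysis: hypothetical 5-point
# Likert responses and appearance-preference counts, analyzed with the three
# tests named in the Methods. All values and group sizes are assumptions.
import numpy as np
from scipy import stats

# Hypothetical scores on one perception item, by respondent group.
patients = np.array([4, 5, 3, 4, 4, 5, 3, 4])
doctors = np.array([3, 4, 4, 3, 5, 4, 3, 3])
nurses = np.array([4, 4, 5, 4, 3, 4, 5, 4])

# Independent-samples t-test: compare two groups on the same item.
t_stat, t_p = stats.ttest_ind(patients, doctors)

# One-way analysis of variance: compare all three groups at once.
f_stat, f_p = stats.f_oneway(patients, doctors, nurses)

# Chi-square test of independence: preferred appearance (animal / human /
# machine type) cross-tabulated against respondent group.
appearance_counts = np.array([
    [40, 25, 15],  # patients
    [20, 30, 10],  # doctors
    [30, 28, 12],  # nurses
])
chi2, chi_p, dof, expected = stats.chi2_contingency(appearance_counts)

print(f"t-test p={t_p:.3f}, ANOVA p={f_p:.3f}, chi-square p={chi_p:.3f}")
```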
Results The most preferred appearance was the animal type with a screen. The overall mean score for positive questions was 3.64±0.98 out of 5 points, and that for negative questions was 3.24±0.99, indicating that participants had positive perceptions of MSRs. The overall mean across all expected uses was 4.05±0.84, and the most expected use was providing guidance to hospital facilities. The most worrisome accident was exposure of personal information. Moreover, participants considered the overall responsibility of the robot user (the hospital) for safety accidents to be greater than that of the robot manufacturer.
Conclusion Perceptions of MSRs used in hospital wards were positive, and overall expected utilization was high. Possible safety accidents involving such robots need to be recognized, and sufficient attention is required when developing and manufacturing them.
Background There is currently no way to measure how much effort is required to understand and code medical data. We introduce an assessment method for the clinical coding process and apply it to neurosurgical terms.
Methods Coding activity consists of two stages. First, the coder needs to understand the presented medical term (informational activity). Second, the coder navigates a terminology browser to find a code that matches the concept (code-matching activity). Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT) was used as the coding system. A new computer application was programmed to record the trajectory of the computer mouse and the time spent on each stage, and we used it to measure coding time. A senior neurosurgeon experienced with SNOMED CT analyzed the accuracy of the entered codes. The method was tested by five neurosurgical residents (NSRs) and five medical record administrators (MRAs) using 20 neurosurgical terms.
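The application is not described beyond recording the mouse trajectory and usage time; the sketch below is an assumed, simplified console version that only times the two stages for one term (mouse-trajectory capture is omitted), and all names and prompts are illustrative.

```python
# Simplified sketch (an assumption, not the authors' application): time the
# two coding stages for one presented term. Mouse-trajectory recording is
# omitted here.
import time
from dataclasses import dataclass

@dataclass
class CodingRecord:
    term: str
    informational_sec: float   # time spent understanding the presented term
    code_matching_sec: float   # time spent navigating the terminology browser
    selected_code: str         # SNOMED CT concept ID entered by the coder

def time_coding(term: str) -> CodingRecord:
    shown_at = time.monotonic()                  # term is presented
    input(f"Term: {term}. Press Enter when you start searching the browser.")
    search_started = time.monotonic()            # informational stage ends
    code = input("Enter the SNOMED CT concept ID you selected: ")
    submitted_at = time.monotonic()              # code-matching stage ends
    return CodingRecord(term,
                        informational_sec=search_started - shown_at,
                        code_matching_sec=submitted_at - search_started,
                        selected_code=code)

if __name__ == "__main__":
    print(time_coding("chronic subdural hematoma"))
```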
Results The mean accuracy of the NSR group was 89.33%, and that of the MRA group was 80% (p=0.024). The mean total coding time was 158.47 seconds for the NSR group and 271.75 seconds for the MRA group (p=0.003).
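As a rough illustration of this kind of two-group comparison, the sketch below runs an independent-samples t-test on hypothetical per-coder values; the individual numbers are invented and only chosen so that the group means match those reported, and the abstract does not state which test produced these p-values.

```python
# Hedged sketch of a two-group comparison. Per-coder values are hypothetical,
# chosen only so the group means match the reported ones; the use of an
# independent-samples t-test here is an assumption.
import numpy as np
from scipy import stats

nsr_accuracy = np.array([95.0, 90.0, 85.0, 90.0, 86.65])    # mean 89.33 %
mra_accuracy = np.array([85.0, 75.0, 80.0, 82.0, 78.0])     # mean 80 %
nsr_time = np.array([140.0, 150.0, 165.0, 170.0, 167.35])   # mean 158.47 s
mra_time = np.array([250.0, 300.0, 265.0, 280.0, 263.75])   # mean 271.75 s

acc_stat, acc_p = stats.ttest_ind(nsr_accuracy, mra_accuracy)
time_stat, time_p = stats.ttest_ind(nsr_time, mra_time)

print(f"Accuracy: NSR {nsr_accuracy.mean():.2f}% vs MRA "
      f"{mra_accuracy.mean():.2f}% (p={acc_p:.3f})")
print(f"Total time: NSR {nsr_time.mean():.2f}s vs MRA "
      f"{mra_time.mean():.2f}s (p={time_p:.3f})")
```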
Conclusion We proposed a method for analyzing the clinical coding process that makes it possible to measure the time required for coding accurately. For neurosurgical terms, NSRs completed the coding faster and with higher accuracy than MRAs.