An Analysis of Explainable Artificial Intelligence Implementation in Mobile Health-Based Disease Diagnosis Systems in Indonesia

Authors

  • Galih Rakasiwi, Program Studi Sistem Informasi, Universitas Harapan Bangsa, Purwokerto, Indonesia
  • Khanza Khanza, Program Studi Sistem Informasi, Universitas Harapan Bangsa, Purwokerto, Indonesia
  • Aulia Izzatunnisa, Program Studi Sistem Informasi, Universitas Harapan Bangsa, Purwokerto, Indonesia

Keywords:

Explainable Artificial Intelligence, Mobile Health, Artificial Intelligence, Disease Diagnosis Systems, User Trust and Interpretability

Abstract

This study investigates the implementation of Explainable Artificial Intelligence (XAI) in mobile health (mHealth)-based disease diagnosis systems in Indonesia, focusing on improving transparency, user understanding, and trust. The growing integration of Artificial Intelligence (AI) in healthcare has enhanced diagnostic efficiency and accessibility; however, many systems still function as “black-box” models, limiting interpretability and reducing user confidence. This study addresses the gap between high diagnostic accuracy and low explainability in mHealth applications. A mixed-methods approach was used, combining quantitative and qualitative data. The research analyzed selected mHealth applications to assess the availability and effectiveness of explainability features. In addition, user data were collected through surveys and semi-structured interviews with patients and healthcare professionals. Quantitative data were analyzed statistically to examine relationships between explainability, user understanding, and trust, while qualitative data were explored through thematic analysis to capture user experiences and perceptions. The findings reveal that XAI significantly enhances users’ understanding of AI-generated diagnoses, particularly when explanations are simple, visual, and context-specific. Improved understanding was found to positively influence user trust and acceptance of mHealth systems. However, the study also identifies a trade-off between interpretability and model performance, along with challenges related to digital literacy, infrastructure, and usability. Furthermore, the effectiveness of explainability depends on user characteristics and the design of explanation mechanisms.
Overall, this research provides empirical insights into the practical implementation of XAI in mHealth systems and offers recommendations for developers, policymakers, and healthcare institutions to design more transparent, user-centered, and trustworthy AI-driven healthcare solutions.


References

Akande, O. A. (2020). Leveraging explainable AI models to improve predictive accuracy and ethical accountability in healthcare diagnostic decision support systems. World Journal of Advanced Research and Reviews, 8(2), 415–434.

Amann, J., Blasimme, A., Vayena, E., Frey, D., Madai, V. I., & Consortium, P. (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20(1), 310.

Barda, A. J., Horvat, C. M., & Hochheiser, H. (2020). A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare. BMC Medical Informatics and Decision Making, 20(1), 257.

Barnum, C. M. (2020). Usability testing essentials: Ready, set... test! Morgan Kaufmann.

Beaudouin, V., Bloch, I., Bounie, D., Clémençon, S., d’Alché-Buc, F., Eagan, J., Maxwell, W., Mozharovskyi, P., & Parekh, J. (2020). Flexible and context-specific AI explainability: a multidisciplinary approach. ArXiv Preprint ArXiv:2003.07703.

Bjerring, J. C., & Busch, J. (2021). Artificial intelligence and patient-centered decision-making. Philosophy & Technology, 34(2), 349–371.

Cutillo, C. M., Sharma, K. R., Foschini, L., Kundu, S., Mackintosh, M., Mandl, K. D., & the MI in Healthcare Workshop Working Group. (2020). Machine intelligence in healthcare—perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digital Medicine, 3(1), 47.

Dang, H. D. (2011). A latent transition analysis of self-efficacy among men treated for cocaine dependence. University of California, Los Angeles.

Horn, C., Pitman, D., & Potter, R. (2019). An evaluation of the visualisation and interpretive potential of applying GIS data processing techniques to 3D rock art data. Journal of Archaeological Science: Reports, 27, 101971.

Kennedy, G., & Gallego, B. (2019). Clinical prediction rules: a systematic review of healthcare provider opinions and preferences. International Journal of Medical Informatics, 123, 1–10.

Liao, Q. V., & Varshney, K. R. (2021). Human-centered explainable AI (XAI): From algorithms to user experiences. ArXiv Preprint ArXiv:2110.10790.

Lyu, D., Yang, F., Kwon, H., Dong, W., Yilmaz, L., & Liu, B. (2021). TDM: Trustworthy decision-making via interpretability enhancement. IEEE Transactions on Emerging Topics in Computational Intelligence, 6(3), 450–461.

Malvey, D., & Slovensky, D. J. (2014). mHealth: Transforming healthcare. Springer.

Noei, E., Zhang, F., Wang, S., & Zou, Y. (2019). Towards prioritizing user-related issue reports of mobile applications. Empirical Software Engineering, 24(4), 1964–1996.

Pedreschi, D., Giannotti, F., Guidotti, R., Monreale, A., Ruggieri, S., & Turini, F. (2019). Meaningful explanations of black box AI decision systems. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9780–9784.

Ramar, V. A., & Rathna, S. (2018). AI-Driven Cloud-Based Deep Learning for Predictive Healthcare Analytics: Enhancing Disease Diagnosis with CNNs in Medical Imaging. Chinese Traditional Medicine Journal, 1(5), 12–18.

Sandelowski, M., & Barroso, J. (2003). Creating metasummaries of qualitative findings. Nursing Research, 52(4), 226–233.

Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551.

Sokol, K., & Flach, P. (2020). One explanation does not fit all: The promise of interactive explanations for machine learning transparency. KI-Künstliche Intelligenz, 34(2), 235–250.

Van Der Waa, J., Nieuwburg, E., Cremers, A., & Neerincx, M. (2021). Evaluating XAI: A comparison of rule-based and example-based explanations. Artificial Intelligence, 291, 103404.

Wang, S., Qureshi, M. A., Miralles-Pechuan, L., Huynh-The, T., Gadekallu, T. R., & Liyanage, M. (2021). Applications of explainable AI for 6G: Technical aspects, use cases, and research challenges. ArXiv Preprint ArXiv:2112.04698.

Ye, T., Xue, J., He, M., Gu, J., Lin, H., Xu, B., & Cheng, Y. (2019). Psychosocial factors affecting artificial intelligence adoption in health care in China: cross-sectional study. Journal of Medical Internet Research, 21(10), e14316.

You, Y., Kou, Y., Ding, X., & Gui, X. (2021). The medical authority of AI: A study of AI-enabled consumer-facing health technology. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–16.

Zecca, L., & Cotza, V. (2021). Distance Education and Beyond: A Student Voice Research toward an Ecological Perspective. Book of Abstracts of 7th International Conference on Education (ICEDU 2021), 128.

Zhang, Z., Genc, Y., Wang, D., Ahsen, M. E., & Fan, X. (2021). Effect of AI explanations on human perceptions of patient-facing AI-powered healthcare systems. Journal of Medical Systems, 45(6), 64.


Published

2026-03-30

How to Cite

Rakasiwi, G., Khanza, K., & Izzatunnisa, A. (2026). An Analysis of Explainable Artificial Intelligence Implementation in Mobile Health-Based Disease Diagnosis Systems in Indonesia. Idea: Future Research, 4(1), 1–11. Retrieved from https://idea.ristek.or.id/index.php/idea/article/view/62