The Impact of Deepfake Exposure on Public Perception and Trust in Digital Media: The Moderating Role of Digital Literacy

Authors

  • Briella Celine, Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore
  • Jasmine Jasmine, Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore

Keywords

Deepfake, Public perception, Trust in digital media, Digital literacy, Misinformation

Abstract

The rapid advancement of artificial intelligence has enabled deepfake technology, which produces highly realistic manipulated digital content. While this innovation offers creative and technological benefits, it also raises serious concerns about information authenticity, public perception, and trust in digital media. This study analyzes the impact of deepfake exposure on public perception and trust in digital media, and examines the moderating role of digital literacy. The research employs a quantitative, explanatory design that combines a survey with experimental stimuli. Data were collected from active digital media users through a structured questionnaire measuring deepfake exposure, public perception, trust in digital media, and digital literacy, and were analyzed with statistical techniques, including Structural Equation Modeling (SEM), to examine both direct and indirect relationships among the variables. The results indicate that exposure to deepfake content significantly affects public perception by increasing uncertainty and skepticism in evaluating digital information. Public perception mediates the relationship between deepfake exposure and trust, while digital literacy moderates it, reducing the negative effects among individuals with higher levels of critical understanding. In conclusion, this study demonstrates that deepfake technology not only distorts perception but also undermines trust in digital media ecosystems. It contributes to the growing body of literature by providing an integrative analysis of the psychological and media-related impacts of deepfake technology in the digital era.
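The moderation effect described in the abstract, where digital literacy dampens the negative effect of deepfake exposure on trust, is typically tested as an interaction term. The following is a minimal sketch on simulated data with hypothetical, standardized variables and illustrative effect sizes; the study itself used full SEM, which this plain interaction regression only approximates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized measures (illustrative only, not the study's data).
exposure = rng.normal(size=n)   # deepfake exposure
literacy = rng.normal(size=n)   # digital literacy

# Trust falls with exposure, but literacy attenuates that effect
# (a positive exposure x literacy interaction).
trust = (-0.5 * exposure + 0.3 * literacy
         + 0.25 * exposure * literacy
         + rng.normal(scale=0.5, size=n))

# Moderation test: regress trust on exposure, literacy, and their product.
X = np.column_stack([np.ones(n), exposure, literacy, exposure * literacy])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)
b0, b_exp, b_lit, b_int = beta

print(f"exposure effect: {b_exp:.2f}, interaction: {b_int:.2f}")
```

A negative exposure coefficient together with a positive interaction coefficient is the regression signature of the buffering effect the abstract reports: at higher literacy values, the slope of trust on exposure (b_exp + b_int * literacy) moves toward zero.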



Published

2025-10-30

How to Cite

Celine, B., & Jasmine, J. (2025). The Impact of Deepfake Exposure on Public Perception and Trust in Digital Media: The Moderating Role of Digital Literacy. Idea: Future Research, 3(3), 11–118. Retrieved from https://idea.ristek.or.id/index.php/idea/article/view/59