AI images vs the users’ visual perception: What AI should know about the viewer when generating images?

Authors

  • Weronika Kortas Uniwersytet Mikołaja Kopernika w Toruniu https://orcid.org/0000-0002-4276-7651
  • Veslava Osińska Uniwersytet Mikołaja Kopernika w Toruniu
  • Adam Szalach Akademia Kultury Społecznej i Medialnej w Toruniu

DOI:

https://doi.org/10.24917/20811861.23.23

Keywords:

AI images, GAN graphics, eye tracking, visual perception, ChatGPT

Abstract

Thesis/Objective of the Article: In recent years, the photorealism of images generated by artificial intelligence algorithms has improved significantly. This study assesses the realism of AI-generated images by comparing them with real photographs. The analysis focused on four categories of objects: human faces, cats, cars, and buildings.

Methodology: The study was experimental and used eye-tracking technology to analyze visual perception. Participants were asked to evaluate the authenticity of the presented images and to indicate the visual features they considered essential. Quantitative data were collected through fixation analysis, and statistical tests were applied to identify differences between selected groups. Additionally, participants completed questionnaires during and after the experiment.

Results: Human faces were the most challenging category to identify correctly, while cars and buildings were the easiest. The analysis of gaze patterns and timing characteristics revealed differences related to the participants’ gender and level of domain knowledge.

Conclusions: The pilot study indicates that participants with advanced knowledge of generative graphics focused on detecting artifacts, i.e. the typical errors produced by AI algorithms. The findings suggest that the perception of AI-generated images depends on the viewer’s experience and on the type of depicted object.
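
The abstract does not spell out the statistical procedure, so the following minimal Python sketch only illustrates the kind of group comparison it describes: testing whether a fixation metric differs between viewer groups. The Mann-Whitney U test, the group sizes, and the simulated fixation durations are assumptions chosen for illustration, not the study's data.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-participant mean fixation durations (ms) on one image;
# the values are simulated, not taken from the experiment.
rng = np.random.default_rng(seed=0)
novices = rng.normal(loc=260, scale=40, size=15)  # assumed novice group
experts = rng.normal(loc=220, scale=35, size=15)  # assumed expert group

# A non-parametric test is a common choice for small eye-tracking samples,
# where fixation metrics are rarely normally distributed.
stat, p = mannwhitneyu(novices, experts, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")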


References

AI or Not – AI Detection for Truth Seekers, [on-line:] https://www.aiornot.com – 14.11.2025.

Duchowski A.T., Eye-tracking methodology, Springer, Berlin 2017.

Fallis D., The epistemic threat of deepfakes, „Philosophy & Technology” 2021, vol. 34, pp. 623–643, https://doi.org/10.1007/s13347-020-00419-2.

Goodfellow I.J., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y., Generative Adversarial Networks, [in:] Advances in Neural Information Processing Systems, 2014, vol. 2, https://doi.org/10.48550/arXiv.1406.2661.

Holmqvist K., Nyström M., Andersson R., Dewhurst R., Jarodzka H., van de Weijer J., Eye-tracking: a comprehensive guide to methods and measures, Oxford University Press, Oxford 2015.

Jacques J., How to Identify AI-generated and Fake Images, 2023, https://jacquesjulien.com/identify-fake-images/ – 16.05.2025.

Kamali N., Nakamura K., Chatzimparmpas A., Hullman J., Groh M., How to Distinguish AI-Generated Images from Authentic Photographs, Northwestern University, 2024, https://doi.org/10.48550/arXiv.2406.08651.

Karras T., Aila T., Laine S., Lehtinen J., Progressive growing of GANs for improved quality, stability, and variation, [in:] International Conference on Learning Representations, 2018, [on-line:] https://www.researchgate.net/publication/320707565 – 18.11.2025.

Khanna N., Chiu G.T.C., Allebach J.P., Delp E.J., Forensic techniques for classifying scanner, computer generated and digital camera images, [in:] IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, 2008, https://doi.org/10.1109/ICASSP.2008.4517944.

Lalonde J.F., Efros A.A., Using color compatibility for assessing image realism, [in:] IEEE International Conference on Computer Vision, IEEE, 2007, https://doi.org/10.1109/ICCV.2007.4409107.

Lyu S., Farid H., How realistic is photorealistic?, „IEEE Transactions on Signal Processing” 2005, vol. 53, issue 2, pp. 845–850, https://doi.org/10.1109/TSP.2004.839896.

Moshel M.L., Robinson A.K., Carlson T.A., Grootswagers T., Are you for real? Decoding realistic AI-generated faces from neural activity, „Vision Research” 2022, vol. 199, https://doi.org/10.1016/j.visres.2022.108079.

OpenAI, ChatGPT (GPT-4 version) [large language model], 2025, [on-line:] https://chat.openai.com/ – 16.05.2025.

Park E., Kim K.J., del Pobil A.P., Facial Recognition Patterns of Children and Adults Looking at Robotic Faces, „International Journal of Advanced Robotic Systems” 2012, vol. 9, issue 1, https://doi.org/10.5772/47836.

Patel D.M., Artificial Intelligence & Generative AI for Beginners: The Complete Guide (Generative AI & Chat GPT Mastery Series), Independently published, 2023.

Smołucha D., Eye-tracking in Cultural Studies, „Perspektywy Kultury” 2019, vol. 27, no. 4, pp. 169–183, https://doi.org/10.35765/pk.2019.2704.12.

Tauscher J.P., Castillo S., Bosse S., Magnor M., EEG-Based Analysis of the Impact of Familiarity in the Perception of Deepfake Videos, [in:] IEEE International Conference on Image Processing (ICIP), IEEE, 2021, https://doi.org/10.1109/ICIP42928.2021.9506082.

Techoist, How to spot AI-generated images, 24.01.2013, [on-line:] https://www.youtube.com/watch?v=zqRcjbft3zg – 16.05.2025.

Tsagaris A., Pampoukkas A., Create Stunning AI Art Using Craiyon, DALL-E and Midjourney: A Guide to AI-Generated Art for Everyone Using Craiyon, DALL-E and Midjourney, Independently published, 2022.

Twardoch-Raś E., Co widzą sieci neuronowe? Strategie widzenia maszynowego w projektach artystycznych opartych na technikach rozpoznawania i analizy twarzy, „Kultura Współczesna” 2023, no. 1 (121), https://doi.org/10.26112/kw.2023.121.03.

Hees J. van, Grootswagers T., Quek G.L., Varlet M., Human perception of art in the age of artificial intelligence, „Frontiers in Psychology” 2025, vol. 15, https://doi.org/10.3389/fpsyg.2024.1497469.

Wang H.Y.Z., Wang X., Expertise differences in cognitive interpreting: A meta-analysis of eye-tracking studies across four decades, „Wiley Interdisciplinary Reviews: Cognitive Science” 2024, vol. 15, issue 1, https://doi.org/10.1002/wcs.1667.

Wang Y., Moulin P., On discrimination between photorealistic and photographic images, [in:] IEEE International Conference on Acoustics, Speech, and Signal Processing, IEEE, 2006, https://doi.org/10.1109/ICASSP.2006.1660304.

Wawer R., Wawer M., Wykorzystanie nowoczesnych technik komputerowych do pomiaru emocji na podstawie badania fotografii, „Annales Universitatis Paedagogicae Cracoviensis. Studia De Cultura” 2011, vol. 2, pp. 49–57, [on-line:] https://studiadecultura.uken.krakow.pl/article/view/1572 – 22.11.2025.

Xiang J., On generated artistic styles: Image generation experiments with GAN algorithms, „Visual Informatics” 2023, vol. 7, issue 4, pp. 36–40, https://doi.org/10.1016/j.visinf.2023.10.005.

Yarbus A.L., Eye Movements and Vision, Springer, Boston 1967.

Published

2026-01-10

How to Cite

Kortas, W., Osińska, V., & Szalach, A. (2026). AI images vs the users’ visual perception: What AI should know about the viewer when generating images? AUPC Studia Ad Bibliothecarum Scientiam Pertinentia, 23, 489–512. https://doi.org/10.24917/20811861.23.23

Section

Artykuły / Articles