
Trust, Attitude, and Use of Artificial Intelligence: 2024 Global Study

The University of Melbourne and KPMG jointly released a report, based on a global sample of 48,340 respondents across 47 countries and regions, providing an in-depth analysis of public, employee, and student trust in AI, adoption patterns, risk perceptions, and regulatory expectations.

Detail

Published: 22/12/2025

List of Key Chapter Titles

  1. Public Attitudes Towards AI
  2. Employee Attitudes Towards AI in the Workplace
  3. Student Attitudes Towards AI in Education
  4. Conclusions and Implications
  5. Research Methodology and Statistical Notes
  6. Sample Demographic Characteristics
  7. Key Indicators by Country
  8. Trends in Key Indicators Across 17 Countries
  9. Public Usage and Understanding of AI Systems
  10. Public Trust and Acceptance of AI Systems
  11. How the Public Perceives and Experiences the Benefits and Risks of AI
  12. Public Expectations for AI Regulation and Governance

Document Introduction

This study, jointly conducted by the University of Melbourne and KPMG, aims to provide an evidence-based understanding of global public trust, attitudes, usage, and governance expectations regarding artificial intelligence. The report is based on 48,340 valid questionnaires collected from 47 countries and regions between November 2024 and January 2025, covering all major geographical areas, with samples nationally representative by age, gender, and region. This is the fourth study in the series and, for the first time, provides a trend comparison with 2022 data (collected prior to the release of ChatGPT) from 17 countries, offering a unique perspective on how public attitudes have evolved since the proliferation of generative AI.

The report is organized into three core empirical sections and a conclusion. The first section, "Public Attitudes Towards AI," systematically examines adoption, understanding, trust, sentiment, perceptions of benefits and risks, and regulatory expectations at the societal level. Using structural equation modeling, the researchers identified four complementary pathways shaping AI trust and acceptance: the knowledge pathway (AI literacy and training), the motivation pathway (perceived benefits), the uncertainty pathway (perceived risks), and the institutional pathway (adequacy of regulation and confidence in institutions), with the institutional pathway exerting the strongest influence. The study found that despite high AI adoption rates (66% of the public regularly and intentionally uses AI), trust remains a major challenge: more than half (54%) are cautious about trusting AI. The public places greater trust in AI's technical capabilities than in its safety, security, and social impact, and emerging economies significantly outpace developed economies in AI adoption, trust, acceptance, and AI literacy.
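
To make the four-pathway structure concrete, the sketch below shows how such a model might be specified in lavaan-style syntax using the Python semopy library. The indicator names, the input file, and the exact path structure are illustrative assumptions for exposition only; the report does not publish its measurement model in this form.

```python
# Hypothetical sketch of the four-pathway trust model described above,
# specified with the semopy SEM library. All variable/column names and the
# input file are assumptions for illustration, not the report's actual data.
import pandas as pd
import semopy

MODEL_DESC = """
# Measurement model: four latent pathways (assumed indicators)
Knowledge   =~ ai_literacy + ai_training
Motivation  =~ perceived_benefit_work + perceived_benefit_society
Uncertainty =~ perceived_risk_safety + perceived_risk_misuse
Institution =~ regulation_adequacy + confidence_in_institutions

# Structural model: pathways -> trust -> acceptance
Trust      ~ Knowledge + Motivation + Uncertainty + Institution
Acceptance ~ Trust
"""

df = pd.read_csv("survey_responses.csv")  # hypothetical survey extract
model = semopy.Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # parameter estimates; compare pathway strengths
```

Comparing the estimated structural coefficients across the four pathways is how a claim such as "the institutional pathway has the strongest influence" would typically be supported.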

The second section, "Employee Attitudes Towards AI in the Workplace," delves into the integration of AI in work settings. The data show that the era of AI at work has arrived: 58% of employees regularly use AI in their jobs, with generative AI tools the most prevalent. While AI brings significant performance benefits (such as increased efficiency, innovation, and decision quality), employees also widely self-report misuse, complacent use, and non-transparent use of AI, which increases organizational risk. At the same time, employees report mixed effects of AI on workload, stress, interpersonal collaboration, and monitoring. The study found that organizational support and governance for responsible AI use lag behind the pace of adoption, particularly in developed economies.

The third section, "Student Attitudes Towards AI in Education," reveals the widespread use of AI in education and its impacts: 83% of students regularly use AI in their studies and report benefits such as increased efficiency and personalized learning. However, misuse, over-reliance, and complacent use are more prevalent among students and have mixed effects on critical thinking, collaboration, and assessment fairness. The research notes that educational institutions lag significantly in providing policy guidance and in supporting students' responsible use of AI.

The concluding section synthesizes the findings, noting that alongside rapid AI adoption, the public, employees, and students all exhibit marked ambivalence: they anticipate AI's benefits while also worrying about its risks and negative impacts. This tension highlights the "grand challenge" of responsible AI integration at the individual, organizational, societal, and international levels. The report offers concrete action pathways for policymakers, organizational leaders, educational institutions, and individuals, emphasizing that investing in AI literacy, establishing robust governance and regulatory frameworks, promoting international collaboration, and centering AI development and use on human well-being are key to achieving trustworthy and sustainable AI adoption.