The ‘Insights 2024: Attitudes toward AI’ report reveals: researchers and clinicians believe in AI’s potential but demand transparency and trust
Artificial intelligence (AI) is expected to transform research and healthcare, yet adoption of AI for work remains low, even with the popularity of platforms such as ChatGPT and Bard, according to a new study by Elsevier, a global leader in scientific information and data analytics. The Insights 2024: Attitudes toward AI report, based on a survey of 3,000 researchers and clinicians across 123 countries, reveals that both groups see AI having the greatest potential in accelerating knowledge discovery, increasing research output, and saving costs.
However, to maximize the use of AI, both groups have specific concerns that need to be addressed: they want assurances of quality content, trust, and transparency before integrating AI tools into their daily work. The majority of clinicians and researchers said they believe in AI’s potential to help them and their organizations in their work:
• Accelerating knowledge discovery: 94% of researchers and 96% of clinicians think AI will help accelerate knowledge discovery.
• Increasing research volume: 92% of researchers and 96% of clinicians think AI will help rapidly increase the volume of scholarly and medical research.
• Cost savings: 92% of researchers and clinicians foresee cost savings for institutions and businesses.
• Improving work quality: 87% think AI will help increase overall work quality.
• Higher productivity: 85% of both groups believe AI will free up time to focus on higher-value projects.
However, both respondent groups fear that the further rise in misinformation could impact critical decisions:
• Misinformation: 95% of researchers and 93% of clinicians believe AI will be used for misinformation.
• Critical errors: 86% of researchers and 85% of clinicians believe AI could cause critical errors or mishaps.
• Over-reliance in clinical decisions: 82% of doctors in India are concerned that physicians will become overly reliant on AI for clinical decisions.
• Societal disruptions: 79% of respondents are concerned that AI will cause societal disruptions, such as unemployment.
Building Trust in AI
Researchers and clinicians expect tools to be based on high-quality, trusted content and want transparency about the use of generative AI:
• Transparency is key: 81% of researchers and clinicians expect to be told whether the tools they are using depend on generative AI.
• High-quality sources: 71% expect the results of generative AI-dependent tools to be based solely on high-quality, trusted sources.
• Peer-review transparency: 78% of researchers and 80% of clinicians expect to be informed if the peer-review recommendations they receive about manuscripts use generative AI.
• Use with trusted content: If AI tools are backed by trusted content, quality controls, and responsible AI principles, 89% of researchers who believe AI can benefit their work would use it to generate a synthesis of articles, while 94% of clinicians who believe AI can benefit their work said they would use it to assess symptoms and identify conditions or diseases.
Kieran West, Executive Vice President of Strategy at Elsevier, said, “AI has the potential to transform many aspects of our lives, including research, innovation, and healthcare, all vital drivers of societal progress. As it becomes more integrated into our everyday lives and continues to advance at a rapid pace, its adoption is expected to rise. Researchers and clinicians worldwide are telling us they have an appetite for adoption to aid their profession and work, but not at the cost of ethics, transparency, and accuracy. They have indicated that high-quality verified information, responsible development, and transparency are paramount to building trust in AI tools over time and to alleviating concerns over misinformation and inaccuracy. This report has highlighted the steps that need to be taken to build confidence in, and usage of, the AI tools of today and tomorrow.”
AI adoption in India: high awareness, but building trust and transparency will be key
• Awareness: 96% of Indian respondents are aware of AI, including generative AI.
• Current usage: Only 22% of Indian respondents have used AI for work purposes.
• Future usage: 79% of respondents in India who have not yet used AI expect to use it within the next two to five years.
• Positive sentiments: 41% of Indian respondents feel positive about the future impact of AI on their work.
• Transformative impact: 72% of Indian respondents believe AI will have a transformative or significant impact on their work.
• Clinical applications: 94% of clinicians in India believe AI can bring significant benefits in clinical activities such as assessing symptoms and identifying conditions or diseases.
• Building trust: Transparency and quality of content are crucial. 81% of Indian researchers and clinicians expect to be informed if the tools they use rely on generative AI, and 71% expect results to be based on high-quality, trusted sources.
For more than two decades, Elsevier has been using AI and machine learning technologies in combination with our world-class peer-reviewed content and extensive data sets to create products that help the research, life sciences, and healthcare communities be more effective every day. We do so in line with Elsevier’s Responsible AI Principles and Privacy Principles and in collaboration with our communities to ensure our solutions help them achieve their goals. By incorporating generative AI in our offerings, we aim to make it easier and more intuitive for customers to find the information they can trust to accelerate scientific discovery, empower collaboration, and transform patient care.