A recent survey conducted by Oxford University Press, involving over 2,300 international researchers, reveals strong interest in artificial intelligence (AI) among academics, paired with widespread concerns. This ambivalent attitude toward AI spans all disciplines.
The findings underscore researchers' significant worries about potential risks associated with AI. While many participants expressed concerns about losing critical thinking skills, issues of intellectual property infringement, and distrust of AI providers, a substantial number still utilize AI tools in their research. The survey, which encompassed various fields—including humanities, STEM (science, technology, engineering, and mathematics), and social sciences—reflects participation from researchers at different stages of their careers.
In response to these findings, Oxford University Press has pledged to support academic researchers in harnessing AI technology to enhance their work, while collaborating closely with tech suppliers to establish clear principles that protect researchers' rights. Notably, 76% of respondents reported using AI tools in various research phases, with machine translation and chatbots being the most popular applications, followed by AI-driven search engines or research assistants. The primary uses of AI in academic research currently include discovering, editing, and summarizing existing findings.
When considering AI's potential impact on academia, more than 28% of respondents believe it could "revolutionize the conduct and dissemination of academic research." Among those who have adopted AI, 67% indicated that it has been beneficial to some extent, although its effectiveness has yet to be fully verified. The survey team noted a general apprehension about the broader implications of AI for academic research, with one-third of participants expressing concern about its possible negative effects on researchers' professional skills. This concern is evident across all fields.
Moreover, the report indicates that 25% of respondents feel that AI technologies may diminish the need for critical thinking, posing a potential threat to the development of essential skills in the research community. Responses to AI usage are far from uniform: only 8% of researchers trust AI companies to use their research data responsibly, and just 6% believe that these companies adhere to data privacy and security standards. Over 60% worry that AI use in research might infringe on intellectual property rights, depriving original authors of their rightful compensation.
Given these concerns, 69% of participants stress the importance of "thoroughly assessing the impact" of AI tools before incorporating them into research. However, only 10% have sought guidance on using AI in academic research. The report also notes that institutional responses to AI are inconsistent: nearly half of the respondents indicated that their institutions have not established AI-related policies, and over a quarter were unsure whether such policies exist. Currently, most researchers view academic associations as their primary source of guidance.
Interestingly, researchers from the Baby Boomer generation (born 1946-1964) and Generation X (born late 1960s to early 1980s) are more inclined than Millennials to fully embrace AI. Among early-career researchers, one-quarter remain skeptical of AI, whereas later-career researchers show greater acceptance of AI in their work.
The concerns voiced by researchers pose significant challenges for academic publishing institutions. David Clark, Director and General Manager of Academic Publishing at Oxford University Press, states, "We are committed to embracing the new opportunities presented by technological advancements while ensuring that rigorous, high-quality academic resources meet the needs of the academic community and receive the recognition and protection they deserve."
He further emphasizes that the survey will help the institution better understand how researchers view generative AI and its applications in research. As technology evolves rapidly, he notes, collaborating with researchers and the broader academic community to establish clear standards for AI development is paramount.
Clark adds, "This is a fast-evolving and complex field, but we believe that institutions like Oxford University Press can bridge the gap between researchers and technology providers, facilitating the responsible application of these tools in academic research." Additionally, Oxford University Press is actively working with companies developing large language models to explore responsible development and usage, aiming not only to enhance research quality but also to recognize the vital role researchers play in an AI-driven world.