British universities have been urged to reassess their assessment methods after new research revealed a significant rise in students using generative AI for their coursework. A survey of 1,000 undergraduates found that 88% had used AI tools such as ChatGPT for assessments in 2025, up from 53% the previous year. Overall, 92% of students now use some form of AI, marking a substantial shift in academic behaviour in the space of a single year.
The report, by the Higher Education Policy Institute and Kortext, highlights how AI is being used for tasks such as summarising articles, explaining concepts, and suggesting research ideas. While students report that AI saves time and improves the quality of their work, some admitted to including AI-generated text directly in their assignments, raising concerns about academic misconduct.
The research also found that attitudes towards AI and its potential impact on academic integrity vary across demographics. Women, wealthier students, and those studying STEM subjects were more likely to embrace AI, while others expressed fears about being accused of cheating or receiving biased results. Despite these concerns, students generally feel that universities are taking academic integrity seriously, with many believing their institution has a clear policy on AI use.
Experts argue that universities need to adapt quickly to the changing landscape, with some suggesting that AI should be integrated into teaching rather than treated solely as a threat to academic integrity. As AI tools become an everyday part of education, institutions must strike a balance between leveraging the technology and maintaining academic standards.