British universities that automate international student interviews to save time and resources are encountering a new challenge: deepfake applicants. According to Enroly, a software platform that streamlines the application process, a small number of applicants have attempted to use AI-generated deepfakes to manipulate video interviews.
The use of artificial intelligence to alter faces and voices raises concerns about fraud in university admissions. While cases remain limited, Enroly describes the trend as “the future of fraud” and warns of the increasing sophistication of deceptive technologies. Universities risk penalties if they fail to properly vet international students, making the detection of such fraud crucial.
AI-powered deception detected in online interviews
Enroly revealed that out of 20,000 interviews conducted for UK university applicants in January, 30 cases involved deepfake technology. While this represents only 0.15% of interviews, it highlights the growing use of AI in fraud attempts. The company also found that 1.3% of all interviews involved some form of deception, including impersonation and off-camera assistance.
According to Phoebe O’Donnell, Enroly’s head of services, deepfake fraud presents a serious challenge: “Fake faces layered over real ones, complete with expressions and movements. It’s like something out of a spy film. And yes, they’re incredibly hard to detect.”
Automated interview systems are increasingly used by universities to assess applicants before issuing a Confirmation of Acceptance for Studies (CAS), a document required for student visa applications. These systems allow students to record answers to randomised questions, which are later reviewed by admissions staff. If responses appear suspicious, applicants may be flagged for additional checks, including live interviews.
Universities and technology firms respond to evolving fraud risks
As the use of deepfake technology grows, UK universities are implementing advanced detection methods. Enroly states that its fraud prevention system includes facial recognition, passport matching, and real-time monitoring to identify anomalies.
According to the company, while deepfake fraud remains rare, universities must remain vigilant to avoid breaching Home Office regulations. Institutions risk losing their student sponsorship licence if over 10% of sponsored applicants are refused visas.
Deepfake-related fraud is not limited to academia. In 2024, WPP’s chief executive was targeted in a sophisticated AI-generated scam involving a cloned voice and deepfake video during a conference call. This highlights a broader trend of AI-driven fraud extending beyond university admissions.
O’Donnell emphasised that Enroly is working with universities and the wider education sector to stay ahead of fraudsters: “It’s a small but growing trend, and we’re determined to stay ahead of it.”