According to the Guardian, more than 200,000 people have been wrongfully investigated for housing benefit fraud and error after a government algorithm failed to operate as expected.
Two-thirds of claims flagged as potentially high risk by a Department for Work and Pensions (DWP) automated system over the previous three years were in fact legitimate, according to official figures released under freedom of information legislation.
DWP Algorithm Leads to Unwarranted Scrutiny of UK Housing Benefit Claims
Every month, hundreds of UK households have their housing benefit claims unduly scrutinised because the algorithm wrongly flags them as high risk.
It also means that around £4.4 million has been spent by local authorities on checks that saved no money.
The data was initially obtained by the civil liberties group Big Brother Watch, which accused the DWP of an overreliance on new technologies at the expense of claimants' rights.
The DWP said it would be unable to respond during the pre-election period. Labour, which could be running the system in less than two weeks, has been approached for comment.
Controversy Surrounds DWP’s Use of Automation in Benefits System
Last year, an inquiry by the Information Commissioner into algorithms and similar technologies employed by a sample of 11 local authorities revealed: “We have not found any evidence to suggest that claimants are subjected to any harms or financial detriment as a result of the use of algorithms or similar technologies in the welfare and social care sector.”
Turn2us, a charity that helps people on benefits, said the results proved it was time for the government to “work closely with actual users so that automation works for people rather than against them”.
To assess the risk that a claim is incorrect or fraudulent, the system considers claimants’ personal information such as age, gender, number of children, and type of tenancy agreement.
Once the automated system flags a housing benefit claim as potentially fraudulent or erroneous, council officials must analyse and verify the claim, which includes gathering evidence from claimants over the phone or online. They must identify changes in circumstances and may revise claimants’ housing benefit awards.
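The DWP has not published how its model works, so the workflow described above can only be illustrated with an invented example. The sketch below shows a generic rule-based flag-and-review scorer of the kind such a tool could resemble; every field name, weight, and threshold here is hypothetical and not drawn from the DWP system.

```python
# Hypothetical sketch only: the DWP has not disclosed its model, so the
# fields, weights and threshold below are invented for illustration.
from dataclasses import dataclass


@dataclass
class Claim:
    age: int
    gender: str
    num_children: int
    tenancy_type: str  # e.g. "private" or "social"


def risk_score(claim: Claim) -> float:
    """Toy additive score; a real system would be calibrated on data."""
    score = 0.0
    if claim.tenancy_type == "private":
        score += 0.3
    if claim.num_children >= 3:
        score += 0.2
    if claim.age < 25:
        score += 0.2
    return score


def flag_for_review(claim: Claim, threshold: float = 0.4) -> bool:
    # Flagged claims are routed to a council officer for manual
    # verification, not automatically suspended.
    return risk_score(claim) >= threshold


example = Claim(age=22, gender="F", num_children=3, tenancy_type="private")
print(flag_for_review(example))  # True: score 0.7 exceeds the 0.4 threshold
```

The point of the sketch is structural: a scorer like this only prioritises claims for human checks, which is why the accuracy of the flagging step, rather than any automated decision, determines how many legitimate claimants end up investigated.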
The DWP opted to launch the automated tool, which uses neither artificial intelligence nor machine learning, after a pilot found that 64% of cases flagged as high risk by the model were receiving the wrong benefit entitlement.
However, subsequent reviews of flagged claims found significantly less fraud and error: only 37% of flagged cases were incorrect in 2020-21, 34% in 2021-22, and 37% in 2022-23, roughly half the rate found in the pilot.
Nonetheless, the approach still saved the taxpayer money: according to DWP estimates for 2021-22, each pound spent on detailed examinations of flagged claims generated £2.71 in savings.
Last year, the DWP expanded its use of artificial intelligence to detect fraud and error in the universal credit system, which cost £6.5 billion in the previous fiscal year, despite warnings of algorithmic bias against disadvantaged applicants. It has been chastised for not being transparent about how it uses machine learning tools.
In January, it was revealed that the DWP had stopped routinely suspending benefit claims flagged by its AI-powered fraud detector. The move was made in response to criticism from claimants and elected officials.
Susannah Copson, a legal and policy officer at Big Brother Watch, stated: “This is yet another example of DWP focusing on the prospect of algorithm-led fraud detection that seriously underperforms in practice. In reality, DWP’s overreliance on new technologies puts the rights of people who are often already disadvantaged, marginalised and vulnerable in the backseat.”
She warned of “a real danger that DWP repeats this pattern of bold claims and poor performance with future data-grabbing tools”.
“It was only recently that the government tried – and failed – to push through intrusive measures to force banks to conduct mass algorithmic monitoring of all customer accounts under the premise of tackling social security fraud and error. Although the powers failed to make it through legislative wash-up, concerns for DWP’s relentless pursuit of privacy-invading tech remain.”
D4S DigiStaff sells one version of the technology to local governments through the government’s digital marketplace website.
The document tells councils: “Our innovative HBAA intelligent automation solution will allow you to process all of your reviews with minimal impact on your staff.”
It lists benefits such as freeing up personnel to focus on higher-value jobs and saving councils money on DWP subsidies.