Not A Numbers Game

Dec 15, 2023

Bank of Ireland UK reprimanded by the ICO for data errors on >3k customer accounts

https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/12/bank-of-ireland-uk-reprimanded-for-inaccurate-data-on-customers-accounts/

We’ll never know how many people were declared ineligible for financial products they’d applied for as a result of these data errors, or the knock-on impact on their welfare and circumstances that followed, because it’s just too much work to figure it all out. It would be nuts to assume that no-one has suffered any hardship though - especially when we’re in a cost of living crisis, the housing bubble is ridiculous, and bills and food bank use are trending upwards.

This case illustrates the fallacy of thinking about #DataHarms in terms of ‘risk’. 

Because risk is about probability - but when it is ‘impossible’ (read: ‘too expensive and inconvenient’) to trace adverse impacts on individuals across an insanely complex network of systemic effects, it becomes (actually) impossible to reliably calculate the frequency or severity of impact.

Risk-focused thinking permits some people to be harmed, as long as the harms that are done to them remain under the radar. DP law doesn’t allow data subjects to be written off on a numbers basis. It requires protection of all data subjects’ rights and freedoms, all the time.

That’s why I advocate thinking about data protection safeguards in terms of hazards, not ‘risks’. 

A hazard is something that is inherently dangerous unless safeguards are placed around it - safeguards that alert the people engaging with it to the danger and facilitate safe, careful use.

Predictive profiling is a data hazard. It’s not ‘safe unless x happens’, it’s ‘unsafe unless x, y and z are in place, diligently & consistently adhered to, incentivised and monitored for efficacy’.

Data protection safeguards are not ‘nice to have’ optional extras that can be slotted into place when risk numbers reach the red zone. They have to be the built-in default for all data subjects, all the time - otherwise harms are inevitable.

Framing data harms in terms of risk is itself a data hazard (and demonstrably ineffective).

It’s not clear how or why these errors occurred. If I had to guess, I’d say it was probably an effect of people assuming they were working with safe tools, doing safe operations in safe conditions, and neglecting to pay due care and attention as a result. But that’s just speculation.
