The lighter your skin, the better AI-powered facial recognition systems work for you. The UK Home Office knows this, because the government's been apprised of the problem several times. And a recent report shows that it knew it was deploying a passport service built on biased, racist AI. It just doesn't care.

The UK's passport service went live in 2016. It uses an AI-powered facial recognition feature to determine whether user-uploaded photos meet the requirements and standards for use as a passport photo. The system rejects photos that miss the mark.

In the time since its launch, many black users have reported numerous issues using the system that white people don't appear to have, including the system's failure to recognize that their eyes are open or their mouths are closed.

Users can override the AI's rejection and submit their images anyway, but they're also warned that their application could be delayed or denied if there's a problem with the photo. While white users can rely on the AI to make sure they don't suffer these issues, others have to hope for the best.

This is the very definition of privilege-based racism. It's a government-sponsored virtual priority lane for white people. And, according to a freedom of information act request by advocacy organization medConfidential, the Home Office was well aware of this before the system was ever deployed.

Per a report from New Scientist writer Adam Vaughn, the Home Office responded to the findings by stating it was aware of the problem, but felt it was acceptable to use the system anyway:

User research was carried out with a wide range of ethnic groups and did identify that people with very light or very dark skin found it difficult to provide an acceptable passport photograph. However; the overall performance was judged sufficient to deploy.

AI is incredibly good at being racist because racism is systemic: small, difficult-to-see groupings of seemingly disparate data combine to create a racist system. Given nearly any problem that can be solved for the benefit of white people, or to a detriment that excludes white people, AI is going to reflect the exact same bias inherent in the data it's fed.

This may not always be the case, but in 2019 it holds as true as basic arithmetic. Google hasn't figured it out yet, despite exploiting homeless black people in an attempt to build a database for study. Amazon hasn't figured it out, despite selling law enforcement agencies around the US its biased Rekognition software. And you can be certain that the UK's government hasn't figured it out yet either.

What the UK's government has figured out, however, is how to exploit AI's inherent bias to ensure that white people receive special privileges. The UK's letting the entire world know what its priorities are.