Facial recognition technology has run amok across the globe. In the US it continues to spread at an alarming rate despite bipartisan push-back from politicians and several local bans. Even China’s government has begun to question whether there’s enough benefit to the use of ubiquitous surveillance tech to justify the utter abolition of public privacy.

The truth of the matter is that facial recognition technology serves only two general purposes: access control and surveillance. And, far too often, the people developing the technology aren’t the ones who ultimately determine how it’s used.

Most decent, law-abiding citizens don’t mind being filmed in public and, to a certain degree, would tend to take no exception to the use of facial recognition technology in places where it makes sense.

For example, using FaceID to unlock your iPhone makes sense. It doesn’t use a massive database of photos to determine the identity of an individual; it just limits access to the person it’s previously identified as being the authorized user.
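To make that distinction concrete, here is a minimal Python sketch, not Apple’s actual implementation, contrasting on-device 1:1 verification (matching against one enrolled user) with the 1:N identification (searching a whole database) that surveillance systems depend on. The embedding vectors, similarity measure, and threshold are illustrative assumptions.

```python
# Minimal sketch (not Apple's FaceID): 1:1 verification vs 1:N identification.
# Embeddings, similarity measure, and the 0.8 threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """1:1 verification: is this the single person already enrolled on the device?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    """1:N identification: search an entire database for whoever this might be."""
    best_id, best_score = None, threshold
    for person_id, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None if nobody in the database matches
```

The difference that matters is the data each approach needs: verification only ever stores the owner’s own template, while identification presupposes a database of everyone it might encounter.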


Facial recognition in schools also makes sense. Campuses should be closed to anyone who isn’t authorized, and guests should be flagged upon entry. This use of facial recognition – at entry and exit points only – relies on people’s up-front consent to having their images added to a database.

However, when facial recognition is used in public thoroughfares such as airports, libraries, hospitals, and city streets, it becomes a surveillance tool – one often disguised as an access control mechanism or a ‘crime prevention’ technique.

In airports, for example, facial recognition is often peddled as a means to replace boarding passes. CNN’s Francesca Street pointed out last year that some airlines were implementing facial recognition systems without customers’ knowledge.

Airports and other publicly-trafficked areas often implement systems from companies that claim their AI can stop, prevent, detect, or predict crimes.

There’s no such thing as an AI that can predict crime. Hundreds of venture capitalists and AI-startup CEOs out there may beg to differ, but the simple fact of the matter is that no human or machine can see into the future (exception: wacky quantum computers).

AI can sometimes detect objects with a fair degree of accuracy – some systems can determine if a person has a cell phone or firearm in their pockets. It can potentially prevent a crime from occurring by limiting access, such as locking doors if a firearm is detected until a human can determine whether the threat is real or not.
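As a rough illustration of that kind of limited, access-control use, here is a hedged Python sketch; the detector, the 0.9 confidence threshold, and the door-lock and security-notification interfaces are hypothetical stand-ins, not any real vendor’s API.

```python
# Hypothetical sketch of detection-gated access control with a human in the loop.
# `detect_objects`, `DoorLock`, and `notify_security` are made-up placeholder names.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

class DoorLock:
    def lock(self) -> None:
        print("door locked pending review")

    def unlock(self) -> None:
        print("door unlocked")

def detect_objects(frame) -> list[Detection]:
    # Stand-in for an object detector running on a camera frame.
    return [Detection("cell_phone", 0.97)]

def notify_security(detections: list[Detection]) -> None:
    print(f"flagged for human review: {detections}")

def gate_entry(frame, lock: DoorLock, threshold: float = 0.9) -> None:
    flagged = [d for d in detect_objects(frame)
               if d.label == "firearm" and d.confidence >= threshold]
    if flagged:
        # The system only limits access; a human still decides if the threat is real.
        lock.lock()
        notify_security(flagged)
    else:
        lock.unlock()

gate_entry(frame=None, lock=DoorLock())  # the stub detector finds no firearm, so the door unlocks
```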

But AI purported to predict crimes is simply a surveillance system built on prestidigitation. When law enforcement agencies claim they use crime-prediction software, what they really mean is that they have a computer telling them that places where lots of people have already been arrested are great places to arrest more people. AI relies on the data it’s given to make guesses that will please its developers.
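A toy simulation makes that feedback loop plain; the districts, arrest counts, and patrol model below are entirely made up, but they show how a system trained only on past arrests keeps sending patrols back to the same places.

```python
# Toy illustration of the predictive-policing feedback loop.
# The "prediction" is just the district with the most recorded arrests, and
# patrolling it produces more arrests that feed the next round. Data is invented.
from collections import Counter
import random

random.seed(0)

arrest_history = Counter({"district_a": 50, "district_b": 5, "district_c": 5})

def predict_hotspots(history: Counter, k: int = 1) -> list[str]:
    """The 'prediction' is just the districts with the most recorded arrests."""
    return [district for district, _ in history.most_common(k)]

def patrol(district: str) -> int:
    """More patrols in a district tend to produce more recorded arrests there."""
    return random.randint(1, 3)

for _ in range(10):
    for district in predict_hotspots(arrest_history):
        arrest_history[district] += patrol(district)

print(arrest_history.most_common())  # district_a keeps pulling further ahead
```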

When airports and other public thoroughfares employ facial recognition, those responsible for deploying it almost always claim it will save time and lives. They tell us the system can scan crowds for terrorists, people with ill intent, and criminals at large. We’re led to believe that thousands of firearms, bombs, and other kinds of threats will be thwarted if we use their technology.

But what real benefit is there? We’re operating under the assumption that every second could be our last, that we’re in danger every time we enter a public space. We’re seemingly faced with the life-and-death choice to either have privacy or live through the experience of exposing ourselves to the general public.

Reason and common statistics would tell us this can’t possibly be the case. In fact, you’re more likely to die of disease, a car accident, or a drug overdose than you are to be murdered by a stranger or killed by a terrorist.

It would seem that the technology’s measurable success – one company says it found about 5,000 threats while scanning more than 50 million people – doesn’t outweigh the potential risks. We have no way of knowing what the actual consequences of those 5,000 threats would have been, but we do know exactly what can happen when government surveillance technology is misused.

TNW’s CEO, Boris Veldhuijzen Van Zanten, had this to say about our privacy in a post he wrote about people who think they have nothing to hide:

Before WWII, the city of Amsterdam figured it was nice to keep records of as much information as possible. They figured: the more you know about your citizens, the better you’ll be able to help them, and the citizens agreed. Then the Nazis came in looking for Jewish people, gay people, and anyone they didn’t like, and said ‘Hey, that’s convenient, you have records on everything!’ They used these records to very easily pick up and kill a lot of people.

Today, the idea of the government tracking us all through the use of facial recognition software doesn’t seem all that scary. If we’re good people, we have nothing to worry about. But what if bad actors or the government don’t think we’re good people? What if we’re LGBTQIA+ in a state or country where the government is allowed to discriminate against us?

What if our government, police, or political rivals create databases of known gays, Muslims, Jews, Christians, Republicans who support the 2nd Amendment, doctors willing to perform abortions, or “Antifa” or “Alt-right” activists, and use AI to identify, discriminate against, and track people they deem their enemy? History tells us that these things aren’t just possible; so far they’ve been inevitable.
