When you perform a Google search for everyday queries, you hardly expect systemic racism to rear its ugly head. Yet if you’re a woman searching for a hairstyle, that’s exactly what you might find.

A simple Google image search for ‘women’s professional hairstyles’ returns the following:

[Image: Google image search results for ‘women’s professional hairstyles’]

Here, you’ll find hairstyles, mostly done in a professional setting by stylists.

It’s the nature of Google. It returns what it thinks you’re looking for based on contextual clues, citations, and link data. In general, and without added context, you could probably pat Google on the back and say ‘job well done.’

That is, until you try searching for ‘unprofessional women’s hairstyles’ and find this:

[Image: Google image search results for ‘unprofessional women’s hairstyles’]

In it, you’ll find a hodge-podge of hairstyles sported by black women, all of which seem, well, rather normal. On a personal note, I can’t see anything unprofessional about any of these, yet the fact they appeared when I typed in that query proves not everybody sees it that way.

Again, this is the nature of the beast. These images appear because of the context in which they’re talked about. In this case, you’ll see a barrage of tweets complaining about being told by bosses, colleagues, or others that their hair was unacceptable for the workplace.

It’s not new. In fact, Boing Boing spotted this back in April 2016.

What’s concerning, though, is just how much of our lives we’re on the verge of handing over to artificial intelligence. With today’s deep learning algorithms, the ‘training’ of this AI is often as much a product of our collective hive mind as it is programming. Artificial intelligence, in fact, is using our collective thoughts to train the next generation of automation technologies. All the while, it’s picking up our biases and making them more visible than ever.
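To make that mechanism concrete, here’s a minimal sketch in Python using scikit-learn and entirely made-up data (this is not Google’s or anyone’s actual system): a classifier trained on labels that encode human prejudice will reproduce that prejudice faithfully.

```python
# Toy sketch: a model trained on biased labels reproduces the bias.
# All data below is hypothetical, invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# The labels reflect human prejudice, not any ground truth.
texts = [
    "sleek straight hairstyle for the office",      # labeled professional
    "classic straight hairstyle updo for a meeting",# labeled professional
    "natural afro hairstyle",                       # labeled unprofessional
    "natural locs hairstyle",                       # labeled unprofessional
]
labels = [1, 1, 0, 0]  # 1 = 'professional', 0 = 'unprofessional'

vectorizer = CountVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

# The model never learns what 'professional' means; it only memorizes
# which words co-occurred with the biased label in its training data.
print(clf.predict(vectorizer.transform(["straight hairstyle"])))  # likely [1]
print(clf.predict(vectorizer.transform(["natural hairstyle"])))   # likely [0]
```

The model isn’t malicious; it’s a mirror. Feed it our collective judgments and it hands them back, at scale.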

As Donald Trump spouts off at the mouth in racist, sexist, xenophobic rants about how to make the country great again, his language is being used to train Twitter bots that share his views.

Microsoft’s Tay went from an innocent, albeit scatterbrained, teen to a Hitler-quoting Nazi fembot in a matter of hours.

An AI-judged beauty contest decided that people of color aren’t quite as pretty as those with light skin.

This is just the beginning, and while offensive, the AI mentioned above is mostly harmless. If you want the scary stuff, we’re expanding algorithmic policing that relies on many of the same principles used to train the previous examples. In the future, our neighborhoods will see an increase or decrease in police presence based on data that we already know is biased.

Wrap your head around that for a second.

We’re still very much in the infancy of what artificial intelligence is capable of. In five years, 10 years, 25 years, you can only imagine how much of our lives will be dictated by algorithms.
