The case for an AI that puts nature and ethics first, not humans

On July 20, 1969, the first human landed on the moon. Fifty years later, we are in dire need of another “moonshot” to tackle some of the acute and overwhelmingly big issues of our time — from the climate crisis to the decline of capitalism to the upheavals to our labor markets and societies caused by the rise of exponential digital technology — notably Artificial Intelligence (AI).

For the past decade, we put our faith in technology as the ultimate problem-solver, and any kind of innovation was tied to technological advances. But as Silicon Valley has lost some of its halo, and arguably, legitimacy, we have come to realize that the most critical factor in enabling a humane future is us humans, and specifically how we relate to one another and the planet we inhabit. The real moonshot of our time is ecological, social, and emotional innovation.

But make no mistake: AI is here, and it is going to change everything. But are these changes for the better? And with AI having such a big impact on the way we work, live, play, and even love, are we thinking big enough? How can AI be our companion in our quest to secure not just our future, but our humanity?

“The business models of the next 10,000 startups are easy to forecast: Take X and add AI,” Wired founder Kevin Kelly proclaimed in 2016. That may have proven true, but at the same time it is easy to see that most of the advanced AI applications — from pattern analysis based on massive amounts of data, to reinforcement learning in the style of DeepMind’s AlphaGo, to generative adversarial networks performing creative tasks — have been designed and deployed primarily to enhance efficiency (for the enterprise) and/or convenience (for the consumer).

While those are valued benefits, the concern is growing that we are surrendering to a paradigm of “forced reductionism” (to borrow a term from former MIT Media Lab director Joi Ito), shoehorning ourselves into a purely mechanistic, rational model of technology. As AI becomes ever more capable and pervasive, it may accordingly change our world to align with these very design principles. The result might be a world full of “monochrome societies,” as Infineon CEO Dr. Reinhard Ploss puts it.

There are other worries: malicious actors, unconscious and conscious bias informing algorithms and fomenting a new digital divide, manipulation and even oppression, the threat of a surveillance society, humans turning into super-optimized machines, and not least a super-intelligence soon potentially dominating humans or eventually rendering us obsolete.

Finally, there is a more fundamental, philosophical problem that cuts to the heart of the matter: today’s AI is based on a binary system, in the tradition of Aristotle, Descartes, and Leibniz. AI researcher Twain Liu argues that “Binary reduces everything to meaningless 0s and 1s, when life and intelligence operates XY in tandem. It makes it more convenient, efficient, and cost-effective for machines to read and process quantitative data, but it does this at the expense of the nuances, richness, context, dimensions, and dynamics in our languages, cultures, values, and experiences.”

We could take some cues from nature, which is anything but binary. Quantum research, for example, has shown that particles can have complex superposition states where they’re both 0 and 1 at once — just like the Chinese concept of YinYang, which emphasizes the complementary dynamics of male and female in the cosmos and in us. Liu writes: “Nature doesn’t sort itself into binaries — not even with pigeons. So why do we do it in computing?”

There is another reason we should study nature when it comes to the future of AI: nature is superseding digital programming, as the tech historian George Dyson argues. He points out that there is no longer any mathematical model capable of grasping the magnificent chaos evident in Facebook’s dynamic graph. Facebook is a machine no other machine can comprehend, let alone human intelligence. He writes: “The successful social network is no longer a model of the social graph, it is the social graph.” And further: “What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought.”

He concludes: “Nature relies on analog coding and analog computing for intelligence and control. No programming, no code. To those seeking true intelligence, autonomy, and control among machines, the domain of analog computing, not digital computing, is the place to look.”

This indicates that any more mature vision of AI must go beyond three current conceptual limitations: it must shift from binary to intersectional, from efficiency to effectiveness, and from the exploitation of nature to embeddedness in nature.

While concepts of ethical, explainable, or responsible AI are laudable, they are not enough, for they are all still stuck within the confines of us wanting to tame a rational AI. We must stop treating AI as the great problem-solver and overcome our engineering mindset. Rather, we ought to think of AI more holistically, not just with regard to its purpose and outcomes, but the way it operates.

Drawing from the humanities and the arts, and steeped in our tradition of dignity and critical thinking, AI must be ethical — not just in the sense of enforced compliance, but in the sense of true caring. It must honor the truth, which means it must sometimes be content with solutions that are not the most impactful, fastest, or cost-efficient.

If we reduce AI to being the great optimizer, it will optimize us to death. To tie AI to human dignity, we must treat it with dignity ourselves. To ensure we do not end up with a “monochrome society” of soulless machines, we must instill soul into AI.

This, however, implies that we move beyond the type of anthropocentrism lurking behind common-denominator terms such as “human-centered AI,” which is borrowed from the world of design and now echoed by institutions such as the eponymous Stanford Institute for Human-Centered Artificial Intelligence, or “humane technology,” a term coined by the Center for Humane Technology. Even the focus on “human wellbeing” espoused by the meticulous IEEE (the global professional organization of engineers) in its ethical AI standards appears to fall short of addressing the most stubborn cognitive bias underlying all of our efforts around AI — we are, for what it’s worth and entirely understandably, biased toward humans.

Yet in a time of impending ecological disaster caused by our careless, selfish, and even willfully ignorant exploitation of global resources, it is becoming more and more evident that the most existential threat, not just to our own wellbeing but to that of the world around us (of which we are a small and fleeting part, in the grand scheme of things), is us. “Human-centered” AI focused on promoting human wellbeing and flourishing can therefore no longer be an acceptable goal. An ecologically conscious and ethical AI must transcend the anthropocentrism shaped by secular and neoliberal thinking.

An artist’s representation of ancestors joining a group in conversation at the AI workshop in Hawai‘i. | Image by Sergio Garzon. Courtesy of the Initiative for Indigenous Futures.

One possible alternative approach can be found in non-Western cultures. Japan’s animist Shinto culture, for example, holds that both animate and inanimate things have a spirit: from the dead to every animal, every flower, every speck of dust, every machine. After a century of worshipping human ingenuity and technology in increasingly secularized modern societies, animism invites us to return to a more spiritual world view.

Like animism, Indigenous communities commonly assume all things are interrelated. “Indigenous epistemologies do not take abstraction or generalization as a natural good or higher order of intellectual engagement,” the Indigenous scholars Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite write in an article for MIT. Indigenous cultures offer rituals and protocols to honor and relate to “our non-human kin,” for “man is neither height nor center of creation.” The authors propose that “we, as a species, figure out how to treat these new non-human kin respectfully and reciprocally — and not as mere tools, or worse, slaves to their creators.”

This includes AI, which they ask us to accept into our “circle of kinship.”

Such Indigenous AI honors complexity over singularity, a non-linear over a linear conception of time (and progress), interiority over embodied knowledge, relationships over transactions, and a quality of life defined as the health of people and land — of all animate and inanimate things.

Only this new kind of AI can overcome the dualism that has led to the exploitation of resources and a cynical winner-takes-all mentality. It enables us humans to foster innovation across different generations, cultures, and socio-economic strata, not just within our like-minded tribes. It allows us to collectively tackle the really big problems of our time, such as the climate crisis or the growing rift in our societies and the need to relate to the “other,” including our non-human kin.

There is a word for this kind of AI: beautiful.

Beautiful implies what is essentially human and at the same time greater than us: aesthetics, ethics, and the interconnected body we inhabit. It describes a sensitive relationship to the world, one of harmony and attunement. It also means bio- and neuro-diversity: the conception of our relationships, organizations, and our work as gardens, not machines, as a broad spectrum of ethnic, cultural, cognitive, and emotional identities that are fluid and not necessarily consistent.

Beautiful is what concerns us, what touches us and yet transcends us. Beauty is the end, not just the means. Beauty is quality. Beauty is the quality.

Published March 7, 2020 — 17:00 UTC
