Five Artificial Intelligence Insiders in Their Own Words

But now the A.I. engineers are designing machines they say will think, sense, feel, cogitate, and reflect, and even have a sense of self. Bioengineers contend that bacteria, plants, animals, and even humans can be radically remade and modified. This means that the traditional distinctions between man and machine, as well as between humans and nature — distinctions that have underpinned Western philosophy, religion, and even political institutions — no longer hold. In sum, A.I. and gene editing promise (or is it threaten?) to redefine what counts as human and what it means to be human, philosophically as well as poetically and politically.

The questions posed by these experiments are the most profound possible. Will we use these technologies to better ourselves or to divide or even destroy humanity? These technologies should allow us to live longer and healthier lives, but will we deploy them in ways that also allow us to live more harmoniously with each other? Will these technologies encourage the play of our better angels or exacerbate our all-too-human tendencies toward greed, jealousy, and social hierarchy? Who should be included in conversations about how these technologies will be developed? Who will have decision rights over how these technologies are distributed and deployed? Just a few people? Just a few countries?

To address these questions, the Berggruen Institute is building transnational networks of philosophers, technologists, policy-makers, and artists who are thinking about how A.I. and gene editing are transfiguring what it means to be human. We seek to develop tools for navigating the most fundamental questions: not just about what sort of world we can build, but what sort of world we should build, and what sort we should avoid building. If A.I. and biotechnology deliver even half of what the visionaries believe is in store, then we can no longer defer the question of what sort of human beings we want to be, both as individuals and as a collective.

Credit: L. Kasimu Harris/Open Society Foundations

STEPHANIE DINKINS

Artist & associate professor of art, Stony Brook University; fellow, Data & Society Research Institute; Soros Equality Fellow; 2018 Resident Artist, Eyebeam

My journey into the world of artificial intelligence began when I befriended Bina48 — an advanced social robot that is black and female, like me. The videotaped results of our meetings form an ongoing project called “Conversations with Bina48.” Our interactions raised many questions about the algorithmically negotiated world now being constructed. They also pushed my art practice into focused thought and advocacy around A.I. as it relates to black people — and other non-dominant cultures — in a world already governed by systems that often offer us both too little and overly focused attention.

Because A.I. is no single thing, it’s difficult to speak to its overarching promise, but questions abound. What happens when an insular subset of society encodes governing systems intended for use by the majority of the planet? What happens when those writing the rules — in this case, we will call it code — might not know, care about, or deliberately consider the needs, desires, or traditions of the people their work impacts? What happens if the code-making decisions are disproportionately informed by biased data, systemic injustice, and misdeeds committed to preserve wealth “for the good of the people”?
