Adrienne LaFrance’s Atlantic article “What Is a Robot?” is one of my favorite pieces thus far in 2016. As the title suggests, the writer tries to define what qualities earn a machine the name “robot,” a term perhaps not as slippery as “existential” but one that’s nebulous nonetheless. The piece does much more, presenting a survey of robotics from ancient to contemporary times and asking many questions about where the sector’s current boom may be leading us.
Two points about the article:
- It quotes numerous roboticists and analysts of the field who hold that a robot must be embodied, encased in a physical form. I think this is a dangerous position. A robot to me is anything that is given instructions and then completes a task. It’s increasingly coming to mean anything that can receive those basic instructions and then grow and learn on its own, not requiring more input. I don’t think it matters if that machine has an anthropomorphic body like C-3PO or if it’s completely invisible. If we spend too much time counting fingers and toes, we may miss the bigger picture.
- Early on, there’s discussion of the master-slave relationship humans now enjoy with their machines, one that will only deepen in the near term and may eventually be flipped. The following paragraph speaks to this dynamic: “In the philosopher Georg Wilhelm Friedrich Hegel’s 1807 opus, The Phenomenology of Spirit, there is a passage known as the master-slave dialectic. In it, Hegel argues, among other things, that holding a slave ultimately dehumanizes the master. And though he could not have known it at the time, Hegel was describing our world, too, and aspects of the human relationship with robots.” I believe this statement will be true should machines gain consciousness, but it remains a little hyperbolic as long as they lack it. Holding sway over Weak AI that does our bidding certainly changes what it means to be human and will present dicey ethical questions, but they are very different ones than those provoked by actual slavery. Further, an altered human mission doesn’t necessarily mean we’re being degraded.
From LaFrance:
Making robots appear innocuous is a way of reinforcing the sense that humans are in control—but, as Richards and Smart explain, it’s also a path toward losing it. Which is why so many roboticists say it’s ultimately not important to focus on what a robot is. (Nevertheless, Richards and Smart propose a useful definition: “A robot is a constructed system that displays both physical and mental agency, but is not alive in the biological sense.”)
“I don’t think it really matters if you get the words right,” said Andrew Moore, the dean of the School of Computer Science at Carnegie Mellon. “To me, the most important distinction is whether a technology is designed primarily to be autonomous. To really take care of itself without much guidance from anybody else… The second question—of whether this thing, whatever it is, happens to have legs or eyes or a body—is less important.”
What matters, in other words, is who is in control—and how well humans understand that autonomy occurs along a gradient. Increasingly, people are turning over everyday tasks to machines without necessarily realizing it. “People who are between 20 and 35, basically they’re surrounded by a soup of algorithms telling them everything from where to get Korean barbecue to who to date,” Markoff told me. “That’s a very subtle form of shifting control. It’s sort of soft fascism in a way, all watched over by these machines of loving grace. Why should we trust them to work in our interest? Are they working in our interest? No one thinks about that.”
“A society-wide discussion about autonomy is essential,” he added.•