Considering Artificial Humans February 2014
Many science-fiction stories describe android robots so convincing that people believe they are human. Artificial-intelligence pioneer Alan Turing suggested that a machine is intelligent if someone is unable to distinguish it from a human in a text-based conversation. At least online, machines can already trick people into thinking they are human, and some researchers are attempting to create convincing android robots. Machines that masquerade as humans could serve various useful applications, such as aiding in the capture of criminals or providing support to people with dementia. Machines that are humanlike could automate customer services, medical assessments, and other applications that benefit from human interaction; however, such machines also could create new possibilities for criminals.
Technology vendors and governments need to develop policies and guidelines for a future when people and machines become difficult to distinguish.
Child-protection organization Terre des Hommes Netherlands (The Hague, Netherlands) recently identified 1,000 individuals who tried to pay a 10-year-old Filipino girl to perform sex acts on a webcam. The "girl" was a computer-generated avatar controlled by the Terre des Hommes team. In a similar project, researchers at the University of Deusto (Bilbao and San Sebastián, Spain) tricked potential pedophiles with Negobot—artificial-intelligence software that posed as a teenage girl in text-based chat rooms. Negobot uses slang, misspellings, and pop-culture references. The software begins conversing in a "neutral" mode but reacts to sexual comments by referring to a troubled home life and a desire for companionship. Future software will likely be far better at imitating humans in chat rooms and on webcams than current systems are. For example, Jorge Jimenez, a researcher at video-game company Activision Blizzard (Santa Monica, California), recently created a graphics technique (Separable Subsurface Scattering) that renders highly realistic skin for avatars.
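Published descriptions of Negobot suggest a simple conversation state machine: the bot stays in a neutral mode until certain inputs shift it into a more emotionally charged one. A minimal sketch of that pattern follows; the class name, trigger words, and canned replies are illustrative assumptions, not Negobot's actual design, which has not been published in full.

```python
class ModeSwitchingBot:
    """A chat bot that starts in a neutral mode and permanently
    escalates when a message contains a trigger keyword."""

    # Illustrative trigger words, not Negobot's real keyword list.
    TRIGGERS = {"meet", "photo", "secret"}

    def __init__(self):
        self.mode = "neutral"

    def reply(self, message: str) -> str:
        words = set(message.lower().split())
        if self.mode == "neutral" and words & self.TRIGGERS:
            self.mode = "engaged"  # one-way transition, as in Negobot
        if self.mode == "engaged":
            # Deliberately misspelled, slang-heavy reply in character.
            return "i dunno... things r bad at home, just want sum1 to talk 2"
        return "lol ok wat do u like doing?"

bot = ModeSwitchingBot()
bot.reply("Hi, how are you?")        # stays in neutral mode
bot.reply("Can we meet somewhere?")  # trigger word switches mode
```

A real system would use classifiers rather than keyword matching, but the one-way escalation from small talk to emotional disclosure is the core of the technique the researchers describe.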
Artificial-intelligence software is getting better at conversational speech. For example, several companies—including bank ANZ (Melbourne, Australia), market-research company Nielsen (New York, New York), and mobile-telecommunications company Celcom (Kuala Lumpur, Malaysia)—are testing a version of IBM's (Armonk, New York) Watson (the software that famously won on the game show Jeopardy! in 2011) for customer service. Watson Engagement Advisor can support human staff or interact directly with customers via online chat. Unlike most commercially available customer-service chat bots, the Watson software can learn over time and deal with a very wide range of information sources.
In another example of advances in artificial intelligence, health-care start-up Sense.ly (San Francisco, California) offers Molly—a conversational avatar that asks patients about their level of pain as they perform physical-rehabilitation exercises. Although users are well aware that Molly is artificial, the technology points to several ways that artificial humans could see use in online applications in the future. Apple's (Cupertino, California) Siri and Google's (Mountain View, California) Google Now, two popular intelligent personal assistants, are further examples of software that enables natural conversations between human and computer.
Even if software is imperfect, it can still fool internet users into believing it is human—a far simpler task during text-based chatting than during video conferencing. And as Negobot highlights, language barriers (and perhaps differences in language skills) on the internet make a wide range of conversations seem plausible. For example, if a machine does not understand a question, it can say "sorry, I don't understand" and perhaps fool someone into believing it is just a person with limited English skills.
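The fallback trick described above takes only a few lines to sketch: answer from a small script of canned responses, and disguise every gap in the bot's knowledge as a language barrier. The function name and scripted answers below are hypothetical.

```python
def reply(message: str, known_answers: dict) -> str:
    """Answer from a small script of known responses; when the bot
    is out of its depth, fall back to an excuse that reads as limited
    English rather than as machine failure."""
    key = message.lower().strip(" ?!.")
    if key in known_answers:
        return known_answers[key]
    return "sorry, I don't understand"

# Hypothetical script of canned question-and-answer pairs.
script = {"where are you from": "manila :)",
          "how old are you": "im 10"}

reply("Where are you from?", script)           # scripted answer
reply("What do you think of Kant?", script)    # graceful fallback
```

The point is not linguistic skill but plausible failure: a human with limited English would produce exactly the same response to a question he or she did not follow.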
Social-networking sites such as Facebook (Menlo Park, California) create possibilities for interactions that are even simpler than text-based chatting. Software already fools people into believing they are dealing with a human by using nothing more than a fake profile, some friend requests, and some automated "likes." In 2011, researchers at the University of British Columbia (Vancouver and Okanagan Valley, Canada) created a network of about 100 Facebook bots and befriended thousands of genuine Facebook users. A black market now exists for fake accounts and the software that creates them, which people typically use to boost the apparent popularity of Facebook pages.
Although creating a robot capable of convincingly imitating a human in person is a much greater technical challenge than is creating software capable of imitating a human online, some researchers are making progress. Osaka University (Osaka, Japan) and Kokoro Company (Tokyo, Japan) began work on the Actroid android robot in the early 2000s. Researchers at the National Institute of Advanced Industrial Science and Technology (AIST; Tokyo, Japan) recently demonstrated male and female versions of Actroid that have eye-mounted cameras, enabling them to make eye contact with and gesture toward people who speak to them. The research team tested people's reactions to the robots by deploying them as observers in a hospital, and it plans to explore Actroid's potential for talking with elderly people and children with learning disabilities.
Although Actroid is capable of moving only its upper body, HRP-4C, another AIST android, combines facial movements with the ability to walk around and dance somewhat like a human does. And researchers at the University of Pisa (Pisa, Italy) have created the FACE android, which creates highly realistic facial expressions. For example, the robot can show happiness, sadness, disgust, amazement, and fear.
For at least the next ten years, robots will be capable of passing for humans only very briefly and under specific circumstances (for example, a robot that welcomes visitors as they enter a dimly lit building), but the emergence of the realistic androids of science fiction is perhaps plausible within 50 to 100 years. Software capable of masquerading as people online—in chat rooms and on voice calls and webcams—will progress far more rapidly (and early examples already exist).
Artificial humans (online or in robot form) could be useful companions for elderly people or some hospital patients. Perhaps the technology will also improve customer service, health assessments, and other services that organizations hope to automate. Researchers have also demonstrated how artificial humans can identify potential sex offenders. But the technology has the potential to do harm as well as good. Already, fraudsters use fake Facebook profiles. Will spammers move from sending email to using artificial humans that call on videoconferencing services? More worryingly, could the same software that identifies potential sex offenders find potential victims? And what if unscrupulous developers start creating artificial children for pedophiles? Artificial humans raise many questions—no doubt the reason they feature so often in fiction. As fiction becomes a plausible reality, technology vendors and governments need to develop policies and guidelines for a future when people and machines become difficult to distinguish.