It's a Matter of Trust
February 2015
New dynamics in humans' trust of technology are emerging, creating opportunities for firms to foster more sustainable trust relationships with customers.
Trust is a powerful gatekeeper of relationships.
A recent survey by the Pew Research Center (Washington, DC) reveals that Millennials (people born between the early 1980s and the early 2000s) have significantly lower levels of trust than do members of older generations. Financial-services firms, health-care professionals, schools, child-care providers, dating services, and many other service providers may see rising customer dissatisfaction as Millennials, who are less trusting and therefore harder to please than older customers, come of age and increasingly use their services.
The causes of eroding trust among members of younger generations are complex, but growing up with technology in the age of social media may be a partial explanation. Users of Facebook's (Menlo Park, California) social-networking service have long demanded transparency about how the company uses their personal data for purposes other than making the service more relevant and efficient. In general, many users are willing to accept this privacy-efficiency trade-off; however, exploitation of personal data undermines trust. For example, Facebook recently revealed the results of a secret experiment in which researchers manipulated the news feeds of almost 700,000 users and then assessed the emotional effect the altered feeds had on them. The results provide the first experimental evidence of emotional contagion through large-scale social networks. But although the findings advance science, Facebook's data-use policy did not cover human research at the time of the study, and the revelation likely eroded many users' trust in the company and its services.
From a trust perspective, even more problematic than secret research studies are the collaborations between major technology firms—including Apple (Cupertino, California), Facebook, Google (Mountain View, California), and Microsoft (Redmond, Washington)—and the US National Security Agency (NSA; Fort George G. Meade, Maryland). Edward Snowden, an information-technology contractor for the NSA, collected documents and disseminated them to the media, revealing that major technology companies had given the NSA access that enabled comprehensive surveillance of their customers—many of whom do not reside in the United States. Other documents suggest that the NSA may promote weak encryption standards and maintain backdoors through which it can access various platforms. These revelations greatly undermine the trust that domestic and foreign customers place in US technology companies and may result in a fracturing of the internet into smaller, separately secured networks that are inaccessible to the NSA. For example, after discovering that the US government had spied on German chancellor Angela Merkel, Germany discussed creating a Germany-only internet. The full extent to which these developments have harmed trust relationships internationally has yet to become apparent.
Additionally, users of these services often mislead one another, intentionally or unintentionally, potentially eroding a general sense of trust among citizens of technologically advanced societies. To increase the trustworthiness of social-media communications, academics at the University of Sheffield (Sheffield, England) are working on social-media-analytics software capable of determining whether postings on social networks are true. The system distinguishes among speculation (deliberate guessing), controversy (a naturally occurring difference in opinions or beliefs), misinformation (honest mistakes), and disinformation (malicious intent to provide false information). Once fully commercialized, the system would draw on various sources (including journalists, experts, members of the public, and software) to inform its judgments and could significantly increase trust in information, create an economy of trust, and quickly discredit users who deliberately spread misinformation.
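To make the four-way taxonomy concrete, consider the toy classifier below. It is a minimal sketch in Python, not the Sheffield system itself: the category definitions come from the paragraph above, but every function name, heuristic, and threshold is a hypothetical stand-in for the project's actual analytics.

```python
from enum import Enum
from typing import Optional

class Veracity(Enum):
    SPECULATION = "speculation"        # deliberate guessing
    CONTROVERSY = "controversy"        # natural difference of opinion
    MISINFORMATION = "misinformation"  # honest mistakes
    DISINFORMATION = "disinformation"  # malicious falsehoods

# Hedging phrases that often signal a guess rather than a claim of fact.
HEDGES = ("reportedly", "allegedly", "rumor has it", "i heard", "might be")

def classify_posting(text: str,
                     corroborating_sources: int,
                     contradicting_sources: int,
                     author_reliability: float) -> Optional[Veracity]:
    """Toy heuristic: weigh linguistic cues against evidence from
    journalists, experts, and other sources (all inputs hypothetical).
    Returns None when nothing flags the posting as suspect."""
    if any(hedge in text.lower() for hedge in HEDGES):
        return Veracity.SPECULATION
    if corroborating_sources and contradicting_sources:
        return Veracity.CONTROVERSY  # sources genuinely disagree
    if contradicting_sources:
        # A debunked claim from a normally reliable author looks like a
        # mistake; from an unreliable author, it looks deliberate.
        return (Veracity.MISINFORMATION if author_reliability >= 0.5
                else Veracity.DISINFORMATION)
    return None  # no red flags detected

print(classify_posting("Rumor has it the bridge collapsed", 0, 0, 0.9))
# Veracity.SPECULATION
```

A production system would of course replace these keyword rules with trained language models and live source checking; the sketch serves only to show how the four categories partition the space of untrustworthy postings.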
The relationships that humans have with machines raise other interesting questions—chiefly, how to increase humans' trust in nonhuman actors (robots) across applications. Many human tasks—including flying airplanes, testing for diseases, and auditing records—are being automated. Interestingly, humans may increasingly forget how to perform the tasks they outsource to machines, deepening their dependence on, but not necessarily their trust in, those machines. For example, pilots are losing their manual-flying skills: the US Federal Aviation Administration (Washington, DC) has issued a safety alert advising airlines to let pilots do more manual flying.
Holly Yanco, a roboticist at the University of Massachusetts Lowell (Lowell, Massachusetts), and researchers at Carnegie Mellon University (Pittsburgh, Pennsylvania) conducted a study to gauge how a robot's ability to express self-doubt would affect people's trust in the robot. During the study, the researchers asked participants to navigate a robot through a slalom course as quickly as possible. The participants could use a joystick to control the robot or let the robot navigate autonomously. Although the robot was able to complete the course quickly in autonomous mode, it sometimes made navigation mistakes in the absence of human control. Unbeknownst to study participants, the researchers had programmed the robot to make these mistakes. The researchers told participants that the robot would indicate confidence in an upcoming autonomous navigation decision by illuminating a green light and indicate doubt by illuminating a red light. These indications enabled the participants to rely on autonomous mode when the robot expressed confidence and switch to manual-control mode when the robot expressed doubt, thereby improving both course performance and participants' trust in the robot. In contrast, participants in a control group using a robot that provided no feedback had lower levels of trust in the robot because it seemed to be making mistakes at random. These results indicate that the feedback from the robot put participants more at ease and suggest a crucial need for autonomous systems to mimic the social fabric of communication that occurs naturally between humans engaged in a joint task.
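Stripped to its essentials, the mechanism in the study is a confidence-gated handoff: the robot signals green and proceeds autonomously when its planner is confident, and signals red to request joystick control when it is not. The sketch below is illustrative only; the planner, threshold, and signal names are hypothetical, as the article does not describe the robot's internals.

```python
import random

CONFIDENCE_THRESHOLD = 0.7  # hypothetical cutoff between green and red

def plan_next_segment() -> tuple[str, float]:
    """Stand-in for the robot's navigation planner: proposes a maneuver
    for the next slalom gate along with a self-assessed confidence."""
    maneuver = random.choice(["veer left", "veer right", "go straight"])
    confidence = random.random()
    return maneuver, confidence

def run_slalom(gates: int = 8) -> None:
    for gate in range(1, gates + 1):
        maneuver, confidence = plan_next_segment()
        if confidence >= CONFIDENCE_THRESHOLD:
            # Green light: the robot is confident, so it stays autonomous.
            print(f"gate {gate}: GREEN  autonomous '{maneuver}' "
                  f"(confidence {confidence:.2f})")
        else:
            # Red light: the robot expresses doubt and hands control
            # to the human operator's joystick.
            print(f"gate {gate}: RED    requesting manual control "
                  f"(confidence {confidence:.2f})")

run_slalom()
```

The design point the study makes is that the red light is not a failure signal but a trust signal: because the robot's mistakes become predictable, the human can calibrate when to intervene instead of concluding that errors occur at random.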
Similarly, research by Robin Read and Tony Belpaeme at Plymouth University (Plymouth, England) found that humans engaged more with robots that could make sounds (bleeps and chirps), which subtly humanized the machines. Sean Andrist at the University of Wisconsin–Madison (Madison, Wisconsin) and colleagues found that having a robot mimic the natural human tendency to break eye contact during a conversation increased people's perception that the robot was purposeful and thoughtful and reduced interruptions and conversational blocking. In short, human trust in robots increases as these machines display the seemingly irrelevant behavioral tics that humans routinely use to smooth social interaction.
In sum, trust is a powerful gatekeeper of relationships, and firms that understand the drivers of trust will see fewer disruptions to their business models and more success in commercializing innovations.