
Humanlike features in automated decision tools build trust

When automated tools use humanlike characteristics, patients, especially younger ones, trust their information more than that of tools that use only text, a new study says.

By — Posted Aug. 7, 2012


Physicians looking for automated tools to help patients manage chronic diseases or other conditions may want to consider systems that feature humanlike characteristics such as a picture of a doctor.

Researchers from Clemson University in South Carolina examined how person-like features, also called anthropomorphic characteristics, can influence a patient’s trust in an automated decision-support aid. They found that such features can increase trust in the technology.

Because smartphone apps offer opportunities for health care management, lead study author Richard Pak wanted to find out how visual anthropomorphic characteristics might influence someone’s level of trust in mobile apps designed to help users make health-related decisions. There are many types of such characteristics that can be incorporated into an automated tool, including language, behavior or complex interactions.

For the study, which appeared online July 17 in the monthly journal Ergonomics, researchers examined an anthropomorphic feature that would be easy for app developers to implement: a human face.

Researchers developed two mock-up apps for diabetes management. One was an anthropomorphic app that featured a picture of a woman in a white lab coat with a stethoscope around her neck. The other offered the same information, but without the woman’s image.

Study participants were split into two groups, each assigned to a different version of the app. Overall, the study found greater trust in the humanlike app, but older adults trusted both versions almost equally.

On a scale of one to seven, with seven representing the highest level of trust, adults younger than 45 who used the anthropomorphic app reported greater trust in and dependence on it than those who used the nonanthropomorphic version (a trust rating of 5.40 compared with 4.46). Adults older than 45 gave the humanlike app a trust rating of 5.12, compared with a slightly higher 5.20 for the nonanthropomorphic app.

Which technology to trust

Greater trust in technology can be a double-edged sword, said Pak, an associate professor of psychology at Clemson. Problems arise when people place too much trust in unreliable technology or too little trust in reliable technology. Pak said global positioning systems are a good example: A motorist who places too much trust in an unreliable GPS could follow its instructions and drive into a lake.

Previous studies have looked at how humanlike characteristics can build or erode someone’s confidence in a system’s accuracy, depending on how the characteristics are presented. One such study, by Raja Parasuraman and Christopher Miller, appeared in the April 2004 issue of Communications of the ACM, the monthly magazine of the Assn. for Computing Machinery. Their study looked at how the perceived “etiquette” of the technology influenced use of it.

The study found that the more courteous (noninterruptive and patient) the technology was, the more trustworthy it was to users. If the automation seemed impatient or interruptive, trust in the technology was lower. Parasuraman and Miller concluded that building reliable technology would not be enough to get people to use it.

“Some may find this result disturbing,” they wrote, “since it suggests that developing robust, sensitive and accurate algorithms for automation — a challenging task under the best of conditions — may not be necessary as long as the automation ‘puts on a nice face’ for the user.”

Pak said an example of a technology displaying a favorable personality is Siri, the artificial intelligence personal assistant on Apple’s iPhone 4S. Users see Siri as trustworthy because it shows personality through humor and attitude, Pak said.

Older adults tend to place a lot of trust in automated aids, Pak’s study found, and that could be a problem if the advice a digital aid provides is inaccurate. Pak and his fellow researchers are studying how humanlike characteristics could be used to raise or lower a user’s trust to match the accuracy of the aid. For example, if the app isn’t fully confident in the advice it is giving, it could display a face with an expression associated with doubt.
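As a rough illustration of how such a cue might be wired up, here is a minimal TypeScript sketch; the expressionFor function, its thresholds and the expression labels are hypothetical, not details from Pak’s research:

    // Map the aid's confidence in its advice (0 to 1) to a facial
    // expression. Thresholds here are arbitrary illustrations.
    type Expression = "confident" | "neutral" | "doubtful";

    function expressionFor(confidence: number): Expression {
      if (confidence >= 0.9) return "confident"; // advice is well supported
      if (confidence >= 0.6) return "neutral";   // advice is plausible
      return "doubtful";                         // signal uncertainty to the user
    }

    // Example: advice the aid is only 55% sure of gets a doubtful face.
    console.log(expressionFor(0.55)); // "doubtful"

Keeping the mapping explicit like this would let a developer tune how doubtful the face looks as an aid’s measured reliability changes.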
