A chatbot can help doctors better understand incoming emergency department patients’ social needs
UW News, Nov. 17, 2021

A row of green chairs in a waiting room
A team led by the University of Washington developed a chatbot that could ask emergency department visitors about social needs, including housing, food, access to medical care and physical safety. Photo:

Americans visit hospital emergency departments nearly . Although the focus of these visits is to address acute illness and injury, doctors are increasingly finding that social needs — such as food and housing insecurity — place many patients at higher risk of getting sick and requiring emergency care.

In order to better serve patients and possibly prevent future emergency department visits, doctors need a way to assess incoming patients to establish a wider context behind their visit.

A team led by the University of Washington developed a chatbot that could ask emergency department visitors about social needs, including housing, food, access to medical care and physical safety. The team tested it on 41 patients in Seattle and Los Angeles emergency departments. Results show that two groups of patients preferred the chatbot: patients who had less than a middle school level of health literacy and patients who appreciated establishing emotional connections.

The team presented this research in July at the 2021 Conference on Conversational User Interfaces.

“A few years ago there was a huge buzz around chatbots, and then people started realizing that maybe they aren’t meant for everything,” said co-senior author , a UW associate professor in the human centered design and engineering department. “We have been trying to figure out opportunities where having a chatbot would actually be meaningful and make sense.”

One good opportunity involved collaborating with emergency department doctors.

“We want to understand the upstream issues that bring people into the emergency department. What are the social needs of the patients that we serve and how can we develop interventions that address these needs?” said co-author Dr. , associate professor of emergency medicine in the UW School of Medicine. “For many people, including those with low literacy levels, a chatbot makes so much sense for collecting this information.”

The team designed a chatbot named HarborBot, after the hospitals where it was tested. HarborBot takes patients through a social needs survey that was developed by the Los Angeles County Health Agency. This survey asks patients 36 questions related to demographics, finances, employment, education, housing, food and utilities. It also asks questions related to physical safety, legal needs and access to care.

HarborBot is displayed on a tablet as a typical chat window with the patient’s and bot’s conversation showing up in different colored bubbles. HarborBot’s chat bubble shows animated ellipses when the bot is “typing.”

Based on , the researchers improved the chatbot’s efficiency and social skills.

For efficiency, the researchers:

  • scaled the amount of time the bot appeared to be “typing” to the length of the text it displayed, so shorter responses appeared after shorter “typing” delays
  • added a question at the beginning of the interaction that would allow patients to stop HarborBot from reading all of its questions and responses aloud
  • placed the patients’ answer options in the same part of the screen so that patients, who were often tired or in pain, could respond without having to move their hands
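The length-matched typing delay can be sketched in a few lines. This is a minimal illustration; the article does not give HarborBot’s actual timing parameters, so the constants below are assumptions:

```python
import time

def typing_delay(message: str, seconds_per_char: float = 0.03,
                 min_delay: float = 0.5, max_delay: float = 3.0) -> float:
    """Return how long the bot should show its animated 'typing' indicator.

    The delay grows with message length but is clamped so that very short
    or very long messages still feel natural.
    """
    return min(max_delay, max(min_delay, len(message) * seconds_per_char))

def send(message: str) -> None:
    # Show the typing indicator for a length-proportional time, then post.
    time.sleep(typing_delay(message))
    print(message)
```

Clamping matters here: without a minimum, one-word replies would appear instantly and feel abrupt; without a maximum, long explanations would keep a tired patient waiting.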

To increase the empathy of the interaction, the team changed the bot’s reactions to better match the content of the questions and patient responses.

“Some of the questions are quite sensitive — there are questions about violence and sexual abuse — and the bot’s original responses said ‘Sure,’ ‘Great’ or ‘Thanks for sharing with us,'” said lead author , who completed this project as a doctoral student at the 91̽and is now a postdoctoral fellow at Caltech. “We tried tailoring its responses in a way that made them more appropriate for the content and specific to the patients’ responses, such as ‘That must be stressful, thank you for letting me know.'”
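One way to implement this kind of tailoring is a lookup from question category to an acknowledgment whose tone matches the question’s sensitivity. This is a hypothetical sketch: the category names and most phrasings are illustrative, with only the “safety” response taken from the quote above.

```python
# Hypothetical mapping from survey-question category to an acknowledgment
# whose tone fits the sensitivity of the question. Categories and most
# phrasings are illustrative, not HarborBot's actual rules.
ACKNOWLEDGMENTS = {
    "demographics": "Thanks, got it.",
    "housing": "Thank you for sharing that with me.",
    "safety": "That must be stressful, thank you for letting me know.",
}

def acknowledge(category: str) -> str:
    # Fall back to a neutral response for categories without a tailored one.
    return ACKNOWLEDGMENTS.get(category, "Thank you.")
```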

Shown here is a question from the social needs survey as a form (left), in the original chatbot (middle) and in the improved chatbot (right). The improvements are shown here as a) through e). a) The chatbot asked people if they wanted to continue hearing it read questions out loud. b) If they said no, the chatbot gave them an option to turn it back on later. c) The chatbot varied the amount of time it spent “typing” based on the length of its response. d) The team fixed the patient response area to one place on the screen. e) The chatbot’s responses were more specific to the context of the questions and the patient’s answers. Photo: University of Washington

After HarborBot received its upgrades, the researchers tested it at two emergency departments: one at Harborview Medical Center in Seattle and the other at the Harbor-UCLA Medical Center in Los Angeles.

For both locations, the researchers worked at night (between 8 p.m. and 1 a.m. in Seattle and between 4 p.m. and 4 a.m. in Los Angeles). The teams collaborated with triage nurses to select potential participants. Then the researchers took participants to a visitor room where they could still hear announcements. After the patients signed a consent form, they completed:

  • two surveys to gauge health literacy. One survey asks patients to pronounce health-related terms and the other asks patients to
  • the social needs survey as both a web form through SurveyGizmo and an interaction with HarborBot. These were given in a randomized order
  • evaluations for both the web form and HarborBot
  • a survey to gauge a patient’s desire for emotional interactions

At the end, the researchers interviewed the participants about the experience.
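The randomized ordering of the two survey formats can be sketched as a simple counterbalancing step. This is an illustration only; the study’s actual randomization procedure is not described in the article:

```python
import random

def assign_order(seed=None) -> list[str]:
    """Randomize whether a participant completes the web form or the
    HarborBot version of the social needs survey first."""
    rng = random.Random(seed)  # seedable for reproducible assignments
    order = ["web form", "HarborBot"]
    rng.shuffle(order)
    return order
```

Randomizing the order guards against learning effects: whichever format comes second benefits from the patient already knowing the questions.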

The team was not surprised to find that many people with low health literacy preferred the HarborBot version of the survey: 17 of 20 low-literacy participants chose HarborBot, compared with 8 of 21 high-literacy participants. People who valued emotional connection also liked the chatbot, but these two groups didn’t necessarily overlap.

“We thought maybe people with low health literacy would also be more in need of emotional interaction,” Kocielnik said. “But it turns out, the two groups are not strongly correlated.”

Of the 23 participants who scored high on the emotional interactions questionnaire, 18 chose HarborBot. Meanwhile, only 7 of the 18 participants who scored lower on that questionnaire preferred HarborBot.
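Recomputing the reported counts as proportions makes the pattern easy to see. The numbers come directly from the study as described above:

```python
# Preference counts reported in the study: (chose HarborBot, group size).
groups = {
    "low health literacy": (17, 20),
    "high health literacy": (8, 21),
    "high desire for emotional interaction": (18, 23),
    "low desire for emotional interaction": (7, 18),
}

for name, (chose_bot, total) in groups.items():
    print(f"{name}: {chose_bot}/{total} = {chose_bot / total:.0%} chose HarborBot")
```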


“It’s important to understand that chatbots can benefit people in different ways,” said co-author , a UW doctoral student in human centered design and engineering.

In the future, the team plans to design a survey system that could tailor the experience to each user. For example, it could start out as the chatbot, but then based on how a user is answering the questions, it could shift into more of a survey format.
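Such an adaptive system could hinge on a simple mode-switching heuristic. This sketch is entirely hypothetical (the team describes the idea, not an implementation) and uses response speed as a stand-in signal for which format a user would prefer:

```python
def choose_mode(recent_response_times: list[float], current_mode: str) -> str:
    """Hypothetical heuristic for an adaptive survey: start in chat mode,
    then switch to a compact form once a user is answering quickly and
    no longer needs the conversational pacing.

    recent_response_times: seconds the user took to answer recent questions.
    """
    if current_mode == "chat" and len(recent_response_times) >= 3:
        # Threshold is illustrative; a real system would tune it empirically.
        if sum(recent_response_times[-3:]) / 3 < 2.0:
            return "form"
    return current_mode
```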

“Our vision would be some sort of kiosk people could use while they are waiting. Or even a QR code that people can scan with their own devices and then answer these questions,” Hsieh said. “Ultimately we want to connect people entering emergency departments as smoothly as possible with the resources that they need.”

in the UW biomedical and health informatics department is the other co-senior author on this paper. Additional co-authors are Dr. and Layla Anderson, both in the UW emergency medicine department; Amelia Wang and , both of whom completed this research as UW undergraduate students majoring in human centered design and engineering; Darwin Jones, who completed this research as a UW undergraduate student majoring in biomedical and health informatics; , Shota Akenaga and Dr. Kabir Yadav at the Harbor-UCLA Medical Center; and Dr. at Contra Costa Health Plan. This research was funded by the National Institutes of Health.

For more information, contact Kocielnik at rafalko@caltech.edu, Langevin at rlangevi@uw.edu and Hsieh at garyhs@uw.edu. To speak to Herbert Duber, please contact Susan Gregg at sghanson@uw.edu.

Grant number: UL1 TR002319

Researchers develop an app for crowdsourced exercise plans, which rival personal trainers in effectiveness
UW News, May 2, 2018

Exercise can prevent chronic disease, boost mental health and elevate quality of life. But exercise can also be an expensive undertaking — especially for newcomers.

A personal trainer costs an average of $50 per hour, according to WebMD. Alternatives, such as low-cost or free exercise apps, may yield low-quality workouts that are not adapted to individual preferences or lifestyles — which ultimately dampen their effectiveness.

To address these shortcomings, researchers at the University of Washington and Seattle University created CrowdFit, a platform for exercise planning that relies on crowdsourcing from nonexperts to create workout regimens guided by national exercise recommendations and tailored around user schedules and interests.

As the team reported in a paper presented in Montreal, in a field evaluation, nonexperts could create exercise plans as effective as experts’ under certain conditions. In addition, CrowdFit improved the quality of exercise plans created by nonexperts. Compared with nonexpert exercise programs prepared via Google Docs, nonexpert plans created using CrowdFit featured more appropriate levels of exercise for each user, a better progression of activities from week to week, more appropriate strengthening routines and a better overall composition.

“Most apps available to the public offer limited ability to customize an exercise plan — criteria like goals, age and weight,” said lead author , a UW doctoral student in the Department of Human Centered Design and Engineering. “With CrowdFit, we designed greater flexibility to customize exercise plans to a user’s schedule, constraints and nuanced preferences.”

Screen capture of a portion of a CrowdFit user’s profile. Photo: University of Washington

Through CrowdFit, a person who wants an exercise plan creates a personal profile on the app, listing information such as daily work schedule, interests and exercise preferences. A nonexpert then uses the profile — as well as exercise and health guidelines provided by CrowdFit — to craft a week-long exercise plan for the user. In the app, the plan is displayed as a detailed schedule, including suggestions for when to exercise, justification for the exercise choices and other information to both encourage the user and help him or her execute the plan correctly. At the end of the week, the user provides feedback, and the planner crafts an updated schedule for the next week.
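The weekly loop described above can be sketched as data structures. The field names here are illustrative assumptions; the article describes the flow, not CrowdFit’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """What a user shares when requesting a plan (illustrative fields)."""
    work_schedule: dict[str, str]    # e.g. {"Mon": "9-5", ...}
    interests: list[str]             # e.g. ["hiking", "swimming"]
    exercise_preferences: list[str]

@dataclass
class PlanEntry:
    """One scheduled activity in a week-long plan."""
    day: str
    activity: str
    when: str
    justification: str               # why the planner chose this activity

@dataclass
class WeeklyPlan:
    entries: list[PlanEntry] = field(default_factory=list)
    user_feedback: str = ""          # collected at week's end to guide the next plan
```

The feedback field closes the loop: a nonexpert planner reads it alongside the profile and the exercise guidelines before drafting the following week’s schedule.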

“We previously saw that people can craft plans for others that are challenging and interesting, but also had shortcomings with respect to exercise science,” said senior author , a UW assistant professor of human centered design and engineering. “In this study, we set out to test whether supporting planners with information on exercise science and feedback from users could help them produce plans that are also high-quality in this respect.”

“By involving nonexperts in the process, there’s also an opportunity to increase these nonexperts’ exercise knowledge, ultimately benefiting not just the users, but also the planners,” said co-author , a UW associate professor of human centered design and engineering.

The researchers tested CrowdFit in a study of 46 subjects divided into three groups, each of which received a customized exercise plan based on a CrowdFit profile. Subjects in the first group received exercise plans crafted by nonexperts — volunteers who lacked the formal education and expertise of a personal trainer — using CrowdFit, which also contains information on exercise guidelines. The second group received exercise plans created by personal trainers, who used Google Docs to view the users’ profile information and deliver their plans. The final group received exercise plans crafted by nonexperts, again using profile information and plan delivery via Google Docs. Subjects followed their plans for one to two weeks.

Researchers interviewed the users after they had completed the study, and had exercise scientists evaluate each plan.

Screen capture of the planner interface on CrowdFit. Photo: University of Washington

Overall, the exercise plans created by nonexperts were as effective as expert-prepared plans based on:

  • How well they were tailored to individual needs
  • The appropriateness of the intensity and duration of aerobic activity
  • The balance between aerobic and muscle-strengthening activities

In addition, the CrowdFit plans crafted by nonexperts tended to be as effective as the plans crafted by professional trainers, especially for features such as incorporating basic exercise principles, creating plans that were compatible with user preferences and schedules, and incorporating sufficient aerobic activity. CrowdFit plans also were easier to understand than expert plans and met recommended exercise guidelines.

“Our study has demonstrated that nonexperts can be guided through designing an exercise plan that is consistent with national recommendations,” said co-author , an assistant professor of kinesiology at Seattle University. “There may not yet be a substitute for a trainer prompting a person through a routine on the gym floor, but the role of the expert is expanding to become more collaborative with the tech industry in guiding future design choices of apps.”

The researchers also found areas where CrowdFit performance could be improved, such as including more exercises to improve flexibility and encouraging warm-ups and cool-downs during workouts. Future versions of CrowdFit could incorporate more detailed guidelines for plan creators.

“We hope that tools like this will contribute to a common goal: to increase the adoption of lifelong exercise by all,” said Welsh.

UW team members are part of the “Design, Use, Build” group — or . Additional co-authors are UW doctoral student , research associate and Diana Oviedo, a former UW graduate student now at Microsoft. The research was funded by the National Science Foundation.

###

For more information, contact Agapie at eagapie@uw.edu, Hsieh at garyhs@uw.edu and Munson at smunson@uw.edu.

Grant number: IIS-1553167.
