Intro
Being one of two members on the care pod means balancing many research requests and responsibilities. Some of the areas of the website and app I have covered include appointment booking, pharmacy, video visits, the diabetes center, the maternity center, and social health. Here I'll focus on a social health study I conducted and the process I followed in planning the research, which is typical of how I determine a team's research needs.
Determining research questions and goals
My first step is always meeting with stakeholders to learn what their research questions are; I then develop a plan that supports those goals. For the social health project, our goal was to understand how best to support members with social health needs through the digital tools KP was planning to provide: a resource library and a chatbot to help direct members to local resources. The research questions we wanted to answer included:
- How will members interact with the library?
- What information is important for members to know about the organizations included in the library?
- How can we write a chatbot conversation whose tone is to the point but also warm?
- How will members interact with the chatbot? What will their typing style be like?
- What information can the bot provide that will be of benefit to members?
Planning the Study
Once the research questions were determined, I planned multiple studies:
- an initial interview to understand members' needs
- an interview with the call center that currently helps members with social needs, to better understand the requests members bring to them and how they assist
- a take on Wizard of Oz testing in which I played the chatbot
- concept testing of both the chatbot and the resource library
Executing the Study
Discussing social health needs with members was very delicate. Topics included housing instability, food insecurity, mental health needs, and financial instability. One of our goals when interviewing members was to learn more about their experience facing these issues, what they did, and what support they wish they had had during their time of need. Figuring out what helped them most drove us toward better solutions. When conducting these interviews, I reminded members that there was no pressure to share anything they were not comfortable sharing, and that whatever they did share would be kept internal. Making users feel comfortable and supported in what they share is key to creating a safe environment. One of my research philosophies is that the more an interview feels like a conversation rather than a script being read aloud, the better. You don't want the user to feel like they're being interrogated.
My favorite part of this study was trying something new with the Wizard of Oz testing for the chatbot. Traditional Wizard of Oz testing uses a pre-written conversation between the moderator and the user, read aloud; instead, I copied and pasted pre-written responses into the chat of our meeting. I defined multiple paths the user could take, each with separate responses written by our UX writer. My job was to interpret what the user had typed and choose the appropriate pre-written response. I paused throughout the conversation to ask for the user's feedback on the chatbot's responses and to ask why they typed a response a certain way. Our goal was to understand how users would react to the chatbot's responses, keeping their focus on the responses rather than the interface. We also wanted users to type as naturally as they would with a real chatbot. By mimicking this experience, we gathered feedback on the chatbot's tone, content, and helpfulness in providing assistance.
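To make the Wizard-of-Oz mechanics concrete, the moderator's job during a session can be thought of as a keyword-based lookup: each conversation path has pre-written responses, and the moderator matches the user's typed message to a path and pastes in the corresponding reply. The sketch below is a hypothetical illustration only; the path names, keywords, and response text are invented for this example, not the actual UX copy or paths from the study.

```python
# Hypothetical sketch of the Wizard-of-Oz response lookup performed manually
# by the moderator. Paths, keywords, and copy are illustrative assumptions.

RESPONSES = {
    "housing": "I can help with housing. Here are organizations near you "
               "that assist with rent and shelter.",
    "food": "I can help with food. Here are local food banks and meal programs.",
    "fallback": "I'm sorry, I didn't quite catch that. Could you tell me a "
                "bit more about what you need help with?",
}

# Keywords the moderator scans for when deciding which path a message fits.
KEYWORDS = {
    "housing": ["rent", "housing", "evict", "shelter"],
    "food": ["food", "groceries", "meal", "hungry"],
}

def pick_response(user_message: str) -> str:
    """Return the pre-written reply whose path keywords match the message."""
    text = user_message.lower()
    for path, words in KEYWORDS.items():
        if any(word in text for word in words):
            return RESPONSES[path]
    return RESPONSES["fallback"]
```

In the actual sessions this matching happened in the moderator's head, which is the point of the method: it lets you test tone and content before any intent-classification model exists.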
Sharing out the Findings with the Team
After analyzing the data and presenting the findings to the team, I determined a better flow for the chatbot conversation by minimizing the number of steps a user takes to get information about the social health organizations. I identified which parts of the conversation to eliminate or move, and we iterated on the chatbot's tone after receiving feedback that it could be friendlier. Users' trust in chatbots was very low, as many had had negative past experiences with chatbots failing to understand them. One of my major findings was that this chatbot must be fully comprehensive; otherwise, users will not use it. If they have one bad experience with it, they will be turned off for good, especially when they have the resource library as an alternative. With this in mind, the team is taking the time needed to roll out the most comprehensive version of the chatbot possible.
Determining Next Steps and Roadmap Impact
The team is working on building a chatbot that lets members find resources for any of their social health needs. One practice I learned from the call center is asking members whether they need help with issues related to the one they are calling about; for example, if a member is experiencing housing instability, asking whether they also need help paying for food. One of the team's goals is to mimic the conversation members might have with the call center and offer them as much help as they may need. The research showed how this chatbot can be much more than a robot that sends users links: it can surface help with issues members may not have even considered, or provide the call center's number when they're feeling lost. When members come to us in a time of need, it may be hard for them to think straight. An experience that provides not only what users ask for but also resources they didn't know they needed is what will make this project go above and beyond.