My usability testing responsibilities included the following:
Being the point of contact for our users
I was responsible for reaching out to users to schedule their test sessions, responding to user inquiries before and after the tests, and sending each user a post-test survey about their testing experience.
Coordinating tests with the feature team and all notetakers/facilitators
The team that owned the feature being tested was always heavily involved in this process. It was my job to guide them through the usability testing process to ensure they could obtain the feedback they wanted from the test. Future iterations of their designs depended on the results of these tests, so accuracy was vital to the team.
Assisting in writing/editing the script
Part of my job in coordinating with the feature team was to assist with writing and editing the scripts for the facilitators. The feature team chose which functionality within their feature they wanted to test. As the usability testing lead, I helped refine the script to be more user-friendly and concise so that our users could more easily understand their tasks. I also helped identify functionality they might want to test in the future based on the feedback received from testing.
Running practice tests with all involved parties
Running practice tests was key to helping the process run smoothly. We ran practice tests not only with the team but also with people who were completely unfamiliar with both the test and the feature being tested. In this way we could verify that any user could follow the flow, regardless of their familiarity with the software.
Running the preparation session for all participants
Preparation sessions were also important in ensuring that our tests ran smoothly. I ran the prep sessions, with the team members who had assisted in designing the tests. I recruited a volunteer to play the role of a user so I could show the team how the test would be run and how to facilitate it.
Creating an analysis based on the test results
Each analysis consisted of a review of the data collected from the tests. It was my job to go through all of the notes and identify pain points as well as any positive feedback or suggestions from our users. I would then highlight the recurring challenges alongside the positive feedback to show where we were doing well.
Each analysis included a breakdown for each task and individual user, covering the path the user took, representative quotes, and a rating for any usability issues the user faced. The analysis also included an overall summary for each task of the most common path taken and any recurring pain points or positive feedback.
I enjoyed analyzing the results and suggesting ways to adapt the software features to resolve the issues identified by the testing.
Presenting this analysis to the stakeholders
After the testing and my report were complete, I presented my analysis to the stakeholders. I walked them through each task and the breakdown of usability issues users had faced. These issues were separated into Must Fix and Consider Fixing categories, based on the severity identified by the testing. Once the team agreed to the recommended improvements, the stakeholders assigned each recommendation to the corresponding team.
Improving usability testing methods
Usability testing is something I am passionate about and focused on heavily during my time at Drexel University. While leading usability testing during my internship at Oracle, I noticed after participating in many tests that gathering background information about our users was not part of our routine. One of the biggest changes I implemented was therefore a longer initial interaction with users prior to the test: I created and added a pre-test interview section asking users about their background experience with the software.
I realized this lack of information was skewing our results, since some users had more experience with our software than others. Knowing a user's experience level as they completed a task gave us more accurate testing results: we could interpret user confusion in light of the user's prior knowledge of the software and thereby develop a more complete view of its usability issues.
Results
Adding a brief user interview at the beginning of the test gave us more data and more accurate results. It also helped users feel more comfortable, since the facilitator and the user could become familiar with each other before starting the test. This change created a more talkative, relaxed environment: users experienced the supportive tone set by the facilitator beforehand and, as a result, felt comfortable being forthright in their feedback.
Facilitators at Oracle are now required to ask about users' background with the software so this information can be used in the analysis of the test results.
Reflection
Leading usability testing projects was an amazing opportunity for me and helped me grow as a leader and expand my knowledge of how companies like Oracle run their tests.