A product team approached us late in their process. While we typically prefer to be involved in projects from conception, we will still work with a team even when they cannot follow our complete process. They were attempting to redesign an internal product that had previously failed. Through stakeholder interviews, it became clear that the first version had failed for several reasons:
1. It tried to do too much
2. It targeted the wrong users
3. Training was insufficient
They had gone back to the drawing board, redefined the product's scope, and redesigned it. They had done this with little research beyond their own assumptions about why it failed the first time. Before releasing it, they wanted us to conduct research to validate that the redesign would work. We were also operating under a four-week deadline.
The Question: Will this new design retain users?
With only a few weeks to work with, we decided to skip surveys for gathering additional information on features, verbiage, and other factors, and go directly to contextual interviews. We wanted to get a sense of the end-users' daily operations and determine what role this app would play in those activities. After conducting a number of these interviews remotely, we were able to go back to the product team and trim their feature set. This narrowed their scope further and streamlined the application.
From there we moved to usability testing. These sessions were also conducted remotely, using the virtual meeting and screen-sharing program Webex. With it, participants could interact with the prototype (built in build.me) while we recorded video and audio. After each round of usability testing, we would summarize our findings and deliver them in a one-page summary. This breaks from more traditional UX/UI reporting methods, but at this company the vast majority of project managers, product owners, BAs, and developers tend to briefly skim lengthy reports (if they read them at all), preferring to digest only key suggestions they can immediately implement (this was borne out by our own internal research). As such, we developed a custom report template.
The first screenshot shows the front of what we called the "Card Report." It features the details of our research (purpose, date, facilitator, participants) on the left-hand side and the findings on the right. The second screenshot shows the bottom of the report's back, which lists our UX recommendations. On this specific report, we tested a new iteration of the template: it kept the information from previous usability rounds, noted the status of earlier recommendations, and listed the new recommendations that came out of the current round. These reports received a lot of praise from our customers and saved us from writing lengthy reports that would have gone unused.
For the product itself, we conducted three rounds of usability testing. After each round, we iterated on the design and then tested again.
The three rounds of usability testing improved the design and resolved nearly all usability issues (a few were inherent to the nature of the product). After the usability testing concluded, the developers proceeded with programming the application. It has yet to be completed.