When reporting results from a usability test, focus primarily on your findings and recommendations, differentiated by level of severity. Include the pertinent information from the test plan and present just enough detail that the method is identifiable. Keep the sections short, use tables to display the metrics, and use visual examples to demonstrate problem areas, when possible.

Data Analyses:

At the end of usability testing, you will have collected several types of data, depending on the metrics you identified in your test plan. When analyzing the data you've collected, read through the notes carefully, looking for patterns, and be sure to add a description of each of the problems. Look for trends and keep a count of problems that occurred across participants.

  • Quantitative Data:

    • Enter the data in a spreadsheet so you can record it and make calculations

    • Add participants' demographic data so that you can sort by demographics to see if any of the data differs based on demographic values

    • Make sure you identify the task scenarios for each of the metrics

  • Qualitative Data:

    • Record data related to observations about pathways participants took, problems experienced, comments/recommendations, and answers to open-ended questions

    • Make sure problem statements are exact and concise. A good problem statement is "clicked on the menu item for equipment instead of items". A poor problem statement is "was confused about menu items".
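The quantitative-data steps above (entering results in a spreadsheet and sorting by demographics) can be sketched in code. This is a minimal illustration with hypothetical column names and made-up records, not a prescribed format:

```python
from statistics import mean

# Hypothetical usability-test records: one row per participant per task scenario.
rows = [
    {"participant": "P1", "age_group": "18-24", "scenario": "T1", "completed": True,  "time_s": 42},
    {"participant": "P2", "age_group": "25-34", "scenario": "T1", "completed": True,  "time_s": 65},
    {"participant": "P3", "age_group": "18-24", "scenario": "T1", "completed": False, "time_s": 120},
    {"participant": "P1", "age_group": "18-24", "scenario": "T2", "completed": True,  "time_s": 30},
]

def by_group(rows, key):
    """Group rows by a demographic column so metrics can be compared across values."""
    groups = {}
    for r in rows:
        groups.setdefault(r[key], []).append(r)
    return groups

# Compare completion rates across demographic groups, as suggested above.
for group, members in by_group(rows, "age_group").items():
    rate = mean(1 if r["completed"] else 0 for r in members)
    print(group, f"completion rate: {rate:.0%}")
```

In a real study you would load the spreadsheet (e.g., a CSV export) rather than hard-code rows; the grouping logic stays the same.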

As you are reviewing the data, consider how global the problem is throughout the game and how severe (or serious) the problem is. Your findings may have implications for other parts of the game or other games (global). For example, you may find that participants could not find what they needed in a menu because of text density. You could say that just that one menu needed to be fixed, but you should also consider how many other menus are equally dense with text.

Reporting Severity Levels of Problems:

Some problems contribute more to participants not being able to complete the scenarios than others. To help differentiate, you should note the severity of the problems on a three- or four-point scale. For example:

  • Critical: If we do not fix this, users will not be able to complete the scenario.

  • Serious: Many users will be frustrated if we do not fix this; they may give up.

  • Minor: Users are annoyed, but this does not keep them from completing the scenario. This should be revisited later.
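The three-point scale above maps naturally onto an ordered data type, which makes it easy to sort findings so the worst problems lead the report and to tally how many problems fall at each level. A small sketch with hypothetical findings:

```python
from enum import IntEnum
from collections import Counter

class Severity(IntEnum):
    # Ordered so that comparisons and sorting put critical problems first.
    MINOR = 1
    SERIOUS = 2
    CRITICAL = 3

# Hypothetical findings from a test session.
findings = [
    ("Equipment menu label ambiguous", Severity.CRITICAL),
    ("Tutorial text too dense", Severity.SERIOUS),
    ("Icon slightly misaligned", Severity.MINOR),
    ("Save button hard to find", Severity.SERIOUS),
]

# Most severe problems first, then a count of problems per severity level.
findings.sort(key=lambda f: f[1], reverse=True)
counts = Counter(sev.name for _, sev in findings)
print(counts)
```

A four-point scale works the same way; just add a fourth member to the enum.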

Writing the Usability Test Report:

In general, your report should include a background summary, your methodology, test results, findings, and recommendations. There are a number of report templates that you may adapt to assist you in reporting your findings.

  • Background Summary: Include a brief summary including what you tested (website or web application), where and when the test was held, equipment information, what you did during the test (include all testing materials as an appendix), the testing team, and a brief description of the problems encountered as well as what worked well.

  • Methodology: Include the test methodology so that others can recreate the test. Explain how you conducted the test by describing the test sessions, the type of interface tested, metrics collected, and an overview of task scenarios. Describe the participants and provide summary tables of the background/demographic questionnaire responses (e.g., age, professions, internet usage, sites visited, etc.). Provide brief summaries of the demographic data, but do not include the full names of the participants.

  • Test Results: Include an analysis of what the facilitator and data loggers recorded. Describe the tasks that had the highest and lowest completion rates. Provide a summary of the successful task completion rates by participant and by task, along with the average success rate by task, and show the data in a table. Follow the same model for all metrics. Depending on the metrics you collected, you may want to show:

      • Number and percent of participants who completed each scenario, and all scenarios (a bar chart often works well for this)

      • Average time taken to complete each scenario for those who completed the scenario

      • Satisfaction results

      • Illustrative participant comments, where they support a finding
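The completion-rate and time metrics listed above can be computed directly from the logged results. The sketch below uses hypothetical data and a simple helper; note that average time is reported only for participants who completed the scenario, as the list above specifies:

```python
from statistics import mean

# Hypothetical logged results: (participant, task, completed?, time in seconds).
results = [
    ("P1", "T1", True, 42), ("P2", "T1", True, 65), ("P3", "T1", False, 120),
    ("P1", "T2", True, 30), ("P2", "T2", False, 95), ("P3", "T2", True, 55),
]

def task_summary(results, task):
    """Completion rate for a task, and mean time among those who completed it."""
    attempts = [r for r in results if r[1] == task]
    completers = [r for r in attempts if r[2]]
    rate = len(completers) / len(attempts)
    avg_time = mean(r[3] for r in completers) if completers else None
    return rate, avg_time

for task in ("T1", "T2"):
    rate, avg_time = task_summary(results, task)
    print(f"{task}: {rate:.0%} completed, mean time {avg_time:.1f}s among completers")
```

These per-task numbers are exactly what goes into the summary table (and, for the completion percentages, the bar chart) described above.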

  • Findings and Recommendations: List your findings and recommendations using all your data (quantitative and qualitative, notes and spreadsheets). Each finding should have a basis in data—in what you actually saw and heard. You may want to have just one overall list of findings and recommendations or you may want to have findings and recommendations scenario by scenario, or you may want to have both a list of major findings and recommendations that cut across scenarios as well as a scenario-by-scenario report. Keep in mind:

      • Although most usability test reports focus on problems, it is also useful to report positive findings. What is working well must be maintained through further development.

      • An entirely negative report can be disheartening; it helps the team to know when there is a lot about the game that is going well.

      • Each finding should include as specific a statement of the situation as possible.

      • Each finding (or group of related findings) should include recommendations on what to do.

Incorporating Visuals to Illustrate Specific Points:

You can make the report both more informative and more interesting by including visual content. You may consider including:

  • Screenshots to help readers visualize what you were testing. Include parts of screens to illustrate specific areas that are working particularly well or that are causing problems for users.

  • Short video clips to illustrate specific points, if you are presenting the report electronically and the readers have the technology available to play them. People who did not observe the actual test sessions are often most convinced of problems, and of the need to fix them, by watching and listening to relevant video clips.

Implement and Retest:

For a usability test to have any value, you must use what you learn to improve the site. You may not be able to implement all the recommendations. Developing any product is a series of trade-offs in which you balance schedule, budget, people's availability, and the changes that are needed. If you cannot implement all the recommendations, develop priorities based on fixing the most global and serious problems. As you prioritize, push to get the changes that users need.
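The advice above (fix the most global and serious problems first) can be expressed as a simple ranking rule. This is one possible rule of thumb, not a standard formula; the field names and scope counts are hypothetical:

```python
# Hypothetical prioritization sketch: rank fixes by severity (1 = minor,
# 3 = critical) and by how global the problem is (scope = number of places
# in the game where it appears).
problems = [
    {"finding": "Dense menu text",       "severity": 3, "scope": 5},
    {"finding": "Ambiguous icon",        "severity": 1, "scope": 2},
    {"finding": "Save flow blocks task", "severity": 3, "scope": 1},
]

# Most severe problems first, breaking ties in favor of the most global ones.
ranked = sorted(problems, key=lambda p: (p["severity"], p["scope"]), reverse=True)
for p in ranked:
    print(p["finding"])
```

Whatever rule you choose, the point is the same: when budget or schedule forces trade-offs, the critical and global problems are the ones to fight for.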

Remember that the cost of supporting users of a poorly-designed game is much greater than the cost of fixing the game while it is still being developed.