By Karandeep Dhillon and Tamanna Haider for Team Fridge
We used our prototypes to conduct user testing. User testing lets us observe how real users interact with our product and how they feel about it. It also tells us whether the product works the way we intended, helps us identify usability problems, and points us toward solutions. We had our participants (n=5) answer some questions and complete two tasks using our prototype. We began with background questions such as: Do you use any websites or apps designed to keep track of food waste or ingredients? Do you manually keep track of food waste or ingredients? How often do you go grocery shopping? How much do you spend on each grocery trip on average? How many people are in your household? We also asked whether reducing food waste and keeping track of ingredients was something they had been wanting to do. After the background questions, the first task was to add an item to the table and then find and delete an expired item. The second task was to update the information on an existing food item and then find and delete a soon-to-expire item. Though these tasks seem similar and simple, they gave us valuable insight into how real users would complete them, because each task could be completed in multiple ways. We closed the session with a few wrap-up questions: Were there any moments where the website gave you trouble and you could not figure out what to do? Were there any features of the website that stood out to you or that you particularly liked? Are there any additional features you wish the website had? Do you have any other thoughts or comments? Each user test took at most 30 minutes, and all participant information was kept confidential.
From our tasks, we wanted to learn how satisfying, useful, and memorable our application was for adding and removing food items. This matters because the core of the application is users entering data (food items) into a data table, and we know that logging items can feel like a chore that is hard to stay consistent with. We wanted to see whether the application does a good job of making users want to keep tracking their items, and where we could improve automation so it demands less work from the user.
Background questions
Task 1
Task 2
Wrap-up
From our user tests and findings, we can conclude that our prototypes require a few changes, but overall our application is easy to use, understandable, and useful. The background questions told us whether people keep track of their food items digitally or manually, how much they spend on each grocery trip, and how often they go. Most of our participants did want to keep track of their food items and reduce food waste. None of them kept track of their food items digitally, and most believed it would be easier to do so; one participant said they would like a phone app that keeps track of food items. The tasks showed us what we could improve to make the application easier, more useful, and more compelling. To our surprise, all of our participants struggled to locate the "add item" button, taking about a minute to add a food item to the table. We plan to fix this by making the button bigger and possibly changing its color and location. Another insight was that participants assumed the "add an image" button within the add-food-item form was optional when it was actually required; based on this feedback, we plan to make adding an image optional. From the wrap-up questions, we learned that one participant wanted expiration dates to auto-update, either from a database or based on the item's average shelf life. The main takeaway was that our participants were very pleased with the application and found it useful. All of them could see themselves using it long term, even in its current state.
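The auto-updating expiration suggestion could start with a simple lookup of average shelf lives. The sketch below is only an illustration, not code from our project: the `SHELF_LIFE_DAYS` table and `estimateExpiration` helper are hypothetical names, and a real version might pull the averages from a database or compute them from the user's past entries.

```typescript
// Hypothetical average shelf lives in days. In a real implementation
// these could come from a database or from averages of past entries.
const SHELF_LIFE_DAYS: Record<string, number> = {
  milk: 7,
  eggs: 21,
  bread: 5,
  apples: 30,
};

const DEFAULT_SHELF_LIFE = 14; // fallback for items not in the table

// Estimate an expiration date from the item name and purchase date.
function estimateExpiration(item: string, purchased: Date): Date {
  const days = SHELF_LIFE_DAYS[item.toLowerCase()] ?? DEFAULT_SHELF_LIFE;
  const expires = new Date(purchased);
  expires.setDate(expires.getDate() + days);
  return expires;
}

// Example: milk bought on May 1, 2024 would be estimated to expire May 8.
console.log(estimateExpiration("Milk", new Date(2024, 4, 1)).toDateString());
```

A pre-filled estimate like this could populate the expiration field in the add-item form, with the user free to override it, which would cut down the manual work our participants told us feels like a chore.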
Our main caveat was that all participants were from the same age range and had similar education and career paths. We also had only 5 participants, the minimum number generally recommended for a user test. If we were to do this again, we would want a larger group of participants from different age groups, occupations, and lifestyles to get more representative results. Another caveat was that we used our software project itself as the prototype, and some parts of the application were not completely implemented yet. This confused some users, who came into the test assuming everything in the application was fully functional. Next time, we would present only a prototype with all of its functionality working. A related caveat is that our users were tested on only part of what our project is supposed to do: the tasks involved only keeping track of ingredients, while another major part of our project is budgeting groceries. At the time we were preparing for this user test, we had not yet implemented the dynamic graphs and budgeting functionality. As a result, we gained insight into only some parts of our project rather than the whole.