Interface design for millions of learners
The usability of Quizlet as a learning tool for students
📌 Project Abstract
- Project: Term project on Quizlet for a Human Factors course.
- Timeframe: Six weeks
- My Role: UX researcher & project manager.
- Team: Two other students and I collaborated on this project; we each served as both UX researcher and project manager.
- Methods: Agile method, User Requirements Interview, Usability Testing, & User Experience Questionnaires
- Tools: Miro, Keynote, QuickTime Player, Google Sheets, & Google Drive
- Deliverable: Slide deck and written report.
- Impact: Awarded best presentation and project for the course.
Project Overview
🚀 Client kickoff
Learning environments are always accompanied by tools: from the Cartesian coordinate system to diagrams, various tools have historically afforded different learning processes. Quizlet is undoubtedly one of the largest platforms for online student learning. Over 60 million active learners from 130 countries practice and master more than 500 million study sets covering every subject and topic.
🔎 The problem
My collaborators and I were motivated to:
- Explore which features Quizlet users rely on most and how they experience them.
- Evaluate three main tasks derived from those frequently used features.
- Ideate on how Quizlet can improve students’ learning outcomes.
🛠️ The solution: methods, research process, and toolbox
User Requirements Interview
- Participants
- To understand the frequent feature uses and experience of experienced Quizlet users, we interviewed 9 individuals. While most participants were juniors in university, 20% were transfer students and 20% were graduate students, demonstrating use of Quizlet across academic levels. Their self-reported expertise on a scale of 1 to 10 averaged 8.11. Although high-school users make up a large portion of Quizlet’s user population, recruitment limitations for this project resulted in a convenience sample; the descriptions and conclusions therefore generalize only to the population these participants represent.
- The interview sections:
- On-boarding questions, where we introduced ourselves and gave an overview of the project. We also asked participants to introduce themselves and to indicate whether they were students, whether they worked, and more.
- Utility-based inquiry, where we asked participants to describe their use of Quizlet. This included questions like “When was the last time you used Quizlet?”
- Subjective-based inquiry, where we asked participants about their preferences and attitudes toward Quizlet. This included questions like “What are some frustrations you have while using Quizlet?”
- Demonstration-based inquiry, where we asked participants to demonstrate how they would use Quizlet to accomplish a specific task or goal. This included asking participants to spend a few minutes exploring Quizlet and showing us the flashcard sets they had.
- 📊 User Requirements Interview Results
- Most of the participants reported that they had never made a flashcard set themselves, even though they use Quizlet often.
- Participants often expressed that they would use Quizlet on public transit, in time between work and class, or while eating a meal.
- A frequent use was to treat Quizlet as a search engine for answering online quizzes or homework. Users often upload the question banks that textbook publishers provide to instructors into their flashcard sets. Thus, when instructors post online quizzes or homework assignments based on the textbook material, students who copy-and-paste a question into Google are brought to a Quizlet set.
Usability Testing
- Participants
- A total of 10 participants were recruited for the usability testing: 90% were undergraduate students and 10% were graduate students. Their self-reported expertise on a scale of 1 to 10 averaged 7.05. Given the recruitment limitations for this project, none of the participants were in high school, so the descriptions and conclusions generalize only to the population these participants represent.
- Tasks
- Task 1: Participants were asked to create a set of flashcards from a provided Excel spreadsheet by importing the spreadsheet into a new set.
- Task 2: A highly frequent use of Quizlet, found in the user requirements interviews, was as a search engine for answering quiz and homework questions. In task 2, we therefore gave participants a question and asked them to answer it using Quizlet. The question was predetermined to be answerable on Quizlet; it was presented on an index card, and participants were asked to read it aloud to confirm they understood it.
- Task 3: Participants were told to imagine that they were enrolled in a specific class at a specific university. They were asked to search for a class flashcard set, make a copy of it, and customize it for later use.
- 📊 Usability Testing Results
- Participants consistently struggled with task 1, revealing a mismatch between users’ mental models and the interface’s metaphorical use of the term “import.” The feature presents importing as analogous to the physical movement of goods (food, materials, etc. by ship, train, or plane): bringing resources from another system into the current one. Participants’ mental models matched the many systems that use “import” to mean bringing an entire file from one system into another. Quizlet’s feature does not deliver on that metaphor, since it does not accept a file directly; users were misled until they realized they had to copy-and-paste the information over instead.
- After completing task 2, none of the participants were entirely confident in their understanding of the concept of interest (Gestalt grouping principles). Since this task simulated the frequent use of Quizlet as a search engine for questions, many participants seemed overwhelmed by how information was presented and showed fatigue from information overload. They consistently said they looked for simpler sets of information rather than the superfluous study sets offered to them. This reveals a tension between Quizlet’s mission and the user experience of the system: Quizlet’s search engine matches the query string against set titles, so the results prioritize the user finding sets rather than finding information. This leads to an interface weakness in the mapping between the system and the real world. Users expected the search engine to return information in a familiar layout, but were provided with flashcard sets that were difficult to use. In the real world, when looking for an answer, people default to mediums with a textual layout (textbooks, dictionaries, etc.); they rarely look through flashcards to find an answer. Supporting this flaw in mapping, participant Z stated that he expected to find information through the class feature, not the sets, since the class feature matches his mental model of a class, “where people learn new information.” Participant Z, and others, were surprised after struggling to find information buried in a set; all of the users expected to find the information directly rather than having to weave through sets to get there.
- In task 3, participants were generally successful in finding the course of interest, indicating that their expectation of finding a course was met. The system offered enough leeway for exploring how to customize a set, and most participants were able to complete the task this way. While it took users some time to finish, our observations suggest this task produced the fewest errors overall.
User Experience Questionnaires
- Participants:
- Participants were the same as in the usability testing.
- Interview:
- Participants were asked to rate, on a scale of 1 to 7, how well each given word described Quizlet.
- 📊 User Experience Questionnaires Results
- After all participants completed each task, they rated descriptor words on the 1-to-7 scale. Positive-valenced words (e.g., “fast,” “organized,” and “expected”) averaged 5.44 with a standard deviation of 0.37, indicating they fit the interface strongly. Negative-valenced words (e.g., “complex,” “annoying,” and “inconsistent”) averaged 2.7 with a standard deviation of 0.49, indicating a poor fit (see Table 3 and Appendix D for more information on this User Experience Questionnaire). After everyone completed the tasks, common words used to describe Quizlet were “helpful, massive, content, and confused.”
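The summary statistics reported above (a mean rating per word group, plus a standard deviation across words) can be reproduced with a few lines of Python. The ratings below are hypothetical placeholders to illustrate the computation, not the study’s actual data:

```python
from statistics import mean, stdev

# Hypothetical 1-to-7 ratings per descriptor word (NOT the study's real data).
ratings = {
    "fast":      [6, 5, 6, 7, 5, 6, 5, 6, 5, 6],  # positive-valenced
    "organized": [5, 6, 5, 5, 6, 6, 5, 5, 6, 5],  # positive-valenced
    "complex":   [3, 2, 3, 2, 2, 3, 3, 2, 3, 2],  # negative-valenced
    "annoying":  [2, 3, 2, 2, 3, 2, 3, 2, 2, 3],  # negative-valenced
}

def summarize(words):
    """Mean rating per word, then mean and SD across those word means."""
    word_means = [mean(ratings[w]) for w in words]
    return mean(word_means), stdev(word_means)

pos_mean, pos_sd = summarize(["fast", "organized"])
neg_mean, neg_sd = summarize(["complex", "annoying"])
print(f"positive: mean={pos_mean:.2f} sd={pos_sd:.2f}")
print(f"negative: mean={neg_mean:.2f} sd={neg_sd:.2f}")
```

Whether to use the sample (`stdev`) or population (`pstdev`) standard deviation depends on how the original analysis treated the word list; the sketch assumes a sample SD.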
💭Project Takeaways
User Expectations and Mental Models: The project highlighted the importance of aligning a system’s interface with users’ mental models and expectations. Task 1 revealed that participants struggled with the term “import” due to a mismatch between the system’s metaphorical use and the user’s mental model. Understanding and adapting to user expectations is critical for a seamless user experience.
Information Accessibility: Task 2 exposed a challenge where users felt overwhelmed by the presentation of information. Participants expected to find information laid out in a familiar text format, similar to textbooks, rather than within flashcard sets. This underscores the need to prioritize information accessibility and layout, especially when using an application as a learning tool.
Task Success Rates: The project’s usability testing revealed varying task success rates. While Task 1 had challenges, Task 3 was relatively error-free. This suggests that certain features may need more user-friendly design improvements to enhance task completion times and overall user satisfaction.
User Experience Ratings: The User Experience Questionnaires demonstrated that participants associated Quizlet with positive-valenced words like “fast,” “organized,” and “expected.” However, negative-valenced words like “complex” and “annoying” also emerged, albeit less frequently. These ratings reflect users’ perceptions of Quizlet’s strengths and weaknesses.
Project Impact: The project achieved recognition as the best presentation and project for the course, indicating the significance of the findings and their potential to drive improvements in Quizlet’s usability and user experience. This success highlights the value of human factors research and the potential for its application in real-world contexts.
These takeaways serve as valuable insights for enhancing Quizlet’s usability and effectiveness as a learning tool for millions of students worldwide.