GameRec
Upgrading a product with mainly small tweaks, yet vastly improving KPIs.
My role
UX/UI Designer
Timeline
5 months (November 2020 - March 2021)
Team size
5 members
This project was initiated by the Chronic Coder founders (Joey Wong and Christopher Kok).
Important note: This page presents a summary of the project. Specific details are excluded to protect my work from potential plagiarism. The full case study/project brief can be made available.
What is GameRec?
GameRec is a video game recommendation app providing users with accurate and personalized game recommendations based on their preferences and gaming level. The app addresses two main issues with existing recommendation tools: inaccuracies in recommendations and lack of support for different types of gamers.
What are our competitors lacking?
  • There are few existing game recommendation engines, especially ones that consider multiple platforms.
  • For regular and experienced gamers, recommendations are mostly popular games they already know; niche recommendations are lacking (for more “casual” gamers, however, these mainstream recommendations can be useful, as they are less likely to know about current popular titles).
What will we do differently?
Our solution is twofold, focusing on both the backend and the frontend. Using insights gathered through user research, we aim to identify key gaps and opportunities for improvement.
The backend
Provide an API that takes user input and delivers recommendations using machine learning. Here, my focus will be on scraping data, collecting user input, and helping develop the recommendations that are sent back through the API.
Database is linked to the API, which is linked to GameRec.
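To make the backend contract concrete, here is a minimal sketch of what the request/response shape could look like. The endpoint path, field names, and types are assumptions for illustration; the actual GameRec API is not shown here.

```typescript
// Hypothetical request/response contract; names and types are illustrative only.
interface RecommendationRequest {
  titles: string[];      // games the user already enjoys
  platforms?: string[];  // optional platform filter, e.g. ["PC", "Switch"]
}

interface RecommendationResponse {
  games: { title: string; score: number }[]; // score = model confidence
}

async function getRecommendations(req: RecommendationRequest): Promise<RecommendationResponse> {
  const res = await fetch("/api/recommendations", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json();
}
```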
The frontend
Provide an interface where users can input information that helps drive the recommendations. The UI will also be the place we present recommendations in an easily consumable manner for any user type.
GameRec is linked to the user interface elements, which are linked to the user.
Meet the individuals we’re supporting
For the recruitment process, I ensured a balanced representation across all experience levels and genders, ranging from amateur to hardcore gamers. A wide variety of users would give us a better overview of where other recommendation tools fail.

After completing the recruitment and interview processes, I identified the following 5 personas.
  • Persona 1: the follower.
  • Persona 2: the newbie.
  • Persona 3: the selective gamer.
  • Persona 4: the experienced gamer.
  • Persona 5: the dedicated gamer.
Uncovering insights
Following a competitive audit to identify common patterns, I developed these sketches and low-fidelity wireframes, observing that:
  • Similar apps and tools place high importance on the search bar and button, giving both a centered placement and a large depiction.
  • Most game recommendation platforms didn’t provide copy explaining the value of their tool.
  • Most platforms use the same method of generating recommendations: an unnecessarily lengthy process of adding games and filtering the results, creating detours that delay users from reaching their goal.
UI sketch for brainstorming.
Low fidelity wireframe.
Low fidelity wireframe for mobile.
Sketch during brainstorming session.
Low fidelity wireframe.
Sketch showing the product’s unique value.
Sketch of the main product feature.
Shaving off detours
Many competitors split the process of getting recommendations into several sub-steps by asking for excessive user input. While such input can assist the user, these steps greatly increase the time to receive actual game suggestions, and the resulting detours discouraged some users from continuing on the website.

After noting these detours and their negative impact on the user flow, I created an improved model that keeps the user journey as short as possible. The aim was to balance assisting the user sufficiently with delivering a prompt recommendation.
Control - Main mechanism
Initially, we followed a common pattern seen in many recommendation tools, allowing users to input as many games as they wanted. This resulted in a potentially endless loop where users, lacking clear instructions, were left wondering, 'Should I add more titles?' This deliberate lack of restrictions often led to confusion. The following sections explore how we addressed this issue.
Flowchart with 'Add game' button leading to 'Add more games?' decision and looping back on 'Yes.'
Control - Chips and Home Page Restriction
This version adds extra customization through chips, which we included because other platforms let users filter and view games by category and genre. However, I kept the number of filters to a minimum. Additionally, users can only start the process from the home page, a pattern we found on other platforms and adopted.
Flowcharts showing decisions to filter a game list or restart the process, with possible outcomes like viewing the recommended games page or using filter chips.
Control - Onboarding and Loading Screens
Finally, the onboarding and loading screens give users quick tips to fill transition times. These screens are placed between actions, such as between inputting games and receiving the recommended list. We included them based on early user interviews, in which users reported mixed experiences with recommendation tools and showed some confusion when asked about our competitors.
Flowcharts showing steps to use the main feature.
Minor changes, major rewards
Main mechanism
During the early prototyping stage, we reduced the number of titles the user can input. Users repeatedly voiced disapproval of the competition’s lengthy method of generating recommendations, so shortening the process and avoiding repetitive actions became crucial (Hick’s Law), while still allowing for sufficient accuracy.
Control
This early iteration had no cap on user input. This resulted in an unnecessary number of titles, especially since one of our initial goals was to optimize our API to perform with fewer user inputs.
Excessive inputs highlighted in red.
Variation
We set a fixed cap of 5 inputs per recommendation. This quick fix resolved the issue of users being caught in an endless loop (fig 1.0), and it yielded the highest reward-to-effort ratio.
Better version with fewer inputs.
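As a rough sketch of how such a cap can be enforced on the input side (the function and constant names below are hypothetical, not GameRec’s actual code):

```typescript
// Hypothetical enforcement of the 5-title cap on the input side.
const MAX_TITLES = 5;

function addTitle(selected: string[], title: string): string[] {
  if (selected.includes(title)) return selected;       // ignore duplicates
  if (selected.length >= MAX_TITLES) return selected;  // cap reached: the loop is bounded
  return [...selected, title];
}

// Submission is allowed as soon as at least one title is entered.
const canSubmit = (selected: string[]) => selected.length >= 1;
```

The hard upper bound removes the 'Should I add more titles?' ambiguity while still giving the model enough signal.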
Chips
User feedback indicated that too many filtering options could be overwhelming. To address this, we focused on keeping the filtering options simple and intuitive. This approach ensured that users could easily find relevant recommendations without feeling bombarded by too many choices.
Control
In this early iteration, we implemented chips for filtering and sorting recommended titles, following a common pattern used by competitors (fig 2.1). One slight distinction, however: I kept the amount of filtering to a minimum.
Many options to filter inputs.
Variation
Despite minimizing the filters, we eventually removed the chips completely. While competitors' products feature advanced filters, these can overwhelm users unfamiliar with the titles. By eliminating these options, we reduced both interaction and mental friction, aligning with our goal of highlighting more niche titles.
Only the most important options are shown.
Homepage restriction
Feedback revealed that restricting the recommendation process to the home page was causing significant user frustration. Users wanted more flexibility in how and when they could get recommendations, without the hassle of restarting from the beginning each time.
Control
Users could only start the process from the home page, as seen on other platforms (fig 2.2). This forced users to restart the entire process each time they wanted new recommendations.
One unique way to get recommendations.
Variation
We added the ability to get recommendations at different stages of the recommendation flow. This change addressed user frustration with having to restart from the beginning and re-enter all titles: users can now swap a single title without removing and re-entering all five, as sketched below.
Multiple ways at different steps allow using the main feature.
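A minimal sketch of the swap, assuming the selected titles are held in a simple array (names are illustrative):

```typescript
// Hypothetical in-place swap: only one slot changes, so no full restart is needed.
function replaceTitle(selected: string[], index: number, newTitle: string): string[] {
  const next = [...selected];
  next[index] = newTitle; // the other titles are untouched
  return next;
}

// e.g. replaceTitle(["Hades", "Celeste", "Stardew Valley"], 1, "Hollow Knight")
// swaps only the second slot.
```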
Onboarding and Loading Screens
Initial user testing highlighted that the onboarding and loading screens were seen as unnecessary.
Control
This early iteration had unnecessary screens (fig 3.0); early user feedback showed a high level of frustration.
Pages that didn't provide any value.
Variation
This new version simplifies the steps: removing the onboarding and loading screens shortened the completion time. This iteration increased our ratings across different user tests.
Unnecessary pages are removed.
Control
Although removing the loading screens sped up the search process and earned us good ratings in user tests, it introduced a new issue: because of occasional API lags or timeouts, users were uncertain whether the search had actually started.
Technical issues during search.
Variation
To address this, we added a small loading animation in a container just below the search button. This compromise was a cheaper and quicker way to mask the system’s inability to provide instant or near-instant recommendations.
Using UX to solve the technical issue.
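A rough sketch of this pattern, reusing the hypothetical getRecommendations helper and RecommendationResponse type from the earlier backend sketch (the element ID and renderer below are also assumptions):

```typescript
// Show a small spinner below the search button while the API call is pending.
async function onSearch(titles: string[]): Promise<void> {
  const spinner = document.querySelector<HTMLElement>("#search-spinner");
  if (spinner) spinner.hidden = false; // immediate feedback: the search has started
  try {
    const result = await getRecommendations({ titles }); // may lag or time out
    renderResults(result);
  } finally {
    if (spinner) spinner.hidden = true; // always clear the indicator, even on errors
  }
}

function renderResults(result: RecommendationResponse): void {
  console.log(result.games); // placeholder: the real app renders the games list here
}
```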
Let’s prove the positive impact in numbers!
Control
First medium/high fidelity prototype, used for all initial interviews and tests.
Early prototype highlights.
Variation
High fidelity prototype after a round of usability testing, interviews and two rounds of SUS.
Newer prototype highlights.
Feedback
+ 150%
150% increase in the number of tasks easily completed in the variation compared to the control, measured by task completion rate (TCR, also called task success rate, TSR).
+ 31%
While the System Usability Scale (SUS) has many limits, we saw a significant improvement using it: first an 8% improvement (control → variation 1), followed by a 31% improvement (variation 1 → variation 2). A sketch of the standard SUS scoring appears after these figures.
+ 214%
Qualitative analysis showed a 214% increase in positive quotes.
- 76%
Qualitative analysis also showed a 76% decrease in negative quotes.
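For reference, here is the standard SUS scoring method, which maps the 10 questionnaire items (each rated 1 to 5) onto a 0-100 scale; this is the published formula, not GameRec-specific code:

```typescript
// Standard SUS scoring: odd-numbered items (positive statements) contribute
// (rating - 1); even-numbered items (negative statements) contribute (5 - rating);
// the sum is multiplied by 2.5 to yield a score between 0 and 100.
function susScore(ratings: number[]): number {
  if (ratings.length !== 10) throw new Error("SUS needs exactly 10 ratings");
  const sum = ratings.reduce(
    (acc, r, i) => acc + (i % 2 === 0 ? r - 1 : 5 - r), // i = 0 is item 1 (odd-numbered)
    0
  );
  return sum * 2.5;
}

// e.g. susScore([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]) === 100 (best possible answers)
```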
Moving closer to viability: shaping our style
Visual Design Impact: Survey Results
In the previous tests, we received negative feedback regarding our visual style and the lack of imagery across the platform. I ran a survey to target these areas. Based on the resulting insights, we made the following decisions to finalize our MVP:
  • Colors: using yellow and orange shades to evoke happiness and entertainment. Using blue and green shades to evoke calmness, peacefulness and comfort.
  • Fonts: Teko features angular letterforms reminiscent of classic games from before the 128-bit console era, aiming to build trust with users. It is paired with Poppins to maintain a calm effect. Both fonts are open source and available through Google Fonts, which helps ensure good performance and accessibility.
  • Logo: Inspired by pixel art style and the ambiance of arcade game venues to evoke a sense of nostalgia and reinforce feelings of familiarity and comfort.
Style guide color scales.
Style guide color scales, semantic.
Typography: Teko and Poppins font families.
Control
Main feature appearance (first high fidelity prototype).
Search bar and results with a score for accuracy.
Variation
Our MVP’s appearance; the crucial issues found during the research cycle are now fixed.
New search bar and results with filtering options.
+ 76%
We observed a 76% increase in positive responses.
- 66%
We noted a 66% decrease in negative responses.
Accessibility
Small improvements that competitors overlooked, leading to significant quality-of-life (QoL) gains.
Contrast
WCAG AAA and WCAG AA contrast are achieved across the web app.
List of buttons with good contrast.
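The checks follow the standard WCAG 2.x contrast formula, sketched here for reference (the color values are examples):

```typescript
// Linearize an 8-bit sRGB channel, per the WCAG 2.x definition.
function linearize(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color.
function luminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio ranges from 1:1 to 21:1.
// Normal text needs >= 4.5 for AA and >= 7 for AAA.
function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// e.g. contrastRatio([0, 0, 0], [255, 255, 255]) === 21 (black on white)
```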
Keyboard navigation
Keyboard-only and voice navigation are now possible.
Keyboard directional keys and tab key next to a list.
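A minimal sketch of arrow-key navigation using the roving-tabindex pattern; the markup, roles, and selectors are assumptions, not GameRec’s actual code:

```typescript
// Roving tabindex: only one list item is in the tab order at a time,
// and the arrow keys move focus within the list.
function enableArrowNavigation(list: HTMLElement): void {
  const items = Array.from(list.querySelectorAll<HTMLElement>("[role='option']"));
  items.forEach((item, i) => (item.tabIndex = i === 0 ? 0 : -1));

  list.addEventListener("keydown", (e: KeyboardEvent) => {
    const current = items.indexOf(document.activeElement as HTMLElement);
    if (current === -1) return;
    let next = current;
    if (e.key === "ArrowDown") next = Math.min(current + 1, items.length - 1);
    if (e.key === "ArrowUp") next = Math.max(current - 1, 0);
    if (next !== current) {
      items[current].tabIndex = -1;
      items[next].tabIndex = 0;
      items[next].focus();
      e.preventDefault();
    }
  });
}
```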
Button clickability
Small spacing adjustments improved user interaction while reducing user frustration.
Old versus new button size comparison.
Alt text
Alt text was implemented wherever possible across the interface.
Picture of a game and alt text code.
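A tiny illustrative sketch (not the production markup) of how a game cover can carry descriptive alt text:

```typescript
// Hypothetical helper: every game cover gets descriptive alt text for screen readers.
function gameCover(title: string, coverUrl: string): HTMLImageElement {
  const img = document.createElement("img");
  img.src = coverUrl;
  img.alt = `Cover art for ${title}`; // read aloud instead of being skipped
  return img;
}
```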
The product in action
Allowing users to get recommendations from various sources, eliminating the need to return to the home menu to start the process.
Removing the onboarding reduced friction, making it easier to jump right into using our product.
Making the recommendation process in our MVP quicker and more accurate by reducing the number of inputs. Various loading/splash screens were removed as well.
Full demo of how the recommendation tool works in just under 2 minutes. It shows how we reduced navigation between games to two clicks (return → open), eliminated the need for external links, added a loading animation that masks small time-outs, and added more in-platform game details.
Reflecting on the process
By keeping it simple and listening closely to user feedback, we managed to achieve measurable results without excessive API tweaks or completely revamping the entire core mechanism.

This suggests that many game recommendation tools could easily be improved, provided user testing is implemented.

While all recommendation tools have their strengths and weaknesses, adding a feedback feature to validate the quality of recommendations could improve their accuracy over time.

Considering that our testing and alterations were low-budget, GameRec has potential for further improvements on a higher scale of development.
What could have been done better?
The following areas could have been dealt with more efficiently:
  • Competitive audit: More time could have been spent on analyzing other features.
  • Underestimating rapid iteration: I could have created a bigger variety of versions to test more of our ideas.
  • Post-deployment period: Due to our small team size and the project's nature, we were unable to find a replacement for one of our team members, which stopped our progress from advancing to the next stage. We could have reached out to contractors or sought alternative solutions for hands-on support.