Run Buddy Hero-01.png

Run Buddy Design Sprint

 

Challenge Brief

RunBuddy is an app that helps runners in cities find great routes for their next run. After a run, users have the option to rate it, and other users can view these ratings when looking for a route. The current version of the screen uses a 5-star rating system. Many users have complained that this rating system isn’t really helpful for finding a new route because it isn’t specific enough. My challenge is to design a better solution for the rating system.

*This design sprint was completed through Bitesize UX on a 5-day timeline.

My Role

My role in this challenge was lead product designer. Joe of Bitesize UX provided the research I needed to develop an effective solution. I also designed the illustrations and UI using the color palette provided by RunBuddy.


 

Synthesizing Research


User Interviews

What factors do you consider when deciding where you’re going to run?
interviews-03.png
 

Affinity Mapping

Based on the responses from the user interviews and the groupings from affinity mapping, I found that runners had three main areas of interest when choosing a run in their neighborhood: Safety, Crowds (whether from cars or other runners), and Difficulty. Other areas of interest that were less common but worth noting included proximity to a bike lane and proximity to coffee shops or other spots for a post-run snack.

Affinity Mapping.png
 

User Task Flow


 

I mapped out the user task flow. Here I realized I would need two different screen options: one for a rating of three stars or more and one for a rating of less than three stars. (I later added the cancel “x” option after usability testing; see below.) Mapping the flow helped outline the pages I would need to design for each step of the user’s decision process.
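If this flow were ever built, the branching could look something like the rough TypeScript sketch below. Only the three-star threshold and the cancel option come from the task flow above; the screen names, types, and function are hypothetical.

```typescript
// Hypothetical sketch of the rating task flow described above.
// Screen names and types are illustrative, not from the actual RunBuddy app.
type Screen = "positiveFollowUp" | "negativeFollowUp" | "home";

interface RatingInput {
  stars: number;      // 1-5 stars from the overall rating
  cancelled: boolean; // the cancel "x" option added after usability testing
}

// Three stars or more leads to the positive follow-up screen;
// less than three leads to the negative follow-up screen.
function nextScreen({ stars, cancelled }: RatingInput): Screen {
  if (cancelled) return "home";
  return stars >= 3 ? "positiveFollowUp" : "negativeFollowUp";
}
```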

 

Competitive Research


Lyft has a simple star system to show the overall rating, with option buttons below it. RunBuddy could use this sort of idea to give users the chance to share more information about their run, so users would see the overall rating plus more specific ratings below.

Airbnb’s rating screen includes bubbles with personalized responses. This could be a great way to showcase RunBuddy’s additional ratings, giving users a way to voice how safe or how crowded their run was. Airbnb’s multi-screen rating flow is easy to navigate without asking too much of the user, and it avoids text comment boxes while still gathering lots of information.

Uber does something similar to Airbnb with its compliment bubbles. The combination of a star rating and the bubbles gives all the information one would need to choose a great run and rate it.

 

Initial Sketches


Crazy 8s

To get some initial ideas down, I sketched out quick solutions via the Crazy 8s method. When reviewing my quick sketches, I found the following:

  • For overall ratings, I could use a 1-5 star method, but I would need an alternative method for gauging difficulty, crowds, and safety.

  • I would need two different outcomes for a 1-2 star rating vs a 3-5 star rating.

  • I would like to run an A/B test of a multiple-screen review versus a single-screen review to see which users preferred. Are multiple screens cumbersome, or do they make the info easier to digest?

 
crazy8s-01.jpg
 
 

High Fidelity Wireframes


Option A - Multiple Screen Rating

  • Option A breaks the ratings down into two screens. Why? It keeps the information visually simple for the user.

  • After choosing a rating, the user is prompted with quick-choice buttons based on their response. Why? This captures their review without asking a lot of the user.

  • The next screen asks the user for more detail on the route itself. Why? This addresses the interview responses about what runners wanted to know about a route.

lowfidelity-02.png

Option B - Single Screen Rating

  • Option B simplifies the rating to a single screen instead of multiple screens. Why? A one-stop shop for the review might be easier overall.

  • The comments section below expands only when tapped. Why? It gives the user the option to add more info, but only if they want to.

 
 
 

Usability A/B Testing


3/5 users preferred less screen swiping and a quick one-screen response over multiple screens

I ran a usability test of Option A (the multi-screen solution) and Option B (the single-screen solution), watching 5 users interact with the paper prototypes. 3/5 users preferred the single-screen option, citing the simplicity of fewer screens over a multi-screen flow that would take more time. Users who liked the multi-screen option said the extra space made the information easier to digest. I decided to go with the single-screen solution; see my testing notes below.

 
usability test-01.png
 
 

Final Prototypes


The final prototypes were made in Figma. I added simple line graphics to style the pages and kept the iconography consistent throughout the form. Below are the two versions of the prototype: the first shows a user input of 3-5 stars (a positive review), and the second shows a user input of 1-2 stars (a negative review).

 

 
FINAL SCREENS1-01.png
FINAL SCREENS 2-01.png
 

Next Steps


This was a great project in that I had a clear goal to achieve and could focus on one simple interaction. I was most surprised that the usability test pointed to a single-screen solution over a multi-screen solution. I figured users would like the simplicity of the multi-screen solution, but when I moved on to designing the single-screen version, I found ways to create “space” by guiding the user through each response without needing a “next” button. I think its success is that it gives the illusion of space without using a page-by-page sequence. Going forward, I would run another usability test on the final prototype, and I would do further testing on the quick-response buttons to make sure they are the best or most frequently used options, so users can rate a run easily without having to add a comment.