In January 2020, I had the pleasure of being selected as one of 11 students for Fi @ UCSD, a student-run UX agency that partners with local clients to help them improve the quality of their services. My task was to work on improving Get It Done, an app run by the City of San Diego's Performance and Analytics department that allows city residents to report issues in their communities.
The project was divided into 3 main stages: user research, solution research, and design + client presentation. But because of the onset of COVID-19 and the abrupt shift to working from home, the project stalled completely during the design phase (the last 5 weeks of the project). This is when I took the initiative to lead all project groups and bring the project to completion.
The Get It Done (GID) platform is a phone app and desktop website that lets San Diego residents report issues (such as potholes and graffiti) to the city, which in turn tries to resolve them. The city had noticed that many residents tend to report the same issues, and had received complaints from users about not hearing back on the issues they had reported. They reached out to Fi to do a deeper dive into the app and discover the extent of these problems.
Our solution was a complete redesign of the app experience that helped solve issues with duplicate reporting, allowed for more detailed communication between the city and its residents, and addressed other usability issues to create a more efficient and intuitive experience. We presented our solution to the City of San Diego, and they were excited to see how they could use our recommendations to make changes to the current app.
Roles & Responsibilities
UX research (discovery + validation)
UI design (iOS)
Pen and paper
Zoom (remote usability testing)
Timeline: 14 weeks
Discovery user research (3 weeks)
Expert review (2 weeks)
Paper prototyping and testing (2 weeks)
High-fidelity design (3 weeks)
Remote usability testing (2 weeks)
Client presentation (2 weeks)
Designing for Community-Based Reporting
I then worked with 2 other consultants to audit the current app, using heuristics to guide us. This allowed us to understand how specific design problems had contributed to the usage patterns we had discovered in our research. We focused mostly on the mobile app, since we had learned that most users used Get It Done on their phones, but we also compared it with the Get It Done desktop website to see if either did anything better than the other.
Below are some of the findings related to the experience of looking at recent reports, with the usability issues we had identified and our suggested solutions. The analysis allowed us to better understand why users didn't look at recent reports before making a new report. The full report with all the findings can be found here.
A lack of clear information hierarchy on the report page makes it seem cluttered and prevents users from efficiently scanning through details.
An example of an illegal dumping report.
Typography improvements to reduce the amount of time it takes users to understand the specific details. This helps create more scannable content and allows users to quickly determine whether their issue has already been reported.
A lack of user control prevents users from being able to search or filter through reports to see the categories of reports they care about (ex. potholes).
While the app doesn't have a filter, the "view reports" feature on the website allows users to select specific categories of issues.
A filter or search bar that allows users to filter all reports by the categories that users already use when making a new report (which creates a consistent mental model). A search bar can also be valuable by giving users flexibility in terms of how they view existing reports.
Our research was divided into 3 different teams looking at 3 different parts of the app experience. I was assigned to the pre-reporting team, which focused on the experience users had before reporting an issue. Since I had the most experience on my team in creating research studies and plans from past projects, I guided the team on which research methodologies to use and why.
The platform’s “recent reports” feature allowed users to see reports that other residents had recently created. The main thing we wanted to investigate:
How do users know if an issue has already been reported? Do users check recent reports before making a new one, and why or why not?
The "Recent" reports feature in the current app experience.
To gain insight into the demographics and general usage patterns of the users of GID, we created and sent out a survey. In total, we got responses from 32 users. We also conducted 3 usability tests, and I managed all the participant recruitment. Our goal was to move past our quantitative survey responses and instead observe how users actually interacted with the app.
With the survey and usability testing results together, we converged on three main takeaways:
Prototyping & Testing
Once we were able to resume the design process, we spent 3 weeks prototyping, followed by 2 weeks of user testing. I managed the different teams as we all worked on designing different parts of the experience, and helped our primary visual designer Katie in creating the design system. Once we had created our initial prototype, I scheduled and led 4 remote usability testing sessions, which helped us validate our designs and iterate further.
Consultant Irene and I conducting a remote usability test with one of our participants.
Below are the evolution of, and brief explanations for, the core aspects of the experience we aimed to redesign.
Creating a new report
Evolution of the flow for creating a new report.
Original: In the original "new reports" experience, the user selects a category for their issue, then enters details to submit their report. The process is simple, but the user is never prompted with similar reports that have already been made.
Explorations: We explored prompting users with similar reports at different parts of their journey in the new report creation process.
What we learned: Users' first response when they see an issue is to take a photo and submit it to Get It Done. We should have users do this first, and only ask for the rest of the report details if no similar report already exists.
After user testing
Before user testing
In the first version, users would input their address when starting to make a new report. But after participants told us that they tend to immediately take a photo and that the address is often pre-filled (using their current location), we combined these two steps.
Viewing existing reports
Evolution of the features for the redesigned All Reports tool.
Original: In the original "recent" reports experience, all of the recently-made reports are clumped together in a list, map, and picture view. The labels to switch views were unclear, and there was no way to only see the reports that users care about.
Explorations: We thought a filter and searching system would help give users more control over how they see the issues. We explored what the different filters should be, and how they should be laid out to establish the best hierarchy in accordance with what users care about the most when scanning through reports.
What we learned: Most users would rather search than filter, so it's important to make it clear to users what they can search for. When filtering, users care most about filtering by specific categories of issues, so this should be prioritized.
After user testing
Before user testing
Participants' eyes were drawn to filtering by status, due to the large radio buttons. The issue category, however, was the most important filter, so we decided to use a dropdown filter for both the issue and the status.
Keeping track of reports
Evolution of the features for the redesigned My Reports tool.
Original: In the current "My Reports" tab, users can see the reports they made, but there is no sense of communication on the progress of the report. The status label can change without any update or reason.
Explorations: We explored how to integrate reports that users had indicated they had also encountered (when making a new report) with the reports they had submitted themselves. We debated between navigation elements (tab menu vs. toggle button) and copy to help separate the two categories of reports. We also tried adding a timeline to make it clear to users where their report was in the process.
What we learned: When introducing a new concept, such as giving users the ability to indicate that they "also encountered" an issue, it's important for terminology to be used consistently throughout the experience to help solidify the user's new mental model. We also learned that users really liked the timeline idea to help improve communication.
After user testing
Before user testing
Users didn't understand the terminology behind "shared reports", so we changed it to "also encountered" to reflect the language used in the new reporting process. Most users also missed the toggle bar button when asked to switch between the two categories, so we opted to use a tab menu controller instead.
Final Design Proposals
Once the user has entered the address and picture of the issue they are reporting, as well as selected the category, they will be prompted with any similar issues that already exist.
A full modal sheet with an alert was used to direct the user’s attention and help them determine whether the existing reports shown covered the same issue they had encountered.
If the user clicks on one of the reports, they will see it in more detail. They will then be able to work together with other residents and add additional details to the report. This solves the pain point of users having to make a whole new report if they find one that is missing details or not up-to-date.
The blue label on the right side shows users how many people have indicated that they’ve encountered this issue, a concept that made sense to most users in testing.
Giving users control when looking at existing reports
Even though we wanted to bring in similar existing reports at the time of making a new report, the client still wanted to allow users to see all the past reports in one place. To improve the usefulness and efficiency of this feature, we designed a search and filtering feature.
Users can search or filter by certain categories of reports, status, or address. A date feature allows them to look at reports within a certain time period that they are interested in.
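As a rough sketch of the filtering logic described above (a hypothetical model, not the production code — the `Report` fields and the `filter_reports` helper are our own assumptions), each provided criterion simply narrows the result set:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Report:
    # Hypothetical fields, mirroring the filters users can apply.
    category: str
    status: str
    address: str
    created: date

def filter_reports(reports, category=None, status=None,
                   address_query=None, start=None, end=None):
    """Return only the reports matching every criterion that was provided."""
    results = []
    for r in reports:
        if category and r.category != category:
            continue
        if status and r.status != status:
            continue
        if address_query and address_query.lower() not in r.address.lower():
            continue
        if start and r.created < start:   # date-range lower bound
            continue
        if end and r.created > end:       # date-range upper bound
            continue
        results.append(r)
    return results
```

For example, `filter_reports(reports, category="Pothole", start=date(2020, 1, 1))` would return only pothole reports created in 2020 or later; leaving a parameter as `None` skips that filter entirely.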
The cards containing each individual report preview were also redesigned to make it easier for users to scan through and process the information. The blue icon on the right shows users how many residents have indicated that they’ve encountered that particular issue.
The status labels were made bigger to draw more attention to them. Different colors were also used to help users distinguish between the different states better.
Staying updated with report progress
When a user’s report changes status, the user will be notified in the updates section of My Reports. The reports that users have "also encountered", but not actually created themselves, are also included in My Reports to build a sense of shared ownership.
The tab menu allows users to switch between seeing the reports they've submitted, and the reports that other users have made but they've "also encountered".
Users are also able to add additional details to their own reports. This is useful because users had said they had no way to give the city new information when something about their issue changed.
Adding updated information gives the city more context on the issue. The city’s timeline for the issue is also shown, with the current state highlighted in yellow.
Converging onto the Problem
While I worked on auditing the app with my group, the second group worked on conducting a competitive analysis with other reporting and listing apps (NYC 311, Zillow), and the third “empathy” group created personas and storyboards using the initial user research insights.
We now had a lot of insights, but what exactly was the core problem we wanted to address? As a whole team, we met back together to position ourselves around this problem by looking at the original user flow of the app.
The app's current user flow allows the user to go down two separate pathways.
After visualizing the current flow, it was easier to see how the segregation of creating a new report from viewing existing reports led to users not looking at existing reports first. We could also see the "dead end" that occurs after a user makes a new report but doesn't get any sort of update. This leads to a sense of ambiguity, which in turn leads to user frustration over not hearing back on reports they created.
With this bigger-picture understanding, we then discussed a new user flow for the app.
In this optimal high-level user flow, users would now be prompted to see similar reports to the report that they were making, if the system was able to determine that one existed.
The key change that was being explored was creating a way to show similar existing reports to users at the time of making a new report. By integrating these existing reports, we would be able to remove the segregation between “recent reports” and “new reports” as two distinct workflows. We also knew that showing users an update, whether they made their own report or added to a new report, would be key to improving satisfaction with the app.
With that, we converged onto the core problem statement we wanted to address in our redesign:
How might we integrate the process of looking at existing reports with the process of making a new report, as well as improve communication between community members and the city?
Ideation & Concept feedback
Sketches from our Crazy 8’s session.
After we used the Crazy 8’s ideation method to brainstorm as many concepts as possible, we created a site map to get a better picture of how all the features of the app could be better integrated in our redesign. This also gave us a tangible artifact that could be shared with the project managers and the client.
The page hierarchy of our proposed redesign, as shown by this site map.
We agreed that it would make sense to bring in existing reports similar to the one a user is reporting, after they enter the address and category of their report. That way, if an existing report with the same category and a nearby address was detected, the user would be prompted to see that report, nudged to explore it, and then able to indicate to the app that this was what they meant to report.
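To illustrate how such a match might be detected, here is a minimal sketch, entirely our own assumption rather than the city's actual implementation: it flags existing reports in the same category within a small radius of the new report's location, using the haversine great-circle distance. The `find_similar_reports` name and the 100-meter radius are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def find_similar_reports(new_report, existing_reports, radius_m=100):
    """Existing reports with the same category within radius_m of the new one."""
    matches = []
    for r in existing_reports:
        if r["category"] != new_report["category"]:
            continue  # only prompt with the same kind of issue
        d = haversine_m(new_report["lat"], new_report["lon"], r["lat"], r["lon"])
        if d <= radius_m:
            matches.append(r)
    return matches
```

If `find_similar_reports` returns anything, the app would show those reports before letting the user finish a brand-new one; an empty list means the user proceeds as normal.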
We wanted to quickly test this new flow with users to see if it made sense before taking it to higher fidelity. I reached out and conducted user testing on a paper prototype with 2 users. Apart from providing minor suggestions, they both liked the idea of showing similar existing reports when making a new report, so that in that case they wouldn’t have to make an entire report of their own.
A picture of myself conducting user testing on our paper prototypes.
Presentation & Impact
With all the design proposals complete, we compiled all our assets to make the handoff as easy as possible for the City of SD. One subgroup worked on creating a sheet of all the technical integrations needed to help make our design ideas actually feasible. Another worked on documenting all the changes and their explanations in a compiled Excel sheet. I led the third subgroup working to create a final report of the entire project process. Check it out here!
We then presented our solution to key stakeholders from the City in a Zoom meeting. They liked that many of our ideas helped solve the issues they had approached us with. They recognized that communication with residents had been difficult, and that they could use our timeline idea as a framework for a more step-by-step method of communicating with residents. They also thought the system of matching similar reports to new reports based on address and category was a unique solution they had never thought of, but saw how it could help reduce the number of duplicate reports.
Some screenshots from the final presentation. Check it out here.
The biggest priority for users is to get their issues resolved, so they need to have a better awareness of the status of the issues they report.
Looking at recent reports and making a new report are completely separate processes, which explains why 71.9% of users don’t check recent reports before creating new reports.
There is no way for community members to solve issues together - if a user sees an existing report that is missing details, their only option is to make a new one.
A Sudden Halt Transitions into Project Leadership
Just as we were about to start working in Figma and create wireframes based on the positive feedback, the COVID-19 pandemic hit in full force, and the majority of students at UCSD moved back home. For about 2 weeks the project stalled completely, and there was a lot of uncertainty about whether it would even be completed.
I knew we had worked hard on our project, and was upset that we had to stop just as we finally started designing. So I decided to take the initiative to ask who was still able to work on the project, and then scheduled a meeting to discuss what we still needed to do. From there I became the unofficial project lead, as I set and planned meetings, delegated tasks, and listened to and worked with the rest of the team members to deliver a solution that would bring value to both users and our client.
In our app, we wanted to design for familiarity by leveraging the existing style guide from the current app. However, we had noticed some inconsistencies in how certain fonts and colors were used, so we knew we had to clean these up in our redesign.
The lead visual designer, Katie, led the way in creating the style guide and making sure the final screens used all the right components, colors, and fonts. Having taken a design systems course on LinkedIn, I helped create and organize all the different components as swappable instances. I also took the initiative to study the iOS Human Interface Guidelines to make sure we were following best practices for iOS app design, such as when to use pop-up versus full-screen modals.
The final version of the style guide used in our redesign.