Understanding the user
The goal of this project was to build an information synthesis tool for internal users. There was no good way to start without first understanding the users' job responsibilities, typical workflows, and information needs. With the help of my stakeholders, I identified the main user types and conducted contextual inquiries with each type. This allowed me to understand their key tasks, pain points in their current workflows, challenges in searching and synthesizing information, and how these differed across user types.
Explore opportunities
After summarizing the users' information needs, I worked with a cross-functional team to brainstorm opportunities. In a few working sessions, we listed ideas to address each need and ranked the potential solutions by value to users and difficulty of implementation.
Once the team aligned on the most promising ideas, we brought them back to users. We showed participants example task scenarios along with the various types of information support the team had proposed, to gauge their reactions. No interface was involved at this stage because our goal was to understand the value of the information itself. This lightweight research helped the team further narrow down the ideas and decide to launch the tool with the 4 most valuable, yet relatively easy to implement, features.
Prototype, test, and iterate
Partnering with an engineer and a designer, we went through iterations to bring the features to life. We started by making wireframes to explore potential layouts for displaying the information and showed the different layouts to users to gather feedback. We also asked users to rearrange the information layout based on their preferences and explain their rationale. The exercise revealed how important each type of information support was and how frequently users would need to reference it. This learning was critical for designing the layout of the information synthesis tool.
Once the basic information layout was determined, the team moved to high-fidelity prototypes to flesh out the visual and interaction design. Later on, the engineer built a working prototype connected to real data, which let users feel how the tool would work in reality. At each step, we tested the prototypes with users to understand what worked well and what didn't, and iterated accordingly.
Release and onboarding
The value of the information synthesis tool was supported by positive feedback from users. However, it was also clear that V1 of the tool was only helpful for certain types of users and certain types of tasks. The team decided to release V1 so that some users could benefit from the tool as early as possible. This would also allow the team to gather more feedback based on users' real-world experience.
Partnering with the product manager, I developed onboarding material for the V1 information synthesis tool, summarizing its key functionalities. Some features were powered by AI models, meaning occasional inaccurate predictions were inevitable, so it was also critical to set proper expectations for users.
Monitor and keep learning
With help from the data analytics team, I built a dashboard to monitor overall usage of the tool and usage of each feature. The data revealed that many users tried the tool initially, but few remained active users.
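As a rough illustration of the kind of aggregation behind such a dashboard (the event schema and field names here are hypothetical, not the actual implementation):

```python
# Illustrative sketch: weekly active users per feature from a usage-event log.
# The (user_id, feature, timestamp) schema and sample rows are hypothetical.
from collections import defaultdict
from datetime import datetime

events = [
    ("u1", "summary_view", "2021-03-01T09:15:00"),
    ("u2", "summary_view", "2021-03-02T10:30:00"),
    ("u1", "search", "2021-03-09T11:00:00"),
]

weekly_active = defaultdict(set)  # (iso_week, feature) -> set of user ids
for user_id, feature, ts in events:
    week = datetime.fromisoformat(ts).isocalendar()[:2]  # (year, week number)
    weekly_active[(week, feature)].add(user_id)

for (week, feature), users in sorted(weekly_active.items()):
    print(f"{week} {feature}: {len(users)} active users")
```

Tracking per-feature active users over time, rather than total visits alone, is what surfaced the pattern of early trial followed by drop-off.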
I conducted a survey to measure satisfaction and investigate reasons for churn. The results indicated a few key areas for improvement. For example, the V1 tool was not easy to access from the main software the internal users relied on for their daily tasks. As a result, instead of reaching for the new tool we developed, users fell back on muscle memory and continued to use the workarounds they were familiar with. The team is currently working on removing this barrier and making the tool easier to access.
My takeaways?
Building a new product is challenging, but also fun and fulfilling! Leading the project from ambiguous ideas to concrete solutions to a tangible product was quite a journey. There were many challenges I had never expected, from technical feasibility and business resources to cross-functional collaboration. Not every step was a success, but we kept learning and growing - and that's the most important thing.
Notifications in the car?
Android Automotive is a variation of Google's Android operating system, tailored for use in vehicle dashboards. To help users stay connected while driving safely, it's critical to display the notifications that are important to users and suppress the rest. But which notifications are the most important? What do users think about receiving notifications in the car? How is it different from receiving notifications outside of driving?
A multi-method study
To answer these questions, I conducted a multi-method study. In a week-long diary study, participants followed a template to document, every day, their experience of receiving notifications while driving. To capture their natural behavior and minimize safety risks, we told participants they did not need to pay additional attention to notifications while driving; they only needed to report on notifications they noticed during or after driving. When reporting on their experience, participants were asked what notifications they received, how important or unimportant each notification was, why they considered it so, and what they did after receiving it. I kept a close eye on participants' daily submissions so that I could answer their questions and follow up when relevant. The diary study gave me a baseline understanding of what notifications users receive while driving and how they react to them.
Following the diary study, I invited selected participants to remote interviews. Speaking with participants remotely helped us break the geographical barrier and reach users outside of the Bay Area, where the population tends to have different attitudes and habits toward technology. In the interviews, I dug deeper into the experiences they had shared in the diary study to understand the underlying emotions and rationales. I also partnered with a designer to prepare a few notification examples covering different types of notifications (e.g., a phone call from family, navigation app directions, updates from a food delivery app). I presented these examples to participants to understand how important each type of notification was and how they would like it to behave (e.g., staying on the screen until the user dismisses it, grouping notifications from the same app and same contact, showing the number of notifications in a badge on the app icon).
Research synthesis workshop
The team was keen to dive deep into the rich data from the diary and interview studies. I organized a research synthesis workshop with the core team, including two designers and two engineers (unfortunately, we did not have a project manager at that point).
To make the workshop productive, I cleaned up typos in the raw notes and organized the data by participant and relevant research question. During the workshop, the team used affinity diagramming to identify levels of importance for notifications in the car, categorized the different types of notifications into each level, and discussed the ideal behavior for each importance level and user type. These learnings provided a framework to guide the notification design down the road.
My takeaways?
Bringing everyone along on the journey made the research more powerful. I felt grateful and happy to see my team paying attention to what users said and caring about what was important to them. After all, users are our guides to better products.
Developing a UAV Control Interface Evaluation Tool
The original goal of this project was to understand how UAV interface design impacts operator performance and cognitive workload. To do so, we needed a tool that would allow us to comprehensively and objectively evaluate a UAV interface. We searched the literature but failed to find one. The research team I led decided to develop such a tool, later named M-GEDIS-UAV, by aggregating industrial and design guidelines relevant to UAV interfaces. Using M-GEDIS-UAV, one can compare an interface against design best practices and obtain an evaluation score from 0 to 1, with 0 being the worst-case scenario and 1 being the optimal interface. The development process of this tool was published in ACM Transactions on Human-Robot Interaction.
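To illustrate the general idea of such a guideline-based score (the actual guideline set and weighting scheme are detailed in the paper; the guideline names and weights below are made up for illustration), a score bounded between 0 and 1 can be computed as a weighted fraction of satisfied guidelines:

```python
# Illustrative sketch of a guideline-based evaluation score in [0, 1].
# Guideline names and weights here are hypothetical, not from M-GEDIS-UAV itself.
def interface_score(results: dict[str, bool], weights: dict[str, float]) -> float:
    """Weighted fraction of satisfied guidelines; 1.0 means all are satisfied."""
    total = sum(weights.values())
    satisfied = sum(weights[g] for g, ok in results.items() if ok)
    return satisfied / total

weights = {"telemetry_visibility": 3.0, "alarm_salience": 2.0, "control_grouping": 1.0}
results = {"telemetry_visibility": True, "alarm_salience": False, "control_grouping": True}
print(interface_score(results, weights))  # 4.0 / 6.0, about 0.67
```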
Prototyping UAV interfaces
We selected an open source UAV control interface and made variations of it using a prototyping tool called JustinMind. In addition to a baseline interface that imitated the original software, we prototyped an enhanced interface by following design best practices from M-GEDIS-UAV and a degraded interface by violating some guidelines. Note that the guideline violations in the degraded interface were not arbitrary - they were intentionally selected to replicate common UAV interface design issues reported in the literature. The goal of making these variations was to have UAV interface designs at distinctly different quality levels, as reflected by their M-GEDIS-UAV evaluation scores.
Investigating how UAV interface design impacts operator performance and cognitive workload
With the three interface variations, we then designed an experiment. Participants were assigned to use one of the variations to perform a set of typical UAV control tasks. For each task, we measured task completion time and, where relevant, accuracy. We used a number of techniques to assess participants' cognitive workload, including physiological responses (blink duration, blink rate, heart rate, heart rate variability) and subjective ratings (NASA-TLX). The amount of data made the analysis more challenging but allowed us to triangulate the evidence and better understand the results.
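For example, heart rate variability is commonly summarized with RMSSD, the root mean square of successive differences between inter-beat (RR) intervals. A minimal sketch, with illustrative RR values rather than actual study data:

```python
# RMSSD: a standard time-domain heart rate variability metric.
# Higher RMSSD generally reflects greater parasympathetic activity.
import numpy as np

rr_ms = np.array([812, 805, 790, 798, 810, 802])  # illustrative RR intervals (ms)
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
print(f"RMSSD = {rmssd:.1f} ms")
```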
Better interface, lower cognitive workload
Results showed that the enhanced interface led to the lowest operator workload. It was also more robust to higher task demands than the baseline and degraded interfaces. In addition, we were able to show a correlation between the M-GEDIS-UAV evaluation score and certain cognitive workload metrics. While "better interface, lower cognitive workload" seems intuitive, an evaluation tool that could potentially predict operator workload levels was a novel contribution to the scientific community. We hope more researchers try our evaluation tool and provide further validation, or invalidation :)
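As a sketch of how such a score-workload relationship can be checked (all numbers below are illustrative placeholders, not data from the study):

```python
# Pearson correlation between interface evaluation scores and a workload metric.
# Values are illustrative placeholders only.
from scipy.stats import pearsonr

scores = [0.42, 0.58, 0.61, 0.70, 0.83, 0.91]    # guideline-based evaluation scores
nasa_tlx = [68.0, 60.5, 57.0, 52.5, 44.0, 39.5]  # subjective workload ratings
r, p = pearsonr(scores, nasa_tlx)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r: better interface, lower workload
```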
My takeaways?
When things don't go as planned, finding another way is better than waiting. The research team initially expected to receive functional UAV control simulations from the project sponsor, but their delivery was significantly delayed. I investigated prototyping as an alternative. Although not fully functional, the interactive components and animation features of the prototyping software gave us an effective workaround.