02 — 07
ux/ui
research
animation
transport systems
Hi! I'm Alex, an Art Director at Anthracite Design Studio. Over the past year, we collaborated closely with our client to create an interface for a transportation monitoring system that uses a network of AI-powered cameras managed by operators. After the MVP release, we began introducing new features as the database grew and the AI model improved. Today, we'd like to share a story about how we added a seemingly small feature that had an unexpectedly significant impact.
Imagine a scenario in which a vehicle is involved in an accident, and ten cameras capture the incident from various angles. Until recently, operators were tasked with manually combining these footage streams into a coherent incident report. Without this consolidation, the system becomes cluttered with unclassified incidents, affecting operator efficiency. Even as the AI improved in its ability to identify and consolidate related incidents, it still required operator validation.
We had clear objectives, such as reducing the time spent on the task and minimizing error rates in the incident combination process. However, our primary concern was to ensure that operators could seamlessly integrate the new feature into their workflow. Since the system operates 24/7, our utmost priority was to avoid disrupting the efficiency with which operators maintained order in the city.
We started by observing the operators' current workflow to identify its challenges and generate hypotheses about how to integrate the new feature. We closely monitored the operators' activity during the workday and conducted in-depth interviews, which allowed us to view the problem from various angles. Here are the key issues we identified during the observation phase:
These are key insights we obtained during in-depth interviews:
We recognised that operators might be overloaded with events happening on the screen, especially during rush hour, when incidents happen most often. Adding more cognitive load to the current interface might have a negative impact on overall efficiency.
To overcome this problem, we based our null hypothesis on mental model theory, which suggests that consistent use of widely adopted applications shapes user behaviour patterns and establishes expectations for similar interfaces. This sounded promising in our case, so we formulated the hypothesis as follows:
Null hypothesis: Leveraging familiar interaction patterns from widely used applications might enable operators to more seamlessly integrate the new feature into their workflow.
Our primary users, the operators, are real people who regularly use popular daily apps. To identify common apps, we conducted a quick survey among operators. The research into their existing practices guided the development of hypotheses for creating prototypes.
Hypothesis 1: Attention could be drawn to the new feature with a colored dot, badge, or notification.
Hypothesis 2: To show operators that multiple entities are involved, we could use a visual metaphor, such as a "stack" of camera footage, reminiscent of group calls or chats.
As an alternative approach, we also considered giving operators a more manual interaction with the new feature, in case they were used to such a workflow.
Hypothesis 3: Highlighting related rows to suggest their connection and encourage operators to merge them into a single incident.
To move forward, we consulted with our developers and prioritized the implementation effort for each hypothesis, selecting the prototypes that promised both effectiveness and cost-efficient development.
We ran moderated usability tests of users interacting with our prototypes. We measured the effectiveness of our designs using multiple metrics: task success rate, time on task, and learnability, defined as task success rate over time.
The learnability metric was particularly crucial for our client, as operators needed to become accustomed to the new feature as fast as possible. Our goal was to ensure that the transition to the new workflow would be seamless and that operators could use the feature intuitively. We selected the prototype with the highest scores across the tests, with the learnability results given double weight. That is why, even though prototype V1 showed a better average time on task, prototype V3 scored higher on learnability, making it our go-to solution.
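The selection logic above can be sketched in a few lines of code. This is a minimal illustration of a weighted average in which learnability counts double relative to the other metrics; the scores below are made-up example values, not our actual test data.

```python
# Hypothetical prototype-selection sketch: learnability is weighted 2x,
# the other metrics 1x. Higher normalized score (0..1) is better.
WEIGHTS = {"task_success": 1.0, "time_on_task": 1.0, "learnability": 2.0}

# Example (invented) normalized scores per prototype.
prototypes = {
    "V1": {"task_success": 0.90, "time_on_task": 0.85, "learnability": 0.60},
    "V2": {"task_success": 0.80, "time_on_task": 0.70, "learnability": 0.65},
    "V3": {"task_success": 0.88, "time_on_task": 0.75, "learnability": 0.90},
}

def weighted_score(scores: dict) -> float:
    """Weighted average of the metrics, using the weights above."""
    total_weight = sum(WEIGHTS.values())
    return sum(scores[metric] * weight for metric, weight in WEIGHTS.items()) / total_weight

# Pick the prototype with the highest weighted score.
best = max(prototypes, key=lambda name: weighted_score(prototypes[name]))
```

With these example numbers, the doubled learnability weight is enough to put V3 ahead of V1 despite V1's stronger time-on-task score, mirroring the trade-off we faced.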
Through testing and refinement, we confirmed our central hypothesis: utilizing familiar interaction patterns outperformed the alternative prototypes that took a more manual approach, despite their alignment with operators' existing workflows. For the final production version, we adopted a prototype featuring a notification format akin to a red bubble and a stack of photos, effectively signalling combined incidents and capturing operators' attention.
Following its release, we noted a remarkable 34% improvement in operator workflow efficiency. This feature proved particularly valuable in resolving corner cases, where operators occasionally faced delays of 5 to 9 minutes, significantly affecting task completion times.
Furthermore, post-release observations revealed that this implementation not only accelerated accident processing but also reduced operators' extra work by 2 hours per week. This reduction in additional tasks, such as clearing the system of accidental omissions, contributed to heightened operator satisfaction. It also increased our own happiness, as we made a positive impact on both efficiency and the overall work experience.