
Technician communication & troubleshooting

Through user research and service design recommendations, I found ways to save Bell $200,000 per annum, significantly reduce training time, and, most importantly, increase technician happiness!

Case study

1 UX/UI Designer

1 UX Researcher




Google Sheets

Bell business case


4 months

User research

UX/UI design

User testing

Hi-fi & lo-fi prototypes

Interaction design

Project manager

The problem space

Technicians have too many tools to remember (15+ individual applications, each serving a different purpose) in order to complete a repair or installation. One issue that stems from this: troubleshooting and communicating with our agents requires them to swivel-chair between 5 other applications in scenarios where they can't afford to lose time.

Project goals

My team is currently working on an all-encompassing application to absorb a large number of these tools. I was tasked with designing a feature that folds our comms/troubleshooting tool for technicians into this "mega tool". The question I needed to answer was: "how do we seamlessly introduce this feature to our users?"

Here is the ideal scenario we want: trade using 4 different applications for 1.


The issues that we anticipated were:

  • The mixture of systems supporting the current application would be inflexible

  • The complexity of handling all teams that touched this legacy tool would be high

  • Technicians would be looking for the tool to support behaviour the business has defined as discouraged

Overview of the game plan

Considering the complexity of this feature, here is the research/design plan I put together:

  • Understanding the problem space

    • Understand processes by meeting with process primes to determine constraints

    • Find and study documentation of CMO (current mode of operations) systems and processes

    • Understand business goals

  • Defining user needs

    • Get to know the users through user interviews/workshops

    • Map out the lifecycle of typical and atypical flows

    • Collect the thoughts and feelings of users: what they want us to stop, start, and keep doing

  • Ideate compatible solutions

    • Understand where direct communication/automation should or shouldn’t happen

    • Weed out options considering constraints

    • Determine where process changes should be recommended

    • Keep iterating based on multiple rounds of user feedback and testing

    • Go back and check with process primes periodically to make sure the solution is possible

Research process

I met with process primes to gather existing documentation and speed up my understanding. However, it turned out that there were no existing product sheets or conclusive materials. To make things clear for myself, I interviewed a few process folks and created a map of the general happy/sad path scenarios and any outliers, tracking their interactions with different systems. This was also where I determined the design/service constraints.

Figjam board: (Left) notes from conversations and demos of the app, (Right) map of the happy/sad path scenarios and any outliers

I also took this time to gather and analyze data encompassing the volumes of request types coming from technicians and how long it typically takes to resolve them. This helped me determine what request types I should be looking at in order to save the technicians as much time as possible, and the business as many resources as possible.

Figjam board: Business data on the volume and length of requests divided by region, highlighted are low-hanging fruit opportunities (2022 so far)
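The prioritization above boils down to simple arithmetic: the request types worth targeting first are the ones consuming the most total technician time. A minimal sketch of that ranking logic is below; the request type names and numbers are purely illustrative placeholders, not real Bell data.

```python
# Hypothetical sketch: rank request types by total technician time
# consumed (monthly volume x average resolution minutes) to surface
# low-hanging-fruit automation candidates. All figures are invented.

request_stats = {
    # request type: (monthly volume, avg resolution minutes)
    "port reset": (1200, 14),
    "line test": (900, 22),
    "order correction": (300, 45),
    "escalation": (150, 60),
}

def total_minutes(stats):
    """Total technician minutes spent per request type."""
    return {name: vol * mins for name, (vol, mins) in stats.items()}

def low_hanging_fruit(stats, top_n=2):
    """Request types consuming the most time overall -- the best
    candidates for automation or a streamlined flow."""
    totals = total_minutes(stats)
    return sorted(totals, key=totals.get, reverse=True)[:top_n]

print(low_hanging_fruit(request_stats))  # ['line test', 'port reset']
```

With the illustrative numbers above, "line test" tops the list (19,800 minutes/month) even though "port reset" has a higher volume, which is why volume alone isn't a good prioritization signal.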

Defining user needs

I conducted user research alongside the Design Thinking team to get as much feedback as possible. We interviewed a total of 15 technicians in 3 days through 4 workshops. Some helpful tech insights we received were:

  • Certain teams are more difficult to deal with than others

  • Complex issues require more communication, but it's discouraged

  • Automation of certain actions and entire flows is widely desired

  • The buckets of request forms did not match what they needed help with

  • There are no haptics or update notifications; the user must keep refreshing the page

  • An agent can close a request without actually resolving it, requiring the technician to submit another request and start over again

Miro board: Workshop activities documented – CMO map with technician deviation notes, questions about a technician's day in the life with the CMO tool, opinions on a mock-up

After the workshops, we consolidated all of the insights into a map diagram that organized all of the users’ thoughts. We then translated everything into solutions in the form of features (see below).

Solution ideation

We put together a more detailed map of the CMO to encompass the technician reality not accounted for by the process folk. Afterwards, we created an ideal FMO (future mode of operations) and presented it to our technicians and process teams for their opinions.

Miro board: CMO vs. FMO map –  with constraints and desired features more clearly defined, I was able to weed out some design options. From here, we were able to determine the most optimal area to inject automation and open communication.

I now had enough information to begin wireframing. I conducted 7 rounds of user testing, with iterations for each. Listed below are all the wireframe versions + a couple of crucial feedback points:

  • Iteration 1: It's easier for users to see and action requests that are grouped together

  • Iteration 2 & 3: Having CTAs clearly written vs. system names makes decisions easier

  • Iteration 4: Implement categorization and automation for certain requests; predictive models work better

  • Iteration 5: Optimize the layout to bring attention to important updates and create space to prep the user's mental model for related future features

In addition to user feedback sessions and prime check-ins, these wireframes were validated through timed tests with several users of varying experience levels to determine the average task completion duration. This ensured the design would meet business needs (saving resources) on top of improving usability for technicians.

Key takeaways

  • Communication windows at the right times are the key to balancing business and user needs

  • Notifications that include haptics are pivotal; notification content also needs to be relevant, outlining only the essential points

  • Automation and AI predictive models will be huge time savers and cut down app usage time, therefore saving company resources

  • It’s important to have 1 location to send requests, receive updates, and do anything related to the requests so as to not confuse the user

See next page:
Decreasing errors