Anton Stout
UX Strategist | Product Designer | Product Manager


A library of design activities combined with workflow management to help designers quickly identify, organise, share, and track their creative tasks and team's overall workflow.

What is Experiences?

Mission: New Product - Minimum Viable Product

Summary: A web app for experience designers and design teams to identify the best methods and activities to plan for and manage their project workflow.

How is it unique? Project-management software, but with all the design tasks expertly pre-defined.

Unlike other project management software, all design-related tasks are assembled in pre-defined checklists. Designers need only to select and add methods to their workflow to start a project. The library contains over 250 professionally written, co-contributed methods.

Roles & Responsibilities

Role: Sr. UX Designer > Design Manager

Responsibilities: Key activities include:

  • Undertake and compile user research

  • Interactive prototyping

  • User testing - concept stage

  • Usability testing - refining stage

  • Deliver all visual assets

  • Create functional specs

  • Assemble and manage the design team

  • Lead production team

  • Lead content team

Identify and build user requirements into an MVP


Understanding how designers approach active learning, the challenges experienced, and the ways in which they begin to find cadence in their work.

Prototype to improve retention and referral metrics.

Validated Mobile and Desktop UI.


Approximately 10 weeks - Inception to hand-over.


Research, validate, design, test prototypes, and create MVP desktop, tablet, and mobile responsive views.

Early adopters include designers and design managers from Apple, Google, eBay, IBM, Amazon, and Lloyds Bank, to name a few.


Sixteen designers were interviewed: junior (0-2 years), mid-weight (2-5 years), and senior (5+ years), plus design managers.

Empathy building

The first task is always to understand context, suspend assumptions, and gather user requirements.

To better understand learning journeys, we undertook:

  • Contextual interviews: 1:1 interviews undertaken with design practitioners over three weeks in a "Research Sprint".

  • Storyboards: Mapping out the journey, touch-points, and the key moments of friction/opportunity in the learning journey. Reframing themes into use cases using the Jobs-to-be-done framework. Producing 'minimise' and 'increase' customer statements as an overlay.

  • Personas: Using our research and JTBD customer statements to form personas around primary pain-points and prioritised themes for each segment.

Key takeaways

  • Junior designers struggle to identify which problem(s) to work on.

  • Designers apply the wrong methods and don't get the best 'problem-solution' fit as a result.

  • Juniors apply textbook processes with limited adaptability, remaining highly dependent on observation and specific direction from senior designers.

  • Seniors' challenges are at a stakeholder management level, thus the focus is on greater business acumen.

  • It was difficult to find 'mid-weight' design research participants. We opted for juniors and seniors only.


Same problem, different lens: differing mental models between junior, mid-weight, and senior designers

Key insights

Junior designers struggle with:

  • Being consumers, not distributors within a team

  • Poor capture and synthesis of research information

  • Limited ability to select, assess, and apply key design activities

  • Impeded and delayed decision-making, affecting both momentum and quality of insights

  • Heightened sense of anxiety, confusion around adaptive design processes

  • Not leading, though actively seeking learning opportunities

  • Tooling - quick to adopt and quick to drop

Senior designers:

  • Information distributors

  • Adaptive focus in response to stakeholder priorities

  • Revered as a source of wisdom within a team

  • Strong ability to select and guide key activities

  • Actively challenges team and stakeholders

  • Daily quasi-project-management duties

  • Looking for opportunities to be a beginner again

  • Actively scan the landscape with the team in mind

  • Concerned with strategy, quick to reach tactical decisions


A high-level userflow based on the criteria from interviews and the JTBD survey. Further refinement was undertaken in the competitive analysis stage.

Stakeholder workshops

Assembling stakeholders into a Discovery Workshop around journey-building helped us unearth and reframe key assumptions. We needed to see how our results gelled with the assumptions of the broader team, and what new approaches we could muster from these insights.

The first workshop was to present findings and define next steps. It involved nine key team members plus stakeholders - local and remote.

  1. Identify our known problems
    Pulling from primary and secondary research, the agenda was to address priority user challenges and journey pain-points.

  2. Reframe problems into assumptions
    What are the problems worth solving? What evidence do we have that these are the right problems?

  3. Journey line + refine problems
    What does learning look and feel like for designers? How do they manage their time with so much to do?


Method libraries exist with limited practical application

Competitive analysis

From our workshop I was able to hone in on a set of 'must-have' learning criteria that designers currently cannot find online.

Through further analysis we identified several comparable models. Our closest competition to satisfy these needs was Luma Institute.

Armed with the research reports, I merged any redundancies with opportunities through feature and SWOT analysis. These criteria were used to update the user stories.

Key takeaways

  • Several key opportunities exist that are lacking from the current competitive portfolio

  • Weaknesses were in terms of method diversity and applicability

  • Strengths lay in use cases clearly overlooked within the current competitive flow


High-level exploratory journey line of competitor's website

Comparative user-testing

I ran moderated usability tests on competitors' websites with 8 designers.

We covered both exploratory (scenario) and task-specific tests, collecting quantifiable data from our SEQ plus completion rate, ToT, TLS, and error rates (SUM). This gave us the clear insights we needed and a journey line across key pages.
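As a hedged sketch (not the project's actual analysis scripts), the per-task roll-up of metrics like these can be illustrated in Python; the field names and sample data below are hypothetical:

```python
from statistics import mean

def summarise_task(results):
    """Roll up raw per-participant results for one task into summary metrics.

    `results` is a list of dicts, one per participant, with hypothetical
    fields for the metrics tracked: completed (bool), seq (1-7 Single Ease
    Question score), time_on_task (seconds), and errors (count).
    """
    return {
        "completion_rate": mean(1 if r["completed"] else 0 for r in results),
        "avg_seq": mean(r["seq"] for r in results),
        "avg_time_on_task": mean(r["time_on_task"] for r in results),
        "avg_errors": mean(r["errors"] for r in results),
    }

# Hypothetical data for one task across three participants
task_results = [
    {"completed": True, "seq": 6, "time_on_task": 42, "errors": 0},
    {"completed": True, "seq": 5, "time_on_task": 58, "errors": 1},
    {"completed": False, "seq": 3, "time_on_task": 90, "errors": 2},
]
print(summarise_task(task_results))
```

Summaries like this, produced per task and per page, are what let a journey line be drawn across the key pages.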

We undertook another round of updating stories and assembled a survey to assist with prioritising said stories.

  • Survey: These were validated with participants. We used the survey to quantifiably validate and prioritise our updated user stories.

  • Backlog: Armed with prioritised user stories, I began to define the high-level backlog with use cases as an early signal to the Engineering team of what would be coming down the pipe.


High-level view. Next, a confidence-score overlay was added from granular task estimations against each item.


Armed with the customer/user problems sourced from our research and validated against survey and competitive data, we prioritised the user problems.

I looped in the engineers, BA, and QA to discuss and scope high-level requirements, communicate timelines, and provide high-level backlog estimates. More detail was required to aid the engineers.

We scheduled several workshops, as the design team would need to deliver within engineering time constraints in order to meet deadlines.

I undertook feature prioritisation using various methods, starting with a high-level MoSCoW and ending with an Impact/Effort matrix once we had estimates from engineering. This involved getting into the detail of each feature and its key tasks, and providing estimates.
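The Impact/Effort classification step can be sketched as follows; this is an illustrative example only, with hypothetical feature names, scores, and thresholds rather than the project's actual figures:

```python
def quadrant(impact, effort, threshold=3):
    """Place a feature on a simple Impact/Effort matrix.

    Scores are on an assumed 1-5 scale; `threshold` splits high from low.
    """
    if impact >= threshold and effort < threshold:
        return "quick win"      # high impact, low effort
    if impact >= threshold:
        return "big bet"        # high impact, high effort
    if effort < threshold:
        return "fill-in"        # low impact, low effort
    return "avoid"              # low impact, high effort

# Hypothetical features with (impact, effort) estimates from engineering
features = {
    "method library search": (5, 2),
    "team workflow board": (5, 4),
    "dark mode": (2, 2),
    "offline sync": (2, 5),
}
for name, (impact, effort) in features.items():
    print(f"{name}: {quadrant(impact, effort)}")
```

The quadrant labels then feed naturally into MVP versus Horizon 1/Horizon 2 build decisions.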

Key takeaways

  • Prioritised feature list against research data for our MVP (exact match)

  • Horizon 1 build (high fuzzy match requiring user/business validation)

  • Horizon 2 build (low fuzzy match requiring considerable research).


Scenario: Paid user accessing content library


With primary and secondary use cases identified, I merged 22 scenarios into 5 userflows.

  1. Product purchasers - those who had purchased off-site products

  2. Off-page referral traffic - search engines etc

  3. Account upgrade/flows - free to premium users

  4. Login/reg/password recovery - fundamental flows

  5. In-product experience - notification triggers




Based on the prioritised list, validated with the team and cross-checked against our research, I was ready to start working on concepts.

I set up a repository of interface ideas where the team could expand on key scenarios and use cases to create concepts (wireframes), or sort existing examples and add them to each userflow folder.

Process and tracking

  • Balsamiq for lo-fi responsive mobile, tablet and desktop

  • User testing with junior and senior designers

  • After every three tests we made concept updates around key themes (tracked in our usability-report repo)

  • UI ideas had to be capped, as the design team was drifting into abstract (and completely unusable) territory. We voted and moved forward.


Example: SEQ results of 5 participants (Cohort 1: Week 2)

User-to-usability testing

We undertook multiple rounds of user testing (concept stage) and usability testing (mid-to-hi fidelity stage).

Numerous errors were found when testing the concepts, stemming from confusion around what the product was (regardless of the entry point/userflow tested), which required an overhaul of the copy and onboarding flow.

As the testing structure became more quantifiable, I modified flows, features, and content to improve key error rates, introducing SEQ intercept surveys into both task and exploratory scenarios to add dimension to the think-aloud and observational testing formats.

Key takeaway

The aggregated output of the SEQs was a strong indicator of which flows to prioritise and aided flow-modification decisions.
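As an illustration of that takeaway, here is a minimal sketch of ranking flows by average SEQ, lowest ease first, so the hardest flows are reworked first; the flow names and scores are hypothetical:

```python
from statistics import mean

# Hypothetical SEQ scores (1 = very difficult, 7 = very easy) per userflow
seq_scores = {
    "login/registration": [6, 7, 6, 5, 6],
    "account upgrade": [4, 3, 5, 4, 4],
    "in-product notifications": [5, 4, 5, 6, 5],
}

# Rank flows by average SEQ, ascending: lowest-scoring flows come first
priorities = sorted(seq_scores, key=lambda flow: mean(seq_scores[flow]))
for flow in priorities:
    print(f"{flow}: avg SEQ {mean(seq_scores[flow]):.1f}")
```

With real data, the same ranking can be recomputed after each cohort to check whether flow modifications actually moved the scores.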

User tests
Average SEQ

Desktop: Hi-fi Workspace activity view

UI Design

With the research, concepts, and prototypes validated, and with key features and primary userflows prioritised, I began UI design, starting with mobile and desktop.

  • A mood board was created (every team member contributed)

  • I used Figma with other elements in Sketch and Adobe Illustrator.

  • Material Design and iOS guidelines were followed.

  • Only minor creative licence was needed, thanks to the robust testing of the prototypes.

  • Component library built.

Usability Tests
Major Iterations
Happy Alpha Users


  • Comprehension: Junior designers didn’t know what they didn’t know.

  • Identity: It was challenging to find ‘mid-weight designers’ as many considered themselves either junior or senior in their careers.

  • Expertise: Variance in tasks and job challenges between junior and senior designers was considerable, making alignment/themes broader and requiring extensive user and usability testing.

  • Team: Hosting remote, international team collaboration via meetings and workshops resulted in significant task delegation and varied response times, reducing overall team cadence.

  • Costs: Reduced labour costs via offshoring, but extra expense due to project extension.

What failed?


  • Outliers: While highly articulate, design managers, mentors, and teachers were a distraction to interview. They were too far removed from the 'beginner's mind' and unable to give insights worth exploring.

  • Target: We needed to focus on a segment of users based on the characteristics and attributes of existing users or those within the target market, particularly for a new product. Participants were loosely defined as 'junior' or 'senior'.

  • Participation: Scheduling interviews, user and usability test participants with mid-weight designers didn't work.

  • Proximity: Seniors lacked definition and recall around the learning journey, as they were occupied with more strategic pursuits.

  • No analytics data: Virtual page views, events, and goal/funnel setup data, among the first things I look for in an existing project, were absent.

  • Lacklustre commitment: Post-discovery, several team members weren't pulling their weight on deliverables, so I moved to granular task management and accountability/impact coaching to achieve outcomes.

What worked?

We delivered!

  • Final outputs included mobile, tablet and desktop UIs - validated with end users.

  • Functional specifications documentation as Trello cards with UI PNG, engineering estimates, use case and acceptance criteria spelled out and co-created with the engineers.

  • Tech-stack and estimations workshop to unify engineering and design team prior to handover.

Major iterations
On budget