Atlassian pricing page layout and calculator

Summary

Existing plan tiers. In this model, customers were charged $75 for 11-15 users, $150 for 16-25 users, $300 for 26-50 users, and $450 for 51-100 users. This felt unfair to customers who sat at the bottom of a tier, paying for many more users than they actually had. A team of 25 users, for example, would add a single user and see their price jump from $150 to $300 per month.

In July 2017, Atlassian rolled out a new pricing plan. The intention was to address a long-standing concern with the existing plan: customers often paid for more users than they actually had. The price jumps of as much as $150 between tiers also made growing a team across tier boundaries painful (see image and caption, right). Our purchasing and pricing teams had researched and developed a per-user pricing model to help alleviate this pain. We now needed to update the existing pricing page to communicate the new plan when it rolled out.

My role

My role was to work with project stakeholders from the pricing team and our development team to design an update to the existing pricing pages for Confluence, Jira Software, Jira Service Desk, and Jira Core to reflect the new model. See pricing page before redesign.

Goal and hypothesis

To kick off the project, I met with our stakeholders—representatives from the pricing and product teams, plus the project manager—to align on the project goals.

Goal:

Communicate the pricing plan so that visitors can accurately calculate the price for their own team without being overwhelmed.

Success criteria:

  • Usability testing success (users can accurately calculate the price for their own team size)

  • A/B experiment with a do-no-harm result

Hypothesis:

If we communicate the new pricing structure clearly and simply in the design, users will be able to accurately calculate the price for their team and we won’t experience drops in try-intents or Wk2WAU because we have made the new pricing structure at least as easy to understand as the previous one.

While different pricing can affect users' willingness to purchase, we had no control in this design over the pricing model itself, only over how well it was communicated. So we used a qualitative usability metric to ensure clarity, then planned an A/B experiment with a do-no-harm goal: no drop in try-intents (essentially clicks on a "try it free" button) and no drop in what we call Wk2WAU (the number of users who stick with the product past the trial week into week 2). This would validate that no purchasing behavior changed catastrophically with the new page release.

Note that we chose in this effort not to explicitly compare the old and new models. That comparison was a separate effort targeting existing customers, in which the marketing team explained the benefit of the new only-pay-for-the-users-you-need model over the previous one.

 

Iterations and investigation

Now that we were aligned on our goal and measures of success, I started to dig in: iterating quickly through lots of rough ideas to wrap my head around the advantages and disadvantages of different approaches, bouncing ideas off teammates, and asking our team for more information.

In my iterating, feedback gathering, and investigation process, I uncovered some new challenges and useful pieces of information.

  • Challenge 1: It’s an extremely complicated plan to explain.

    While the new plan was indeed more fair, it was too complicated for most people to understand quickly. One of the stakeholders inadvertently summed it up perfectly: "It's actually really fair, it works just like taxes!"


    Here’s how it works:

    • For 10 users or fewer, there is a flat fee of $10 per month, no matter how many users within that range use the software

    • For 11-100 users, it's $7 per user (including those first 10)
      11 users is $77, and 100 users is $700
      There’s a noticeable price jump between levels here, but it’s the only one in this model

    • The next 150 users (users 101-250) are $4 per user
      101 users is…
      $700 for the first 100
      + $4 for that 101st user,
      for a total of $704 per month

    • All users 251 and above are $1 per user
      253 users is…
      $700 for the first 100 users
      + $600 for users 101-250
      + $3 for users 251-253
      for a total of $1,303 per month.

    As a designer, the scary part of all this was that when I explained the model to colleagues, no matter how sharp or senior they were or what approach I took, a common pattern was "Oh, I get it, easy, just multiply 253 [for example] by $1." …which would be $1,050 less than the actual price per month…

    This kind of oversimplified initial assumption would lead users to expect a monthly price that was often hundreds of dollars lower than the real one. One of Atlassian's values is "Don't F*&% the Customer," and there was a high risk here of accidentally doing just that if we didn't design this right.

    Adding to the complexity, the first pricing level ($10 for up to 10 users) is really an apples-to-oranges comparison with the rest of the pricing levels. You can't break it down into per-user pricing because it's a flat fee, not $1 per user: if you have 2 users it's $10 ($5/user), but if you have 5 users it's also $10 ($2/user).

    Finally, above 100 users we couldn't display a simple per-user price, or even a single average price, because the average changes with every user added: 101 users averages $6.97/user, 102 users $6.94/user, 103 users $6.91/user, and so on. (A code sketch of the full calculation follows this list.)

  • Challenge 2: Most other companies are offering simpler pricing models.
    Users coming to our page would likely be familiar with a similar card layout that expresses much simpler math and feature trade-offs, like this…

…but our model isn’t that simple (and features are the same for every tier), adding to the potential impact of the accidental oversimplification outlined in Challenge #1.

  • Challenge 3: Limited development time and hard deadline.
    While the team was open to exploring a more interactive option if it proved necessary, they preferred to change the layout as little as possible. We were on an immovable and fast-approaching deadline for the release of the new plan, and the more "creative" we got with the design, the more pressure and risk we put on our development team.

  • Useful information: Most of our current user pool only needs the simple math
    Looking at the data about current users and behavior, we saw that for all four products…

    • the vast majority of our cloud users (~98%) were in the first pricing tier (1-10 users)

    • a small share (~1.5%) were in the second pricing tier (11-100 users)

    • and an even smaller share were in the rest of the tiers (~0.5%).

      This was exciting because the math only starts to get complicated above 100 users, so this was an opportunity to simplify things for most of our visitors (based, admittedly, on the assumption that our new visitors were mostly a proportional reflection of our existing users, at least in the short term).

98% of users land in the simple $10 flat range, 1.5% can do a simple $7 × users calculation, and only 0.5% have to figure out the trickier math.
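To make the tiered math concrete, here's a minimal sketch of the calculation in TypeScript. The function names are hypothetical; this is an illustration of the model described above, not production code from the pricing page.

```typescript
// Tiers: teams of 10 or fewer pay a $10 flat fee; at 11+ users,
// users 1-100 are $7 each, users 101-250 are $4 each, users 251+ are $1 each.
function monthlyPrice(users: number): number {
  if (users <= 0) return 0;
  if (users <= 10) return 10; // flat-fee tier
  const first100 = Math.min(users, 100) * 7;
  const next150 = Math.max(Math.min(users, 250) - 100, 0) * 4;
  const rest = Math.max(users - 250, 0) * 1;
  return first100 + next150 + rest;
}

// Average per-user price for a given team size.
function averagePerUser(users: number): number {
  return users > 0 ? monthlyPrice(users) / users : 0;
}

// Spot checks against the worked examples above:
//   monthlyPrice(11)  === 77      monthlyPrice(100) === 700
//   monthlyPrice(101) === 704     monthlyPrice(253) === 1303
//   averagePerUser(101) ≈ 6.97    averagePerUser(102) ≈ 6.94
```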

Breaking down the problem

With the above complexity in hand, how could we simplify? I decided to narrow focus, using our hypothesis and goal, to the following user objectives.

  • Users can accurately calculate the price that matches their team size without trouble.

  • Users understand that Atlassian offers price breaks for larger teams.
    Do users really need to know the details of how the model works in upper tiers? Or do they just need to understand that their average user price drops once they hit 101+ and then more sharply at 251+? We aligned on the latter assumption.

  • Users should have at least some kind of plan overview without being obligated to engage with a calculator.
    While it was tempting to just put a calculator at the top and call it done, we made an assumption that users would expect at least something that gave them an overview of the model (even if it was a simplified overview).

Early signal testing

We decided to put three concepts, as InVision prototypes, through some high-level, early-signal usability testing.

Option 1:

Minimal approach. This was closest to the existing design and carried the lowest development cost. It was also the most extreme hide-all-the-info-that's-not-absolutely-necessary approach. It emphasized the plans that would match 99.5% of our (assumed) visitors, which also happened to be the easiest-to-explain math. Everyone else (an assumed ~0.5%) would follow an anchor link to a simple calculator in the FAQ, where they could type in their team's number of users and accurately calculate their monthly total and average per-user price.

The “Calculate” link jumps to a calculator in the FAQ like this:

 
 

Option 2:

All-the-information approach. Our stakeholders were unconvinced that we couldn't just show users a table that explained the pricing model. "Just give them all the information. We have smart users, they can figure it out!" Coming from a UX perspective, I assumed it would be too much: users would either be overwhelmed or would inaccurately calculate their price. But I admitted that was just my hunch at that point, so it was worth testing to find out.

 

Option 3:

Interactive approach. This would incur the highest dev effort, but could be worthwhile if it fared significantly better in users' understanding of the model. The hunch was that letting users play around with the model would increase their understanding.

Note that sliders can be tricky for anyone to manipulate precisely, and can be especially difficult to code with accessibility and mobility needs in mind, so we went with a hash-marked approach. Users could get a general idea of price by jumping between the hash marks, but to get an exact number like 253, they'd need to use the calculator linked below.


Testing method

I ran 10 users through usability testing on usertesting.com with the screener question “Do you have experience working on teams in software, web, engineering, marketing, or related industries?” They were asked in their tasks to calculate their price for a variety of team sizes (25, 250, 2000). At the end I used UMUX-Lite questions to get a sense of their general comfort and ease with the calculation tasks in each option.

Findings

Option 1

  • All the users were able to accurately calculate their teams’ prices

  • They understood where to go to calculate for 101+ users (the calculator)

  • Unlike the other two options, though, the price discounts for teams of 101+ were not visible at all, so users did not know about the discounts

  • They called out some potentially confusing wording. For example, does “introductory price” in the $10 tier mean that after some period of time they will start getting charged more even if they stay within 10 users? This wording made some users uneasy.

Option 2

  • Only two of the users felt overwhelmed by the amount of information presented…

  • …but most of them miscalculated their price by reading quickly and oversimplifying the model, expecting a monthly price that was sometimes hundreds of dollars lower than the accurate one (a usability showstopper: abort!)

  • Users did, however, understand that in this model there is a price break above 101 users

  • Some users requested a calculator in this option

Option 3

  • This option seemed to lead to the best clarity of all

  • Playing with the sliding scale quickly surfaced how the average price drops at 101+ users and again at 251+

  • They were able to accurately determine their price for numbers marked on the sliding scale.

  • For numbers not marked, they knew where to go to get a more exact price (the calculator)

Refining a direction

Based on the testing results, the team decided to go with Option 1 with some edits. Even though Option 3 performed a bit better, Option 1 met our goals, we felt we could address its issues with copy updates, and it was the least risky and time-intensive option for our developers. We decided we'd reconsider a slider in the next round.

Addressing feedback and some other issues, this is where the design landed:

Usability testing

We did another round of usability testing on the refined iteration of Option 1 and, with no new issues surfacing, moved forward with the A/B experiment.


A/B testing

We ran an A/B experiment with a do-no-harm goal to make sure that our try-intents and week 2 active users didn’t drop. The experiment met these goals, and so we moved on to production and release.
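For readers unfamiliar with the term, a do-no-harm goal is essentially a non-inferiority check on each metric: the new page passes as long as we can rule out a meaningful drop. A minimal sketch of what such a check could look like on a rate metric like try-intents is below; the margin, confidence level, and function names are illustrative assumptions, not Atlassian's actual experimentation tooling.

```typescript
interface Variant {
  conversions: number; // e.g. try-intent clicks in the variant
  visitors: number;    // unique visitors exposed to the variant
}

// Do-no-harm (non-inferiority) check: can we rule out a drop in the
// treatment's rate larger than `margin` (in absolute terms)?
function doesNoHarm(control: Variant, treatment: Variant, margin = 0.005): boolean {
  const pC = control.conversions / control.visitors;
  const pT = treatment.conversions / treatment.visitors;
  // Standard error of the difference between two proportions
  const se = Math.sqrt(
    (pC * (1 - pC)) / control.visitors + (pT * (1 - pT)) / treatment.visitors
  );
  // One-sided 95% lower confidence bound on (treatment - control)
  const lowerBound = pT - pC - 1.645 * se;
  return lowerBound > -margin;
}
```

The same check would be run separately for each guardrail metric, in our case try-intents and Wk2WAU.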


Afterword

What I would have done differently if I were to do this again

  • I think we could have done a better job (any job, actually) messaging the benefit of the new plan, even for new users: you're never paying for users you don't have on the product (this was the whole point of the new plan)!

  • We made a few assumptions without talking to users that I’m still not totally confident about, especially:

    • That our new visitors would reflect our current ratio of team sizes, which led us to de-emphasize teams of 101+. This design was a fast, short-term solution that we planned to revisit soon, but maybe we could have addressed this better from the beginning. Also, aren't we planning to attract larger teams?

    • That our users really just needed to understand their own price accurately, plus some basics about how the pricing model works (e.g., you get some breaks for super-large teams), and that we could hide the rest behind a calculator and FAQ. We still get a good number of questions to our customer advocates about our pricing model. Maybe people can accurately work out their personal price but still don't get how the pricing model works. See also the Clicktale heatmap below of the released version. Users spend a LOT of their time playing around with that calculator. Maybe we hid too much of the pricing model away from them.

Clicktale hover-activity heatmap. Visitors spend the most activity on this page playing with the calculator, entering a wide range of values. Are they trying to understand how the model works? Are they expecting their team to scale? Hard to tell!

 

Influence on the pricing plan itself

Since it came up in every conversation, it seemed worth raising the bigger question: "Does the plan itself really need to be this complicated? Why are so few of our competitors' plans this complicated?"

I asked my manager, the Head of Design for the Go-to-Market experience, if she had more information or wanted to follow up on the question, which she did. After talking to the team, she concluded that they had made a well-informed choice and that we didn't need to push back on the model itself at that point, which I respected.

Digging more into this project and the results, we learned that there was no design leadership representation in the development of the pricing model. A good lesson for our organization has been to include design leadership in the next round of pricing plan development, since part of selling a plan to our customers is how well we can communicate it!

