
Overview: MaxDiff Research Test

The Advanced MaxDiff test is a great way to compare many alternatives without overwhelming respondents by asking them to read and consider all items at once. It takes your list of items to be compared and shows them to each respondent in a balanced order, 3, 4, or 5 at a time.

 

 
Learn more about the Advanced MaxDiff in the Lighthouse Academy!

About Advanced MaxDiff

Depending on the data resolution requirements, the model asks respondents to complete a number of ranking, best/worst, or preference tasks. When sufficient data are collected, we complete advanced statistical analysis on the back end and show you the hierarchy of your alternatives, as well as the distance between items. If Hierarchical Bayesian (HB) analysis was specified in the settings, the model also provides individual-level estimates, making it possible to draw inferences about sub-sections of the population without a significant decrease in predictive power. Using this technology, you can find out which features or qualities of your product matter most to consumers, or rank a long list of candidate slogans in your target audience's order of preference. The Advanced MaxDiff test is quite sensitive to the number of completes, so we recommend ordering 400+ responses.

The Advanced MaxDiff test is more accurate than the classical MaxDiff test because it uses an adaptive, real-time randomization algorithm. Rather than relying on a predetermined map of the entire test, the algorithm works while the survey is being fielded to distribute items as efficiently as possible per group and per respondent.

It is also much more user-friendly than the classical symmetrical tables used in the past. Instead of asking respondents to find an item on a grid and place a checkbox in the correct column, we present them with 3, 4, or 5 items at a time and ask them to rank-order the list, select Best/Worst from a text list, or select best and worst from an image grid. This conserves respondents' energy and focuses their attention on identifying the winner and loser in each set with minimal effort.
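To make the best/worst mechanics concrete, here is a minimal count-based sketch of how such choices can be turned into an item ranking. This is a toy best-minus-worst score over made-up data, not aytm's actual back-end estimation, which is considerably more sophisticated:

```python
from collections import defaultdict

# Each task: the set of items shown on one screen, plus the respondent's
# best and worst picks. The data below is invented for illustration.
tasks = [
    {"shown": ["A", "B", "C", "D"], "best": "A", "worst": "D"},
    {"shown": ["B", "C", "D", "E"], "best": "B", "worst": "E"},
    {"shown": ["A", "C", "D", "E"], "best": "A", "worst": "E"},
]

best = defaultdict(int)
worst = defaultdict(int)
shown = defaultdict(int)

for t in tasks:
    for item in t["shown"]:
        shown[item] += 1      # count how often each item was displayed
    best[t["best"]] += 1      # count "best" picks
    worst[t["worst"]] += 1    # count "worst" picks

# Best-minus-worst score, normalized by how often the item was shown.
scores = {i: (best[i] - worst[i]) / shown[i] for i in shown}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['A', 'B', 'C', 'D', 'E']
```

Even this naive score recovers a full hierarchy of items from only best and worst picks, which is why the task format is so efficient for respondents.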

 

 

Aggregate or HB?

The Advanced MaxDiff comes in two varieties: Aggregate (formerly MaxDiff Express) and HB. If Aggregate is chosen, the method focuses on collecting general, aggregate information without attempting to obtain individual-level estimates. In a typical setup, respondents see 3-5 screens. While it is still possible to view results for a subset of respondents, keep in mind that such a model covers only that subset and does not take the remaining responses into account.

In HB mode, the method focuses on collecting high-resolution, individual-level data to be analyzed with a Hierarchical Bayesian model. In a typical setup, respondents see 10-20 screens. Besides the option to extract individual logistic coefficients, it also becomes possible to analyze a subset of respondents knowing that the results are more robust, since the model allows for individual-level estimation.
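As a toy illustration of why individual-level estimates make subgroup analysis straightforward: once each respondent has their own utility score per item, summarizing any segment is a direct average over that segment's respondents. The segment labels and utility values below are invented, not real HB output:

```python
# Toy example: per-respondent utilities (the kind of output HB-style
# estimation provides) let you summarize any subgroup directly.
respondents = [
    {"segment": "18-34", "utilities": {"A": 1.5, "B": 0.5, "C": -1.0}},
    {"segment": "18-34", "utilities": {"A": 1.0, "B": 0.0, "C": -1.5}},
    {"segment": "35-54", "utilities": {"A": -0.5, "B": 1.5, "C": -0.5}},
]

def segment_means(respondents, segment):
    """Average each item's utility over respondents in one segment."""
    subset = [r["utilities"] for r in respondents if r["segment"] == segment]
    return {item: sum(u[item] for u in subset) / len(subset)
            for item in subset[0]}

print(segment_means(respondents, "18-34"))
# {'A': 1.25, 'B': 0.25, 'C': -1.25}
```

With aggregate-only estimation there are no per-respondent utilities to average, which is why subset results in Aggregate mode describe only the chosen subset rather than re-using the full model.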

For more details on these methodologies, please refer to our knowledge base articles: Advanced MaxDiff Aggregate and Advanced MaxDiff HB.

 


 

Aggregate/HB Comparison

| | Aggregate | HB |
| --- | --- | --- |
| Number of items that can be tested | 7 to 200 | 7 to 200 |
| Maximum screens/trials limit | 12 | 20 |
| Maximum items evaluated by any given respondent | 3 items/screen: 36; 4 items/screen: 48; 5 items/screen: 60 | 3 items/screen: 60; 4 items/screen: 80; 5 items/screen: 100 |
| Average evaluations per item seen (among items assigned) | ~1 time | ~2.5 times with 7-40 items; ~1 time with 41+ items |
| Tipping point of full item evaluation | 7-[max] items: evaluate all; [max]-200 items: evaluate a subset | 7-40 items: evaluate all (~2.5 times); 41-[max] items: evaluate all (~1 time); [max]-200 items: evaluate a subset (~1 time) |
| Analysis type | Aggregate – treats all respondents as one big data pool | Individual – creates utility scores for each item for each respondent |
| Additional analyses available | None | Significance testing; TURF Analysis (Learn more) |
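The "maximum items evaluated" figures follow directly from the other rows: items per screen multiplied by the maximum screen limit (12 for Aggregate, 20 for HB). A quick sanity check, using only numbers from the table above:

```python
# Reproduce the "maximum items evaluated by any given respondent" row:
# items per screen x maximum screens (12 for Aggregate, 20 for HB).
MAX_SCREENS = {"Aggregate": 12, "HB": 20}

max_items_evaluated = {
    mode: {per_screen: per_screen * screens for per_screen in (3, 4, 5)}
    for mode, screens in MAX_SCREENS.items()
}
print(max_items_evaluated)
# {'Aggregate': {3: 36, 4: 48, 5: 60}, 'HB': {3: 60, 4: 80, 5: 100}}
```

Note these are counts of item evaluations at the respondent level; with 41+ items in HB mode, each assigned item is seen roughly once, so the same arithmetic bounds how many distinct items one respondent can cover.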

 


 

Things to note when using MaxDiff

  • While MaxDiff appears as a single question in the survey editor, it takes several questions from the respondent's point of view. Please pay attention to the note at the bottom of the question ("Experiment is using #Qs") to keep an eye on the length of the survey. You may have to remove other questions in order to run a larger MaxDiff experiment.
  • The order in which compared items are presented in each quad for each respondent cannot be anchored, since it's governed by an adaptive efficiency algorithm.
  • Question text will be repeated on each screen. We recommend including a short instruction in it, such as "Please rank the following items in the order of your preference: from the most preferred on top, to the least preferred on the bottom."
  • The main question and each compared item can have an image associated with it. These images can be expanded to the full width of the survey widget, or appear as a thumbnail and pop up on mouse rollover as a reference.

 

 

Additional Resources

  • Build a MaxDiff
  • Analyze a MaxDiff
  • Advanced MaxDiff HB Methodology
© 2022, Umongous, Inc. All rights reserved.
