Overview: MaxDiff Research Test
The Advanced MaxDiff test is a great way to compare many alternatives without overwhelming respondents by asking them to read and consider all items at once. It takes your list of items to be compared and shows them to each respondent in a balanced order, 4 at a time.
Depending on the data resolution requirements, the model asks respondents to complete a number of ranking tasks. Once sufficient data are collected, we run advanced statistical analysis on the back end and show you the hierarchy of your alternatives, as well as the distance between items. If Hierarchical Bayesian (HB) analysis was specified in the settings, the model also provides individual-level estimates, making it possible to draw inferences about sub-sections of the population without a significant decrease in predictive power. Using this technology, you can find out which features or qualities of your product matter most to consumers, or rank a long list of candidate slogans by your target audience's preference. The Advanced MaxDiff test is quite sensitive to the number of completes, so we recommend ordering 400+ responses.
The Advanced MaxDiff test is more accurate than the classical MaxDiff test because it uses an adaptive real-time randomization algorithm that runs while the survey is in the field, maximizing the efficiency of item distribution across quads and respondents rather than relying on a predetermined map of the entire test.
It is also much more user-friendly than the classical symmetrical tables used in the past. Instead of asking respondents to find an item on a grid and check a box in the correct column, we present them with 4 items at a time and ask them to rank the items using our drag-and-drop interface. This conserves respondents' energy and focuses their attention on identifying the winner and loser in each quad with minimal effort.
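Each ranked quad yields the core MaxDiff observation: the top item is the "best" pick and the bottom item is the "worst" pick. As a minimal illustration (not the platform's actual estimation, which uses the statistical models described below), here is a simple count-based tally of such observations; the function name and scoring rule are our own for this sketch:

```python
from collections import Counter

def tally_quad_rankings(rankings):
    """Tally best/worst picks from drag-and-drop quad rankings.

    Each ranking is a list of item labels ordered from most to least
    preferred; the top item counts as "best" and the bottom as "worst".
    Returns a naive count-based score per item: (best - worst) / screens.
    """
    best, worst = Counter(), Counter()
    for ranking in rankings:
        best[ranking[0]] += 1
        worst[ranking[-1]] += 1
    items = {item for ranking in rankings for item in ranking}
    n = len(rankings)
    return {item: (best[item] - worst[item]) / n for item in items}

scores = tally_quad_rankings([
    ["A", "B", "C", "D"],   # A best, D worst
    ["A", "C", "D", "B"],   # A best, B worst
    ["C", "A", "B", "D"],   # C best, D worst
])
# A is best twice and never worst; D is worst twice and never best.
```

Real MaxDiff estimation replaces these raw counts with logit-based utilities, but the counts convey what information each screen contributes.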
Aggregate or HB?
The Advanced MaxDiff comes in two varieties: Aggregate (formerly MaxDiff Express) and HB. In Aggregate mode, the method focuses on collecting general, aggregate information without attempting to obtain individual-level estimates. In typical settings, respondents see 3-5 screens. While it is still possible to view results for a subset of respondents, keep in mind that the model is then fit only to that subset and does not take the remaining responses into account.
In HB mode, the method focuses on collecting high-resolution individual-level data to be analyzed by a Hierarchical Bayesian model. In typical settings, respondents see 10-20 screens. Besides the option to extract individual logistic coefficients, it also becomes possible to run the analysis on a subset of respondents with more robust results, since the model allows for individual-level estimation.
| | Aggregate | HB |
| --- | --- | --- |
| Number of items that can be tested | 7 to 200 | 7 to 200 |
| Maximum screens/trials limit | 12 | 20 |
| Maximum items evaluated by any given respondent | 3 items/screen: 36; 4 items/screen: 48; 5 items/screen: 60 | 3 items/screen: 60; 4 items/screen: 80; 5 items/screen: 100 |
| Average evaluations per item seen | ~1 time | ~2.5 times for 7-40 items; ~1 time for 41+ items |
| Tipping point of full item evaluation | 7-[max] items: evaluate all; [max]-200 items: evaluate a subset | 7-40 items: evaluate all (~2.5 times); 41-[max] items: evaluate all (~1 time); [max]-200 items: evaluate a subset (~1 time) |
| Analysis type | Aggregate – treats all respondents as one big data pool | Individual – creates utility scores for each item for each respondent |
| Additional analyses available | None | Significance testing; TURF Analysis |
Note: for more details on the methods, please refer to our knowledge base articles: Advanced MaxDiff Aggregate and Advanced MaxDiff HB.
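The "maximum items evaluated" figures in the table follow directly from the screens limit multiplied by the number of items per screen, assuming no item repeats within one respondent's screens (our reading of the table; the adaptive algorithm may repeat items when the list is short). A quick sketch of that arithmetic:

```python
# Screen limits per mode, taken from the comparison table above.
SCREEN_LIMITS = {"Aggregate": 12, "HB": 20}

def max_items_evaluated(mode, items_per_screen):
    """Upper bound on distinct items one respondent can see,
    assuming every screen shows previously unseen items."""
    return SCREEN_LIMITS[mode] * items_per_screen

for mode in SCREEN_LIMITS:
    print(mode, {k: max_items_evaluated(mode, k) for k in (3, 4, 5)})
# Aggregate {3: 36, 4: 48, 5: 60}
# HB {3: 60, 4: 80, 5: 100}
```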
Other Info
While MaxDiff appears as a single question in the survey editor, it takes several questions from the respondent's point of view. Pay attention to the note at the bottom of the question, "Experiment is using XQs", to keep an eye on the length of the survey. You may have to remove some other questions in order to run a larger MaxDiff experiment.
The order in which compared items are presented in each quad cannot be anchored for a given respondent, since it is governed by our adaptive efficiency algorithm.
Question text will be repeated for each quad. We recommend including a short instruction, such as: "Please rank the following items in order of your preference, from the most preferred on top to the least preferred on the bottom."
We break the list of items into quads (groups of 4 presented at a time); this is the only grouping available when running MaxDiff on the platform in DIY mode. If it's important to present the alternatives in groups of 3 or 5 at a time, please reach out and we'll set it up for you on the back end.
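The grouping itself is straightforward to picture. This sketch only shows the chunking into groups of a given size; it deliberately ignores the balancing and adaptive ordering the platform applies on top:

```python
def chunk_items(items, group_size=4):
    """Split an item list into presentation groups (quads by default).

    Illustrative only: the platform's adaptive algorithm also balances
    which items co-occur and in what order they appear.
    """
    return [items[i:i + group_size] for i in range(0, len(items), group_size)]

groups = chunk_items(["item1", "item2", "item3", "item4",
                      "item5", "item6", "item7", "item8"])
# Two quads: items 1-4 and items 5-8.
```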
The main question and each compared item can have an associated image. These images can be expanded to the full width of the survey widget, or appear as thumbnails that pop up on mouse rollover as a reference.