The Thorny Problem of Measuring Sentiment

Most of us have completed a survey containing a question about how much we “preferred” one brand over another, or our likelihood of choosing one product or political candidate over another. Regardless of the exact wording, these questions are attempts to directly measure “sentiment”: the relative strength of the motivation that makes one future choice more likely than another. They are asked so that the researcher can statistically link them to responses from other parts of the questionnaire, so that age, income, beliefs about the product or candidate, and so on can become predictors of future behavior. Unfortunately, these direct measures of motivation are not particularly accurate predictors of future behavior. Conclusions drawn from research based on these faulty measures will result in ineffective or cost-inefficient marketing ventures, simply because sentiment about the desired behavior was incorrectly measured.

So why are these questions asked this way if they are not predictive? Well, the researcher asks, “How else would you construct questions about future behavior other than by asking directly? In other words, don’t we always ask ourselves and each other what we are going to do in the future, and don’t we always get an answer?” This kind of thinking is characteristic of the problem: the issue is not whether we get an answer, but whether the answer predicts something that actually occurs in the future. This is why all of those fancy regression models generate a high level of statistical association with a future intended action but fail to predict actual behavior. Polling for the 2016 U.S. presidential election and the British Brexit referendum provides good examples. In both cases, estimates of intended voting based on the strength of the voter’s sentiment did not match actual voting behavior. What happened? Why aren’t these sentiment measurements valid predictors of future behavior? We have evidence of at least three sources of error, as follows:

Don’t Know – Many respondents have not “made up their mind” but are being asked to make a decision in order to complete the survey. Rather than admit that they “don’t know” and thereby appear “stupid” to the researcher, they select a choice they believe will be acceptable, regardless of their true sentiment. Many respondents simply do not think about their commitment to a future choice until the moment they make it. Until then, they will not devote the “energy” they believe such a thoughtful choice requires, but they will certainly provide an answer to the researcher to complete the survey or focus group.

Don’t Want to Tell – Many respondents perceive a social risk in telling a stranger, the researcher, what they actually feel: “Vote for Brexit? You think I’m crazy?” They will pick a future behavior that appears acceptable to the researcher, and is therefore less risky to themselves.

Don’t Want to Admit – Still other respondents don’t want to tell themselves, i.e., admit, that they are likely to take on a social risk by not buying from the insurance company Dad swears by or the PC brand they told everyone was the best. Again, the less risky choice is the one more likely to be selected.

How, then, are we to accurately predict future choices if we cannot measure sentiment or motivation correctly? The answer, for which we have a wealth of evidence, is to uncover and model the respondent’s own (perhaps hidden) bases of decision-making, and then apply the same calculus the respondent uses. This approach allows us to predict both preference and future choice at levels of precision exceeding all other approaches. Let us know if this improvement in predictability is something you would like to learn more about.

Tim Gohmann, Ph.D., Co-founder and Chief Science Officer | 805.405.5420 | tim@behavioralsciencelab.com

