How to Set Up a CX Measurement

Founder and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: June 11, 2021 * 9 min read


Measuring customer experience has never been more crucial than it is today. Nevertheless, most CX teams continue to face intimidating challenges, and not every CX professional knows how to advance customer experience measurement or how to leverage a customer-journey-based approach to optimize the measurement program.

Before we get into the nuts and bolts of measuring CX, it is important to understand how vital improving the customer experience is for a business. Exceptional CX management helps businesses understand who their customers are and capture their honest feedback in real time.

In this post, I will give you guidance on how to build up, or improve, your CX measurement so that it is shaped for impact.



How to Choose the Right CX Survey Type for Your Business?

The first question is what kinds of CX surveys are possible and meaningful. The first type is the general customer experience survey: you reach out to your audience, your customers, without a specific trigger and without a specific touchpoint. You reach out because you want a status report.

What is that status? Loyalty across all customers, regardless of whether there have been touchpoints. That is the general CX survey, and of course what you get is very different from a touchpoint CX survey. The touchpoint CX survey comes after a specific touchpoint: after a phone conversation, after the customer visited your website or a shop, or whatever the touchpoint was. Right after, or shortly after, you reach out and ask specifically about the experience at that touchpoint. The third type is the CX journey survey, which is basically a touchpoint CX survey with one difference: you connect the individual touchpoints to map out a holistic journey.

This journey is different for every single customer. So in a CX journey survey you also measure which touchpoint the customer experienced before the current one. With this information you can build a chain for every single customer and analyze how the touchpoints in that sequence influence each other. The fourth type is the competitive CX survey: you reach out not only to your own customers but also to non-customers, the customers of your competitors. Companies do this for benchmarking, but also to learn why those customers are still with that supplier.

What are they doing better than us? What can we learn? Where are the competitor's weaknesses? All of this you can learn from competitive CX surveys. These are the different survey types, and typically companies use all of them.

How to Choose the Right CX Rating Question?

The next question is what we ask, what we measure. The core thing to capture is the customer experience itself, and the most widely used measurement is the resulting loyalty. Loyalty is well captured by asking whether the customer is likely to recommend you. It is one indicator of customer loyalty, and it is the whole basis of the NPS.

The question is: how likely are you to recommend our brand to a friend? In a B2B context you add "or colleague" or something similar. It is always the same question, and the scale is always the same.

It always runs from zero to 10, although many vendors use one to 10 instead. That gives similar results, but it is slightly different; you introduce roughly a 10% bias. Keep in mind that the original scale runs from zero to 10 and only the extremes are labeled: zero "not at all likely" and 10 "extremely likely".

The NPS is then computed by taking the percentage of promoters (those who answered 9 or 10) and subtracting the percentage of detractors (those who answered 0 to 6). If you have 50% promoters and 30% detractors, your NPS is 20. That is the summary: a simple question on a zero-to-10 scale, and a simple formula. Note that the NPS is not a mean; it is the difference of two percentages, which is also a weakness. It is built that way because you can explain it to everyone, and everyone will understand it.
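
Here is a minimal sketch of that calculation in Python, using made-up ratings purely for illustration:

```python
# Minimal sketch: compute NPS from 0-10 ratings (hypothetical example data).
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)     # answered 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)    # answered 0 to 6
    return 100 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 7, 8, 6, 3, 10, 5, 9]   # ten example responses
print(nps(ratings))   # 50% promoters - 30% detractors = NPS of 20
```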

The weakness is that if you do not have many promoters or detractors, and your sample size is small, it becomes a matter of luck whether one respondent more or less falls into the promoter or detractor bucket. The numbers move with noise, and that makes the whole score fluctuate.

This happens when the sample size is low, or when you operate at the extremes with very few promoters or very few detractors. That is NPS, but you can of course use a different measure. Market researchers typically love Likert scales: they are more stable and also comparable across regions. You calculate a mean, and every scale point has a label. With NPS, nobody knows what a five means; it is something in the middle. But then what does a seven mean? You do not know, and in different cultures a seven is interpreted differently. This is eliminated if you put a wording on every scale point, which is exactly what Likert scales do. That is an advantage. On the other hand, a mean of 4.1 is harder to interpret. Beyond that, both measures correlate highly, and you can choose either of them.

There are other scores besides loyalty measures. Likert scales measure loyalty and NPS measures loyalty, but you could also decide to measure the Customer Effort Score: how much effort does it take to deal with us? That is very different. In some businesses it is not so much about loyalty or satisfaction; it is about taking the pain away.

That is what you can opt for if you find out that it drives your business, and that is what we come to in the following blog posts, where we will discuss how to find out which score is actually best for you, meaning which one is connected to your bottom line. Many companies also measure customer satisfaction.

And that is where it all began some decades ago: the whole customer experience movement started with measuring customer satisfaction. But businesses realized that satisfaction, especially at the touchpoint level, very much measures the moment, which is fair and which you may want to know, but it is very different from loyalty. Loyalty drives the bottom line and is a long-term indicator; it is an attitude. Satisfaction is not so much an attitude; it is more a judgment of a specific situation.

What Channels Are Being Used for CX Measurement?

You need to reach out to your customers, and you need to think about all the channels. There are two different approaches. One approach is to interview customers in the moment; that is what you want for touchpoint customer experience. You use the medium where you meet your customers.

It could be on the phone: within the call, you can ask two questions. It could be on the website as a pop-up, or it could be face to face. These are different forms of getting feedback, but the idea is to capture it in the moment. The other way is to reach out afterwards. Even with touchpoint surveys, many companies reach out after the touchpoint, for example with an email the next day. That is not ideal, but sometimes it is the only practical solution. The channels are of course telephone and email, which are used most often, and nowadays even text messages.

Imagine a customer receives an SMS: "We had this touchpoint, please rate us from zero to 10. How likely are you to recommend us?" The customer answers five and sends it off. Then you send another SMS: "Thank you. Why did you rate us this way?" and the customer can text back an open-ended answer. This is part of standard survey platforms; you can set it up very easily. There are more options that are still rarely used: social sampling, where you reach customers through ads (useful if you have no email addresses or are not allowed to email them), direct messaging if they follow your profile, or even Alexa interviews. The last three are certainly not widely used yet, but they may become so in the future. There are different ways of reaching out, and every way has pros and cons.

If you focus on one channel, your results will be more consistent, because every way of reaching out has a bias: someone who picks up the phone is a different person from someone who answers, clicks on, or even reads emails. If you mix everything and use all channels, you get the biggest reach and the most feedback, but you need to track which channel each response came from, because every channel has a bias and you need to control for it. Only then do you know that a change in NPS or your CX score is not due to different open or response rates across channels.
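
One simple way to keep a shifting channel mix from masquerading as a CX change is to report NPS per channel plus an overall score weighted by a fixed reference mix. The sketch below uses hypothetical data and assumed reference weights; it is one possible approach, not a prescribed one:

```python
# Minimal sketch (hypothetical data): NPS per channel plus a mix-adjusted overall NPS,
# so that shifts in channel response rates do not masquerade as real CX changes.
import pandas as pd

def nps(ratings: pd.Series) -> float:
    return 100 * ((ratings >= 9).mean() - (ratings <= 6).mean())

df = pd.DataFrame({
    "channel": ["email", "email", "sms", "sms", "phone", "phone", "email", "sms"],
    "rating":  [9, 6, 10, 8, 7, 9, 10, 3],
})

nps_by_channel = df.groupby("channel")["rating"].apply(nps)
print(nps_by_channel)

# Hold the channel mix fixed (e.g. last period's shares) instead of letting it float
reference_mix = {"email": 0.5, "sms": 0.3, "phone": 0.2}   # assumed weights
adjusted_nps = sum(nps_by_channel[ch] * w for ch, w in reference_mix.items())
print(round(adjusted_nps, 1))
```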


What Are the Main Advantages of Open-Ended Questions?

After you have measured the customer experience with a rating, you want to know why customers rated the way they did. This is very powerful, and you should ask it directly after the rating. Ask everyone the same question; do not split it into different questions for promoters and detractors.

What we often see is that companies ask detractors "What can we improve?" and ask promoters "What do you love most about our service?". They do it because the responses are easier to analyze that way. But there are multiple problems with it. The first problem is that you bias the response, because you implicitly assume there is a specific reason for being a promoter.

You assume promoters only like things and there is nothing they dislike, and you implicitly assume a detractor does not like anything. But most importantly, you will not be able to spot differences. Many things the detractors say, the promoters would say as well.

The other way around, promoters may also criticize certain things, but it does not hurt their rating much because those things are not so important. By asking different questions, you will never learn what is important. You get answers, but you have no way to check whether they make any sense.

So do not ask two questions; ask one question, and do not mark it as optional. We often see "optional: you can put some text here". No, encourage your customers to talk; ask them for a favor. Of course customers can leave it blank, but you should not make that too inviting, because the feedback is important and giving their opinion is valuable for customers as well. The benefits of open ends are quite clear. Customers love this way of giving feedback because it is the most natural thing, and it is the shortest way to give an opinion. We all know the quantitative surveys where you have to read the alternatives and decide, but you do not want to click on anything because you either do not understand an item or disagree with its wording. Open ends do not have that problem.

They are straightforward and super easy, and customers can use their own words. That is customer centricity; customers want that. The only thing we need to make sure is that we draw the most insight out of it. Another benefit is that open ends not only describe what people think, they also help you discover things you have not thought about.

They are qualitative as well, and you do not introduce bias. With quantitative surveys you always bias towards certain answers: if you ask about delivery time, you assume that delivery time is in the customer's relevant set, which is a bias. There are, of course, limits to open ends.

You do not learn everything customers have in mind. What customers write is whatever pops into their heads, which matters because it is top of mind, but basically that is it. They will not talk about things they would need to think hard about. That is a limit: you do not see everything, especially very subconscious things such as branding. You can still get a lot from open ends there, but then you need to ask different questions. Also, open-ended questions are not standardized; that is why it is called unstructured feedback. But as we will see, there is advanced technology that helps us get standardized measurements out of this information.

What Are the Pros & Cons of Asking More Closed-Ended Questions?

After the open-ended question, you can ask closed-ended questions, where customers choose from a scale from one to five or from multiple-choice options, for example which segment they belong to, and so forth.

The important thing is to always ask these after the open-ended question. Why? First, because closed-ended questions bias the results of the open-ended question. Second, the input customers give in the open end will be sparser if they have already answered everything in closed-ended questions; they will only write what was missing from those questions.

You get the richest feedback when you start with the open-ended question and then move to closed-ended questions.

Here are things you may want to consider for closed-ended questions. The first is the source of the response: if you run a touchpoint analysis, what was the touchpoint before this one? How did you arrive at our website? Did you Google it, did you click on an ad, and so forth? You may want to capture this source information as well as customer segment information, because context very much influences the customer experience. A luxury car customer has different expectations, and a different customer experience level, than an entry-level car customer. You want to know what kind of product the respondent is talking about or which segment they are in. That is very useful.

You could also measure some core aspects of your offering: quality, service, price, brand, USP, whatever. That is an option when you want to track things customers might leave out of the open end. The limitation is that such ratings are not very specific.

What do I mean by specific? When the quality score drops from 4.0 to 3.8, what action would you take? You need to know what to do to improve quality, and when the quality score decreases you still do not know what to do. With unstructured feedback you typically get more specific answers.


In a nutshell…

There are four types of customer experience surveys: the general CX survey, where you reach out to all your customers without a specific trigger; the touchpoint CX survey, which follows a specific touchpoint; the CX journey survey, which connects touchpoints into a holistic journey; and the competitive CX survey, where you reach out to your customers and to your competitors' customers to learn what competitors do better than you. The NPS question is the most used form of measuring customer experience, but there are other ways of doing it. You want to measure customer loyalty as a long-term attitude and as a long-term measure of the impact of your customer experience.

But you may also consider customer satisfaction as a short-term measure of your performance. There are different channels you can use for reaching out: email, text, phone. Each of them reaches different people. The ideal situation is to use all of them, because this maximizes your reach. When you do that, be sure to track the channel, so you can rule out a bias when the mix of feedback across channels changes. In any case, use the power of open ends, because that is the most customer-centric way of getting feedback. It is super intuitive.

It is effortless for customers and super rich, and it discovers new topics you may not be aware of. As an option, you can add closed-ended questions. Most interesting for a touchpoint survey: where does the customer come from, what is the context, what customer segment are they in, and what product are they talking about?

These are things you want to measure so you can control for them. You will need them later to find out what drives the customer experience…

P.S. Would you like to get the complete & interactive FREE CX Measurement Guidance for your business in 2021?

Simply subscribe to the free "CX ANALYTICS MASTERS" course below and enjoy the above-mentioned training guidance in its Class #1.

"CX Analytics Masters" Course

“Solve key challenges in CX analytics”

  • CEO of Success Drivers, Founder of CX.AI
  • Inventor of a methodology to discover success drivers from CX data
  • Consults on Customer Experience for over 20 years for the world’s biggest brands like Allianz, BMW, Intel, Facebook, Metlife, T-Mobile…
  • Member of CXPA
  • Official ESOMAR Instructor

Our Group:

Privacy Policy
Copyright © 2021. All rights reserved.


Predictive Qual


How to Turn The Art of Qual Into a Science of Impact?

Founder and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: June 7, 2021 * 9 min read

It was back in 2017 when David wrote me this email. He was heading insights for SONOS, and the company was bravely acting on their CX Insights. The problem: no improvement after all.

It was a mystery. Nearly 50% of speaker owners attributed their loyalty to the great sound. Every other topic was at or well below 10%. So the company had been tweaking the sound experience for years. Result: none.

There was no problem in measurement, no serious bias, no categorization mistakes. The data was great. It was telling a clear story.

Still, the story was wrong. Because data is not insight, and qualitative feedback is not insight either.



The hidden assumption behind this feedback is that customers can answer the question correctly. We assume they are involved and interested enough to dig deep into what is moving their behavior.

This assumption is like believing humans are rational beings: true only in rare exceptions.

The reality is sobering: customers write whatever comes to their minds. They do not lie, but they certainly do not tell the whole truth either.

As a consequence, what you see across industries is that customers mention top-of-mind topics. A speaker owner will say "because of the sound", a washing machine owner will say "washing powder", a restaurant customer will reply "great taste of the food", an insurance customer mentions the service, and so on.

What’s really moving the needle stays hidden.

Is QUAL useless then?

Not at all. Qual is indispensable. It allows customers to use their own language, words, and expressions. It helps us to discover what we have not thought about.

But here is the question: if this qual feedback, as described above, gives wrong signals, how can it be useful in any sense?

Some would argue that if an argument is plausible, it may well be valid. If you recognize yourself in this statement, please listen up: this is another GREAT misbelief.

(This article here is explaining exactly why we get fooled by plausibility)

Something plausible has some likelihood of being correct, but something implausible can very well be right too.

Qual is a great way of collecting data, perception, customer views, and it is helpful to synthesize new hypotheses. But there is no reliable mechanism to validate it.

The mechanism in use is a check for plausibility. In in-depth interviews, you can collect many qualitative data points and check them for coherence. But plausibility is a coherence check that only leverages existing beliefs.

The reason why we do research is that our knowledge is not good enough. We want to learn. Everyone who wants to learn is better off not relying on plausibility.

QUAL is broken.

Qual is excellent and indispensable. But it is NOT ENOUGH. On its own it is MISLEADING and can be dangerous.

Why does it seem that I am the first to tell you this? Because it is counter-intuitive. A dose of "qual feedback" is like a shot of heroin.

You read it; you have thousands of associations with it, you link it with other things you know, and your brain forms a story. Instantly you get "high". The rush of the eureka moment in your brain is hard to resist.

We are all on the needle. My rational self is screaming "noooooooooo", but my heart whispers "yeaaaa"; it automatically believes everything that feels plausible.


Predictive Qual – is the Qual of the AI Age.

Imagine a world where we can harness the power of qualitative feedback, but NOW would validate if a mentioned topic is truly causally accountable to explain a specific outcome.

Imagine we could still explore new things that go on in the world of customers and learn about the words they use to express themselves.

But now we would be able to filter out which of those things are just "cheap talk", which merely verbalize the same underlying phenomenon, and which is the one killer topic that moves outcomes.

What would this mean to CX research, and what would this mean to the entire customer insights and market research discipline?

Wouldn't this mean that suddenly the two worlds of qual and quant no longer collide but unite into one coherent approach?

Wouldn't this mean that research becomes much easier, because you can do everything, qual, quant and modeling, in one study? Wouldn't this mean that the research process not only becomes BETTER, but also FASTER and CHEAPER?


How Predictive Qual Works.

Sonos was eager to understand why nothing was working. So we did a deep dive and ran our causal machine learning methodology. It turned out that the sound of SONOS was already good enough, and improving it further was a waste of resources.

Instead, other strengths of the system (e.g. the reliability of the service) were not mentioned often. But when they were mentioned, they were the killer reason for customer enthusiasm.

With the predictive system behind these insights, we even dared to make a prediction: "If you double the mentions of this topic, NPS will increase by 8 points."

Was it a risk? Not if you have a system in place that can also explain NPS changes backward.

Fast forward six months of implementation work, and suddenly the NPS jumped, precisely by the predicted amount. It took a pinch of luck for it to be that precise.

Can you imagine how impressed leadership was by that?

The magic follows a simple three-step framework that everyone can apply themselves:

  1. Use AI to automate text categorization. Train a supervised AI to achieve optimal accuracy and granularity.
  2. Use a key driver analysis (KDA) approach to explain your CX outcome (satisfaction or likelihood to recommend) from the topics mentioned and the verbatims' sentiment. For better results, use causal machine learning instead of plain KDA to double the predictive and prescriptive power.
  3. Visualize the results in an interactive dashboard so everyone can play with them. Use the driver model formula to simulate how changes in the drivers affect the outcome.
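
To make the first two steps concrete, here is a minimal sketch under strong simplifications: keyword rules stand in for a trained supervised text classifier, and plain OLS stands in for causal machine learning. The data, topics, and column names are hypothetical.

```python
# Step 1 (simplified): flag topics per verbatim. Step 2 (simplified): linear key
# driver analysis of the rating on topic mentions. Hypothetical illustration only.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "verbatim": [
        "great sound but the app kept crashing during setup",
        "the service is reliable, never had an outage",
        "sound quality is amazing",
        "support was slow and the app is buggy",
        "reliable system, sounds great in every room",
        "nothing special, the app is ok",
    ],
    "rating": [7, 10, 9, 3, 10, 6],   # 0-10 likelihood to recommend
})

# Step 1: keyword-based topic flags (a real setup would train a supervised model)
topics = {"sound": ["sound"], "reliability": ["reliable", "outage"], "app": ["app"]}
for topic, keywords in topics.items():
    df[topic] = df["verbatim"].str.lower().apply(
        lambda text, kws=keywords: int(any(k in text for k in kws)))

# Step 2: key driver analysis (a real setup would use causal machine learning)
X = sm.add_constant(df[list(topics)])
result = sm.OLS(df["rating"], X).fit()
print(result.params)   # naive "importance" per topic
```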

The Predictive Qual Trend.

The trend towards "predictive qual" is apparent. Many CX software and insights platforms have integrated not only text analytics (step 1) but also a driver analysis (step 2). It is a clear sign that pioneers among insights leaders are already adopting this methodology.

It fills me with a bit of pride to have started this movement back in 2017. In truth, David from Sonos was the one who got the ball rolling.

Soon after, big brands like Microsoft followed and helped us build the best Predictive Qual method available, providing 4X the impact of actions compared to DIY solutions.

In a nutshell.

Qualitative feedback is indispensable. It gets us unfiltered feedback and enables us to discover new things.

But just reading it can be largely misleading. It is even a proven fact that how often a topic is mentioned (as a reason for satisfaction or loyalty) is not related to its importance.

To draw adequate conclusions from unstructured feedback, you need a Predictive Qual approach.

The approach can be implemented with a three-step framework: text analytics, driver analytics, and predictive dashboarding.

With state-of-the-art excellence in each step, you can quadruple the impact of the actions derived from the data.

This means it matters HOW you implement it.

Your next steps.

What can be your next steps in better exploiting your qualitative feedback data?

Do you feel the need to build knowledge around this for you or your team? Then you might consider the “CX Analytics Masters” Course – it’s free for Enterprise Insights professionals.

Do you feel some more urgency, and you want to assess your status quo better? Then it might be wise to schedule a complimentary strategy session with an expert – it’s free for Enterprise Insights professionals.

In case you have not yet subscribed to my standpoint newsletter: it would be best if you did this now, because there is more exciting stuff coming very soon.



"CX Standpoint" Newsletter

“It’s short, sweet, and practical.”


Each month I share a well-researched standpoint around CX, Insights and Analytics in my newsletter.

+4000 insights professionals read this bi-weekly for a reason.

I’d love you to join.

Our Group:

Privacy Policy
Copyright © 2021. All rights reserved.


TRUTH-HACKING: 3 Rules To Not Get Fooled by Data

Founder and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.

Published on: May 18, 2021 * 9 min read

Everyone communicates with facts and data to support a certain message. Politics does it, the media do it, businesses do it. Lying with data has become a shady art, perfected by politicians and cultivated by managers.

Even worse, business leaders fool themselves day in day out by drawing “obvious” conclusions from data.

Imagine we could make sure we do not fall for fake news, alternative facts, spurious correlations, and the like. Imagine we had a checklist to see whether an insight is legit.

How many trillions of dollars could be saved? How many lives could be saved? How much smarter would people guide political decision-makers? How much better would this world become if we all get a little bit data-smart every day?

This article explains the way…

So now, how can we separate the wheat from the chaff? How do you check whether a finding is flawed?

These three simple rules are your guide:

  1. Control How Your Data Is Sampled
  2. Understand What Your Data Really Mean
  3. Be Aware of How You Infer Truth

Here is why…



Control How Your Data Is Sampled

In January 2017, Donald Trump held his inauguration speech. Media outlets critical of Trump highlighted the fact that the audience present was significantly smaller than at previous inaugurations.

However, a Trump spokesperson countered that peak subway ridership had been extremely high. The defense of such claims became world-famous through the "alternative facts" quote.

It stands for cherry-picking examples to prove the point you want to make.

Politicians of all parties are doing it all the time. But not just them. Business leaders do it too.

The moon landing conspiracy is built on selected facts that make the project look questionable, like the waving flag although the moon has no wind, while neglecting the available facts that explain the phenomenon.

Beware of Cherry-Picking

In the 2000s, when I still used to read business books, I had this eureka moment. I was reading the book "Simply Better", which used Cardinal Health as an example of bad management. Then I read "Close to the Core", which used Cardinal Health as the case study for how to do it right.

Even business book authors cherry-pick. Every single business book on this earth is a cherry-picked selection of cases that prove one single theory.

What’s wrong with getting some inspiration from business books?

It’s a dangerous dance at the edge. Getting inspired by wrong ideas will derail your mind.

The problem: the books hinder you from forming your own opinion and finding the truth. They are designed to make you believe the theory.

Reading business books is less likely to make you smarter or more successful, and more likely to turn you into a business-fashion victim.

Cherry-Picking is the practice of selecting results that fit your claim and excluding those that don’t.

Why is this strategy so successful at fooling us all? Because we draw conclusions from any experience, no matter whether it is representative or not.

What can you do about it? Do not draw conclusions from data without checking that it represents the matter of interest.

There are other forms of “cherry-picking”…

Sampling Bias – the unintended cherry-picking

In 1948, The Chicago Tribune mistakenly predicted, based on a phone survey, that Thomas E. Dewey would become the next US president. They had not considered that only a certain demographic could afford telephones, excluding entire segments of the population from the survey.

This kind of cherry-picking can sneak into decision-making easily. Have you ever run a churner survey?

Survivorship Bias – another unintended cherry-picking

In World War Two, the US Army checked returning bombers for damage from gunfire and added armor to those spots. It did not help at all.

Why? They needed to consider the bombers that did NOT survive the gunfire too, in order to find the spots that required further protection.

I have never met a client running a churner survey who realized they were fooling themselves with survivorship bias. To find out what leads to churn, you need to survey customers, NOT ex-customers, and then follow which of them churn.

We believe in data and facts. For us, "facts" are a synonym for "truth". But cause-effect relationships cannot be observed; they must be inferred (and a "reason" IS a cause-effect relationship). This article has more on that. NEVER take facts alone to decide about reasons.

ALWAYS make sure to have cases with different outcomes in your sample: successful and unsuccessful, churners and non-churners, winners and losers.

Understand What Your Data Really Mean

When working with an automotive brand, I was astonished at how incredibly high the customer satisfaction was at nearly all of their car dealers.

The client took me aside and explained: car dealers are incentivized on customer satisfaction. They get millions in cashback from the manufacturer if customers are satisfied. He further explained that, sure enough, the larger car dealers hired personnel just to call those who gave lower ratings and persuade them to take the rating back. They also implemented all kinds of other measures to make sure the rating was excellent.

When I bought a car shortly after, it became tangible: the dealer smiled at me with a huge basket of flowers in his hand and said, "Hope you'll enjoy this car. If someone from Ford calls you, we would be delighted if you say 'extremely satisfied'. If you are not, please tell me beforehand."


Beware the Hawthorne Effect

In the 1920s at Hawthorne Works, an Illinois factory, a social sciences experiment hypothesized that workers would become more productive following various changes to their environment such as working hours, lighting levels, and break times. However, it turned out that what motivated the workers’ productivity was someone taking an interest in them.

When you measure customer satisfaction or the likelihood to recommend, the act of asking alone may increase or decrease the outcome. In CX this is sometimes used deliberately in "cuddle calls": you reach out to customers just to show that you care, and that alone improves satisfaction.

WHY is this fooling us all?

We are not aware that data is just a representation of a real-world phenomenon. We take the label of the data and treat it as the truth. Only when we understand how the data was generated can we properly interpret the resulting analysis.

WHAT can you do about it?

  • When interpreting analysis results, consider that the data might not have been generated the way you believe it was
  • Track the context of data generation (e.g. as a binary variable, 1 for "with observation" and 0 for "without observation") and include this information in the analysis, as shown in the sketch after this list
  • Make sure you have really understood which piece of reality the data stands for.
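
A minimal sketch of that second point, with a hypothetical file and hypothetical column names: encode the collection context as a 0/1 flag and include it in the driver model so its effect is separated from the real drivers.

```python
# Sketch: control for the data-generation context (hypothetical file and columns).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dealer_ratings.csv")            # assumed columns: satisfaction,
                                                  # service_quality, followed_up
df["observed"] = df["followed_up"].astype(int)    # 1 = dealer followed up on the rating

model = smf.ols("satisfaction ~ service_quality + observed", data=df).fit()
print(model.params["observed"])   # estimated lift from the collection context itself
```
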
Be Aware of How You Infer Truth

True facts: global warming correlates highly negatively with the number of pirates. The number of people drowning by falling into pools correlates with the number of Nicolas Cage movie appearances. And shoe size correlates with career success.

We all heard about it: “Correlation is not causation“. The intuition to take correlation as causation is hard-wired in our brains. It is tough to resist this conclusion.

Correlation works great where just one or two things impact an outcome, AND the effect happens shortly after the action. Beyond these cases, correlation is largely misleading.

Beware the Cobra Effect

When marketers around the world need to hit their sales numbers for the month, they run price promotions. It obviously works, as the sales number reacts immediately.

Still, it causes more harm than good: The Cobra Effect.

A good share of the additional sales is simply sales that would have happened anyway, only later and at a higher price.

The net sales effect is much lower, and the profit effect questionable as margin suffers.

On top of this, competitors react to defend their market share and push their own price promotions. This hurts your sales and the overall market price level, and it leads you straight to the next price promotion. It is a vicious cycle.

In the 1800s, it is said, the British Empire wanted to reduce cobra bite deaths in India. They offered a financial incentive for every cobra skin brought in, to motivate cobra hunting. Instead, people began farming cobras. When the government realized the incentive was not working, they removed it, so the cobra farmers released their snakes, increasing the population.

This is WHY the Cobra Effect fools us so successfully:

We see the immediate effect, e.g. of a price promotion. The indirect effects and the long-term effects are not as obvious, because in the long run other factors influence the outcome as well, and the effect spreads out over time.

When people do not know a solution to an obvious problem, they take the obvious solution: a price promotion.

Actionism always looks like a "good" strategy in complex environments. Nobody can accuse you of doing nothing, and nobody can easily prove that you are wrong.

Managing a complex system takes complex analytics to understand it and self-organizing measures to address it.


Beware Assumptions

There is a joke among data scientists: "If you shoot past the deer on the left and then on the right, on average it is dead." Believing that an average represents everyone well can be misleading.

It’s all around…

Once, we ran a marketing mix model for a pharma sales force to determine which marketing and sales actions drive prescriptions. Conventional (linear) modeling "found" that giving product samples to doctors drives prescriptions.

When we applied a more flexible machine learning methodology, it turned out that beyond some point, more samples REDUCE prescriptions.

After the fact, this is clear. The doctors give the samples away. If they have too many, they will hand out samples first instead of prescribing.

Summarizing always comes with assumptions. Those assumptions are in many (if not most) cases WRONG.

To demonstrate the effect, the statistician Francis Anscombe put together four example data sets in the 1970s, known as Anscombe's Quartet. Each data set has the same mean, variance, and correlation.

However, when graphed, it becomes clear that the four data sets are totally different. Anscombe wanted to make clear that the shape of the data is as important as the summary metrics and cannot be ignored in the analysis.

It can be misleading to look only at the summary metrics of data sets. This applies to parametric statistical modeling as well: its parameters summarize a pre-assumed property (primarily a linear relationship).
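
You can reproduce Anscombe's point in a few lines. The sketch below uses the quartet that ships with seaborn's example data (downloaded on first use); the summary statistics are nearly identical across the four sets, while the plots look completely different.

```python
# Sketch: identical summaries, very different shapes (Anscombe's Quartet).
import seaborn as sns

df = sns.load_dataset("anscombe")            # columns: dataset, x, y
stats = df.groupby("dataset").agg(
    x_mean=("x", "mean"), y_mean=("y", "mean"),
    x_var=("x", "var"), y_var=("y", "var"))
stats["corr_xy"] = df.groupby("dataset").apply(lambda g: g["x"].corr(g["y"]))
print(stats)                                  # four nearly identical rows

sns.lmplot(data=df, x="x", y="y", col="dataset", col_wrap=2)   # four very different pictures
```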

WHY is this strategy so successful at fooling us all? Our world is complicated enough; we have a desire to make it simple. Simple is beautiful to us. We believe what we want to believe: a simple, plausible explanation.

Confounder at work

When you take customers' NPS ratings and correlate them with those customers' later development (whether they churn or even buy more), you will be surprised, again and again.

What we see is that the two often hardly correlate, for several reasons. One reason is the so-called Simpson's Paradox.

When customer segments with a higher upsell potential give more critical ratings at the same time, it will mess up your correlation.

In the 1970s, UC Berkeley was accused of sexism because female applicants were less likely to be accepted than male applicants. However, when the university tried to identify the source of the problem, it found that within individual subjects, acceptance rates were generally better for women than for men.

The paradox was caused by a difference in which subjects men and women applied to. A greater proportion of female applicants applied to highly competitive subjects, where acceptance rates were much lower for both genders.

Simpson's Paradox is a phenomenon in which a trend appears in several groups of data but disappears or reverses when the groups are combined.

It works because humans are hardwired to believe in correlations: "When something consistently happens along with something else (correlation), there must be a cause-effect relationship of some kind."

Correlation leads us astray for several reasons. One of them is highlighted by Simpson's Paradox: the influence of a confounding effect.

If there is something that influences both the presumed cause (e.g. the NPS rating) and the effect (e.g. customer value, churn, upsell) at the same time, then the correlation (as well as any model that excludes the confounder) can be wrong.
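
The sketch below illustrates this with simulated (entirely hypothetical) data: a segment variable influences both the NPS rating and later upsell, so the pooled correlation points in the wrong direction while the within-segment correlations are positive.

```python
# Sketch: a confounding "segment" flips the pooled NPS-upsell correlation (simulated data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 4000
segment = rng.integers(0, 2, n)                       # 1 = high-potential segment
nps = 7 + rng.normal(size=n) - 2.5 * segment          # high-potential raters are more critical
upsell = 0.2 * nps + 4.0 * segment + rng.normal(size=n)   # but they upsell far more

df = pd.DataFrame({"segment": segment, "nps": nps, "upsell": upsell})
print(df["nps"].corr(df["upsell"]))                                        # pooled: negative
print(df.groupby("segment").apply(lambda g: g["nps"].corr(g["upsell"])))   # within: positive
```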

WHAT can we do about it?

In a business context, whenever possible, avoid jumping from correlation to conclusion.

Instead, use methods designed to infer causality. They are called "causal analysis", and the latest technology is "causal machine learning".

In situations where it is not possible to run proper analytics, at least make yourself aware of how fragile your learning is. Try to hypothesize other explanations for the correlation. Evaluate possible A/B testing options.

A warning sign is when not only a correlation but also its absence can be wrapped in a nice story.

My advice on “Indirect (Cobra) Effects”: Beware actionism. If you are not sure, doing something can be more harmful than “wait & see”. The latter is an established strategy in medicine and should be in business too.

There are well-established methods in place that can bring light into the darkness. If you educate decision-makers that it takes proper analytics to see the root causes of effects, then causal models will become standard practice.

My advice on “Beware Assumptions”: Take a look at raw data first. It will not answer your overall question but can quickly spotlight wrong assumptions you are making.

Practice humility. Humans tend to overestimate the validity of what they know, big time. Be aware that most of what we know about business will turn out to be wrong (or oversimplified) in the future.

Machine learning is made to model input-output relations with the fewest possible assumptions. Causal machine learning adds the framework and algorithms to get the insights you are looking for.

My advice on "Confounders under control": the Berkeley example suggests that it is enough to split the KPI comparison into a two-dimensional table. But this is deceptive.

It takes a good amount of fortune to find the hidden confounder this way. Mostly, you do not know what you do not know.

There can be dozens of variables that may turn out to be confounders. That is why causal machine learning is the way to go.

This article further elaborates on how you can spot causation.

Truth-hacking – the art & science of the 21st century

Being a truth-hacker can be hard. Do not become one if you cannot handle uncertainty. Do not become one if you do not have a passion for truth.

My passion is based on my conviction that it is unethical and unfair not to aim for truth.

It is unfair to your colleagues to have hidden agendas, it is unfair to shareholders who invest hard-earned money, and it is unfair to customers, who are the ones paying your check.

Truth-hacking can be learned. It is a “simple” three-step process:

  1. Control How Your Data Is Sampled
  2. Understand What Your Data Really Mean
  3. Be Aware of How You Infer Truth

Think about how this would make the world a better place!

What if you no longer had to fool yourself with ludicrous fact-based stories? Wouldn't that feel better, too?

What if middle management could no longer trick company leaders and investors with questionable fact-based explanations? Investments would find uses that thrive instead of making the shady rich.

What if politicians could no longer blind voters with cherry-picked facts? Voters would elect politicians who truly drive prosperity.

If you are now passionate about truth-hacking too, please spread the word.

Share this article not just with your friends but with those who you really want to adopt this art too.

Share this article and sign up to get my upcoming articles elaborating on these topics.

– Frank

"CX Standpoint" Newsletter

“It’s short, sweet, and practical.”


Each month I share a well-researched standpoint around CX, Insights and Analytics in my newsletter.

+4000 insights professionals read this bi-weekly for a reason.

I’d love you to join.

Our Group:

Privacy Policy
Copyright © 2021. All rights reserved.


Beware Storytelling, Practice Truth-telling

Founder and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: April 29, 2021 * 9 min read

Storytelling is a crucial skill for "selling" insights internally, but it comes with a risk: storytelling works regardless of whether the insights behind it are true.

"It was a hot summer day when I got a call from David…" This is how I usually start my conference presentations and keynotes, simply because stories grab the audience's attention. You just want to know what comes next.

Like a pleasant song, it feels painful when it stops. Your brain sings the song along even then. Same with stories.

It was 2008 when I was traveling to Russia, and I was fortunate enough to join someone's class reunion. What I witnessed was unexpectedly amazing. Many times during the evening, someone stood up, raised a glass of vodka, and started telling a story. It always began with a random occasion like…

“This morning I went to the shower, and it was hard to calibrate the water temperature …(then interpreting this into an analogy) … Isn’t it like in life?… You need time … But once you find an optimal mix, it’s such a pleasure. It’s like with people, once I find you — my friends, I don’t want to miss you anymore “.

I learned that Russians are pure naturals at storytelling. It was so emotionally intense, and it was the perfect validation of the point: storytelling is powerful.



BECAUSE storytelling is so powerful, it is dangerous.

Please check the following statements and figure out if you agree with them:

  • Differentiating our brand is a vital marketing task
  • Loyalty metrics reflect the strength not the size of our brand
  • Retention is cheaper than acquisition
  • Price promotion boosts penetration, not loyalty

I bet you agree, since trillions of great success stories have been written about these statements over the last decades. Still, none of them is true. Please bear with me!

In my college days, I made a strange observation. Students majoring in languages were (not surprisingly) very eloquent. But their whole argumentation and flow of reasoning, when talking about what matters in life, seemed odd to me and full of incongruent thoughts and explanations.

This astonished me, as I had learned that language is the operating system that runs the thought process. Like math runs on numbers and variables, rational thinking runs on words and language. Fewer language skills or words = less elaborate thinking possible.

While this might be true, it turned out that a wealth of eloquence can simply be misused to camouflage a lack of logical stringency and an absence of real meaning.

People who are trained to judge their own statements by "how it sounds" rather than "what it exactly means" are not able to produce content that can even be true.

The same is true of storytelling. For someone with great insights, it is the key to transforming insights into action, because you need to win over peers and business partners.

At the same time, storytelling is excellent at camouflaging nonsense.

Storytelling is like nuclear power.

Nuclear power can generate electricity for the prosperity of our society. But in the wrong hands, it can kill billions.

Have you ever thought about how to save your organization from the “BS army”?

Storytelling has become an art of its own. Besides the story structure, the sequencing of a Hollywood movie, and the use of metaphors that link back to existing memory structures, it is built on one simple yet super powerful trick: "plausibility."

I served as a Sales & Marketing Director in my former business life, and I had monthly performance reviews with my sales reps. One rainy Friday I will never forget…

Karl showed me his dashboard, and I asked him: "Hmm. Volume for X is down, why is this?" He quickly gave me a super plausible story as an explanation. Suddenly, I realized we had been staring at the previous year's data the whole time.

After switching to the current data, the volume was suddenly up. Karl again had a remarkable story at hand.

"How much are explanations really worth?" I suddenly wondered.

Storytelling gives holistic examples that illustrate the theory (= the insight). With this, the theory (=the insight) feels plausible.

Here is the catch…

Plausibility is USELESS

“Targeting always improves ROI” — right or wrong?

Sure, this is a plausible statement. The more of the addressed people are likely to respond positively, the better the effect will be.

But it is still plain wrong, as you can read in the widely published work of the Ehrenberg-Bass Institute.

We (at Success Drivers) once did a marketing mix modeling for a mobile carrier and included digital affiliate ads as an ad channel in the mix. The client could not believe their eyes: this channel, which everyone had dismissed as "junk ads", showed by far the largest ROI.

Sure, it looked like junk, because affiliate ads are not targeted. But they are super cheap. Furthermore, nearly every internet user needs a mobile carrier, so affiliate ads reach targets that are not ideal but still relevant. The low ad cost overcompensates for the lack of targeting.

The gains from targeting have to be traded off against the rise in cost. If everyone rushes toward targeting, it will end up with a lower ROI than non-targeting.

Reality is complicated… 😉

End of the story: the mobile carrier stopped working with us and switched to providers that are happy to produce "plausible results".

Checking for plausibility means checking for existing beliefs.

It is helpful in the operational and tactical contexts. It’s valuable if you don’t have time to search for the truth, but you need to make decisions fast.

In the context of customer insights, plausibility can be DEVASTATING.

The role of customer insights is to create new knowledge, to challenge and change existing beliefs.

If your new insights only pass the test when they comply with existing beliefs (i.e. when they are plausible), the wealth of insights you are actually going to gain will be poor.


Plausibility is the end of the insights.

This is not to blame anybody; it is to open our eyes. The "plausibility" superstition has a long history that actually has its roots in the social sciences.

When it comes to applied statistics, students still learn to proceed theory-led. It would take another article to clearly show that this whole research approach is more harmful than helpful in today's world.

It is practical for publishing scientific papers. But it is not helpful for making a relevant practical impact in real life, nor for gaining genuinely unique, and thus valuable, insights.

"Nothing is more practical than a good theory" (Kurt Lewin) was the mantra of the professor who taught me marketing and statistics. While the point is certainly valid, it is abused by academia and applied researchers.

The problem we have is NOT that our "good theories" are not used. The problem is that we do not use proper methods to DISCOVER good theories (= insights).

I know. This now violates the existing beliefs of most of you. You are skeptical. That’s fine. Make up your own mind. (And challenge me if needed to write a more in-depth article about this)

Today’s statistical practice of causal modeling is built on this unpractical theory-led approach and works like this:

Collect all hypotheses that are backed up by theory. If something is just speculation, leave it out of the model. Then test the model with a statistical modeling approach (ranging from regression and econometric modeling to structural equation modeling) to validate the relationships you already have theories for.

The only meat on the bone you get is the linear strength coefficient of each relationship. No wonder researchers are starving for richer insights.

And so they roam to those who promise "richer insights": the story- and fortune-tellers.

The main reason why few people use conventional causal modeling in practice is not that it is clunky and complicated. They do not use it because a method that simply validates what you already know is not that helpful.

Now, if a plausibility check is a numb sword, how can we test theories and potentially new insights?

The two types of insights: there are facts (descriptives) and relationships between facts (cause-effect relations). The latter is what businesses are, often unknowingly, asking for: "Which action A will lead to outcome B?"

The art and science to gain those insights can be labeled as “causal modeling”. This talk here discusses this in more detail.

… and this article explains it in layman’s terms.

JUST TRYING to be more causal will drive huge bottom-line impact.

Every baby step towards better causal modeling brings you closer to the truth. It is a journey. You cannot do it wrong, only more or less well.

The only mistake you can make is not doing it and reverting to the usual “plausible” procedure of looking at and comparing facts.

We need to accept that in truth, we might be wrong. Actually, we are all the time kind of wrong. With fancy stories, we make ourselves feel better and camouflage the fact that we are living in a big bubble of false beliefs.

Endless examples pop up if you take your “funnel glasses” off. Do you remember these statements from the beginning of the article?

  • Differentiating our brand is a vital marketing task
  • Loyalty metrics reflect the strength, not the size of our brand
  • Retention is cheaper than acquisition
  • Price promotion boosts penetration, not loyalty

All very plausible statements, right? Most are backed up by "theory", and all are elaborated in established marketing books from Kotler & Co.

The world's largest marketing science institute, Ehrenberg-Bass, has access to the most comprehensive datasets spanning all industries, from the largest brands and corporations in the world. They found no support for those statements and often found proof of the opposite.

Just because something is plausible, and just because marketing textbooks cite it, does not mean it is true.


Up Your Storytelling Game: Practice TRUTH-TELLING.

Storytelling is the art of wrapping a theory so that it feels easily understandable and true. It feels like the natural way of proving a point.

But it’s not. It’s an illusion.

Now, what do we do with this new insight? Depending on your role in a company, the learning will be different.


· Continue the art of storytelling, but add the art of proving the truth to the mix.

· If you want to be an ethical leader, do not simply align your research with whatever creates the best stories. Remember, if you wish, you can make ANYTHING fuel a great story.

· Instead, focus on creating true insights. (This sounds self-evident, but it is the exception.)

· Educate business partners on truth-telling instead of simply selling them what they ask for (plausible stories).


· Challenge your insights leaders to provide evidence of (causal) truth.

· Every time they come up with facts, comparing or correlating them, stand up and shout "BS". Or, if you are a nice person with manners, tell them you do not buy it because of the apparent risk of spurious findings.

· Embrace insights that VIOLATE existing beliefs. Take them as an opportunity to learn and grow.

I would like to close with a few simple takeaways that will guide you on your way from storytelling towards truth-telling:

1. Be suspicious of plausible stories

2. Be aware that most of what you know is wrong


“If you would be a real seeker after truth, it is necessary that at least once in your life you doubt, as far as possible, all things.”

– Rene Descartes

Stay suspicious,

Stay aware,

Stay curious,


"CX Standpoint" Newsletter

“It’s short, sweet, and practical.”


Each month I share a well-researched standpoint around CX, Insights and Analytics in my newsletter.

+4000 insights professionals read this bi-weekly for a reason.

I’d love you to join.

Our Group:

Privacy Policy
Copyright © 2021. All rights reserved.


Correlation is not causation — but what is causal?

Founder and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: April 15, 2021 * 9 min read

Correlation, causation, statistics — all this sounds boring, complicated, and not practical. I’ll prove in this article that NONE OF THIS IS TRUE.

Since the beginning of humanity, we have roamed through savannahs and ancient forests and gained causal insights day in day out.

One tried to light a fire with sandstone: it did not work. One used a sharp stone to open a mammoth: it worked. One tried those red berries: dead within an hour.

Correlation works excellently in simple environments. It works great if you have only a handful of possible causes AND the effect follows shortly after.

Fast forward, one million years: Day in day out, we are roaming through leadership zoom meetings and business dashboards.

“David did this, next year sales dropped. Let’s fire him.” “NPS increased. Great job, our strategy is working.”

Is it really that easy?

We still use our stone-age methods. We use them to hunt for causal insights and to justify the next best actions. Actions that cost millions or billions in budget.


Crystal Clear CX-Insights with 4X Impact of Action
60-minute Condensed Wisdom from The World’s #1 CX Analytics Course

Businesses still operate like Neanderthals

If you invest today in customer service training, you will not see results right away. It may even get worse for a while. Later dozens of other things will impact the overall outcome — new competitors, new staff, new products, new customers, new virus mutation, or even a new president.

You cannot see, just by looking at it, whether an insight is right or wrong. Even if you put the insight into action and try it out, you will not witness whether it works or not.

Dozens or hundreds of other factors influence outcomes. Even worse, activities take weeks, months, or years to culminate into effects.

I believe people know this. But they don’t have a tool to cope with it. This is why everyone goes back to Neanderthal modes — like a fly, hitting the window over and over again, just because it knows no better way.

Businesses live on Mars, Science on Venus

It was a sunny September day in 1998. I was sitting in the final oral exam of my master’s degree with Professor Trommsdorff — THE leading marketing scientist in Germany at that time.

He asked me, “What are the prerequisites for causality?” I answered what I had learned from his textbook:

  1. Correlation: the effect happens regularly after the cause.
  2. Time: the cause happens before the effect.
  3. No third causes: no obvious external reasons why the two correlate.
  4. Theory: supported by other theory.

Even during this exam, I knew that this definition was useless for real life.

Here is why…

Point #1 — Correlation: most NPS ratings do NOT correlate with resulting customer value, yet we can still prove a significant causal effect. Below you will find a great example of why that is. Correlation is NOT a prerequisite of causality; it is one only in controlled laboratory experiments.

Point #2 — Theory: How can you unearth new causal insights if you always need a supporting theory? This is just useless for business applications. Actually, it is holding back progress in academia too.

One underlying reason for this useless definition is that academia has different goals than businesses. Academia aims to find the ultimate truth. As such, it sets more rigid criteria (spoiler: this helps for testing causality, but not for exploring it).

For businesses, the ultimate truth is not relevant. Instead, what you want is to choose actions that are more likely to be successful and less likely to be costly.

Because today “causality” is associated with “ultimate truth,” academia avoids this word like the devil avoids holy water, from statistics all the way through marketing science.

Because science is largely neglecting causality, it is not correctly taught in universities and business schools.

This then is why businesses around the world are still in a Neanderthal mode of decision-making.

Join the World's #1 "CX Analytics Masters" Course

Free for Enterprise “CX-INSIGHTS” Professionals

Causality in business equals better outcomes

Question: What are the most crucial business questions that need research? Is it how large a segment or market is (descriptive facts), or is it which action will most effectively lead to business outcomes?

Exactly: this is the No. 1 misconception in customer insights. Everyone expects that “insights” are unknown facts that we need to discover.

In truth, the crucial insights a business is looking for are mostly not facts but the relationships BETWEEN the facts. It’s the hunt for cause-effect insights.

But how can we unearth such insights?

Here is a practical understanding of causality that enables the exploration of causal insights from data. At its core, it relies on the work of Clive Granger, who was awarded the Nobel Prize in 2003 for his work.

In 2013 we took a look at brand tracking data for the US mobile carrier market. T-Mobile wanted to find out why its new strategy was working. The question: was it the elimination of contract terms, the flat-fee plan, or the iPhone on top that attracted customers?

Causal machine learning found that NONE of the many well-correlating factors had been the primary reason. It was the Robin-Hood-like positioning as the revolutionary brand “kicking AT&T’s butt for screwing customers”.

A “driver” directly causes an “outcome” if it is incrementally “predictive”. It means that, when looking at all available drivers and context data, this particular driver improves the ability of a predictive model to predict the outcome. The new positioning perception did exactly that for T-Mobile.

Even if every driver correlates with the outcome, the model may need just one of them to predict it. This one driver is, as Granger proved, most likely the direct cause.
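
To make this “a driver is causal if it is incrementally predictive” idea tangible, here is a minimal Python sketch. The data, variable names, and effect sizes are invented stand-ins for the T-Mobile case, not the real study; the point is only the mechanic of comparing out-of-sample predictive power with and without the candidate driver.

```python
# Minimal sketch of a Granger-style "incremental predictiveness" check.
# Synthetic data: 'positioning' is the true direct cause of loyalty;
# 'no_contract' works only through positioning.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
no_contract = rng.normal(size=n)                      # feature perception
positioning = 0.8 * no_contract + rng.normal(size=n)  # "Robin Hood" brand perception
loyalty = 1.5 * positioning + rng.normal(size=n)      # outcome, e.g. loyalty rating

def cv_r2(X, y):
    """Out-of-sample R² of a flexible predictive model."""
    model = GradientBoostingRegressor(random_state=0)
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

base = np.column_stack([no_contract])                 # all other drivers and context
with_driver = np.column_stack([no_contract, positioning])

r2_base = cv_r2(base, loyalty)
r2_with = cv_r2(with_driver, loyalty)

# If adding the candidate driver improves out-of-sample prediction, it is
# (in Granger's sense) a likely direct cause of the outcome.
print(f"R² without positioning: {r2_base:.2f}")
print(f"R² with positioning:    {r2_with:.2f}  (lift: {r2_with - r2_base:.2f})")
```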

Machine Learning revolutionizes causal insights

95% of new product launches in grocery do not survive the first year — although brands have professional market research departments.

We let causal machine learning run wild on a dataset covering all US product launches: their initial perception, ingredients, pricing, brand, repurchase rate, and the effect on survival and sales success.

Our client was desperate as nothing was correlating and classical statistical regression had no explanatory power.

It turned out that reality violates the rigid assumptions that conventional statistical models require. Machine learning could suddenly predict launch success with 80% accuracy, and it could even explain it causally. What it takes for launch success is to bring ALL success factors into good shape. You cannot compromise on any of them.

The product needs to be in many stores (1), the pricing must be acceptable (2), the initial perception must be intriguing (3), and the product must be good enough to cause repurchases (4). Only when all of this comes together will the product fly.

A driver is causal if it is predictive. Now Machine Learning enables us to build much more flexible predictive models. We don’t need to assume anymore that those factors add up (like in regression).

We can have machine learning find out how exactly the cause unfolds its effect. Whether additive, multiplicative, nonlinear saturation, or threshold effect, machine learning will find it in the data.

If the predictive model is flexible (e.g., it can capture previously unknown nonlinearities), predictability improves. That’s what AI and machine learning can do today.
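
Here is a small, hypothetical illustration of that point: a synthetic “all four factors must be in shape” launch rule, fitted once with plain linear regression and once with a flexible learner. The variables, thresholds, and numbers are made up; only the contrast in out-of-sample fit is the message.

```python
# Sketch: a threshold/AND success rule that additive regression struggles with
# but a flexible learner can capture. Synthetic illustration, not the client data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 3000
distribution = rng.uniform(0, 1, n)   # share of stores carrying the product
pricing      = rng.uniform(0, 1, n)   # price acceptance
perception   = rng.uniform(0, 1, n)   # appeal of the initial perception
repurchase   = rng.uniform(0, 1, n)   # product quality proxy

# The launch "survives" only if ALL four factors are in good shape;
# you cannot compromise on any of them (an AND rule, not an additive one).
survived = ((distribution > 0.3) & (pricing > 0.3) &
            (perception > 0.3) & (repurchase > 0.3)).astype(float)

X = np.column_stack([distribution, pricing, perception, repurchase])
for name, model in [("linear regression", LinearRegression()),
                    ("flexible ML model", GradientBoostingRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, survived, cv=5, scoring="r2").mean()
    print(f"{name}: out-of-sample R² = {r2:.2f}")
```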


Causal insights require a holistic approach

Coming back to the T-Mobile example: none of the new features was found to be the direct cause of success. Does this mean they were useless?

Not at all. The new features, like “no contract binding”, were driving the Robin Hood perception. The feature perception proved to be predictive of the positioning perception. This is called an indirect causal effect.

A driver can cause the outcome by indirectly influencing the direct cause of the outcome. That’s why you need a network modeling approach.

The whole philosophy of regression and key driver analysis is a simple input-output logic — and it leads to bad, biased, misleading results.
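
A minimal sketch of such a network (driver → mediator → outcome), again with invented data and names: the feature has no direct effect on loyalty, but its indirect effect via the positioning perception is substantial. A real analysis would use flexible models; plain linear regressions are used here only for brevity.

```python
# Sketch of a simple causal network: feature -> positioning (mediator) -> loyalty.
# Variable names, effect sizes, and data are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 2000
no_contract = rng.normal(size=n)                      # feature perception
positioning = 0.8 * no_contract + rng.normal(size=n)  # "revolutionary brand" perception
loyalty = 1.5 * positioning + rng.normal(size=n)      # outcome; no direct feature effect

# Stage 1: does the feature drive the mediator (positioning)?
a = LinearRegression().fit(no_contract.reshape(-1, 1), positioning).coef_[0]

# Stage 2: does the mediator drive the outcome once the feature is controlled for?
X2 = np.column_stack([positioning, no_contract])
b, direct = LinearRegression().fit(X2, loyalty).coef_

print(f"direct effect of the feature:          {direct:.2f}")  # close to 0
print(f"indirect effect via positioning (a*b): {a * b:.2f}")   # clearly positive
```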

Nothing in this world is without assumptions

…we should use them as a last resort only.

Often we see that NPS ratings do not correlate with increased customer value. The picture below shows the data points of customers: on the horizontal axis the NPS rating, on the Y-axis the later change in cross- and upselling.

Overall, both data do not correlate. That’s what we actually see in most datasets. NPS has a hard time correlating with Cross & Upselling as well as Churn. But not because it doesn’t work.

Often there are high-value segments that tend to be more critical when rating. When the rating improves, the cross & upselling increases even more as these are high-income segments.

Within each segment, the NPS rating correlates with upselling; overall, it does not.

If your causal model does not have the segment information, and does not have other information that correlates with the segment either, THEN…

…your model is only true under the assumption that no significant third factors (so-called confounders) influence cause and effect at the same time.

Granger called this in his work the “closed world” assumption.
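
To see the confounder problem in numbers, here is a small simulation of the situation just described: a hidden high-value segment rates more critically but upsells more. Pooled over all customers the correlation vanishes; within each segment, or once the segment is controlled for, the effect reappears. All figures are synthetic.

```python
# Sketch of the confounder (hidden segment) problem described above.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 4000
high_value = rng.integers(0, 2, n)            # hidden segment (the confounder)
nps = rng.normal(size=n) - 1.5 * high_value   # high-value customers rate lower
upsell = 1.0 * nps + 4.0 * high_value + rng.normal(size=n)

print("pooled correlation:", round(float(np.corrcoef(nps, upsell)[0, 1]), 2))  # ~0
for seg in (0, 1):
    m = high_value == seg
    print(f"within segment {seg}:", round(float(np.corrcoef(nps[m], upsell[m])[0, 1]), 2))

# Once the segment is in the model, the true NPS effect (~1.0) is recovered.
X = np.column_stack([nps, high_value])
print("NPS effect controlling for segment:",
      round(float(LinearRegression().fit(X, upsell).coef_[0]), 2))
```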

There is a last causal assumption to discuss:

Let’s take NPS rating data again. You could be tempted to take it and correlate or model it against the customers’ revenue.

Customer revenue is an aggregate of the last year’s purchases but NPS is just the loyalty of now. Such analysis would assume that the present can cause the past.

Of course, you need to make sure, by whatever means, that the cause likely happens before the effect.

Often we do not even have time-series data. Then you need to judge the causal direction using other methods, such as the PC algorithm used in Bayesian networks, additive-noise modeling methods, or, as a last resort, an assumption based on prior knowledge.
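
For illustration, here is a rough sketch of the additive-noise idea with synthetic data: fit a flexible regression in both directions and check in which direction the residuals look independent of the input. Real implementations use a proper independence test such as HSIC; the bin-wise residual-spread measure below is only a crude stand-in.

```python
# Rough sketch of the additive-noise idea for judging causal direction.
# In the true direction (x -> y) the residuals behave the same for every x;
# in the wrong direction their spread usually depends on the input.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
n = 3000
x = rng.uniform(-2, 2, n)
y = x ** 3 + rng.uniform(-1, 1, n)   # x causes y, with non-Gaussian additive noise

def residual_dependence(a, b, bins=10):
    """Fit b ~ f(a) and measure how much the residual spread varies across a.
    (A crude stand-in for a real independence test such as HSIC.)"""
    model = GradientBoostingRegressor(random_state=0).fit(a.reshape(-1, 1), b)
    resid = b - model.predict(a.reshape(-1, 1))
    edges = np.quantile(a, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(a, edges[1:-1]), 0, bins - 1)
    spread_per_bin = np.array([resid[idx == k].std() for k in range(bins)])
    return spread_per_bin.std() / resid.std()

print("residual dependence, modeling y from x:", round(residual_dependence(x, y), 2))
print("residual dependence, modeling x from y:", round(residual_dependence(y, x), 2))
```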

Neanderthals become Plumper

When I speak about causality in talks, I typically hear the objection: “Yes, but it’s impossible to be sure that those two assumptions are met.”

Fair point. But what’s the alternative?


BS storytelling?

Back to Neanderthal spurious correlations?

This is so hard to accept: while insights about facts are obvious, insights about (cause-effect) relationships can NOT ultimately be “proven”. You need to infer them from data.

When doing so, the only thing you can do is make FEWER mistakes.

The latest causal machine learning methods enable us to:

  • Avoid using theories as much as possible (when data is lacking, they can still be very valuable)
  • Avoid the risk of confounder effects by integrating more variables (plus other analytical techniques)
  • Avoid assuming the wrong causal direction by combining direction-testing methods with related theory.

Leave Neanderthal times in the past, take the latest tools, and become plumper with insights 😊

The good news is…

You can NOT make a mistake by just starting to improve.

The benchmark is not to arrive at the ultimate truth. That’s an impossible and impractical goal. The benchmark is to get insights that are more likely to drive results.

Causation is an endlessly important concept that everyone seems to avoid — simply because it’s not understood.

You can drive change by educating your peers, colleagues and supervisors. The first step is to share this article. 😉

“There is nothing more deceptive than an obvious fact”

– Sherlock Holmes


Buckler, F. / Hennig-Thurau, T. (2008): Identifying Hidden Structures in Marketing’s Structural Models Through Universal Structure Modeling: An Explorative Neural Network Complement to LISREL and PLS, in: Marketing Journal of Research and Management, Vol. 4, pp. 47–66.

Granger, C. W. J. (1969). “Investigating Causal Relations by Econometric Models and Cross-spectral Methods”. Econometrica. 37 (3): 424–438. doi:10.2307/1912791. JSTOR 1912791.

"CX Standpoint" Newsletter

“It’s short, sweet, and practical.”


Each month I share a well-researched standpoint around CX, Insights and Analytics in my newsletter.

+4000 insights professionals read this bi-weekly for a reason.

I’d love you to join.

Our Group:

Privacy Policy
Copyright © 2021. All rights reserved.


The No. 1 Misconception in Customer Insights

Founder of and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: April 1, 2021 * 9 min read

There are 2 types of insights: the “famous” type of insight is delivered in 99% of cases. The “stepchild” type of insight is what businesses are unknowingly looking for, but not getting.

“What’s an insight?” I asked the audience at the INSIGHTS conference, beginning my keynote with an engaging question. It was surprisingly silent given that the conference’s name was “Insights”. I insisted and got some vague responses like “learn new things about the customer”. 

Sure you can answer the question and categorize “insights” in many different ways. I do it in one particular way with one intention: to set the spotlight on a widespread misbelief.


Crystal Clear CX-Insights with 4X Impact of Action
60-minute Condensed Wisdom from The World’s #1 CX Analytics Course

Facts measure things that you CAN observe. People try to analyze ANYTHING by looking at facts, comparing or correlating them.

Facts are everything you can see, measure, quantify, and therefore describe. This is also known as “descriptives”. It’s obvious and it’s needed. For example, if you want to know the market share of a brand, the facts answer it.

But facts are also used to answer questions on things you cannot observe.

Let’s take this: “What drives the NPS”? Professionals look at topics promoters mention to explain their ratings and compare them with what detractors have been mentioning.

It seems more than plausible that this will give you an answer. But it won’t.

It’s like comparing the shoe size of your C-suite with the shoe size of all other employees. As most C-suite members are male, in contrast to the rest of the employees, they therefore have larger shoes. Nobody would think of shoe size as driving career success.

Not a good example? Too theoretical?

Imagine people praise the friendliness of the staff and the great service. Sometimes both together. It’s fair to assume that people who like the friendliness will therefore also praise the service in general.

Even if friendliness alone is the key driver, “great service” will still correlate with the overall loyalty expressed in the NPS rating.
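
A small simulated example of exactly this trap: friendliness is the only true driver, yet “great service” correlates with the NPS rating simply because the two perceptions are correlated. The effect sizes are invented; only the pattern matters.

```python
# Sketch: a correlated but non-causal driver still correlates with the outcome.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 5000
friendliness = rng.normal(size=n)
service = 0.8 * friendliness + 0.6 * rng.normal(size=n)  # correlated perception
nps = 1.0 * friendliness + rng.normal(size=n)            # only friendliness is causal

# "Great service" correlates with NPS although it has zero causal effect here.
print("corr(service, NPS):", round(float(np.corrcoef(service, nps)[0, 1]), 2))

# A model that holds both perceptions at once separates the two.
X = np.column_stack([friendliness, service])
coef_friendliness, coef_service = LinearRegression().fit(X, nps).coef_
print(f"modeled effect of friendliness: {coef_friendliness:.2f}")  # ~1.0
print(f"modeled effect of service:      {coef_service:.2f}")       # ~0.0
```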

Join the World's #1 "CX Analytics Masters" Course

Free for Enterprise “CX-INSIGHTS” Professionals

The question about the Why is a question about the relationship between cause and effect. This is not a fact. It can NOT be observed.

Businesses are not just looking for facts. They do not just want to know the market share, how big a segment is, how you can profile this segment, and all the other descriptive things.

What businesses mostly want to know is: What do I need to DO to improve outcomes?

The hard truth is: You can NOT see the answer by just looking at facts.

This is astonishing, because it is how we humans do it every day. We have done it since the beginning of mankind, and it has served us well. We tried different stones to light a fire, and the stone that worked best was the way to go.

This way of retrieving insights (to look at correlations of actions and outcomes) works well if the outcome happens immediately.

If several other factors influence the outcome, it already becomes difficult. Fire stones may not work when it’s raining, or when you don’t use the right straw.

Business life and particularly the field of marketing is even worse. There are many context factors moderating outcomes. On top of this, you don’t see effects right away. You may need to wait weeks, months, or years.

Because of this, it’s the rare exception that looking at facts will tell you something meaningful about what to DO to drive outcomes.

Insight type 2 is “relationships”: the question of how fact 1 (facts about what you DO) results in fact 2 (facts about outcomes). This type of insight always asks a cause-effect question.


To learn the WHY from data takes the art and science of causal modeling

“One of the first things taught in introductory statistics textbooks is that correlation is not causation. It is also one of the first things forgotten when entering business life,” says Thomas Sowell.

Why is it forgotten? Because people do not get proper tools to discover causation. They get stuck and are forced to use the best they have: “correlation”.

Step back.

The most frequent and most important questions we have in business are cause-effect questions. But the method that we use day in, day out is some kind of correlation exercise. This is dangerous, risky, and unknowingly costs businesses trillions every year, or even every month.

Why has nobody taken notice of it? Trillion-dollar industries are built on this.

Answer 1: It’s not a secret. Many smart professionals know about it but don’t get heard; science has known this since “forever”.

Answer 2: “Nobody” notices it because you cannot observe cause-effect insights. You can only observe facts and try to correlate them back to actions. If you build your theory on correlations, you will find a theory that is supported by facts.

This is a useless theory because it is neither very predictive nor prescriptive.

To arrive at prescriptive (cause-effect) insights, there is no other way than doing modeling — causal modeling. You cannot observe cause-effect, you can only induce it from facts. It is an art and science to do it right.

It will take another article to carve out the pillars of causal modeling. For now: Machine Learning has helped a lot to make this exercise very practical.

Here is a recording of a talk I gave at the University of Aachen explaining the pillars of modern causal modeling:

Doing it right does not guarantee arriving at the truth. It only guarantees to arrive (on average) at insights that will be closer to the truth. It will improve your effectiveness and reduce risk.

In the past, businesses needed to bypass causal modeling because it was clunky, complicated, expensive, and impractical. With the advent of causal machine learning, this has changed.

Here is an example that stands for the mistakes we make EVERY DAY, mistakes that can be prevented by a proper model.

The picture shows the data points of customers, with the NPS rating on the horizontal axis and their later cross- and upselling on the Y-axis.

Overall both data do not correlate. That’s what we actually see in most datasets. NPS has a hard time correlating with Cross & Upselling as well as Churn. But not because it doesn’t work.

Often there are high-value segments that tend to be more critical when rating. When the rating improves, the cross- and upselling increases even more, as these are high-income segments.

Within each segment the NPS rating correlates; overall it does not. You unearth true effects by causal modeling — nothing else.

Qualitative research is no substitute

“You talked a lot about quantitative analysis but how about talking to people, understanding them, finding the stories behind what is happening?” you might say.

I am a big believer in the value of qualitative research. But it’s mostly applied wrongly. It’s mostly taken as a substitute for causal modeling. This is very dangerous and I will elaborate on this in one of my next CX-Standpoint articles.

Some of you might think, “What about plausibility? I can easily check correlations and facts for plausibility and see if they give a plausible holistic story.” That’s another topic I’d like to discuss in one of my next CX-Standpoint articles, as the whole topic of plausibility is a big misconception.

Make being mindful a habit

Every evening, before going to bed, please repeat those sentences 5 times 😉

– I do not hunt for facts, but the relationships between facts.

– Correlation is not causation.

– It needs causal modeling to learn what works

Then, when your meetings start the next day and colleagues are again staring at and comparing facts, make it a habit to remind them: “Correlation is not causation”.

And when they respond, “[Your Name], we know this, but that’s the best guess we can get right now”, tell them: “Yes, this is certainly the easiest way to draw a conclusion….

…. But what if this conclusion is likely wrong and we could make it “mostly right” with a little effort — how much cost savings and growth would we be able to generate?”

Stay curious …

and remember Sherlock Holmes’ famous words:

“There is nothing more deceptive than an obvious fact”

"CX Standpoint" Newsletter

“It’s short, sweet, and practical.”


Each month I share a well-researched standpoint around CX, Insights and Analytics in my newsletter.

+4000 insights professionals read this bi-weekly for a reason.

I’d love you to join.

Our Group:

Privacy Policy
Copyright © 2021. All rights reserved.


#14 Interview

Most Important Thing When Collecting Customer Feedback


Hey, if you collect customer feedback, the most important thing is… to make use of it. That’s the biggest thing I am seeing in companies and enterprises: of course we ask customers open-ended questions, but the feedback is rarely used.


You may send the bulk of feedback to the front line, but nobody looks into it structurally: what it really means and what drives impact. Why? Because we don’t have the means to analyze it. I explained that there are techniques to do it.


If you think about it, customer centricity is a big buzzword nowadays. How customer-centric are you? If you ask your most important partner, which is your client, and… you don’t give a s*** what he’s saying, because you don’t analyze and act on it, what does this tell you about your customer centricity?


Ethically, it’s your duty to make more out of your data. We owe it to our customers – they pay our bills and they secure our future.


#13 Interview

CX Enterprise Platforms vs. Specialized Solutions


Enterprises typically choose so-called enterprise solutions, like Qualtrics, Medallia, or InMoment. Still, there is a universe of specialized solutions, just like … Would you advise ignoring it?


Good question. If you want to get the best out of it, if you really want to drive impact, you should think of those specialized solutions for a simple reason: it’s possible to use them alongside those enterprise systems.


Qualtrics and the like are not closed systems. They all have docking stations where you can basically plug in any other kind of software system. If you look into it, these systems, like any platform, are not the best ones in the market for every module; they have good standard modules. For instance, look at those two pieces:

they have a text analytics module and they have a key driver analytics module. You can take those modules from the platforms and try to use them.


Those text analytics modules are just supervised learning; they will not be sufficiently precise compared to what is possible. The key driver analysis module is a simple regression (invented 100 years ago). It captures neither nonlinearities nor indirect effects. It’s not a causal engine.


If you do the comparison, you will find that the validity of the specialized solutions is about four times higher.


This is just a number, but you can actually see it in real life. If you just use out-of-the-box solutions, you often get strange results. For instance, positive things like friendliness suddenly have a negative impact. That becomes awkward, because you cannot explain it anymore; it becomes obvious that the methodology is missing something. Therefore it is a good mix to use what you have, the enterprise system, and plug in specialized solutions on top of it.


The best example for such a solution is


#12 Interview

How to Set Up Text Analytics When Having Multiple Languages?


There are two things you can think of. I always recommend supervised text analytics systems, and there are two ways to train them.


One way is to auto-translate everything into one main language, for instance English, and then have an English-speaking coder teach the AI. That’s one way.

The other way is to have a native coder and teacher per language; everyone teaches the AI in their native language.


Both approaches have pros and cons. There is no way that is simply better. The con of translation is that you obviously lose some information while translating. The disadvantage of having native teachers is that you cannot make sure they really understand every category and every topic the same way; you cannot make sure they really code the same way. If you end up with more than three languages, we recommend auto-translating into one language.


That’s typically what we do. What’s not so well known so far is that there is an even better translation machine than Google Translate on the market; it’s called …

It has been proven to be much more precise than Google Translate, and for all systems we use that machine to auto-translate every single language.
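
For readers who want to see this workflow end to end, here is a minimal sketch: auto-translate all feedback into one main language, then train a supervised classifier on coder-labeled examples. The `auto_translate` function is a hypothetical placeholder for whichever translation service you choose (the tool mentioned in the interview is not named here), and the texts and labels are invented illustrations.

```python
# Sketch: translate-then-supervised-categorize pipeline for multilingual feedback.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def auto_translate(texts, target="en"):
    """Placeholder: call whatever translation service you use here."""
    return texts  # for this illustration, the texts are assumed to be translated already

# Coder-labeled feedback, already auto-translated into the main language.
train_texts = auto_translate([
    "the agent was very friendly and solved my issue",
    "waited 40 minutes in the hotline, unacceptable",
    "the delivery came two days late",
    "super quick and polite support",
])
train_labels = ["staff friendliness", "waiting time", "delivery", "staff friendliness"]

# Supervised text categorization: the coder "teaches the AI" via labeled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# New feedback in any language gets auto-translated first, then categorized.
print(model.predict(auto_translate(["the support agent was extremely friendly"])))
```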


#11 Interview

Is Advanced CX Analytics Applicable for B2B or High-Value Niches?


So does advanced CX analytics also work in a B2B company?


Absolutely. What’s different in B2B? Two things come to my mind: the language is very specific, and the sample size is low. As explained, a supervised learning approach is tailored exactly to specific language. So if you run this methodology, you can really train the AI to categorize like a domain expert for that B2B domain. Regarding sample size: of course, if you have very little data, it’s probably not enough, but there are many tricks to work with it, and if you are a sizeable B2B company, you typically have hundreds or even thousands of feedbacks. Enterprise B2B companies typically have enough data.


One of those tricks of the trade to work with a lower sample size is the so-called split analysis. You take the whole dataset and define a certain subset of customers that you want to research. You model with all the data but overweight the split. That handles the instability caused by the low sample size and smooths it out with the larger dataset.
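
A minimal sketch of such a split analysis: fit one model on the full dataset but give the subset of interest a higher sample weight, so the small segment borrows stability from the rest of the data. The weight of 5 and all data below are arbitrary illustrations, not a recommendation.

```python
# Sketch: split analysis via sample weights on the full dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=(n, 3))                              # e.g., three touchpoint ratings
y = 1.2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # loyalty outcome
in_split = rng.random(n) < 0.05                          # small segment of interest

# Overweight the split of interest while keeping the full dataset in the model.
weights = np.where(in_split, 5.0, 1.0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X, y, sample_weight=weights)
print(f"trained on {n} cases, {int(in_split.sum())} of them overweighted")
```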
