The Better Alternative to Benchmarking

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: June 23, 2021 · 9 min read

Senior management loves benchmarking. They ask for it. But in doing so, they do not do themselves a favor.

Yes, it provides an easy answer to the ultimate question, “Are we doing well or badly?” Yes, it provides relief when you meet the benchmark. And yes, it gives an incentive to match competitive performance.

So, what’s the problem?

FREE LASER-TRAINING: The 3Q-METHOD™ for CX-INSIGHTS™

Crystal Clear CX-Insights with 4X Impact of Actions. 60 Minutes of Condensed Wisdom from the World's #1 CX Analytics Course

Benchmarking is Dangerous

Professor Byron Sharp conducted the largest cross-vertical study to date on understanding drivers of loyalty. What he found was devastating.

The main predictor of loyalty and churn is simply the market position. Each brand might have individual factors, but this simple mechanism explains the bulk of variance across verticals.

When you look at benchmarking studies, you may have an “aha” moment right now. The biggest player most likely not only has a good CX score but also performs exceptionally well on most drivers. There is an implicit bias based on market dynamics and psychology.

Benchmarking assumes that your CX KPI is somehow comparable – so that the player with the higher score is performing better. This assumption is broken in many ways.

You mostly do not serve the same type of customers. Some customer segments are more critical when giving ratings while showing the same loyalty. The customers you serve also differ in what’s important to them.

Even if the best competitors were comparable, benchmarking does not answer the question “What is possible?”, so it cannot tell you what is good in absolute terms.

This exposes companies to multiple risks:

  • RISK #1: False signals of performing well: If you outperform the competition, you will be satisfied, and there is no reason to improve further.

  • RISK #2: Wrong benchmark due to serving different customer segments: This is all the more problematic since the benchmark is then typically biased, and thus wrong.

  • RISK #3: False signals of performing weakly: These signals cause the blame game while giving no information on how to become better.

In a nutshell

Benchmarking uses a broken comparison, gives you an illusory sense of performing well and false warnings of performing weakly. On top of that, it does not provide what everyone thinks it does: a measure of “what is good or bad”.

Join the World's #1 "CX Analytics Masters" Course

Free for Enterprise “CX-INSIGHTS” Professionals

LOOK IN THE MIRROR – THIS IS YOUR COMPETITION

Let’s take a step back. What is the use of knowing whether you are doing well or not?

Sure, you then know whether you have room for improvement (if you are very lucky).

Fine. But you still do not know how to leverage the potential.

Here is the alternative. Do this, and you don’t need benchmarks:

  • Know what you need to do to improve (find the most critical next actions).

  • Model how much improvement is possible with a particular investment.

  • Then do it, if there is a clear positive ROI.
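To make the process concrete, here is a minimal sketch of ROI-based prioritization. Everything in it (action names, `driver_impact` figures, costs) is invented for illustration; in practice, `driver_impact` would come from a driver or causal model, not from guesses.

```python
# Hypothetical sketch: rank candidate CX actions by expected ROI,
# so the competition's numbers never enter the decision.
# All names and figures below are invented for illustration.

def expected_roi(action):
    """Expected return per unit of cost for one candidate action."""
    # lift = modeled revenue gain per point of driver improvement,
    # times the improvement this investment can realistically achieve.
    lift = action["driver_impact"] * action["achievable_improvement"]
    return (lift - action["cost"]) / action["cost"]

actions = [
    {"name": "faster support replies", "driver_impact": 120_000,
     "achievable_improvement": 1.5, "cost": 90_000},
    {"name": "simpler onboarding", "driver_impact": 80_000,
     "achievable_improvement": 2.0, "cost": 200_000},
]

# Invest only where the ROI is clearly positive, best action first.
ranked = sorted(actions, key=expected_roi, reverse=True)
for action in ranked:
    print(action["name"], round(expected_roi(action), 2))
```

With these invented numbers, only the first action clears a positive ROI; the second would lose money despite a real driver effect.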

If you have this process in place, then it is IRRELEVANT what your competition is doing.

Just do your very best. It is the only thing you can do anyway.

Look into the mirror. This is your competition.

And no, this is not sci-fi. The methods can be found in state-of-the-art CX-Analytics toolboxes. For instance, this training teaches how to implement them.


Get a Free Assessment of your CX Analytics Strategy.

Book a Time with Claire to Discuss Your Situation.

This is your alternative to Benchmarking:
  • Know what’s important by using Causal Machine Learning. This is all you need. It even works by leveraging your customers’ text feedback.

  • Stop the blame games played over arbitrary targets; instead, set stretch targets on key drivers.

  • Constantly challenge yourself and establish a “The sky is the limit” mindset.

Questions? Not the same opinion?

Challenge me!

Frank

P.S. These resources might be helpful for getting into benchmarking alternatives:

Correlation is not Causation

Predictive Qual

"CX Standpoint" Newsletter


Each month I share a well-researched standpoint around CX, Insights and Analytics in my newsletter.

+4000 insights professionals read this bi-weekly for a reason.

I’d love you to join.

“It’s short, sweet, and practical.”

Big Love to All Our Readers Around the World

IMPRINT
Our Group: www.Success-Drivers.com

Privacy Policy
Copyright © 2021. All rights reserved.

FOLLOW CX.AI

Fixing the Inner Loop Bias


Author: Frank Buckler, Ph.D.
Published on: July 9, 2021 · 7 min read

Sometimes friends ask me what I do, and then they ask what customer experience research is for. The simple answer I give is that employees dealing with customers should get feedback on how customers view the experience. Only this way can they learn and improve.

Simple, isn’t it? This idea is also referred to as the INNER LOOP. It is contrasted with the OUTER LOOP, which tries to derive learnings from feedback and translate them into strategic initiatives for change.

The Inner Loop is set up to let customer-facing employees learn how customers perceive them, to give them praise in case of great feedback, and to give them an opportunity to follow up quickly with detractors and complaints.

All this is meant to enable the company at the level of the frontline workers to improve.


The inner loop is broken

For years, I dealt only with the outer loop because the inner loop seemed simple and working well. Just recently, I learned that I could not have been further from the truth.

Here is the problem. The idea behind the inner loop is that a human reads feedback and learns from it. But this idea is broken for THREE reasons:

REASON #1 – The RAS Filter

Whenever I plan to buy a new car, suddenly everywhere on the streets, I see this car. Suddenly, everyone seems to drive it already.

The reason is a small part of your brain called the Reticular Activation System (RAS). RAS is a bundle of nerves at our brainstem that filters out “unnecessary” information so the important stuff gets through.

The RAS is the reason you learn a new word and then start hearing it everywhere. This is why you can tune out a crowd full of talking people yet immediately snap to attention when someone says your name or something that at least sounds like it.

When reading dozens or even hundreds of pieces of feedback, our RAS brings to our special attention those that somehow cater to our personal interests.

A waiter who is frustrated with the unpleasant people he has to serve will, more than others, notice complaints as proof of customers’ rudeness rather than look for ways to satisfy them.

People learn what they want to learn, not what they necessarily need to learn.

REASON #2 – The Frequency-Impact-Illusion:

The most often mentioned reason for the loyalty of speaker users is “great sound”. It is intuitive for us to believe that this is the most important reason, and thus the most important thing to work on further.

When using proper cause-effect modeling techniques, you learn that the importance of mentioned topics is hidden and typically NOT correlated with their frequency.

Actually, there are many known mechanisms that explain why this makes sense. First of all, customers “just talk”. They have no incentive to be 100% correct and precise. Typically, people respond with strongly associated topics that make them talk. As Daniel Kahneman put it, the human brain is like a cat with swimming: it can do it, but avoids it if possible. Humans can think, but if possible, they avoid it because it is exhausting. Even worse, customers are not fully aware of what drives their own behavior.

This is why, when you scroll through your customer feedback, you will learn the WRONG things: you are primed to believe that frequency means importance.
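A toy illustration of this gap, with invented data: the topic mentioned most often shows no lift in loyalty, while a rarer topic does. A real analysis would use proper driver modeling; the crude mean comparison below is only a stand-in to make the point visible.

```python
# Synthetic data only: each row = (mentions_sound, mentions_setup, loyal).
feedback = [
    (1, 0, 1), (1, 0, 0), (1, 0, 1), (1, 0, 0), (1, 1, 1),
    (1, 0, 0), (0, 1, 1), (0, 1, 1), (0, 0, 0), (0, 0, 0),
]

def frequency(col):
    """Share of customers mentioning the topic."""
    return sum(row[col] for row in feedback) / len(feedback)

def importance(col):
    """Crude driver proxy: loyalty rate of mentioners minus others."""
    mention = [r[2] for r in feedback if r[col] == 1]
    others = [r[2] for r in feedback if r[col] == 0]
    return sum(mention) / len(mention) - sum(others) / len(others)

# "Sound" (col 0) is mentioned twice as often as "setup" (col 1),
# yet only "setup" moves loyalty at all in this toy data.
print(frequency(0), frequency(1))
print(importance(0), importance(1))
```

Here “sound” is mentioned by 60% of customers but mentioners and non-mentioners are equally loyal, while the rarely mentioned “setup” separates loyal from non-loyal customers sharply.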

REASON #3 – Resistance to Critique:

Everyone knows the basic rule of giving feedback: always start with the good things. It makes the recipient open to critique.

If everyone knows this, why on earth do we still dump customer feedback over our frontline coworkers like a dumpster full of trash, and then expect them to learn productive things from it?

What’s your take? We know that people learn from feedback only what they want to learn, that what they learn is fooled by frequency, and that a random sequence of critical feedback sparks more resistance than change.

Knowing all this, does it still make sense to send your coworkers the customer feedback verbatims with a kind note, “please read”?


The INNER LOOP BIAS FIXER™

Here is the promise. With the proper method, you deliver feedback to the front line so they can:

  • Learn what is important and get out of their bubble of self-interest.
  • Thoroughly enjoy the praise they get and deserve.
  • Improve their own behavior by accepting critique and focusing on what’s truly important to customers.

The solution requires three things:

First, it is mandatory to institutionalize a modern CX Analytics system. At its core, it takes at least:

  1. A text analytics system that quantifies the unstructured customer feedback.
  2. A proper key driver analysis on top of this data that reliably measures the impact and importance of those categories.

Second, by sequencing the positive, praising comments first, you comply with the psychological rules of feedback.

Third, by batching feedback into important and less important categories, you help readers read important feedback first. This automatically frames and primes learning the right way.

The “INNER LOOP BIAS FIXER” method works like this: deliver sequenced feedback in importance batches:

  1. Praise on TOP IMPORTANT topics
  2. Critique on IMPACTFUL topics (= Importance × Frequency)
  3. Other Praise
  4. Other Critique
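The four-batch sequencing above can be sketched in code. The field names, the importance cutoff, and the impact threshold below are my own assumptions for illustration, not specifics of the method:

```python
# Sketch of importance-batched feedback sequencing. Thresholds and
# field names are assumed for illustration, not taken from the method.
IMPORTANT = 0.5  # assumed cutoff for "top important" topics
IMPACT = 0.2     # assumed cutoff for "impactful" critique

def sequence_feedback(verbatims, topic_importance, topic_frequency):
    batches = {1: [], 2: [], 3: [], 4: []}
    for v in verbatims:
        important = topic_importance[v["topic"]] >= IMPORTANT
        impact = topic_importance[v["topic"]] * topic_frequency[v["topic"]]
        if v["sentiment"] == "positive" and important:
            batches[1].append(v)      # 1. praise on top important topics
        elif v["sentiment"] == "negative" and impact >= IMPACT:
            batches[2].append(v)      # 2. critique on impactful topics
        elif v["sentiment"] == "positive":
            batches[3].append(v)      # 3. other praise
        else:
            batches[4].append(v)      # 4. other critique
    return batches[1] + batches[2] + batches[3] + batches[4]

# Invented example: importance/frequency would come from driver analysis.
importance = {"wait time": 0.8, "decor": 0.2}
frequency = {"wait time": 0.5, "decor": 0.1}
feedback = [
    {"topic": "decor", "sentiment": "negative", "text": "dated decor"},
    {"topic": "wait time", "sentiment": "negative", "text": "slow today"},
    {"topic": "wait time", "sentiment": "positive", "text": "quick service!"},
]
ordered = sequence_feedback(feedback, importance, frequency)
print([v["text"] for v in ordered])
```

The reader sees deserved praise on an important topic first, then the critique that actually matters, and only afterwards the long tail.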

In a nutshell

The Inner Loop is meant to enable frontline workers to learn and improve, but this mechanism is broken for three reasons.

People learn from feedback what they want, not what they need. They are fooled by the frequency-impact illusion, and wrongly sequenced critique makes them less likely to accept it.

Delivering sequenced feedback in importance-batches is a viable solution. It requires a reliable solution to measure the importance of topics.

The latest systems, which combine deep-learning text analytics with causal machine learning, have proven superior to out-of-the-box solutions.

They deliver 4X higher impact of actions and are thus recommended for guiding the inner-loop process.

CX.AI is such a solution that pioneered this technology. You can even contact CX.AI specialists and get a free consultation.

Your thoughts?

Frank



Fooled by Interest


Author: Frank Buckler, Ph.D.
Published on: June 29, 2021 · 8 min read

Why is everything we learn from data or experience heavily skewed by how human perception works? What does this mean for businesses and insights functions, and what can we do about it?

Whenever I buy a new car, it happens. It happens even before I buy the model I have in mind: suddenly, everywhere on the streets, I see this car. Suddenly everyone seems to drive it already.

The reason is a small part of your brain called the Reticular Activation System (RAS). RAS is a bundle of nerves at our brainstem that filters out “unnecessary” information so the important stuff gets through.

The RAS is the reason you learn a new word and then start hearing it everywhere. It’s why you can tune out a crowd full of talking people yet immediately snap to attention when someone says your name or something that at least sounds like it.


Interest and beliefs guide what we see

I once presented research where we used causal machine learning to crack the creative code of TV ads. It was a winning paper and presentation at ESOMAR, the world’s largest insights conference. Then I presented the same material at a conference full of creatives. The feedback was not very “enthusiastic”.

Why? I argued against their key beliefs and interests (they believed that the key thing is the “creative genius”, beyond scientific reach). Their RAS was mainly spotting what they considered inconsistent. In short, it was not in their “interest” to believe me.

It happens all the time. Have you ever asked yourself why people have such diverging opinions e.g., about handling the pandemic?

Same thing. People have different interests. In Europe, most people are supported by the government against hardship; many of them can work from home and commute less. The sacrifices felt by a young entrepreneur with a working wife and two kids while schools are closed are fundamentally different from those of a university professor with grown-up kids who now saves his 45-minute commute.

Based on your interests, your RAS unwittingly guides you to information that supports them. Those who subjectively suffer will find reasons why the lockdown measures are not appropriate. Those who fear the consequences of the pandemic more than they suffer from lockdown will consume the opposite information. The more they consume, and the longer the pandemic lasts, the more immune people become to counter-arguments.

That is what has happened in our society during the pandemic. The exact same mechanism is at play when departments in enterprises diverge.

How does this impact corporate life?

RAS impacts how you interpret research:

In 2018, Kevin Heinevetter conducted an interesting study as part of his bachelor thesis. He interviewed 30 customer insights leaders and gathered their unfiltered pains:

A majority claimed that many research studies are commissioned just to “prove” a point the business partner wants to get evidenced. Once the survey finds the opposite, the study will “disappear” and not be brought to the attention of others.

The same happens with studies that have an unpleasant outcome. There is a tendency to change research methodology until the results fit the interest.

When presenting the same insights to different audiences, you can yield opposite reactions as described in the example of my ESOMAR presentation.

If insights are not helpful to the audience’s interests, they are boring. If they run against those interests, the audience will doubt their validity.

The insight that the RAS guides us unconsciously means that all this mess is nobody’s fault; nor do managers have immoral or unethical intentions.

They have good intentions, but good intentions are not a good predictor of good outcomes. Good introspection is.

RAS impacts how you “sell” insights:

The same study by Kevin found that customer insights leaders see themselves as guardians of truth. They enjoy being a Sherlock Holmes digging for valuable insights.

As such, they have an intrinsic interest in bringing truth to practice and having business partners appreciate it.

The good news: there are clear strategies that help overcome this.

RAS also impacts how you gain insights!

The most intuitive and most respected way of gaining insights about customers is to sit down with them, ask them open questions, and actively listen to them. It’s called “qualitative research” and it feels like everyone can and should do it.

When I started my career as a management consultant, I qualitatively interviewed many target decision-makers for our clients. Over and over again, I realized that after two or three interviews, I had a clear opinion in mind, and in the following interviews, I was merely catching validation for those beliefs.

Once you know that the interviewer is biased by his own RAS, it becomes a questionable exercise. It is not even a directionally objective collection of insights anymore; it is a cherry-picking exercise, and this article goes deeper into why it produces wrong insights and beliefs.

Actually, it’s a raffle. You don’t know what you are getting.

I regularly look at findings where we categorize qualitative customer responses to a why-question and try to predict the related outcome (e.g., loyalty). The result: the verbally expressed “why” does not correlate at all with what proves to be important (as evidenced by predictive modeling).

In short: there is a massive mismatch between what people say and what they mean. It is naive to believe it is good enough to just ask customers. (Spoiler: the alternative is not to stop asking customers!)

Your RAS takes what you focus on and creates a filter for it. It then sifts through the data and presents only the pieces that are important to you. All of this happens without you noticing.

Your focus is defined by what’s interesting to you and what your BELIEF is.


3 STEPS – Not to Get Fooled by Interest

Imagine a world without mock battles, a world where everyone communicates in a way that feels relevant (speaks to the RAS), where people accept different interests and find ways to build bridges.

Wouldn’t our work become much more fruitful? Wouldn’t insights become much more respected? Wouldn’t insights resonate much better with business partners and thus drive impact?

There is proof that the 3 Steps I will show you work.

The speaker brand SONOS historically focused on sound quality while enabling a super-easy user experience. The company is full of engineers and product designers, and its beliefs and interests reflect this.

Consequently, research data had always been taken as proof that sound quality should be improved further while ease of use was maintained.

It turned out, though, that although nearly 50% of owners mentioned “great sound” unprompted as the reason for their loyalty, this is not what needed to be worked on.

The use of causal machine learning helped spot the hidden true drivers of retention and upsell, and it was the key to making predictions. And the predicted fiscal impact tapped into another interest of the key decision-makers.

Suddenly research became interesting, and as the initiative later resulted in higher loyalty and greater sales, the reputation of the insights team skyrocketed.


The 3 Step-Framework

STEP 1: Understanding interests

It starts with understanding: understanding your own interests, your interviewers’ interests, and your business partners’ interests. We are all human; don’t judge, just honestly assess interests, hopes, and fears. Nobody can ever be objective.

“The shoemaker has the worst shoes,” as a German saying goes. In this spirit, insights leaders must do research on their own behalf to successfully “sell” the truth. This helps not only the business but also the insights leader’s career.

Insights leaders should know best themselves which methods to use. When in doubt, do some simple IDIs (in-depth interviews).

STEP 2: Unearth unbiased truth

It’s hard to convince others if you have your own agenda. It’s hard to convince others when it’s evident that interpreting or even interviewing is a highly biased process.

The solution is to set yourself up to the highest standards of insights: the ambition to find causal root causes of success.

The methods for this are causal modeling techniques. They are hardly used because legacy methods have severe restrictions and unrealistic assumptions. With the advent of Causal Machine Learning, this is over. Read here for all the background.

With causal machine learning, you are not bound to purely quantitative analysis. You can suddenly quantify the impact of qualitative topics mentioned in open-ends. It has the power to convert the “raffle of qual research” into a science.

STEP 3: Align results with interest

If you know your business partners’ interests and you’ve uncovered the hidden truth, then it’s time to align results with those interests.

Research results need to start with a summary of how they will be helpful to the business partner. “Useful” needs to be so tangible that business partners can feel it serves their personal interests, pains, hopes, and fears.

A good trick is also to ask business partners before presenting results: “What results do you expect?” and “If a finding is not plausible to you, what would it take for you to still buy into the results?”

We all live in the illusion that our rational brain makes decisions and that our perception surfaces objective facts. If everyone in the room has read this very article, it will ease the discussion and lead to productive compromises.

In a Nutshell: Be Open, Honest, and Transparent

Being open, honest, and transparent are the best ways to avoid fruitless fights and to win together.

The Reticular Activation System (RAS) is a highly effective inbuilt filter mechanism that unconsciously scans the millions of pieces of information we are exposed to and brings to our attention what predictably serves our individual needs.

While this is effective for survival in a stone-age setting, it can be counterproductive in the corporate world.

Everything is largely biased. What we learn is based on a biased sample, which tends to produce wrong knowledge (for details, see this article).

Research is biased, as well as decision-making. It’s worthwhile to overcome it. The 3 STEPS are here to help:

STEP 1: Understanding interests

STEP 2: Unearth unbiased truth

STEP 3: Align results with interest

While implementing Steps 1 and 3 is a highly individual exercise, Step 2 is not.

For CX insights, standardized, widely used methods provide the unbiased hidden truth that gives businesses validity, focus, and ultimately prosperity.

My colleague Claire offers weekly strategy sessions for enterprise insights leaders; you can book a time with her to learn how to up your insights game.



How to Choose the Right Text Analytics Method?


Author: Frank Buckler, Ph.D.
Published on: May 18, 2021 · 9 min read

If you receive huge amounts of unstructured data in the form of text (emails, social media conversations, chats), you’re probably aware of the challenges that come with analyzing this data.

Manually processing and organizing text data takes time; it’s tedious, inaccurate, and can be expensive if you need to hire extra staff to sort the text.

Nowadays, we all need to understand the data analysis methods we have long been using, and why it is more important than ever to perform text analysis with AI tools that analyze text automatically and in real time.

Let me give you some guidance on this below…


Manual Coding vs. Unsupervised Categorization vs. Supervised Categorization

There are three methods of categorization:

  • Manual coding – a human sits down, looks at the verbatims (text responses), and sorts each point into buckets.
  • Unsupervised categorization – using unsupervised natural language processing, unsupervised AI, or whatever you may call it. You upload your text, the method does a bunch of magic without telling you how, and in the end it gives you categories.
  • Supervised categorization – supervised text analytics, whatever you call it. It is called supervised because a human teaches the system (the AI) how to categorize.
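To make the distinction concrete, here is a deliberately tiny pure-Python sketch. Real systems use trained NLP models; this toy uses word overlap and is only meant to show where human labels enter (supervised) and where they don't (unsupervised):

```python
# Toy contrast between supervised and unsupervised categorization.
# Word overlap stands in for real NLP; all examples are invented.

def bag(text):
    return set(text.lower().split())

def supervised_categorize(text, labeled_examples):
    """Supervised: a human supplies (example, category) pairs; new text
    gets the category of the most word-overlapping labeled example."""
    best = max(labeled_examples, key=lambda ex: len(bag(text) & bag(ex[0])))
    return best[1]

def unsupervised_groups(texts):
    """Unsupervised: no labels at all; texts sharing any word land in
    the same group, and the analyst must name the groups afterwards."""
    groups = []
    for t in texts:
        for g in groups:
            if bag(t) & bag(g[0]):
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

labeled = [
    ("the price is way too high", "price"),
    ("support never answered my email", "service"),
]
print(supervised_categorize("too high a price for this", labeled))
print(unsupervised_groups(["price too high", "high price again", "slow delivery"]))
```

Note that the unsupervised variant returns unnamed clusters you still have to interpret, while the supervised one returns exactly the categories a human defined.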

Bear in mind that each of them has pros and cons; let’s discuss them.

This is what typical manual coding looks like. You have all your verbatims, perhaps with the NPS score next to them. When you start off, you develop a codebook: a set of categories the verbatims may belong to. Once you start coding, you assign one or more of these categories to each verbatim.

This example differentiates between positive and negative categories. Every code gets a code number, and that’s how it works; in practice, it’s a piece of Excel. You need some rules because customers talk not just about one topic but about many, so you may have several columns. The coder goes through the list, and for the most frequently used topics they know the codes by heart.

“Quality”, say, is always around code 10, so they type it in quickly; for rarer topics they look it up in the codebook. That’s how it goes: they work through all your tens or hundreds of thousands of verbatims, one by one, and categorize them. That’s the process. Some coders also try to judge the sentiment; we will talk about that in more detail later.


In this case, sentiment means how emotional the response is: is it super emotional, or just mildly positive? A human coder can capture that too. Once you have coded everything, you can start counting: how many people say the price is too high? And of course there are lots of dos and don’ts, which we’ll discuss in a bit.

Second, unsupervised learning: unsupervised AI, unsupervised text analytics. Again, you just take your data, upload it, and as if by magic it comes up with a bunch of clusters, a bunch of different categories.

It basically clusters verbatims, or pieces within verbatims, that sound similar to each other. There are of course different techniques to do this, and there are differences between the technologies, but what you get generally seems to make sense, though not always.

You still have some parameters: you can say “give me just 10 categories”, or “be less specific”, or vice versa. But there is no way to teach the system, because then it would be a supervised learning system.

Below is an example of a supervised learning system. You see the verbatims, and you start coding right in the software, assigning code numbers. It’s like manual coding: you have a dropdown menu and can pick and choose.

This is especially helpful when you have large codebooks and it is hard to remember all the code numbers. With software support you can be quite fast, and it also helps you manage descriptions: sometimes the label of a category is not enough, and you want to read in more detail what is really meant by it. The software also gives you a validity score for how precisely the model has learned, because you start by teaching the system; after a few hundred examples, it will be pretty much as good as you and can categorize the rest, for the time being or even for the future.
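A validity score of this kind can be approximated as agreement between human codes and model predictions on a holdout sample, computed per category. This is a hedged sketch with invented codes, not any vendor's actual metric:

```python
# Per-category "validity": share of human-coded verbatims of each
# category that the model codes the same way. Invented holdout data.
from collections import defaultdict

def per_category_validity(human_codes, model_codes):
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred in zip(human_codes, model_codes):
        totals[truth] += 1
        hits[truth] += (truth == pred)
    return {cat: hits[cat] / totals[cat] for cat in totals}

human = ["price", "price", "service", "service", "service", "quality"]
model = ["price", "quality", "service", "service", "price", "quality"]
print(per_category_validity(human, model))
# Low-validity, high-frequency categories are the ones to retrain first.
```

A category that is both frequent and low-validity is exactly the kind of code the tool would flag for more training examples.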

The beauty here is the contrast with manual coding, where you start with a list in random order. When you code manually, some topics pop up frequently: every second verbatim reads quite similar, and it takes a long time to eventually reach a rare topic. This is different with supervised learning, because supervised learning can help you detect what is seldom in the data. If the system is already good at understanding “quality”, it will not show you “quality” verbatims anymore; it shows you something different, so you will be much faster in teaching the AI to categorize everything.

Another advantage of this kind of system is that it is much better at managing huge codebooks. Below you see an example with 150 codes. That is hard to keep in your head, but with the software it is much easier, because you can quickly find the codes and re-read their definitions again and again.

So it becomes easier to stay consistent. The system can also help you understand where you need to become better. In the example chart, the thick slices are codes that are mentioned often; if such a code sits below the circle, its validity is not good, and it is important to work on it. There are other topics you are not yet good at, but they occur very seldom. You can see which codes are already well trained, revisit what you have coded, and correct it. The software simply gives you guidance to detect your own errors, which is not possible in manual coding.


In a Nutshell

There are three types of categorization: manual, unsupervised, and supervised. There are some dos and don’ts for categorization and for building the definitions of the categories, especially for the labeling convention. It is important to build granular codebooks, to label categories very specifically, and to avoid catch-all “other” categories.

"CX Analytics Masters" Course


P.S. Would you like to get the complete & interactive FREE CX Measurement Guidance for your business in 2021?

Simply subscribe to the free “CX ANALYTICS MASTERS” course below and enjoy the above-mentioned training guidance in its Class #3.

“Solves key challenges in CX analytics”


How to Analyze Customer Feedback the Right Way?


Author: Frank Buckler, Ph.D.
Published on: May 18, 2021 · 9 min read

Businesses are constantly dealing with data; however, raw data is not insightful if not neatly organized. I would say, there is nothing more powerful than getting to see your brand through the eyes of your customers. 

However, if you receive thousands of NPS responses, sorting through the feedback becomes challenging: there is too much of it to analyze manually, no consistent criteria to rely on, and a lack of granularity.

Nevertheless, any feedback received from your customers is an incredible opportunity to take action and convert weaknesses into strengths. NPS analysis will make it easy for you!

FREE LASER-TRAINING: The 3Q-METHOD™ for CX-INSIGHTS™

Crystal Clear CX-Insights with 4X Impact of Action
60-minute Condensed Wisdom from The World’s #1 CX Analytics Course

The 3 Pillars of CX Analytics

In the previous blog post, we discussed how you gather your data and what kind of data you are gathering. Now it's time to make use of it. Most of the use cases you already know: you take the data, compute the NPS score you report on, and slice it down into different time periods, segments, and regions.

That is not what I'm talking about. That is conventional stuff; it is not even analytics, just dashboarding. The framework has three steps, and the first crucial step is to quantify and categorize the unstructured text feedback you get. As we previously mentioned, the core of customer experience surveying is collecting unstructured text feedback, because it's much faster and so rich. This needs to be done!

It's still a bit surprising: most companies don't do much with it. At best they send it to the front line, who read the feedback that came in last week. But other than that, it's seldom that companies actually quantify this feedback.

Quantifying means more than building word clouds, because word clouds count words, not meaning. Thus, they are not very useful.

STEP 1 is categorizing the unstructured text feedback, because only once we make numbers out of it can we analyze it.

STEP 2: once we have categorized and quantified what people are saying, we can start understanding how important those topics are. That's a key point we will elaborate on a bit more here.

STEP 3 is to take all these insights, the quantification and the impact, and use them to enable conclusions and actions. This takes some additional work, and there are different techniques you can apply; I will quickly show you some of them. So let's go and talk about step number one.

Join the World's #1 "CX Analytics Masters" Course

How To Manage The Unstructured Feedback

There are three ways of categorizing. The typical way is manual categorization – that's what market researchers know, and often avoid, because it's a lot of work. Someone needs to manually go through every single line, thousands of lines, armed with a codebook that defines the categories. That is a burden.

It's useful, of course, but not scalable. If you have thousands of pieces of feedback, it's just costly, and there are better ways to do it. The most important thing is that the categorization gets done at all: if you leave it out because you don't have the resources or budget, you at least need to think about an alternative.

The alternatives are unsupervised and supervised categorization – two kinds of AI that deal with this, each with pros and cons. At least for manual and supervised coding, you need to think about the category definitions and labeling conventions.

You as a researcher are responsible for building the codebook, which is a pro, because you can design it to your needs based on your domain knowledge. But again, that is work – which you don't have to do with unsupervised learning. In the codebook you can also build in a hierarchy.
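To make the codebook idea concrete, here is a minimal, purely illustrative sketch of rule-based categorization in Python. The codebook, its keyword lists, and the sample verbatim are all invented for illustration; a real supervised system learns such patterns from manually coded training examples instead of fixed keyword lists.

```python
# Hypothetical granular codebook: category -> trigger keywords.
CODEBOOK = {
    "friendly staff": ["friendly", "helpful", "polite"],
    "long waiting time": ["wait", "queue", "slow"],
    "good price": ["price", "cheap", "affordable"],
}

def categorize(verbatim):
    """Return every category whose keywords appear in the verbatim."""
    text = verbatim.lower()
    return [category for category, keywords in CODEBOOK.items()
            if any(word in text for word in keywords)]

print(categorize("Staff was friendly, but the queue was slow"))
# -> ['friendly staff', 'long waiting time']
```

Note that one verbatim can carry several categories – exactly what you want from granular coding.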

There are some dos and don'ts for that too. Another important topic: how can you make sure you spot new categories, new topics popping up over time? That's something we will elaborate on in the following blog posts. Also very important is managing consistency.

Each of the three methods – manual, unsupervised, and supervised categorization – has different challenges here, and this is often overlooked. How do you make sure that when something is categorized as "good quality", it really means good quality – every time?

And that in the next wave it has the same meaning? That's very important, because otherwise you cannot compare. Unstructured feedback is not a standardized input, so if you categorize it, at least this step needs to be consistent. Another challenge is how to deal with multiple languages.

You can use translation services and choose one core language, or you can handle each language separately with native coders, and so forth. There are pros and cons for each approach. Then there is the topic of sentiment, which we will elaborate on later: for each verbatim –

Is it positive or negative, and how positive or negative? How emotional is it? This is information that goes beyond categories and can be very useful. There are various specifics you need to know about sentiment to have an informed discussion within the organization and defend the right approach. Lastly, we will talk about ways to assess validity: accuracy, consistency, and predictive power. Accuracy means the feedback is correctly coded. Consistency means it is always coded the same way. Predictive power means the categories can be used to predict outcomes – and you can expect categories that are true to be predictive. You can categorize consistently, always the same way, but still wrongly; and if it's wrong, it's not predictive.
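As a toy illustration of two of these checks, here is a sketch with invented labels that scores accuracy against a small human-coded "gold" sample, and consistency as the share of identical codes when the same verbatims are coded twice:

```python
# Invented example: five verbatims, coded three times.
gold      = ["price", "service", "quality", "service", "price"]  # human gold standard
predicted = ["price", "service", "service", "service", "price"]  # system, first pass
recoded   = ["price", "service", "service", "quality", "price"]  # system, second pass

def agreement(a, b):
    """Share of positions where two codings assign the same category."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

accuracy = agreement(gold, predicted)        # correctly coded?
consistency = agreement(predicted, recoded)  # coded the same way each time?
print(f"accuracy {accuracy:.0%}, consistency {consistency:.0%}")
# prints: accuracy 80%, consistency 80%
```

Predictive power would be checked separately, by testing whether the coded categories help predict an outcome such as the NPS rating.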

(Stay tuned to learn more about these topics in the upcoming blog posts)

How to Understand The Relevance of Topics Raised by Customers

Step two in the analytics framework is to identify the impact of the categories. There are two dimensions here. In step one, we categorized everything into categories – the bubbles you can see in the chart.

Frequency: once you categorize the verbatims, you can count – for example, 20% say "great personal service". That's the outcome of step one: counting frequencies. And you would expect that this counting tells you which topics to work on, because after all, you are asking your customers, "Hey, why did you give this rating?"
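The counting itself is straightforward once each verbatim carries its categories. A minimal sketch, with invented coded responses:

```python
from collections import Counter

# Each response has already been assigned one or more categories in step one.
coded_responses = [
    ["great personal service"],
    ["great personal service", "fast delivery"],
    ["fast delivery"],
    ["trustworthy"],
    ["great personal service"],
]

counts = Counter(cat for cats in coded_responses for cat in cats)
total = len(coded_responses)
for category, n in counts.most_common():
    print(f"{category}: {n / total:.0%}")
# great personal service: 60%
# fast delivery: 40%
# trustworthy: 20%
```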

They say, for instance, "the speakers have great sound" or "the service is good" – whatever comes to mind first. But it turns out that the frequency of a mention is not correlated with the importance of the category, which is very interesting. That's why we need step two. There are several reasons for this, which we elaborate on in an upcoming blog post. Call it the frequency illusion: we implicitly assume the most often mentioned topic is the most important one, which is simply not true. Just to give you an example…

Here's an example from insurance: not many customers say, "I am likely to recommend because this insurance company is trustworthy, honest, reliable, fair" – these kinds of statements. Yet that is probably a pretty good reason to be satisfied and to recommend your insurer.

So it's plausible that this has a big impact even though not many people say it – and if you are not so good at it, you should improve. This logic tells you that frequency is not the measure of what you should do next.
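A toy calculation with invented ratings makes the point. Here "impact" is naively estimated as the rating gap between customers who mention a topic and those who don't. A real key driver analysis is multivariate and causal, but even this crude estimate shows how a rarely mentioned topic like "trustworthy" can matter more than a frequent one:

```python
# Invented data: (rating, topics mentioned in the verbatim).
responses = [
    (9,  {"trustworthy"}),
    (10, {"trustworthy", "service"}),
    (7,  {"service"}),
    (8,  {"service"}),
    (7,  {"service"}),
    (6,  set()),
]

def naive_impact(topic):
    """Mean rating of mentioners minus mean rating of non-mentioners."""
    mention = [r for r, topics in responses if topic in topics]
    rest    = [r for r, topics in responses if topic not in topics]
    return sum(mention) / len(mention) - sum(rest) / len(rest)

# "service" is mentioned 4 of 6 times but barely moves the rating;
# "trustworthy" is mentioned only twice yet shows a large gap.
print(f"service     impact {naive_impact('service'):+.1f}")      # +0.5
print(f"trustworthy impact {naive_impact('trustworthy'):+.1f}")  # +2.5
```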

How often topics are mentioned and how impactful they are are two independent things, and this impact needs to be calculated the right way. The typical first idea researchers have is: "That's right, we need to understand what's important."

"So let's look at what the promoters say, then at what the detractors say, and compare – because when promoters say something different than detractors, that's probably the reason." While this may sometimes work, it often doesn't, because you would simply be looking at correlations, and correlations are often spurious.

To give you an odd example: when you look at the shoe size of employees in a company, shoe size correlates strongly with the likelihood of reaching the C-suite. Why? Because men have larger shoes, and for whatever reasons they are more likely to become a CEO or C-suite member. That's the effect.

The shoe size simply correlates with something else. The same happens with what you are measuring: great service, friendliness, knowledgeable staff – all these things relate to each other somehow. Thus, building your insights on correlation is dangerous.

You don't want to build your strategic initiatives on spurious findings, and it is easy to do better. What probably comes to mind to measure impact is key driver analysis – the typical methodology. There are, of course, many different ways to do it.

Key driver analysis is the method to understand the impact of different drivers: if you have multiple reasons for success, it finds out the contribution of each of them. So you need some kind of analytics around key driver analysis, and there are refinements you should consider, which we will go into in more detail later.

One refinement is non-linearity, especially when you have sentiment and want to understand impact. We often see that the positive side of sentiment has a lower impact than the negative side. This is called non-linearity – just one example – and you want your key driver analysis to be able to model it. I mention this because conventional key driver analysis is basically regression, which assumes linearity, independence of drivers, and direct impact. Another example is interactions: for instance, when customers need to praise both the price and the product quality, and only praising the two together is impactful, then those two drivers are interacting. There are negative interactions as well: for example, great quality may only matter if the customer is not complaining about something else.

There are many different kinds of interactions, and you really don't know them beforehand. So you want a key driver analysis that can spot those interactions, because the overall analysis will make more sense at the end.

It will be more predictive. Another thing you want a key driver analysis to consider is mediators. That sounds complicated, but let me give you an example. We did this exercise for a company that rents out flats and asks its tenants, "How likely are you to recommend us?"

The most often mentioned reason for liking a flat is a great location, and it turns out it's also very impactful. But there are other things, like the flat being surrounded by a garden, or there not being a highway next to it.

These kinds of factors influence, or drive, the perception of a great location – a mediator effect. The garden and the absence of a highway next to your flat are very important, but you cannot see that in a plain driver analysis, because their effect is indirect, through a mediator.

That's what the key driver analysis should consider. And we will talk about ways to validate all those approaches.


Get a Free Assessment of your CX Analytics Strategy.

Book a Time with Claire to Discuss Your Situation.

This is How You Draw the Right Conclusions from CX Insights

Let's talk about step number three, which is enabling action from the first two insights: you quantified your text feedback, measured the frequency of each topic, and then inferred the impact of all those topics.

This now needs to be analyzed, structured, and translated into strategies. There are three fields, which are the typical things to do. First, there is a field and norm strategy called hidden levers: topics that are very important but not very frequent.

They are hidden because you would not have noticed them in conventional analysis: just looking at frequency, you do not spot them. And they matter because there is obviously a huge opportunity to improve and, next time, get more customers mentioning them.

That's strategy one: improve hidden levers. Second are key leakages – negative categories that are important and frequent enough. Why? Because the norm strategy for leakages, for negative things, is to bring them down to zero. One case is something that's mentioned very often but is not so important.

It can still be worth bringing that down, not because it's important, but simply because it's mentioned so often. It's also worth bringing down something that is strongly negative in impact but has only a medium level of frequency. That's obvious.

Then we have the maintain strategies. These are levers that are somewhat important and where you are already good, because they are often mentioned positively. You want to make sure they are not forgotten; you don't want your organization to say, "You know what, it's not key, we don't need to do it."

That's not what the analysis says. The result of the analysis is: if you want to improve, keep these as they are and improve elsewhere – maintaining them is key, every year. The next thing to think about is how you then decide what to do, because the topics only tell you what customers say; they don't say which actions could drive and improve them. What we recommend is that you list the categories and the potential actions behind each of them. Sometimes topics share an action; then you can combine them – one action can improve several topics. That's an important exercise to do.

The model behind all this – the model that gives you the impact estimates – enables something else: you can simulate how much the NPS will improve if you improve a topic's frequency. That's an important exercise that makes it tangible for your business partners, and I'm going to show you an example right away. The same goes for the question, "Why did the NPS change since last time?" That's a typical Q&A in companies: the score changes, but suddenly you don't know why. What companies and people then do is look at what changed.

They check which categories changed, and then everyone who looks at it picks the categories they like best. But that's not how you should do it. The change in the NPS can be attributed mostly to changes in important categories: if an important category changes, it obviously has a much higher impact on NPS.

And this is a simulation exercise you can do, and it gives you much more guidance on what happened in the last quarter.

Lastly, and very importantly: when you finally think about what to do and which actions to take, you need to consider the costs of those initiatives and mirror them against the ROI. Let me now show you an example of how this can look in a tool, on a dashboard.

You can imagine…

Let's have a quick look at an implementation example. This is a dashboard that plots frequency and impact for the different topics, which you see as bubbles. The dashboard can also show typical descriptives: what's the NPS today, how has it developed, and so forth. That's how you can visualize it.

You can label the hidden levers as well as the key leakages. With the same information, you can also build a simulator: you scroll through all the different topics, and for, say, "reliability of performance" you move a slider to improve it and see the projected effect.

For instance: the NPS will improve by 3.7 points. This is what you can simulate based on your key driver analysis findings. And if you also quantify what one NPS point is worth, you can measure the impact of those actions in euros, dollars, whatever your currency.
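The mechanics of such a simulator can be sketched in a few lines. The impact coefficients and the bottom-line value per NPS point below are invented placeholders; in a real tool they come from the key driver model and from linking NPS to financial data:

```python
# Invented coefficients: NPS points gained per percentage point of
# additional positive mentions of each topic.
IMPACT_PER_PP = {
    "reliability of performance": 0.37,
    "friendliness": 0.10,
}
VALUE_PER_NPS_POINT = 250_000  # hypothetical yearly revenue per NPS point

def simulate(topic, extra_mentions_pp):
    """Projected NPS uplift from raising a topic's mention frequency."""
    return IMPACT_PER_PP[topic] * extra_mentions_pp

uplift = simulate("reliability of performance", 10)  # move the slider +10 pp
print(f"NPS {uplift:+.1f} points, worth ~{uplift * VALUE_PER_NPS_POINT:,.0f} per year")
```

With the invented numbers above, moving the slider by 10 percentage points projects an uplift of 3.7 NPS points.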

This knowledge of the impact of one NPS point on the bottom line is something we draw, in the first place, from experience and past data. That gives a pretty rough estimate, but there is also the possibility to use separate data to really find out the link between NPS and bottom line.

We will discuss in another class how exactly you can do that. This is also the analysis I talked about where you simulate why the NPS changed from last time to this time. Sometimes it didn't even change: you improved a lot in certain categories, but you don't see a change because you got worse in other areas. The analysis can boil down what the key reasons for the change are and where you really improved. With 50 different categories to look at, boiling it down to three or four things is very useful ammunition for discussions on what to do next.

This is just an example of how you can visualize these results. It really focuses the discussion, and it helps very much to drive the right decisions and to concentrate on a few actions and initiatives.

In a Nutshell

The analytics framework is threefold. First, and most important, you need to quantify and categorize your customer feedback. But this alone is not enough, because you really need to understand the predictive power – the causal power – of what people are saying.

Sometimes people mention things that are just top of mind, so you really need to understand: is it worth working on? This is what you do in step two, identifying the impact of each and every category and topic. And third, all this information needs to be structured to support and guide the right decisions – I've given you a dashboard example above.

We will again elaborate further on the similar topics in the following blog posts.

"CX Analytics Masters" Course


P.S. Would you like to get the complete & interactive FREE CX Measurement Guidance for your business in 2021?

Simply subscribe to the free
"CX ANALYTICS MASTERS" course below and enjoy the above-mentioned training guidance in its Class #2.

“Solves key challenges in CX analytics”

Big Love to All Our Readers Around the World



How to Set Up a Customer Experience Measurement

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: June 11, 2021 * 9 min read

Measuring customer experience has never been more crucial than it is today. Nevertheless, most CX teams continue to face intimidating challenges: not all CX professionals know how to advance customer experience measurement, or how to leverage a customer-journey-based approach to optimize the CX measurement program.

Before we get into the nitty-gritty of measuring CX, it is important to understand how vital improving the customer experience is for businesses. Exceptional CX management helps businesses understand who their customers are and how to capture their sincere feedback in real time.

In this post, I will provide guidance on how to build up and/or improve your CX measurement and shape it for impact.

FREE LASER-TRAINING: The 3Q-METHOD™ for CX-INSIGHTS

Crystal Clear CX-Insights with 4X Impact of Actions 60 Minutes of Condensed Wisdom from World's #1 CX Analytics Course

How to Choose the Right CX Survey Type for Your Business?

The first lesson: what kinds of customer experience surveys are there? Which surveys are possible and meaningful? The first is the general customer experience survey, where you reach out to your audience, your customers, without a certain trigger, without a certain touchpoint. You reach out because you want a status report.

What is the status of loyalty across all customers, no matter whether there have been touchpoints or not? That's a general CX survey. What you get is of course very different from a touchpoint CX survey, which comes after a certain touchpoint: after a phone conversation, after you've met a customer on the website or in a shop, or whatever. Right after, or shortly after, the touchpoint, you reach out and interview customers specifically about the experience of this touchpoint. The third is the CX journey survey, which is basically a touchpoint CX survey with one difference: you want to connect the different touchpoints to map out a holistic journey.

This journey is different for every single customer. In a CX journey survey, you make sure you also measure which touchpoint came before the current one. With this information, you can build a chain for every single customer and analyze how the touchpoints in sequence influence each other. The fourth type is the competitive CX survey: you reach out not only to your customers but also to non-customers, the customers of your competitors. Companies do that for benchmarking purposes, but you also want to learn: why are they still with that supplier?

What are they doing better than us? What can we learn? Where are the competitor's weaknesses? All this you can learn with competitive CX surveys. These are the different types, and typically companies do all of them.

How to Choose the Right CX Rating Question?

The next question is: what do we ask? What are we measuring? The core thing you want to measure is the customer experience, and the most used measurement of customer experience is the resulting loyalty. Loyalty is well captured by the question of whether the customer is likely to recommend you – one indicator of customer loyalty, and the whole basis of the NPS.

The question is: "How likely are you to recommend our brand to a friend?" In a B2B context you would add "or colleague" or something like that. It is always the same question, and the scale is always the same.

It always runs from zero to 10. Many suppliers do one to 10 instead, which gives similar results but is a little bit different – you introduce a bias there. Keep in mind, the original scale is from zero to 10, and only the extremes are labeled: zero "not at all likely" and 10 "extremely likely".

The NPS score is then computed by taking the percentage of promoters – those who answered 9 or 10 – and subtracting the percentage of detractors – everyone from six down to zero. If you have 50% promoters and 30% detractors, your NPS is 20. In summary: a simple question, a scale from zero to 10, and a formula you probably know. It is not a mean; it is percentage scores subtracted, which is also a weakness. It is made that way so you can explain it to everyone, and everyone will understand it.
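The formula can be written down in a few lines; the sample ratings here are invented to reproduce the 50%/30% example above:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

ratings = [10, 10, 9, 9, 10, 8, 7, 4, 3, 6]  # 50% promoters, 30% detractors
print(nps(ratings))  # 20.0
```

Note that the passives (7 and 8) drop out of the formula entirely, which is part of why the score is easy to explain but statistically wasteful.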

The weakness is that if your sample size is not big, it becomes a matter of luck whether one customer more or less falls into the promoter or detractor group. The numbers change a lot through noise, and this makes the whole score fluctuate.

This happens when the sample size is low or when you are operating at the extremes, with very few promoters or very few detractors. That's the NPS, but you can of course use a different measure. Market researchers typically love Likert scales: they are more stable and also comparable across different regions. You calculate the mean, and every scale point has a label. With the NPS scale, nobody knows what a five means – it's something in the middle. And what does a seven mean? You don't know; in different cultures, a seven is interpreted differently. This can be eliminated if you put a specific wording on every scale point, which is what you do in Likert scales. That is an advantage. On the other hand, it's harder to interpret a mean of 4.1. Beyond that, both correlate highly, and you can choose either of them.

There are other scores as well. The Likert scale measures loyalty, the NPS measures loyalty, but you could also decide to measure the customer effort score: how much effort does it take to deal with us? That is very different. In some businesses, it's not so much about loyalty or satisfaction; it's just about taking the pain away.

You can decide on that score if you find out that it drives your business. In the following blog posts we will discuss how you can find out which score is actually best for you – which one is connected to your bottom line. Many companies still measure customer satisfaction.

That is where it all began some decades ago: the whole customer experience movement started with measuring customer satisfaction. But businesses realized that satisfaction, especially at the touchpoint, very much measures the moment – which is fair, and you may want to know that. It is very different from loyalty, though, because loyalty drives the bottom line and is a long-term indicator, an attitude. Satisfaction is not so much an attitude; it's more a judgment of a certain situation.

Join the World's #1 "CX Analytics Masters" Course

What Channels Are Being Used for CX Measurement?

You need to reach out to your customers, which means deciding on channels. There are two different approaches. One is to interview them in the moment – that's what you want for touchpoint customer experience. You use the medium where you meet your customers.

It could be on the phone – in the call itself you can ask them two questions. It could be on the website as a pop-up, or it could be face-to-face. There are different forms of getting feedback, but the idea is to capture it in the moment. The other way is to reach out afterwards: even with touchpoint surveys, many companies reach out after the touchpoint, for instance with an email the next day, which is not ideal but sometimes the only practical solution. The channels are, of course, telephone and email – the most often used – and nowadays even text messages.

Imagine people get an SMS: "Hey, we had this touchpoint – please rate us from zero to 10. How likely would you be to recommend us?" The customer answers five and sends it off. Then you send another SMS: "Thank you. Why did you rate this way?" And they can text back an open end – that's part of standard survey platforms and very easy to set up. There are even more options that are still seldom used: social sampling, where you reach out to customers via ads, useful if you don't have email addresses or aren't allowed to use them; direct messaging, if customers follow your profile; or even Alexa-initiated interviews. The last three are certainly not widely used yet, but they may become so in the future. Every way of reaching out has pros and cons.

If you focus on one channel, your results will be more consistent, because every way of reaching out has a bias: someone who picks up the phone is a different person than someone who clicks on, or even reads, emails. If you mix everything and do all of them, you have the biggest reach and get the most feedback, but you need to control for the channel mix, because every channel has its bias. Only then will you know that a change in the NPS or CX score is not due to different open rates or response rates across channels.

What Are the Main Advantages of Open-Ended Questions?

After you've measured your customer experience with a rating, you would like to know: why did they rate this way? Very powerful. You should ask that directly after the rating, and ask everyone the same question. Do not split it into different questions for promoters and detractors.

We often see companies ask detractors "What can we improve?" and promoters "What do you love most about our service?" They do it because the responses are easier to analyze this way. But there are multiple problems with it. The first problem is that you bias the responses, because you implicitly assume there is one fixed kind of reason behind each rating.

You implicitly assume a promoter likes something and dislikes nothing, and that a detractor doesn't like anything. But most importantly, you will not be able to spot differences: many things the detractors say, the promoters would say as well.

And the other way around: promoters may also criticize certain things, but it doesn't impact their rating much because those things are not so important. By asking different questions, you will never learn what's important. You get answers, but you have no chance to test whether they make any sense.

Don't ask two questions; ask one question, and do not mark it as optional. We often see "(optional) – you can put some text here". No: encourage your customers to talk, ask them for a favor. Of course customers can leave it blank, but you should not make that too inviting, because the feedback is important – and it is valuable for customers too to give their opinions. The benefits of open ends are quite clear. Customers love this way of giving feedback because it's the most natural thing, and it's the shortest way to give an opinion. We all know those quantitative surveys where you need to read the alternatives and decide, but don't want to click on anything because you either don't understand the item or disagree with the wording. Open ends don't have that problem.

It's immediate, it's super easy, and you can use your own words. That's customer centricity – customers want that. The only thing we need to make sure of is that we draw the most insight out of it. Another benefit: it not only describes what people think, it also helps you discover things you had not thought about.

It's qualitative as well. And you do not introduce bias here: with quantitative surveys, you always bias towards certain answers. If you ask about delivery time, you assume delivery time is in the customer's relevant set, which is a bias. There are, of course, limits to open ends.

You do not learn everything customers have in mind. Customers just tell you what pops into their minds – which is usually what's most important to them, because it's top of mind – but basically that's it. They will not talk about things they would need to think long about. That's a limit: you don't see everything, especially very subconscious things like branding. You can still get a lot from open ends there, but then you need to ask different questions. Also, open feedback is not standardized – that's why it's called unstructured feedback. But we will see that there is advanced technology that helps us get standardized measurements out of this information.


Get a Free Assessment of your CX Analytics Strategy.

Book a Time with Claire to Discuss Your Situation.

What Are the Pros & Cons of Asking More Closed-Ended Questions?

After the open-ended question, you can ask closed-ended questions where customers choose from a scale from one to five or from multiple choices – for example, which segment they belong to, and so forth.

Closed-ended questions should always come after the open-ended one. Why? First, because they would otherwise bias the results of the open-ended question. Second, the input customers give in the open end will be more sparse if they have already answered everything in closed-ended questions – they will only write what the closed-ended questions missed.

You get the richest feedback when you start with the open-ended question and follow with the closed-ended ones.

Here are things you may want to consider for closed-ended questions. The first is the source of the response: if you do a touchpoint analysis, which touchpoint did the customer have just before? How did they arrive at your website – did they Google it, did they click on an ad, and so forth? You may want to capture this kind of source information, as well as customer segment information, because you want to measure the context that strongly influences the customer experience. A luxury-car customer has different expectations, a different customer experience level, than an entry-level car customer. You want to know what kind of product, or what segment, the respondent is talking about. That is very useful.

You could also measure some core aspects of your offering – quality, service, price, brand, USP, whatever. That is an option when you want to track things customers might leave out. The limit is that such ratings are not very specific.

What do I mean by specific? When the quality score drops from 4.0 to 3.8, what action would you take? You need to know what to do to drive quality, and a dropping score alone doesn’t tell you. With unstructured feedback, you typically get much more specific answers.

In a nutshell…

There are three types of customer experience surveys: general CX, where you reach out to all your customers without a specific occasion; touchpoint CX, a survey after a certain touchpoint; and competitive CX, where you reach out to your customers and your competitors’ customers to learn what competitors do better than you. The NPS question is the most-used form of measuring customer experience, but there are other ways. You want to measure customer loyalty as a long-term attitude – a long-term measure of the impact of your customer experience.

But you may also consider customer satisfaction as a short-term measurement of your performance. There are different channels for reaching out – email, text, phone – and each will reach different people. Ideally, use all of them to maximize your reach. When you do, be sure to track the channel, so you don’t pick up a bias when the mix of feedback across channels changes. In any case, use the power of open ends, because that is the most customer-centric way of getting feedback. It’s super intuitive.

It’s effortless and it’s super rich. It discovers new topics you may not be aware of. As an option, you can add closed-ended questions – most interestingly: where does the customer come from (for a touchpoint survey)? What is the context? What is the customer segment? What product are they talking about?

These are things you want to measure as controls. You will need them later to find out what drives customer experience…

"CX Analytics Masters" Course


P.S. Would you like to get the complete & interactive FREE CX Measurement Guidance for your business in 2021?

Simply subscribe on the free “CX ANALYTICS MASTERS” course below and enjoy the above-mentioned training guidance in its Class # 1.

“Solves key challenges in CX analytics”

Big Love to All Our Readers Around the World

IMPRINT
Our Group: www.Success-Drivers.com

Privacy Policy
Copyright © 2021. All rights reserved.

FOLLOW CX.AI

PREDICTIVE QUAL

How to Turn The Art of Qual Into a Science of Impact?

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: June 7, 2021 * 9 min read

It was back in 2017 when David wrote me this email. He was heading insights for SONOS, and the company was bravely acting on their CX insights. The problem: no improvement at all.

It was a mystery. Nearly 50% of speaker owners attributed their loyalty to the great sound. Every other topic was at or way below 10%. So the company had been tweaking the sound experience for years. Result: none.

There was no problem in measurement, no serious bias, no categorization mistakes. The data was great. It was telling a clear story.

Still, the story was wrong. Because data is not insight, and qualitative feedback is not insight either.

FREE LASER-TRAINING: The 3Q-METHOD™ for CX-INSIGHTS™

Crystal Clear CX-Insights with 4X Impact of Actions 60 Minutes of Condensed Wisdom from World's #1 CX Analytics Course

Our hidden assumption behind this feedback is that customers can answer the question correctly. We assume they are involved and interested enough to dig deep into what is driving their behavior.

This assumption is like believing humans are rationally behaving beings: true only in rare exceptions.

The reality is sobering: customers write whatever comes to their minds. They do not lie, but they certainly do not tell the whole truth either.

As a consequence, what you see across industries is that customers mention top-of-mind topics. A speaker owner will say “because of the sound”, a washing machine owner “the washing powder”, a restaurant customer “the great taste of the food”, an insurance customer mentions the service, and so on.

What’s really moving the needle stays hidden.

Is QUAL useless then?

Not at all. Qual is indispensable. It allows customers to use their own language, words, and expressions. It helps us to discover what we have not thought about.

But here is the question: if this qual feedback, as described above, gives wrong signals, how can it be useful at all?

Some would argue that if an argumentation is plausible, it may well be valid. If you recognize yourself in this statement, please listen up: this is another GREAT misbelief.

(This article here is explaining exactly why we get fooled by plausibility)

Something plausible has some likelihood of being correct, but something implausible can very well be true too.

Qual is a great way of collecting data, perception, customer views, and it is helpful to synthesize new hypotheses. But there is no reliable mechanism to validate it.

The mechanism in use is to check plausibility. In in-depth interviews, you can collect many qualitative data points to check for coherence. But plausibility is a check for coherence that only leverages existing beliefs.

The reason why we do research is that our knowledge is not good enough. We want to learn. Everyone who wants to learn is better off not relying on plausibility.

QUAL is broken.

Qual is excellent and indispensable. But it is NOT ENOUGH. On its own, it is even MISLEADING and can be dangerous.

Why does it seem that I am the first to tell you this? Because it is counter-intuitive. A dose of “qual feedback” is like a shot of heroin.

You read it; you have thousands of associations with it, you link it to other things you know, and your brain forms a story. Instantly you get “high”. The rush of the eureka moment is hard to resist.

We are all on the needle. My rational self is screaming “noooooooooo”, but my heart whispers “yeaaaa” – it automatically believes everything that feels plausible.

Join the World's #1 "CX Analytics Masters" Course

Free for Enterprise “CX-INSIGHTS” Professionals

Predictive Qual – is the Qual of the AI Age.

Imagine a world where we can harness the power of qualitative feedback, but NOW can validate whether a mentioned topic is truly causally accountable for a specific outcome.

Imagine we could still explore new things that go on in the world of customers and learn about the words they use to express themselves.

But now we would be able to filter out which of those things are just “cheap talk”, which merely verbalize the same underlying phenomenon, and which is the one killer topic that moves outcomes.

What would this mean to CX research, and what would this mean to the entire customer insights and market research discipline?

Wouldn’t this mean that suddenly the two worlds of qual and quant no longer collide but unite into a coherent approach?

Wouldn’t this mean that research becomes much easier, because you can do everything – qual, quant, and modeling – in one study? Wouldn’t this mean that the research process becomes not only BETTER, but also FASTER and CHEAPER?


Get a Free Assessment of your CX Analytics Strategy.

Book a Time with Claire to Discuss Your Situation.

How Predictive Qual Works.

Sonos was eager to understand why nothing was working. So we did a deep dive and ran our causal machine learning methodology. It turned out that the sound of SONOS was already good enough, and improving it further was a waste of resources.

Instead, other good things about the system (e.g. the reliability of the service) were not mentioned often. But when they were mentioned, they were the killer reason for customer enthusiasm.

With the predictive system behind these insights, we even dared to make a prediction: “When you double the mentions of this topic, NPS will increase by 8 points.”

Was it a risk? Not if you have a system in place that can also explain NPS changes in hindsight.

Fast forward six months of implementation work: suddenly the NPS jumped, precisely by the predicted amount… a pinch of luck that it was so precise.

Can you imagine how impressed leadership was by that?

The magic follows a simple 3-STEP framework that everyone can apply themselves:

STEP 1: QUANTIFY
Use AI to automate text categorization. Train a supervised AI to achieve optimal accuracy and granularity.

STEP 2: MODEL
Use a key driver analysis (KDA) approach: explain your CX outcome (satisfaction or likelihood-to-recommend) by the topics mentioned and the verbatims’ sentiment. For better results, use causal machine learning instead of plain KDA to double the predictive and prescriptive power.

STEP 3: VISUALIZE & PREDICT
Visualize results in an interactive dashboard so everyone can play with them. Use the driver model formula to predict the effect of simulated driver changes on outcomes.
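The three steps can be sketched in a few lines of code. This is only a minimal illustration: the topics, the survey scores, and the plain least-squares regression are invented stand-ins (the article recommends causal machine learning, not this naive model).

```python
import numpy as np

# Hypothetical step-1 output: which topics each verbatim mentions,
# plus the respondent's likelihood-to-recommend score (invented data).
topics = ["sound", "reliability", "price"]
mentions = np.array([
    [1, 0, 0], [1, 1, 0], [0, 1, 0], [1, 0, 1],
    [0, 0, 1], [0, 1, 1], [1, 1, 1], [0, 0, 0],
])
scores = np.array([6, 9, 9, 5, 5, 8, 8, 6])

# Step 2 (simplified KDA): regress the outcome on topic dummies.
X = np.column_stack([np.ones(len(mentions)), mentions])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
drivers = dict(zip(topics, coef[1:]))

# Step 3 (simplified): simulate a driver change, e.g. the predicted
# score lift if every respondent additionally mentioned "reliability".
predicted_lift = drivers["reliability"]
```

In this constructed data, “sound” is mentioned in half of all verbatims yet carries zero driver weight, while the rarely discussed “reliability” is the real driver – the same pattern the SONOS case revealed.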

The Predictive Qual Trend.

The trend towards “predictive qual” is apparent. Many CX software and insights platforms have integrated not only text analytics (step 1) but also a driver analysis (step 2). It’s a clear sign that pioneers among insights leaders are already adopting this methodology.

It fills me with a little pride to have started this movement back in 2017. In truth, David from Sonos was the one who set things in motion.

Soon after, big brands like Microsoft followed and helped us build, with CX-AI.com, the best Predictive Qual method available – providing 4X the impact of actions compared to DIY solutions.

In a nutshell.

Qualitative feedback is indispensable. It gets us unfiltered feedback and enables us to discover new things.

But just reading it can be largely misleading. It is even a proven fact that the sheer count of mentions of a topic (as a reason for satisfaction or loyalty) is not related to its importance.

To draw adequate conclusions from unstructured feedback, it takes a Predictive Qual approach.

The approach can be filled with a three-step framework: text analytics, driver analytics, and predictive dashboarding.

With state-of-the-art excellence in each step, you can 4X the impact of the action that will be derived from the data.

This means it matters HOW you implement it.

Your next steps.

What can be your next steps in better exploiting your qualitative feedback data?

Do you feel the need to build knowledge around this for you or your team? Then you might consider the “CX Analytics Masters” Course – it’s free for Enterprise Insights professionals.

Do you feel some more urgency, and you want to assess your status quo better? Then it might be wise to schedule a complimentary strategy session with an expert – it’s free for Enterprise Insights professionals.

In case you have not yet subscribed to my standpoint newsletter: it would be best if you did this now, because there is more exciting stuff coming very soon.

 

-Frank

"CX Standpoint" Newsletter


Each month I share a well-researched standpoint around CX, Insights and Analytics in my newsletter.

+4000 insights professionals read this bi-weekly for a reason.

I’d love you to join.

“It’s short, sweet, and practical.”


TRUTH-HACKING: 3 Rules To Not Get Fooled by Data

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.

Published on: May 18, 2021 * 9 min read

Everyone communicates with facts and data to support a certain message. Politics does it, the media do it, businesses do it. Lying with data has become a shady art, perfected by politicians and cultivated by managers.

Even worse, business leaders fool themselves day in day out by drawing “obvious” conclusions from data.

Imagine we could make sure we do not fall for fake news, alternative facts, spurious correlations, and the like. Imagine we had a checklist to see if an insight is legit.

How many trillions of dollars could be saved? How many lives could be saved? How much smarter would people guide political decision-makers? How much better would this world become if we all get a little bit data-smart every day?

This article explains the way…

So now, how can we separate the wheat from the chaff? How do you check whether a finding is flawed?

These three simple rules are your guide:

  1. Control How Your Data Is Sampled
  2. Understand What Your Data Really Mean
  3. Be Aware of How You Infer Truth

Here is why…

FREE LASER-TRAINING: The 3Q-METHOD™ for CX-INSIGHTS

Crystal Clear CX-Insights with 4X Impact of Action
60-minute Condensed Wisdom from The World’s #1 CX Analytics Course

Control how your data was sampled

In January 2017, Donald Trump held his inauguration speech. Media outlets critical of Trump highlighted the fact that the audience present was significantly smaller than ever before.

However, the White House countered with inflated attendance and transit-ridership figures, which counselor Kellyanne Conway famously defended as “alternative facts”.

It stands for cherry-picking examples to prove the point you want to make.

Politicians of all parties are doing it all the time. But not just them. Business leaders do it too.

The moon landing conspiracy is built on selected facts that seem to make the project questionable – like the waving flag, even though the moon has no wind – while neglecting the available facts that explain the phenomenon.

Beware of Cherry-Picking

In the 2000s, when I still used to read business books, I had a eureka moment. I was reading the book “Simply Better”, which cited Cardinal Health as an example of bad management. Then I read “Close to the Core”, which used Cardinal Health as the case study for how to do it right.

Even business book authors cherry-pick. Every single business book on this earth is a cherry-picked selection of cases that prove one single theory.

What’s wrong with getting some inspiration from business books?

It’s a dangerous dance at the edge. Getting inspired by wrong ideas will derail your mind.

The problem: the books hinder you from forming your own opinion and finding the truth. They are simply designed to make you believe the theory.

Reading business books will not likely make you smarter or more successful. It will more likely make you a business fashion victim.

Cherry-Picking is the practice of selecting results that fit your claim and excluding those that don’t.

Why is this strategy so successful at fooling us all? Because we draw conclusions from any experience, no matter whether it is representative or not.

What can you do about it? Do not draw conclusions from data without checking that it represents the matter of interest.

There are other forms of “cherry-picking”…

Sampling Bias – the unintended cherry-picking

In 1948, The Chicago Tribune mistakenly predicted, based on a phone survey, that Thomas E. Dewey would become the next US president. They had not considered that only certain demographics could afford telephones, excluding entire segments of the population from the survey.

This kind of cherry-picking sneaks into decision-making easily. Have you ever run a churner survey?

Survivorship Bias – another unintended cherry-picking

In World War Two, the US Army checked returning bombers for gunfire damage and added armor to those spots. It did not help at all.

Why? They needed to examine the planes that did NOT survive the gunfire, too, in order to find the spots that required further protection.

I have never met a client running a churner survey who realized he was fooling himself with survivorship bias. To find out what leads to churn, you need to survey customers, NOT just ex-customers, and then track which of them churn.

We believe in data and facts. For us, “facts” are a synonym for “truth”. But cause-effect relationships cannot be observed; they must be inferred (and a “reason” IS a cause-effect relationship). NEVER take facts alone to decide about reasons.

ALWAYS make sure to have cases with different outcomes in your sample: successful and unsuccessful, churners and non-churners, winners and losers.
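The survivorship trap is easy to demonstrate. The toy data below is invented: if you survey only churners, the outcome has no variance at all, so nothing you ask can correlate with churn, while a sample containing both outcomes exposes the driver immediately.

```python
# Invented example: each customer has a support wait time and a churn flag.
customers = [
    {"wait_minutes": 2,  "churned": False},
    {"wait_minutes": 4,  "churned": False},
    {"wait_minutes": 5,  "churned": False},
    {"wait_minutes": 25, "churned": True},
    {"wait_minutes": 30, "churned": True},
    {"wait_minutes": 40, "churned": True},
]

# A "churner survey" keeps only ex-customers: the outcome is constant,
# so no driver analysis is even possible.
churners_only = [c for c in customers if c["churned"]]
distinct_outcomes = len({c["churned"] for c in churners_only})  # just 1

# With both outcomes in the sample, the driver is plain to see.
def avg_wait(group):
    return sum(c["wait_minutes"] for c in group) / len(group)

gap = avg_wait([c for c in customers if c["churned"]]) \
    - avg_wait([c for c in customers if not c["churned"]])
```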

Understand What Your Data Really Mean

When working with an automotive brand, I was astonished at how incredibly high the customer satisfaction was at nearly all of their car dealers.

The client took me aside and explained: car dealers are incentivized on customer satisfaction. They get millions in cashback from the manufacturer if customers are satisfied. He went on: larger car dealers even hired personnel just to call customers who gave lower ratings and ask them to take the rating back. They also implemented all kinds of other measures to make sure the ratings were excellent.

When I bought a car shortly after, this became tangible: the dealer smiled at me with a huge basket of flowers in his hand and said, “Hope you’ll enjoy this car. If someone from Ford calls you, we would be delighted if you say ‘extremely satisfied’. If you are not, please tell me beforehand.”

Join the World's #1 "CX Analytics Masters" Course

Free for Enterprise “CX-INSIGHTS” Professionals

Beware the Hawthorne Effect

In the 1920s at Hawthorne Works, an Illinois factory, a social sciences experiment hypothesized that workers would become more productive following various changes to their environment such as working hours, lighting levels, and break times. However, it turned out that what motivated the workers’ productivity was someone taking an interest in them.

When you try to measure customer satisfaction or the likelihood to recommend, the asking alone may increase or decrease the outcome. In CX this is sometimes exploited in “cuddle calls”: you reach out to customers just to show that you care, and satisfaction improves.

WHY is this fooling us all?

We are not aware that data is just a representation of a real-world phenomenon. We take the label of the data as the truth. Only when we understand how the data was generated can we interpret the resulting analysis.

WHAT can you do about it?

  • When interpreting analysis results, consider that the data might not have been generated the way you believe it was
  • Track the context of data generation (e.g. as a binary variable: 1 for “with observation”, 0 for “without observation”) and include this information in the analysis
  • Make sure you have really understood which piece of reality the data stands for.

Be Aware of How You Infer Truth

True facts: global warming correlates highly negatively with the number of pirates. The number of people drowning by falling into pools correlates with Nicolas Cage’s movie appearances. And shoe size correlates with career success.

We all heard about it: “Correlation is not causation“. The intuition to take correlation as causation is hard-wired in our brains. It is tough to resist this conclusion.

Correlation works great where just one or two things impact an outcome, AND the effect happens shortly after the action. Beyond these cases, correlation is largely misleading.

Beware the Cobra Effect

When marketers around the world need to hit their monthly sales numbers, they run price promotions. It seems inevitable that this works, as the sales numbers react immediately.

Still, it causes more harm than good: The Cobra Effect.

A good share of the additional sales are simply sales that would have happened anyway, later and at a higher price.

The net sales effect is much lower, and the profit effect questionable as margin suffers.

On top of this, competition reacts to defend its market share and pushes its own price promotions. This harms your sales and the overall market price level, leading you to the next price promotion. It’s a vicious cycle.

In the 1800s, it was said that the British Empire wanted to reduce cobra bite deaths in India. They offered a financial incentive for every cobra skin brought to them to motivate cobra hunting. But instead, people began farming them. When the government realized the incentive wasn’t working, they removed it, so cobra farmers released their snakes, increasing the population.

This is WHY the Cobra Effect fools us so successfully:

We see the immediate effect, e.g. of a price promotion. The indirect effects as well as the long-term effects are not that obvious, because in the long run other factors influence the outcome as well. Also, the effect can spread out over time.

When people do not know a solution to an obvious problem, they take the obvious solution: price promotion.

Actionism always looks like a “good” strategy in complex environments. Nobody can accuse you of doing nothing. Also, nobody can easily prove that you are wrong.

Managing a complex system takes complex analytics to understand it and self-organizing measures to address it.


Get a Free Assessment of your CX Analytics Strategy.

Book a Time with Claire to Discuss Your Situation.

Beware Assumptions

There is a joke among data scientists: “If you shoot past the deer on the left and then on the right, on average it is dead.” Believing that an average represents everyone well can be misleading.

It’s all around…

Once, we ran a marketing-mix model for a pharma sales force to determine which marketing and sales actions drive prescriptions. Conventional (linear) modeling “found” that giving product samples to doctors drives prescriptions.

When we applied a more flexible machine learning methodology, it turned out that at some point, more samples REDUCE prescriptions.

After the fact, this is clear: the doctors give the samples away. If they have too many, they will hand out samples first instead of prescribing.

Summarizing always comes with assumptions. Those assumptions are in many (if not most) cases WRONG.

To demonstrate the effect, statistician Francis Anscombe put together four example data sets in the 1970s, known as Anscombe’s Quartet. Each data set has the same mean, variance, and correlation.

However, when graphed, it becomes clear that the data sets are totally different. Anscombe wanted to make clear that the shape of the data is as important as the summary metrics and cannot be ignored in the analysis.
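You can verify Anscombe’s point directly. Below are two of his four published data sets: the summary statistics agree to two decimal places, even though set I is a noisy line and set II a clean curve.

```python
from statistics import mean

# Anscombe's Quartet, sets I and II (published values; x is shared).
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = mean(a), mean(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    var_a = sum((u - ma) ** 2 for u in a)
    var_b = sum((v - mb) ** 2 for v in b)
    return cov / (var_a * var_b) ** 0.5

# mean(y1) and mean(y2) are both about 7.50, and both correlations
# with x are about 0.816 - yet a scatter plot reveals totally
# different shapes.
```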

It can be misleading to look only at the summary metrics of data sets. This applies to parametric statistical modeling as well: its parameters summarize a pre-assumed property (primarily a linear relationship).

WHY is this strategy so successful at fooling us all? Our world is complicated enough. We have a desire to make it simple. Simple is beautiful to us. We believe what we want to believe: a simple, plausible explanation.

Confounder at work

When you take the NPS ratings of customers and correlate them with the customers’ later development (whether they churn or even buy more), you will repeatedly be surprised.

What we see is that it often hardly correlates, for several reasons. One reason is the so-called Simpson’s Paradox.

When customer segments with a higher upsell potential at the same time give more critical ratings, it will mess up your correlation.

In the 1970s, Berkeley University was accused of sexism because female applicants were less likely to be accepted than male ones. However, when the university tried to identify the source of the problem, it found that for individual subjects, the acceptance rates were generally better for women than for men.

The paradox was caused by a difference in what subjects men and women were applying for. A greater proportion of female applicants applied to highly competitive subjects, where acceptance rates were much lower for both genders.

The Simpson’s Paradox is a phenomenon in which a trend appears in different groups of data but disappears or reverses when the groups are combined.
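The paradox is easy to reproduce. The numbers below are invented but mimic the Berkeley pattern: women are accepted at a higher rate in every department, yet at a lower rate overall, because more women applied to the competitive department.

```python
# Invented acceptance data: (accepted, applied) per department and gender.
data = {
    "easy dept": {"men": (80, 100), "women": (9, 10)},
    "hard dept": {"men": (2, 10),   "women": (30, 100)},
}

def rate(pairs):
    """Pooled acceptance rate over a list of (accepted, applied) pairs."""
    accepted = sum(a for a, _ in pairs)
    applied = sum(n for _, n in pairs)
    return accepted / applied

# Within every department, women have the HIGHER acceptance rate ...
per_dept_gap = {
    dept: g["women"][0] / g["women"][1] - g["men"][0] / g["men"][1]
    for dept, g in data.items()
}

# ... yet pooled across departments, the trend reverses.
overall_women = rate([g["women"] for g in data.values()])  # 39 / 110
overall_men = rate([g["men"] for g in data.values()])      # 82 / 110
```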

It works because humans are hardwired to believe in correlations: “When something consistently happens along with something else (correlation), there must be a cause-effect relationship of some kind.”

Correlation leads us astray for several reasons. One is highlighted by the Simpson’s Paradox: the influence of a confounding effect.

If there is something that influences the cause (e.g. the NPS rating) and the effect (e.g. customer value, churn, upsell) at the same time, then correlation (as well as any model that excludes the confounder) can be wrong.

WHAT can we do about it?

In a business context, whenever possible, avoid jumping from correlation to conclusion.

Instead, use methods designed to infer causality. They are called “causal analysis”, and the latest technology is “causal machine learning”.

In situations where proper analytics is not possible, at least make yourself aware of how fragile your learning is. Try to hypothesize other explanations for the correlation. Evaluate possible A/B testing options.

A warning sign is when not only the correlation but also a non-correlation can be wrapped in a nice story.

My advice on “Indirect (Cobra) Effects”: beware of actionism. If you are not sure, doing something can be more harmful than “wait & see”. The latter is an established strategy in medicine and should be in business too.

There are well-established methods that can bring light into the darkness. If you educate decision-makers that it takes proper analytics to see the root causes of effects, then causal models will become standard practice.

My advice on “Beware Assumptions”: Take a look at raw data first. It will not answer your overall question but can quickly spotlight wrong assumptions you are making.

Practice humbleness. Humans tend to overestimate the validity of what they know, big time. Be aware that most of what we know about business will turn out to be wrong (or oversimplified) in the future.

Machine learning is made to model input-output relations with the least amount of assumptions. Causal machine learning has the framework and algorithms to get the insights you are looking for.

My advice on keeping confounders under control: the Berkeley example suggests that it is enough to split the KPI comparison into a two-dimensional table. But this is deceptive.

It needs a good amount of fortune to find the hidden confounder this way. Mostly, you don’t know what you don’t know.

There can be dozens of variables that may turn out to be confounders. That’s why causal machine learning is the way to go.

This article further elaborates on how you can spot causation

Truth-hacking – the art & science of the 21st century

Being a Truth-hacker can be hard. Don’t become one if you can not handle uncertainty. Don’t become one if you do not have a passion for truth.

My passion is based on my conviction that it’s unethical and unfair not to aim for truth.

It’s unfair to your colleagues to have hidden agendas, it’s unfair to shareholders who invest hard-earned money, it’s unfair to customers who are those who pay your check.

Truth-hacking can be learned. It is a “simple” three-step process:

  1. Control How Your Data Is Sampled
  2. Understand What Your Data Really Mean
  3. Be Aware of How You Infer Truth

Think about how this would make the world a better place!

What if you don’t have to fool yourself anymore with ludicrous fact-based stories? Wouldn’t it feel better too?

What if middle management could no longer trick company leaders and investors with questionable fact-based explanations? Investments would flow to uses that thrive instead of making the shady rich.

What if politicians could no longer blind voters with cherry-picked facts? Voters would elect politicians who truly drive prosperity.

If you are now passionate about truth-hacking too, please spread the word.

Share this article not just with your friends but with those who you really want to adopt this art too.

Share this article and sign up to get my upcoming articles elaborating on these topics.

– Frank


Beware Storytelling, Practice Truth-telling

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on:  29.04.2021 * 9 min read

Storytelling is a crucial skill for “selling” insights internally, but it comes with a risk: storytelling works just as well when the insights are not true.

“It was a hot summer day when I got a call from David….” This is how I usually start my conference presentation and keynotes, simply because stories suck the audience’s attention. You merely want to know what’s next.

Like a pleasant song, it feels painful when it stops. Your brain sings the song along even then. Same with stories.

It was 2008 when I was traveling in Russia, and I was fortunate enough to join someone’s class reunion. What I witnessed was unexpectedly amazing. Many times during the evening, someone stood up, raised his glass of vodka, and told a story. It always began with a random occasion like…

“This morning I went to the shower, and it was hard to calibrate the water temperature …(then interpreting this into an analogy) … Isn’t it like in life?… You need time … But once you find an optimal mix, it’s such a pleasure. It’s like with people, once I find you — my friends, I don’t want to miss you anymore “.

I learned that the Russians are pure naturals in storytelling. It was so emotionally intense … And the perfect validation of the point — Storytelling is so powerful.

FREE LASER-TRAINING: The 3Q-METHOD™ for CX-INSIGHTS

Crystal Clear CX-Insights with 4X Impact of Action
60-minute Condensed Wisdom from The World’s #1 CX Analytics Course

BECAUSE storytelling is so powerful, it is dangerous.

Please check the following statements and figure out if you agree with them:

  • Differentiating our brand is a vital marketing task
  • Loyalty metrics reflect the strength not the size of our brand
  • Retention is cheaper than acquisition
  • Price promotion boosts penetration, not loyalty

I bet you agree, since countless great success stories have been written about these statements over the last decades. Still, none of them is true. Please bear with me!

In my college days, I made a strange observation. Students with language majors were (not surprisingly) very eloquent. But their whole argumentation and flow of reasoning when talking about what matters in life seemed odd to me, full of incongruent thoughts and explanations.

This was astonishing to me, as I learned that language is the operating system that runs the thought process. Like math runs on numbers and variables, rational thinking runs on words and language. Fewer language skills or words = less elaborate thinking possible.

Although this might be true, it turned out that a wealth of eloquence can simply be misused to camouflage missing logical rigor and a lack of real meaning.

People who are trained to judge their own statements by “how it sounds” rather than “what it exactly means” lose the ability to produce content that is actually true.

The same is true of storytelling. For someone with great insights, it is the key to transforming those insights into action, because you need to win over peers and business partners.

At the same time, storytelling is excellent to camouflage non-sense.

Storytelling is like nuclear power.

Nuclear power can generate electricity for the prosperity of our society. But in the wrong hands, it can cost millions of lives.

Have you ever thought about how to save your organization from the “BS army”?

Storytelling has become an art of its own. Besides the story structure, the sequencing of a Hollywood movie, and the use of metaphors to link back to existing memory structures, it is built on one simple yet super powerful trick: “plausibility.”

I served as a Sales & Marketing Director in my former business life, and I had monthly performance reviews with my sales reps. It was a rainy Friday which I will never forget…

Karl showed me his dashboard, and I asked him, “Hmm, volume for X is down. Why is this?” He quickly gave me a super plausible story as an explanation. Suddenly, I realized we had been staring the whole time at the previous year’s data.

After switching to the current data, the volume was suddenly up. Karl again had a remarkable story at hand.

“How worthwhile are explanations really?” — I suddenly realized.

Storytelling gives holistic examples that illustrate the theory (= the insight). With this, the theory (=the insight) feels plausible.

Here is the catch…

Plausibility is USELESS

“Targeting always improves ROI” — right or wrong?

Sure, this is a plausible statement. The more likely the addressed people are to respond positively, the better the effect will be.

But still, it’s plain wrong, as you can read in the widely published work at the Ehrenberg-Bass Institute.

We (at Success Drivers) once did a marketing mix modeling for a mobile carrier and included the ad channel of digital affiliate ads in the mix. The client could not believe its eyes: the channel that everyone dismissed as “junk ads” showed by far the highest ROI.

Sure, it was “junk” because affiliate ads are not targeted. But they are super cheap. Furthermore, nearly every internet user needs a mobile carrier. Affiliate ads reach not ideal, but still relevant, targets. The low ad cost overcompensates for the lack of targeting.

The gains from targeting must be traded off against rising costs. If everyone rushes toward targeting, it will have a lower ROI than non-targeting.

Reality is complicated… 😉

End of the game: the mobile carrier stopped working with us and switched to providers that are happy to produce “plausible results”.

Checking for plausibility means checking for existing beliefs.

It is helpful in the operational and tactical contexts. It’s valuable if you don’t have time to search for the truth, but you need to make decisions fast.

In the context of customer insights, plausibility can be DEVASTATING.

The role of customer insights is to create new knowledge, to challenge and change existing beliefs.

If your new insights only pass the test when they comply with existing beliefs (= when they are plausible), the wealth of new insights you are going to learn will be poor.

Join the World's #1 "CX Analytics Masters" Course

Free for Enterprise “CX-INSIGHTS” Professionals

Plausibility is the end of the insights.

This is not to blame anybody; it is to open our eyes. The “plausibility” superstition has a long history that actually has its roots in the social sciences.

When it comes to applied statistics, still today, students learn to proceed theory-led. It will take another article to clearly prove that this whole research approach is more harmful than helpful in today’s world.

That is practical for publishing scientific papers. But it neither helps make a relevant practical impact in real life nor gain genuinely unique, and thus valuable, insights.

“Nothing is more practical than a good theory” (Kurt Lewin) was the mantra of the professor who taught me marketing and statistics. While the point is certainly valid, it is abused by academic and applied researchers alike.

The problem is NOT that our “good theories” are not used. The problem is that we do not use proper methods to DISCOVER good theories (= insights).

I know. This now violates the existing beliefs of most of you. You are skeptical. That’s fine. Make up your own mind. (And challenge me if needed to write a more in-depth article about this)

Today’s statistical practice of causal modeling is built on this impractical theory-led approach and works like this:

Collect all hypotheses that are backed by theory. If a hypothesis is mere speculation, leave it out of the model. Then test the model with a statistical modeling approach (ranging from regression and econometric modeling to structural equation modeling) to validate the relationships for which you already have theories.

The only meat on the bone you get is the linear strength coefficient of each relationship. No wonder researchers are starving for richer insights.
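To make that concrete, here is a minimal sketch, with made-up numbers rather than data from any real study, of what a theory-led model hands back: you feed in the hypothesized drivers, and all you receive are linear strength coefficients.

```python
# Minimal sketch (illustrative data only): a theory-led linear model is
# fitted, and the entire "insight" is two strength coefficients.

def fit_two_driver_ols(a, b, y):
    """Fit y = beta_a * a + beta_b * b by ordinary least squares
    (normal equations for two predictors, no intercept)."""
    saa = sum(x * x for x in a)
    sbb = sum(x * x for x in b)
    sab = sum(x1 * x2 for x1, x2 in zip(a, b))
    say = sum(x1 * x2 for x1, x2 in zip(a, y))
    sby = sum(x1 * x2 for x1, x2 in zip(b, y))
    det = saa * sbb - sab * sab
    beta_a = (say * sbb - sab * sby) / det
    beta_b = (saa * sby - sab * say) / det
    return beta_a, beta_b

# Two hypothesized drivers (say, "service quality" and "price perception")
# and an outcome that is, by construction, y = 2*a + 3*b.
a = [1, 2, 3, 4]
b = [2, 1, 4, 3]
y = [2 * x1 + 3 * x2 for x1, x2 in zip(a, b)]

beta_a, beta_b = fit_two_driver_ols(a, b, y)
print(beta_a, beta_b)  # prints 2.0 3.0 -- and that is all you get
```

Everything beyond those two numbers, such as interactions, thresholds, or indirect paths, stays invisible by design.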

And they roam to those who promise “richer insights” — the story and fortune-tellers.

The main reason why few use conventional causal modeling in practice is not that it is clunky and complicated. They don’t use it because it merely validates what you already know, and that is not very helpful.

Now, if the plausibility check is a blunt sword, how can we test theories and potentially new insights?

The two types of insights: there are facts (descriptives) and relationships between facts (cause-effect relations). The latter is what businesses are unknowingly asking for: “Which action A will lead to outcome B?”

The art and science of gaining those insights can be labeled “causal modeling”. This talk discusses it in more detail, and this article explains it in layman’s terms.

JUST TRYING to be more causal will drive huge bottom-line impact.

Every baby step toward better causal modeling brings you closer to the truth. It’s a journey. You can’t do it wrong, only more or less well.

The only mistake you can make is not doing it and reverting to the usual “plausible” procedure of looking at and comparing facts.

We need to accept that, in truth, we might be wrong. Actually, we are kind of wrong all the time. With fancy stories, we make ourselves feel better and camouflage the fact that we live in a big bubble of false beliefs.

Endless examples pop up if you take your “funnel glasses” off. Do you remember these statements from the beginning of the article?

  • Differentiating our brand is a vital marketing task
  • Loyalty metrics reflect the strength, not the size of our brand
  • Retention is cheaper than acquisition
  • Price promotion boosts penetration, not loyalty

All very plausible statements, right? Most are backed by “theory”, and all are elaborated in established marketing books by Kotler & Co.

The world’s largest marketing science institute, Ehrenberg-Bass, has access to the most comprehensive datasets, spanning all industries and the largest brands and corporations in the world. They found no support for those statements and often found proof of the opposite.

Just because it’s plausible, just because marketing textbooks cite it, doesn’t mean it is true.


Get a Free Assessment of your CX Analytics Strategy.

Book a Time with Claire to Discuss Your Situation.

Up Your Storytelling Game: Practice TRUTH-TELLING.

Storytelling is the art of wrapping a theory so that it feels easily understandable and true. It feels like the natural way to prove a point.

But it’s not. It’s an illusion.

Now, what do we do with this new insight? Depending on your role in a company, the learning will be different.

TAKEAWAY FOR INSIGHT LEADERS:

· Continue the art of storytelling but add the art of proving the truth to the mix

· If you want to be an ethical leader, do not align your research with what creates the best stories. Remember: if you wish, you can make ANYTHING fuel a great story.

· Instead, focus on creating true insights. (sounds self-evident, but it’s the exception)

· Educate business partners on truth-telling instead of simply selling them what they ask for (plausible stories)

TAKEAWAY FOR BUSINESS PARTNERS:

· Challenge your insights leaders to provide evidence of (causal) truth.

· Every time they come up with facts, comparing or correlating them, stand up and shout “BS”. Or, if you have manners, tell them you don’t buy it because of the apparent risk of spurious findings.

· Embrace insights that VIOLATE existing beliefs. Take them as an opportunity to learn and grow.

I’d like to close with three simple takeaways to guide you on your way from storytelling toward truth-telling:

1. Be suspicious of plausible stories

2. Be aware that most of what you know is wrong

3. Be CURIOUS TO DISCOVER CAUSAL INSIGHTS

“If you would be a real seeker after truth, it is necessary that at least once in your life you doubt, as far as possible, all things.”

– René Descartes


Stay suspicious,

Stay aware,

Stay curious,

Frank


Correlation is not causation — but what is causal?

Founder of CX-AI.com and CEO of Success Drivers
// Pioneering Causal AI for Insights since 2001 //
Author, Speaker, Father of two, a huge Metallica fan.

Author: Frank Buckler, Ph.D.
Published on: 15.04.2021 * 9 min read

Correlation, causation, statistics: all this sounds boring, complicated, and impractical. In this article, I’ll prove that NONE OF THIS IS TRUE.

Since the beginning of humanity, we have roamed through savannahs and ancient forests and gained causal insights day in day out.

One tried to light a fire with sandstone; it didn’t work. One used a sharp stone to cut open a mammoth; it worked. One tried these red berries and died within an hour.

Correlation works excellently in simple environments. It works great when there are only a handful of possible causes AND the effect follows shortly after.

Fast-forward one million years: day in, day out, we roam through leadership Zoom meetings and business dashboards.

“David did this; the next year, sales dropped. Let’s fire him.” “NPS increased. Great job, our strategy is working.”

Is it really that easy?

We still use our Stone Age methods. We use them to hunt for causal insights and to justify the next best actions. Actions that cost millions or billions in budget.


Business still operates like Neanderthals

If you invest today in customer service training, you will not see results right away. It may even get worse for a while. Later, dozens of other things will impact the overall outcome: new competitors, new staff, new products, new customers, new virus mutations, or even a new president.

You cannot see, just by looking at it, whether an insight is right or wrong. Even if you put the insight into action and try it out, you will not witness whether it works.

Dozens or hundreds of other factors influence outcomes. Even worse, activities take weeks, months, or years to culminate into effects.

I believe people know this. But they don’t have a tool to cope with it. This is why everyone falls back into Neanderthal mode, like a fly hitting the window over and over again, just because it knows no better way.

Businesses live on Mars, Science on Venus

It was a sunny September day in 1998. I was sitting in the final oral exam of my master’s degree with Professor Trommsdorff, THE leading marketing scientist in Germany at that time.

He asked me, “What are the prerequisites for causality?” I answered what I had learned from his textbook:

  1. Correlation: the effect regularly follows the cause.
  2. Time: the cause happens before the effect.
  3. No third causes: no obvious external reasons why they correlate.
  4. Supported by other theory.

Even during this exam, I knew that this definition was useless for real life.

Here is why…

Point #1 — Correlation: most NPS ratings do NOT correlate with the resulting customer value, yet we can still prove a significant causal effect. Below you will find a great example of why this is. Correlation is NOT a prerequisite of causality; that is only true in controlled laboratory experiments.

Point #4 — Theory: How can you unearth new causal insights if you always need a supporting theory first? This is useless for business applications. Actually, it also holds back progress in academia.

One underlying reason for this useless definition is that academia has different goals than business. Academia aims to find the ultimate truth and therefore sets more rigid criteria (spoiler: this helps for testing causality, but not for exploring it).

For businesses, the ultimate truth is not relevant. What you want is to choose actions that are more likely to succeed and less likely to be costly.

Because today “causality” is associated with “ultimate truth”, academia avoids the word like the devil avoids holy water, from statistics all the way to marketing science.

Because science largely neglects causality, it is not correctly taught in universities and business schools.

This is why businesses around the world are still in a Neanderthal mode of decision-making.


Causality in business equals better outcomes

Question: What are the most crucial business questions that need research? Is it how large a segment or market is (descriptive facts), or which action will most effectively lead to business outcomes?

Exactly: this is the №1 misconception in customer insights. Everyone expects “insights” to be unknown facts that we need to discover.

In truth, the crucial insights are mostly not facts but the relationships BETWEEN the facts. It is the hunt for cause-effect insights.

But how can we unearth such insights?

Here is a practical understanding of causality that enables the exploration of causal insights from data. At its core, it relies on the work of Clive Granger, who was awarded the Nobel Prize in Economics in 2003 for this work.

In 2013 we took a look at brand tracking data for the US mobile carrier market. T-Mobile wanted to find out why its new strategy was working. The question: was it the elimination of contract terms, the flat-fee plan, or the iPhone on top that attracted customers?

Causal machine learning found that NONE of the many well-correlating factors was the primary reason. It was the Robin Hood-like positioning as the revolutionary brand “kicking AT&T’s butt for screwing customers”.

A “driver” directly causes an “outcome” if it is mutually “predictive”: when looking at all available driver and context data, this particular driver improves the ability of a predictive model to predict the outcome. So did the new positioning perception for T-Mobile.

Even if every driver correlates with the outcome, the model may need just one of them to predict it. This one driver is, as Granger showed, most likely the direct cause.
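Here is a toy sketch of that predictive notion of causality. The data and variable names are invented for illustration (this is not T-Mobile’s actual tracking data); the point is only the mechanism: the candidate driver that most improves a predictive model of the outcome is the best causal candidate.

```python
# Toy sketch of the "causal = predictive" idea: a driver matters if adding
# it reduces the error of a predictive model for the outcome.

def prediction_error(x, y):
    """Mean squared error of a simple linear fit y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return sum((yi - (slope * xi + intercept)) ** 2
               for xi, yi in zip(x, y)) / n

positioning  = [1, 2, 3, 4, 5, 6, 7, 8]   # candidate direct driver
feature_buzz = [2, 1, 4, 3, 6, 5, 8, 7]   # correlated, but noisier, driver
outcome      = [2 * p for p in positioning]  # toy new-customer intake

n = len(outcome)
baseline = sum((yi - sum(outcome) / n) ** 2 for yi in outcome) / n  # no drivers
err_positioning = prediction_error(positioning, outcome)
err_feature = prediction_error(feature_buzz, outcome)

# Both drivers correlate with the outcome, but only one removes almost all
# of the prediction error -- that one is the best direct-cause candidate.
print(err_positioning, err_feature, baseline)
```

The same comparison scales up to many drivers and flexible models; only the bookkeeping gets bigger.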

Machine Learning revolutionizes causal insights

95% of new grocery product launches do not survive their first year, although brands have professional market research departments.

We let causal machine learning run wild on a dataset of all US product launches: their initial perception, ingredients, pricing, brand, and repurchase rate, and the effect on survival and sales success.

Our client was desperate, as nothing correlated and classical statistical regression had no explanatory power.

It turned out that reality violates the rigid assumptions that conventional statistical models require. Machine learning suddenly could predict launch success with 80% accuracy, and it could even explain it causally. What it takes for launch success is to get ALL success factors into good shape. You cannot compromise on any of them.

The product needs to be in many stores (1), the pricing must be acceptable (2), the initial perception must be intriguing (3), and the product must be good enough to cause repurchases (4). Only when all of this comes together will the product fly.
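A toy sketch of this “no compromise” pattern, with hypothetical pass/fail factors: any single factor barely predicts success, while the rule “all four factors in shape” predicts it perfectly, exactly the kind of interaction an additive regression cannot express.

```python
from itertools import product

# Toy illustration (hypothetical data): launch success requires ALL four
# factors (distribution, pricing, perception, repurchase) to be in shape.
launches = list(product([0, 1], repeat=4))   # every pass/fail combination
success = [int(all(f)) for f in launches]    # success only if all four pass

def accuracy(predictions):
    return sum(p == s for p, s in zip(predictions, success)) / len(success)

# Predicting from any single factor alone is barely better than chance...
acc_single = accuracy([f[0] for f in launches])   # e.g. distribution only
# ...while the flexible "all factors in shape" rule is perfect.
acc_all = accuracy([int(all(f)) for f in launches])

print(acc_single, acc_all)  # prints 0.5625 1.0
```

A linear model sees four weak main effects here; a flexible model can discover the conjunction.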

A driver is causal if it is predictive. Machine learning now enables us to build much more flexible predictive models. We no longer need to assume that the factors simply add up (as in regression).

We can have machine learning find out how exactly a cause unfolds its effect. Whether additive, multiplicative, a nonlinear saturation, or a threshold effect, machine learning will find it in the data.

If the predictive model is flexible, e.g. if it can capture previously unknown nonlinearities, it improves predictability. That’s what AI and machine learning can do today.


Causal insights require a holistic approach

Coming back to the T-Mobile example: none of the new features was found to be the direct cause of success. Does this mean they were useless?

Not at all. New features like “no contract binding” were fueling the Robin Hood perception. The feature perceptions proved to be predictive of the positioning perception. This is called an indirect causal effect.

A driver can cause the outcome by indirectly influencing the direct cause of the outcome. That’s why you need a network modeling approach.

The whole philosophy of regression and key driver analysis is a simple input-output logic — and it leads to bad, biased, misleading results.

Nothing in this world is without assumptions

…but we should use them only as a last resort.

Often we see that NPS ratings do not correlate with increased customer value. The picture below shows customers as data points: the NPS rating on the horizontal axis and the subsequent change in cross- and upselling on the vertical axis.

Overall, the two do not correlate. That’s what we actually see in most datasets: NPS has a hard time correlating with cross- and upselling as well as churn. But not because it doesn’t work.

Often there are high-value segments that tend to be more critical when rating. When their rating improves, cross- and upselling increase even more, as these are high-income segments.

Within each segment, the NPS rating correlates with customer value; overall, it does not.
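The segment mechanism above can be sketched in a few lines of code. The numbers are made up, but the pattern is the one described: within each segment, NPS and upsell change move together, while pooled across segments the correlation disappears (here it even flips negative).

```python
# Toy sketch of the segment effect: within each segment, NPS rating and
# upsell change correlate strongly; pooled together, they do not.
# The numbers are illustrative, not real NPS data.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# High-value segment: critical raters (low NPS) but strong upsell potential.
nps_high, upsell_high = [3, 4, 5, 6], [10, 11, 12, 13]
# Standard segment: generous raters (high NPS) but little upsell potential.
nps_std, upsell_std = [7, 8, 9, 10], [1, 2, 3, 4]

within_high = pearson(nps_high, upsell_high)   # strongly positive
within_std = pearson(nps_std, upsell_std)      # strongly positive
overall = pearson(nps_high + nps_std, upsell_high + upsell_std)

print(within_high, within_std, overall)  # segment is the hidden confounder
```

If the model never sees the segment variable (or a proxy for it), the pooled picture is all it has, and it is the wrong one.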

If your causal model did not have the segment information, nor any other information that correlates with the segment, THEN…

…your model is only true under the assumption that no significant third factors (so-called confounders) influence cause and effect at the same time.

Granger called this in his work the “closed world” assumption.

There is a last causal assumption to discuss:

Let’s take the NPS rating data again. You might be tempted to correlate or model it against the customers’ revenue.

Customer revenue is an aggregate of the last year’s purchases, but NPS captures loyalty only as of today. Such an analysis would assume that the present can cause the past.

Of course, you need to make sure, by all means, that the cause likely happens before the effect.

Often we do not even have time-series data. Then you need to judge the causal direction using other methods, such as the PC algorithm used in Bayesian networks, additive-noise modeling methods, or, as a last resort, an assumption based on prior knowledge.

Neanderthals Become Plunderers

When I speak about causality in talks, I typically hear the objection: “Yes, but it’s impossible to be sure that those two assumptions are met.”

Fair point. But what’s the alternative?

Guesswork?

BS storytelling?

Back to the Neanderthals’ spurious correlations?

This is so hard to accept: while insights about facts are obvious, insights about (cause-effect) relationships can NOT ultimately be “proven”. You need to infer them from data.

When doing so, the only thing you can do is make FEWER mistakes.

The latest causal machine learning methods enable us to:

  • Avoid relying on theories as much as possible (when data is lacking, they can still be very valuable)
  • Avoid the risk of confounder effects by integrating more variables (plus other analytical techniques)
  • Avoid assuming the wrong causal direction by combining direction-testing methods with related theories about the facts

Leave the Neanderthal era in the past, take up the latest tools, and become a plunderer of insights 😊

The good news is…

You can NOT make a mistake by just starting to improve.

The benchmark is not to arrive at the ultimate truth. That’s an impossible and impractical goal. The benchmark is to get insights that are more likely to drive results.

Causation is an endlessly important concept that everyone seems to avoid — simply because it’s not understood.

You can drive change by educating your peers, colleagues and supervisors. The first step is to share this article. 😉

“There is nothing more deceptive than an obvious fact”

Sherlock Holmes

Literature:

Buckler, F. / Hennig-Thurau, T. (2008): Identifying Hidden Structures in Marketing’s Structural Models Through Universal Structure Modeling: An Explorative Neural Network Complement to LISREL and PLS, in: Marketing Journal of Research and Management, Vol. 4, pp. 47–66.

Granger, C. W. J. (1969). “Investigating Causal Relations by Econometric Models and Cross-spectral Methods”. Econometrica. 37 (3): 424–438. doi:10.2307/1912791. JSTOR 1912791.
