Author: Frank Buckler, Ph.D.
Published on: October 19, 2021 * 9 min read
First, we will learn how to deal with sparse text feedback. There are several reasons for sparse text feedback that you CANNOT control; they are simply part of your target group.
The reasons for sparse text feedback that you cannot control are as follows:

There are also some reasons that you CAN control:
Audio and video feedback is richer than text feedback. There are many studies around this, but as a rule of thumb:
“They deliver at least twice (2X) as much feedback as text feedback.”
It depends on the context, and the field is evolving quickly because customers are getting more and more used to these kinds of feedback and becoming more tech-savvy. A few years ago, a study found that the share of customers giving this kind of feedback, then around 15%, doubled in the following year as more people participated.
The acceptance of giving feedback via mobile is rapidly increasing, and it will soon be the standard. On a mobile phone, everything is available at the push of a button, while on a desktop you do not know whether a webcam and microphone are present, switched on, or muted. All of these barriers typically DON’T exist on a mobile phone, so the likelihood of getting feedback is high.
Do you know why audio or video feedback works? It works because it implies social pressure: if you record something, you feel that someone will listen to it. The process is as follows:
Active listening is an evolving technique and the methodology you should use for your open-ends. But what is active listening? It is an adaptive, real-time, individual response to feedback that encourages customers to elaborate. You get more feedback because it is much more than a pre-formulated question.
But why does active listening work? It works because of the following reasons.
There are two different approaches to, or implementation examples of, active listening that we see in the market. They are as follows:
Consider an example of in-field probing below:
This example shows that when people write only a short text, the active listening approach pops in with a prompt. A meter also shows how detailed your text is and primes you on how well you are doing.
So, this is one approach to active listening, and it depends a little on the complexity of your inputs. It can go wrong if it wrongly categorizes what has been said.
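To make the in-field probing idea concrete, here is a minimal toy sketch of how a "detail meter" and a keyword-triggered probe could work. All names, keywords, weights, and thresholds here are illustrative assumptions, not how any real survey tool implements this.

```python
def detail_score(text: str) -> float:
    """Rough 0-1 'detail meter': rewards length and longer (more specific) words.
    The 30-word saturation point and 50/50 weighting are arbitrary assumptions."""
    words = text.split()
    length_part = min(len(words) / 30, 1.0)          # saturates at ~30 words
    specific = sum(1 for w in words if len(w) > 6)   # crude proxy for specific terms
    specificity_part = min(specific / 5, 1.0)
    return round(0.5 * length_part + 0.5 * specificity_part, 2)

# Hypothetical topic keywords mapped to follow-up probes.
PROBES = {
    "delivery": "What about the delivery stood out to you?",
    "price": "How does the price compare to alternatives you considered?",
}

def choose_probe(text: str) -> str:
    """Pick a topic-specific probe if a keyword matches, else a generic one."""
    lower = text.lower()
    for topic, probe in PROBES.items():
        if topic in lower:
            return probe
    return "Could you tell us a bit more about why?"

answer = "The delivery was slow."
print(detail_score(answer))   # low score primes the respondent to elaborate
print(choose_probe(answer))   # topic-matched follow-up question
```

The risk the article mentions is visible here: keyword matching is naive, so a mis-categorized answer triggers an irrelevant probe.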
There is another, chatbot-type technique that is a little more foolproof. There is an open-end, someone writes something into it and submits it, and then a chatbot pops up and says:
“Hey, I’m a bot, and I didn’t understand what you wrote.” This way, the person writes more clearly, because the bot can be authentic and open. It can also ask whether it has understood the information correctly.
Even when the chatbot categorizes well what has been said, respondents feel it is not a hundred percent accurate, so they write more specifically about what they meant. This way, the dialogue has more power to probe for feedback, and the conversation roughly doubles the number of topics mentioned.
So far, we discussed that open-ends often result in scarce responses, so you have to do something about it. You need to make sure you apply standard rules to get your customers talking. Further, you need to collect audio or video feedback, which can be transcribed into text and categorized into topics.
Use active listening, as it also applies to audio and video feedback. You should also know that unstructured feedback is the most customer-centric way of collecting feedback, which is why it is really important to collect a lot of it.
Our Group: www.Success-Drivers.com