Recently I was given the task of running a project that will create what is known around here as a Customer Interaction Framework. In short, it’s meant to determine what our internal customers need when they do their jobs with our product, and then find a way to get useful and meaningful feedback from them. Once we figure that out, we’ll create the framework and roll it out to all the global teams, so each one can generate a feedback loop and build in improvements.
Man, that is a lot of work.
The project is meant to span the globe, yet the team comprises only five members. We decided to keep the team small and staff it with people who not only have experience in this area but are also passionate about having a dialogue with their customers and encouraging feedback. The project will focus on deliverables we know we can get good feedback on, and everyone on the team has their own area of expertise.
But one of the aims of this project is to set up a framework where we can get fairly specific feedback on certain deliverables and then act on it at a global level. If we don’t, we run a real risk of alienating our customers. The tricky part is that some people are nervous about receiving feedback because they think it will reflect poorly on them personally. We have to avoid that.
However, the prerequisite for success in this project, in my opinion, isn’t a competent or motivated team. That was a given. It’s doing our homework at the start of the project to make sure we know who we are speaking to, so we can speak their language. This entails mapping our deliverables to the project phases, the roles that come into play at specific points, and what the people in those roles need to get their jobs done. Not an easy task. But with a name like this, I never thought it would be easy.
I hope to post more updates on this topic as the year goes by.
I am helping out with a customer engagement project where we are trying to get a feedback loop built into the testing and validation process. The idea is to get feedback from the experts on our documentation before it goes out on the market. A novel idea, in many ways.
The whole effort was a result of just asking questions internally at the company as to when we need feedback and what that feedback should include. It isn’t an effort to get a focus group. It’s using our internal experts and speaking to them face-to-face about possible issues.
It turns out the internal feedback loop is lacking. Our area had never even been considered part of the feedback chain. But it wasn’t intentional neglect. As with so many companies, people just don’t think about the whole feedback loop and where it begins.
To make a long story short (this is a blog, after all), I did some hunting around, contacted everyone I knew, and finally found the right person. We talked, I explained my needs and situation, and we agreed on how the documentation could be tested to get feedback from the people who will use it the most. And all this before the product goes out to the customer. Not after it’s out on the market. Problems solved by us, not the customer.
So your organization picked you to run a feedback program. Congratulations.
When it happened to me I knew I was doomed. I had the project from hell. It had failed every year and it was now my turn to go before management and tell them all of the details surrounding the latest failure. Not exactly what you might want to hear.
No one had ever been able to get a feedback project to work in our department. But I was lucky enough to gather a great team. To have a manager that was more like a partner. To have built up some positive internal karma so I would get a bit of latitude. To have a sense of humor. And to know I needed a vision.
I first looked at focus groups. I love the interaction you can generate with a well-facilitated group. After a gentle shove from management, the focus group concept morphed into one-on-one conversations.
These would actually be easier to organize, maintain, and control, especially as we were running the program ourselves. So we sharpened up our questions. Put in a lead tracking system to manage our contacts. And put out the word that we were looking for subjects. We networked our way into a representative set of subjects. And we asked every feedback subject if they knew someone else we could talk with. This, as it turned out, was surprisingly effective.
So, when it is your turn to run your feedback program, do look at the literature, and check with the experts. But, remember that your internal resources (your team, your management, your style) can provide the edge you need to succeed.
Many organizations don’t have the luxury of using dedicated customer service professionals to collect, organize, and analyze their customer feedback. I was in that situation in the customer feedback project I ran. I did, however, have the luxury of using authors and translators to ask the survey questions. Communications professionals trained to gather and publish information. The major difference? We were asking our customers to give us feedback on the products we produced. We weren’t the service department. We were the front-line production. This was good news and bad news. The good news: we had the opportunity to finally ask our customers all of the things we had wanted to know but had never had a chance to ask. The bad news: they might actually tell us.
We worked at the questions and developed a set that would require real answers. And we talked about the kinds of responses we might receive just to be prepared. And did a bit of role playing to get a feel for the rhythm of asking questions and getting responses. Then we started.
Everyone who ran a survey discussed their experience with the group. The group was a safe enough space that we could talk honestly about the good parts and the bad parts of every discussion. And because we were doing the work ourselves, the momentum was exciting: we completed the project a month earlier than originally planned.
Doing the surveys ourselves turned out to be a good thing. It really helped us to establish ourselves as internal experts. And because of the work we did, the members of the group became leaders for an on-going department-wide feedback program.
Given the breadth and size of the organization where I ran a feedback project, we combined face-to-face and telephone surveys into a single effort. It would have been impossible to get the survey team in front of enough respondents in a manageable timeframe without using both avenues. We felt this would not adversely affect the outcome.
We were looking for feedback on our products (technical documentation) from our main customers (internal consultants). The point of the survey was to determine how we could provide consultants with better products.
The conventional wisdom was that the consultants would never talk to us – they never had in the past. All feedback our internal team received prior to our project was based on rumor and innuendo.
But even starting from there – the assumption that they would not talk to us – gave us a position to work from. It told me I had to make participation as easy as possible from the start. So this is what I did to set the stage:
- Because these were consultants who always billed their time to projects, I got a general project set up that they could book the survey time to.
- Because this was a multi-language company, we offered the survey in two languages to eliminate language issues.
- If they were in an office where we could meet, we ran the survey face-to-face; if not, we set up a phone interview. And with survey takers literally around the world, we could accommodate whatever time they had to talk with us.
Part of our success came from the attention we paid at the start. It wasn’t the strength or probing quality of the survey questions; it was that we established ground rules that made it easy to participate.
I found this list of seven ways of collecting customer feedback in the SkillSoft Corporation course The Voice of the Customer:
- Face-to-face surveys: Face-to-face surveys are easy to administer and provide you with an opportunity to spend time with your customer. They require consistent, objective questions, and expert analysis.
- Mail and electronic surveys: Surveys sent by mail or via the Web or email need formal questionnaire development and analysis. Unfortunately, these types of surveys yield the lowest response rate.
- Telephone surveys: To be effective, telephone surveys need consistent questions and require expert analysis.
- Employee input: Soliciting employee input is an excellent informal technique. However, the information is sometimes “second hand” and may be subject to personal interpretations.
- Customer complaints: Customer complaints are easy to track and analyze. They reveal trends and provide a means of checking quality and customer satisfaction. They are, however, somewhat biased because you are hearing primarily from dissatisfied customers.
- Quality call monitoring: Quality call monitoring is a good technique for a manager or supervisor. It is important to note that the results are subject to the manager’s interpretations and opinions.
- Customer advisory panels: Customer advisory panels require a great deal of coordination. They are, however, particularly useful for testing products or services, and for solving problems.
I thought the list was a helpful place to start. I wanted to put it out now and then talk about the various methods in later posts.