When it comes to IT management and customer expectations, no news is good news. If the telephones are not ringing at the help desk, then all is well, or at least we hope so. Typically in IT management, the number of complaints received is the metric used to baseline whether the job is being done correctly.
Unfortunately, this is not always true. There is always the silent majority who will stop using the service without making any effort to resolve the issue. I remember going to a sales symposium on customer satisfaction and hearing a story about a gas station on a busy motorway that was suffering from declining revenues. The owner tried everything. He painted the station, put up new signage and added hot food, but still the losses continued.
It was not until his gas station was on the verge of bankruptcy that a casual conversation with a friend about his losses revealed that the problem with his business was not a lack of service or amenities, but the washrooms. He had a reputation for having the worst-kept washrooms; the owner had simply assumed the cleaners were doing their job. Such was the lasting impression the washrooms left on customers that, rather than complaining, they just drove another five miles to the next service station.
In a service provider environment, this is known as customer churn. Service providers will do everything in their power to lock a customer into a contract in order to reduce churn. Without contracts, customer perception of reliability and performance can change in an instant, triggered by something as simple as a newspaper article stating that a new provider has the most advanced network and the fastest speeds.
Furthermore, a network outage that may be beyond the control of the service provider can have a negative impact. Mobile phone and internet subscribers are continuously looking for two things: best performance and cost-effective pricing. This is also true of any service-oriented business. Maintaining customer satisfaction levels whilst increasing growth is a difficult balancing act. If you scale back support resources, customer satisfaction suffers, and that has a direct and negative impact on profitability.
Many service providers turn to self-service or automation tools to realise maximum profitability whilst reducing administration and support staff. A good example of this type of automation is online flight and hotel booking systems. These systems have matured to a point where they have all but pushed high street travel agents out of business. They have managed to achieve this by slashing operational costs and passing the savings on to the customer.
However, even these systems are facing stiff competition as other operators join the online service revolution. Price aside, customer satisfaction can now be measured in terms of the success rate of payment confirmation, website response, and automated transaction updates. The obvious metric for gauging customer satisfaction is repeat business, whether that is a pre-pay mobile user topping up an expired balance or a returning customer at an e-tailer. Existing customer satisfaction is the easiest to monitor.
Service Related Issues
The crucial time to manage customer satisfaction from a service experience perspective is when a customer or an end user has experienced an issue. Sometimes a service-related issue will only affect a single user; other times it will affect an entire group. If a service-related incident is not resolved in a satisfactory manner, dissatisfied users will usually fall into one of two categories: vocal or silent. Vocal users will let as many people as possible know about their bad perception of the service, regardless of whether it is justified.
They will write to the newspapers, escalate their cause with management, and the service-related issue becomes the most important thing to them. Silent users, on the other hand, will quietly move to other service providers if they have the choice. If they cannot do this for contractual or policy reasons, they will wait until an opportunity presents itself to object to the service they are receiving. In this situation the response is usually more calculated and specific. Vocal users are an asset in this regard, since they can highlight a deficiency in a service delivery model.
However, it is also important to remember that there is a certain breed of end users who are never satisfied. They will continue to consume the maximum amount of support resources at the expense of other users. In most cases the service issues they are experiencing are self-generated, or are caused by a lack of understanding.
Customer surveys are a useful mechanism to gauge how a service is performing. In my experience, customers are likely to be honest in a survey when it is written and they have the ability to submit it anonymously. A customer services representative (CSR) calling after an incident and asking questions is not effective, because the customer, answering a live representative of the organisation, will respond according to the agent's demeanour.
Even if a customer has experienced terrible service, if the survey agent is friendly and of the opposite sex, chances are the survey results will be inaccurate. Creating a written survey consisting of 50 questions is also not helpful: after the first 10 to 12 questions most people just want to get back to what they were doing, and will pick answers at random without considering the questions carefully. Survey answers are most likely to be accurate when a simple yes or no is required.
Arranging questions in this manner is more effective than collecting subjective answers from multiple-choice questions. In addition, a good cross-section of approximately 50 questions is needed to construct useful survey results. It is better to rotate the 50 questions in blocks of 10 on a daily or weekly basis, producing your customer survey response with less data per question, but with more accuracy.
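As a rough sketch of the rotation idea, the block served in a given day or week can be picked by simple arithmetic over the question pool. The question identifiers and block size here are assumptions for illustration; a real survey system would hold the questions in a database.

```python
# Sketch: rotate a 50-question pool in blocks of 10 per day or week.
# Question IDs are invented placeholders (Q01..Q50).
QUESTIONS = [f"Q{i:02d}" for i in range(1, 51)]  # pool of 50 questions
BLOCK_SIZE = 10

def block_for_period(period: int) -> list:
    """Return the block of 10 questions to serve for a given day/week number."""
    start = (period * BLOCK_SIZE) % len(QUESTIONS)
    return QUESTIONS[start:start + BLOCK_SIZE]

# Period 0 serves Q01-Q10, period 1 serves Q11-Q20, and period 5
# wraps around to Q01-Q10 again.
```

Each customer sees only a short survey, while over five periods the full cross-section of 50 questions is still covered.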
The best surveys always have numbered options like this:
How would you rate the overall experience of the issue you logged?
1=Excellent 2=Good 3=Average 4=Below Average 5=Unacceptable
Responses with a rating of 4 or 5 should automatically be tagged for a call-back to further understand the issue. This not only helps the organisation determine the root cause of the problem, but also makes dissatisfied end users feel that they are important.
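The automatic tagging rule is simple enough to sketch in a few lines. The record fields and threshold below are assumptions, not a real help-desk API.

```python
# Sketch: flag survey responses rated 4 (Below Average) or 5 (Unacceptable)
# for a follow-up call. Field names ("ticket", "rating") are invented.
CALLBACK_THRESHOLD = 4

def tag_for_callback(responses):
    """Return the responses that should trigger a call-back."""
    return [r for r in responses if r["rating"] >= CALLBACK_THRESHOLD]

responses = [
    {"ticket": 101, "rating": 2},
    {"ticket": 102, "rating": 5},
    {"ticket": 103, "rating": 4},
]
flagged = tag_for_callback(responses)  # tickets 102 and 103 get a call-back
```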
Capacity versus Quality
Customer surveys fall into two categories: capacity or quality. A capacity survey, as the name suggests, is used for planning purposes, whether for an expansion or a reduction of services. For example, a question like: Would having the option to log a support call by smart phone help improve our service?
1=Strongly Agree 2=Agree 3=Possibly 4=Unlikely 5=Not At All
Organisations will often provide services based on their perception of end-user requirements. Most of the time this leads to a complete waste of maintenance and support resources, and the question above could be a good example of this situation. A capacity survey can also draw on the infrastructure itself. For example, there is no need to ask customers how long they waited in a queue before the call was answered, or how long it took for the request to be handled; this information is already available from the contact centre's call distribution system. A quality survey, by contrast, ties directly to the customer experience rather than the supporting infrastructure. For example: Did the agent assisting you have the necessary knowledge to direct your call to the correct team?
Third Party Customer Experience and Contact Centres
Although outsourcing has become commonplace over the last decade, it has its share of advantages and disadvantages. Technology vendors and service providers may have the best intentions when teaming up with a third party, but things can go wrong. For example, if call centre services are outsourced to a third party in a foreign country, issues such as accents and a lack of local knowledge will skew the results of a customer survey, regardless of how well the issue and resolution were handled.
The service experience from the customer perspective is based on the contact centre agent, not on the service itself. When dealing with outsourced contact centres, one of the tell-tale signs of call centre efficiency is the average talk time and wrap-up time for each call. If you have the luxury of a percentage of your calls being handled by a local team and another percentage by the outsourced call centre, you can measure the difference.
If agents are taking too long on each call, two things occur: the number of available agents is reduced, creating greater queue lengths, and customers become dissatisfied with the inefficiency. Wrap-up time is the amount of time agents take to update the database or fault management system after putting the receiver down, and perhaps to brief their supervisor before taking the next call. Unfamiliar systems or slow data connections to the hosted database can cause the wrap-up time to increase to unacceptable levels.
A well-tuned system can have virtually no wrap-up time. Some contact centres I have worked with have a paper-free policy, meaning agents must enter all details into the system directly during the call, rather than writing them down and inputting the information in a wrap-up window. Call centres are great in terms of reporting in this regard, since you can see the direct benefit of agent training reducing both talk time and wrap-up time.
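The local-versus-outsourced comparison described above can be sketched as a small calculation over call records. The sample records and field names here are invented for illustration; in practice these figures would be exported from the contact centre's call distribution system.

```python
# Sketch: compare average talk time and wrap-up time (in seconds) between
# a local team and an outsourced centre. All figures are made-up examples.
from statistics import mean

def averages(calls):
    """Return (average talk time, average wrap-up time) for a set of calls."""
    return mean(c["talk_s"] for c in calls), mean(c["wrap_s"] for c in calls)

local = [{"talk_s": 240, "wrap_s": 10}, {"talk_s": 300, "wrap_s": 15}]
outsourced = [{"talk_s": 420, "wrap_s": 120}, {"talk_s": 390, "wrap_s": 90}]

local_talk, local_wrap = averages(local)
out_talk, out_wrap = averages(outsourced)
# A large gap in wrap-up time often points to unfamiliar systems or a slow
# data connection to the hosted database, rather than to agent skill.
```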
Monitoring Third Parties
When aspects of the customer experience are outsourced to third parties, gaining near real-time feedback on the customer experience becomes challenging. A third-party organisation typically manages many outsourcing projects and will pool resources to control costs. Selecting a service with dedicated staff is costly and often negates the benefit of outsourcing service delivery. Customer feedback therefore becomes even more critical when using third-party resources for service delivery.
Cost versus Benefit Monitoring Tools
I know a lot of companies that spend a lot of money on expensive management tools and then either do not use the products at all, or use only 15% of their capabilities. This occurs for many reasons. To begin with, a business may not have a comprehensive understanding of the product. Secondly, it may only use the features that are required for its business and ignore other features which could help with the overall business objective. Sometimes corporations simply cannot justify the budget for a complete network management suite. In these instances they either opt to cover just device monitoring (link up/link down) or rely on their users to notify them of a fault. There are two broad methods of network management: reactive and pro-active monitoring.
The Break/Fix - Reactive Method
The most widely adopted method of network management is reactive. It starts with the phone call, the end user saying: “Help, it's not working, come and fix it.” At the outset it is definitely the most cost-effective to set up and run.
The user calls complaining about a loss of service or a service issue. The call is assigned to a specialist, who solves the problem in the following manner:

- Are any other users affected (yes/no)? Is it a local problem or a global one?
- Is the issue a one-off or is it constantly happening (intermittent/reproducible)? Will I need to bring in other tools (network analyser, enable debugging) to capture the problem when it occurs again, or is it going to be an easy fix?
- Has this issue happened in the past? Ask other colleagues.
- Re-install the program, replace the PC, engage the application team to check account settings.
- Solve the problem by a process of elimination.

You will notice that a lot of people are involved in this troubleshooting scenario.
The end user plays a part in the troubleshooting process by providing updates and notifications. Perhaps a third-party support company is also involved in eliminating the hardware as the cause. Finally, the application team may be engaged to investigate the application settings. This may be an extreme example; however, the number of lost productivity hours over a year and the actual support cost per incident make this type of support extremely expensive over time.
The Squeaky Wheel
Have you ever heard the statement, 'the squeaky wheel gets the oil'? It means that only when we hear about an issue do we deal with it; if we are not told about it, we do nothing. A long time ago, I had a large banking customer with a sizeable branch network. One of the branches had been operating for some time without any issues, and as part of an upgrade, new hardware was required since the old PCs were reaching end of life.
When the new hardware was delivered and set up, one of the staff commented: "I would have thought these new PCs would be much faster, but they are not; they are just as slow as always." The support side of the bank swung into action and started investigating why the new PCs were slow. They found it would sometimes take approximately two minutes just to log in to a remote banking application and another two to three minutes to save a customer loan application.
Further investigation revealed that every wide area network (WAN) application at the branch performed in this fashion. After a lengthy process of elimination it emerged that incorrect settings on the switch port connecting the branch network to the WAN router had been causing the slowdown for nearly two years. Since the users did not complain, nothing was done about it. Imagine the lost productivity of staff who just accepted that the system was slow, and that's the way it was.
Interpreting Network Management System Information
Another reason organisations initially get excited about a new NMS and then stop using it is that staff do not understand or trust the automated diagnosis information, or do not know how to act upon it. Going back to the bank scenario, we got in touch with the department responsible for the bank's network monitoring, which had an impressive wall-board displaying a map of the entire network, including the branch WAN router objects coloured in green. I wanted to understand why the NMS had not detected a problem at the branch.
When I mentioned the problem found at the branch to the NMS supervisor, she said they always got cyclic redundancy check (CRC) errors generated from the network switch at the branch, which they just filtered out because the link was green, meaning it was up. What they did not realise was that the CRC errors were a tell-tale sign of the problems at the branch. The NMS team was trained to react to link up/link down messages (green versus red), not warnings, so it did nothing about them. It was a case of 'why send a fire truck when nobody has reported a fire?'
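Rather than filtering such warnings out wholesale, a monitoring team can count them per interface and raise a pro-active alert once they pass a threshold. The event format and the threshold below are assumptions for illustration, not features of any particular NMS.

```python
# Sketch: count CRC warnings per interface from an NMS event stream and
# flag interfaces that exceed a threshold, even though the link is 'up'.
from collections import Counter

CRC_ALERT_THRESHOLD = 100  # errors per polling interval (assumed value)

def crc_alerts(events):
    """events: iterable of (interface, event_type) tuples."""
    crc_counts = Counter(iface for iface, etype in events if etype == "CRC")
    return {iface: n for iface, n in crc_counts.items()
            if n >= CRC_ALERT_THRESHOLD}

# Example stream: one noisy interface, one healthy one, one unrelated event.
events = ([("Gi0/1", "CRC")] * 120
          + [("Gi0/2", "CRC")] * 5
          + [("Gi0/1", "LINK_DOWN")])
alerts = crc_alerts(events)  # only Gi0/1 crosses the threshold
```

The point is that a 'green' link and a stream of CRC warnings are not contradictory; the warnings carry the signal the colour hides.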
The Body of Evidence
Most of the time a single alarm message is insufficient evidence to send in the cavalry. Multiple tools working in conjunction with each other are required before acting on a non-reported problem. In the case of the bank, a comparison of throughput performance from a similar-sized branch running the same applications, or an application response-time graph, would have provided sufficient evidence alongside the remote monitoring (RMON) alarm to open a non-reactive support ticket.
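This 'body of evidence' test can be expressed as a simple corroboration rule. The signal names and thresholds below are invented for illustration, under the assumption that branch throughput can be compared against a similar-sized peer branch.

```python
# Sketch: open a non-reactive ticket only when independent signals agree.
# Thresholds (0.5, 60 seconds, two signals) are assumed example values.
def should_open_ticket(rmon_alarm: bool, throughput_ratio: float,
                       app_response_s: float) -> bool:
    """throughput_ratio: branch throughput relative to a peer branch."""
    evidence = [
        rmon_alarm,              # an RMON alarm has been raised
        throughput_ratio < 0.5,  # under half the peer branch's throughput
        app_response_s > 60,     # application responses over a minute
    ]
    return sum(evidence) >= 2    # require at least two corroborating signals
```

With this rule, an RMON alarm alone does nothing, but an alarm combined with a throughput graph showing the branch at a fraction of its peer's performance is enough to act before any user complains.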
Moving to a pro-active support role requires a different approach and, in some cases, a different skill set. Most service providers, due to the sheer volume of traffic and number of users on their networks, depend on these tell-tale alerts to address an issue before it snowballs into a major service outage costing thousands of dollars.
Migrating to a Pro-Active Support Role
Businesses which have completely migrated to a pro-active mode of network management enjoy a common benefit: predictability. Often these organisations have identified, isolated and eliminated every unpredictable element from their network, and monitor daily operations like a seismologist monitoring an active volcano. Moving to a pro-active network management role is the Nirvana of network operations, but for many organisations getting there is the difficult part.