Define Measure Achieve. Repeat


Posts Tagged ‘Customer Satisfaction’

Are Service Management Tools Replaced for the Right Reasons?

December 1st, 2010

When we get engaged to be married, why do we focus so much energy on the wedding, which, after all, is one day, rather than on the countless days we will spend with our new significant other after the ceremony? Why, when pregnant, do we focus on the birth and not on the life we will share for the rest of our time on this planet? And why, when IT departments implement new tools or processes, do they focus on the implementation and not much beyond it?

The answer, at least in my humble opinion, is that we are slaves to a human tendency to focus on what we can control. It is easy to become transfixed by making the wedding day or the labour the best possible experience, because there is a sense of control. Putting a plan in place to manage the rest of our lives, on the other hand: well, good luck with that!

With that being said, after working with organizations on technology evaluations and deployments for over sixteen years, I can safely say that far too much time is spent picking a vendor rather than ensuring the project will be a success. If you work in IT, you probably know the routine, but to make my point: you may have been involved in painstaking web research looking for a Service Desk tool vendor, narrowing the list to about 50 (after all, it would be positively rude to exclude anyone). Then you build the RFI, a document inviting said vendors to answer a series of questions (most of which are generic enough that almost any tool can claim to accomplish them).

Then there is the RFP, the shortlist, the demos, the refined shortlist, another round of demos. Finally a decision is reached and, just as it is about to be announced… funding is pulled so the whole process can begin again next year. I have watched this play out countless times, sometimes repeatedly at the same organization over several years. Eventually a new tool is picked and work begins to implement it. Everything is great, for a while at least.

Soon enough, the enthusiasm and hope for the initiative wane as focus shifts to the next project, and after a few years of no provable ROI and no measures of what has changed, a decision is made that 'we need a new tool' and the whole un-virtuous cycle begins again.

Of course, it doesn't have to be this way. But we as IT stakeholders need to take a stand and build a 'life plan' for our new products.

My guide for avoiding the rip 'n' replace cycle is as follows:

#1 A tool implementation is the beginning of a journey, not the end of one.

Understand that the 6-12 months after you acquire a new tool are just the beginning, and recognize that budget and time will need to be allocated to keep the tool running and improving for years to come.

#2 Reporting should always be part of Phase 1.

The number one reason organizations cite for wanting a new tool is usually 'better reporting'. Yet the reporting phase is often relegated to Phase 2, to give everyone a chance to 'gather the data'. The problem with this approach is that if you have not defined your measurements upfront, how do you know you are gathering the right data?

#3 Never say you want ‘better reporting’.

'Better reporting' does not get anyone who holds the budget excited. Decide what you want to measure, and then measure it.

For example, suppose you had a personal goal of getting healthy because your doctor told you your blood pressure was too high. You might decide that the capabilities you need to achieve the goal are improved fitness and weight loss. The activities you perform are reducing your caloric intake and increasing the amount of exercise. The key measures that would track your progress toward the goal would be things like weight, waist size, blood pressure and resting pulse, as all of these have a proven impact on blood pressure. Now, if your goal was to fit into a pair of jeans you wore 20 years ago, the capabilities (diet and exercise) might be the same; however, the measures would be different.

For the IT Service Desk, the goal might be an 'Excellent' rating in customer service. The capabilities would include being able to track information consistently and being able to solicit feedback after work is performed. The measures might be MTRS (Mean Time to Restore Service), First Call Resolution rate and Customer Satisfaction survey scores. A different goal would require different capabilities and measures.
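As a purely illustrative sketch of what 'decide and measure' can look like in practice, the Python snippet below computes those three measures from a handful of hypothetical incident records. The field names (opened, resolved, resolved_on_first_call, csat_score) and the 1-5 survey scale are my assumptions for the example, not features of any particular tool.

from datetime import datetime

# Hypothetical incident records; field names and values are illustrative only.
incidents = [
    {"opened": datetime(2010, 11, 1, 9, 0), "resolved": datetime(2010, 11, 1, 10, 30),
     "resolved_on_first_call": True, "csat_score": 5},
    {"opened": datetime(2010, 11, 2, 13, 0), "resolved": datetime(2010, 11, 3, 9, 0),
     "resolved_on_first_call": False, "csat_score": 3},
    {"opened": datetime(2010, 11, 4, 8, 0), "resolved": datetime(2010, 11, 4, 8, 45),
     "resolved_on_first_call": True, "csat_score": 4},
]

# MTRS: mean time to restore service, expressed in hours.
mtrs_hours = sum((i["resolved"] - i["opened"]).total_seconds() for i in incidents) / len(incidents) / 3600

# First Call Resolution rate: share of incidents closed at the first point of contact.
fcr_rate = sum(i["resolved_on_first_call"] for i in incidents) / len(incidents)

# Customer satisfaction: average survey score on an assumed 1-5 scale.
avg_csat = sum(i["csat_score"] for i in incidents) / len(incidents)

print(f"MTRS: {mtrs_hours:.1f} h, FCR: {fcr_rate:.0%}, CSAT: {avg_csat:.1f}/5")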

#4 Achieving a goal is great, but you diminish its impact by not showing where you started.

The winner of last season's reality show 'The Biggest Loser' weighed in at 265 lbs, a weight that on its own meant the winner was still clinically obese. Yet there was an incredible sense of accomplishment and victory, because the show's producers meticulously take a baseline at the start, showing a starting weight of 526 lbs.

Buying new tools provides a tremendous opportunity to demonstrate the value of your Service Desk and the value of IT's ability to execute, and to tell a story to the people who control the budget. Yet many organizations don't take an effective snapshot of where they currently are, so the end results, however fantastic, don't do justice to the journey taken to achieve them.
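To make that point with numbers (mine, not the author's): the short snippet below contrasts reporting a result on its own with reporting it against a baseline captured before the project started. The metric and figures are hypothetical.

# Hypothetical before/after figures for one Service Desk measure.
baseline_mtrs_hours = 18.0   # snapshot taken before the new tool went in
current_mtrs_hours = 6.0     # measured a year after go-live

improvement = (baseline_mtrs_hours - current_mtrs_hours) / baseline_mtrs_hours

# Without the baseline, all you can report is "MTRS is 6 hours".
# With it, you can tell the story of the journey: a 67% reduction.
print(f"MTRS: {current_mtrs_hours:.0f} h "
      f"(down from {baseline_mtrs_hours:.0f} h, a {improvement:.0%} improvement)")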

#5 Reaching your goal is part of the journey. 

To extend my fitness analogy from above: once you have reached your goal of lower blood pressure, the whole exercise (pun intended) becomes pointless if, after it is achieved, it is no longer a focus and is no longer watched.

Similarly, organizations often do a great job with 'sprints' but then fail to sustain the effort. Reaching a goal on the Service Desk is not a destination but merely part of the journey. Once the goal has been met, we need to understand what is necessary to maintain it over time.

#6 It's not the tool, it's you.

Sometimes we just have to look in the mirror and take ownership for the situation.

In 90% of the tool replacement projects I have been involved in over the years, the IT department blames the tool for something that is not within the tool's domain to fix. There are absolutely valid reasons to rip 'n' replace a Service Desk tool; just make sure it is about cost of ownership and not 'better reporting' or 'ease of use'.


Charles Cyna

Service Desk Measurement - Be careful what you wish for!

September 29th, 2009

A mini-series on measurements that can come back to bite you…

The trouble with performance metrics is that they can actually encourage inefficiency, de-motivate resources and result in misinformed management decisions if not thoughtfully designed and carefully monitored!

Take for example an organization that has a published target for their Service Desk’s ability to resolve inquiries at the first point of contact (FPOC rate).

This measure is expected to positively impact two important facets of the service desk – customer satisfaction and cost.

Customer Satisfaction - The premise behind this measure is that customers’ satisfaction is positively impacted by quick resolutions.  After all, who doesn’t like to have their questions/issues resolved quickly and without being transferred or bounced around an organization?

Cost - Generally, organizations have tiered levels of support, with resource costs increasing at each level. It makes sense that the more incidents/inquiries that can be resolved at FPOC, without handoffs to more expensive tiers of technical support, the more money can be saved.

Also, organizations often use this measure to compare themselves to other service desk organizations and to communicate their service desk’s value proposition.

While on the surface this seems simple, practical and a no-brainer, there are potential "gotchas". Consider the following simple scenario.

An organization's service desk proudly promotes an FPOC rate of 80% or better, consistently month-over-month. Remarkably, the desk has been able to maintain this for 6 straight months! Agents' performance reports show that individually they are meeting or exceeding their assigned FPOC targets. Comparisons to other service desks' FPOC rates are favourable, and management assumes this is a very positive measure, right?

Maybe not. A high FPOC rate may actually be signalling a lot of repetitive incidents. Agents at the desk can get very skilled and efficient at resolving the same issues over and over again, and the hidden cost is easily overlooked.

Points to Ponder

While initially customers will be pleased with a timely restoration of the service, their satisfaction will drop quickly if they keep having the same problems over and over again.

Resolving the incident quickly with lower-cost resources is, on the surface, efficient; however, if that incident is happening repeatedly, what is the total cost of the repetitive incident?

From a value perspective, which desk would you choose? One that has an 80% FPOC rate but is continuously resolving the same types of incidents, or one that has a 50% FPOC rate but continuously reviews incident trends and permanently removes repetitive incidents from its environment?

Simple Tips

1. Don't look at your FPOC rate in isolation. Always look for correlations with satisfaction trends and hidden costs.

2. Seek out and destroy repetitive incidents. Analyze the types of incidents that are being resolved at FPOC and identify repetitive incidents for permanent resolution.

3. Use your incident data to "expose the cost" of repetitive incidents, and be sure to report the cost savings/avoidance achieved by removing those incidents right alongside your new, possibly lower, FPOC number (a rough sketch of this calculation follows this list).

4. Your service desk agents are a great source of information on your top repetitive incidents – tap into their knowledge & experience.  Reward them for identifying repetitive incidents and other improvement opportunities.
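As a rough sketch of tips 2 and 3 (with invented ticket data and a made-up per-ticket cost, not figures from any real desk), the snippet below counts how often each incident type recurs and prices the repeats that a permanent fix would remove:

from collections import Counter

# Hypothetical incident log: (category, resolved_at_fpoc); values are illustrative.
incident_log = [
    ("password reset", True), ("password reset", True), ("password reset", True),
    ("printer offline", True), ("printer offline", True),
    ("vpn failure", False), ("new laptop request", False),
]

COST_PER_TICKET = 15.00   # assumed fully loaded cost of handling one contact

# Count how often each incident type recurs.
counts = Counter(category for category, _ in incident_log)

# "Expose the cost": every repeat beyond the first occurrence is avoidable spend
# if the underlying cause is permanently removed.
for category, n in counts.most_common():
    if n > 1:
        avoidable = (n - 1) * COST_PER_TICKET
        print(f"{category}: {n} occurrences, ~${avoidable:.2f} avoidable")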

One Final Thought with a Bit of a New Twist on an Old Measure

Many desks set high FPOC rates but then do not give the agents the tools or permissions they need to achieve it!

If you have a lower-than-desired FPOC resolution rate, you may want to consider measuring "designed for FPOC". You may be surprised to find that your service desk is actually not able to achieve an acceptable FPOC rate because of the way your support delivery is designed!

Too often the service desk is not provided with the necessary information (troubleshooting scripts, configuration information, etc.) or the necessary permissions to actually resolve incidents at FPOC.

This very simple measure, “FPOC by Design”, reflects an organization that has designed support specifically for each product/service, and is monitoring how well its service desk is performing against achievable targets.

A low FPOC rate on a product that has been designed for FPOC points to an important area for analyzing agent performance and training opportunities. Conversely, high-incident areas with low FPOC rates for products/services NOT designed for FPOC may be a great place to review the support model and look for opportunities to empower your service desk further.
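Here is a minimal sketch of what an 'FPOC by Design' calculation might look like, assuming you can tag each product/service as designed (or not) for first-contact resolution; the product names and ticket data are invented for illustration.

# Products/services the support model has actually designed for first-point-of-contact
# resolution (troubleshooting scripts, configuration information, permissions in place).
designed_for_fpoc = {"email", "password reset", "standard desktop"}

# (product, resolved_at_first_contact) pairs; purely illustrative.
tickets = [
    ("email", True), ("email", True), ("password reset", True),
    ("standard desktop", False), ("ERP module", False), ("ERP module", False),
]

in_scope = [resolved for product, resolved in tickets if product in designed_for_fpoc]

overall_fpoc = sum(resolved for _, resolved in tickets) / len(tickets)
fpoc_by_design = sum(in_scope) / len(in_scope) if in_scope else 0.0

print(f"Overall FPOC rate: {overall_fpoc:.0%}")
print(f"FPOC by Design (products designed for first-contact resolution only): {fpoc_by_design:.0%}")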

Measurement is definitely an important part of continuously improving your Service Desk and driving up value and customer satisfaction. Relentless review of relevant, insightful metrics will keep you at the forefront. Next time, let's look at Mean Time to Restore Service (MTRS)…

Question
Have you come across any measures that have done you more harm than good? Do you have any really GOOD or really BAD measurement stories to share? I'd love to hear them…


Maria Ritchie