Define. Measure. Achieve. Repeat.

Posts Tagged ‘Service Desk’

A New Year’s Resolution for the Service Desk

January 5th, 2011

I love the New Year.  January is about fresh starts; hopefully you have had a break over the holidays and feel revived for the challenges ahead. On the Service Desk this is a time for renewal too, and there is no better time to evaluate your current Service Improvement goals for the year.  The business case for Service Improvement is easy to make – identifying ways to reduce costs and improve service is something all levels of management get excited about!

One of the easiest things to do is to find out what your customers think of the service that you provide. Many Service Desks find all kinds of excuses not to do this; however, by following some simple guidelines you can run a survey and get a snapshot in time of how your customers feel about the support you provide.

#1 Establish a repeatable approach to assess satisfaction and action improvements as part of a continual improvement practice. Make a commitment to survey and re-survey at regular intervals and if you want to improve response rates then share the results with your customers.

#2 Collect information within a Satisfaction Framework to create “actionable” information and simplify analysis and trending. You can measure satisfaction in key dimensions of the service for ease of analysis and action. Use a Service Quality Model to measure important service quality dimensions like: “ON TRRAC” – Timely, Reliable, Responsive, Accessible and Cost effective (see Survey Framework Overview below for more details).
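To make the framework concrete, here is a minimal sketch of rolling survey responses up by service-quality dimension for trending. The data, the 1-5 scale and the function name are purely illustrative assumptions, not part of any specific survey tool:

```python
from collections import defaultdict

# Each response scores one service-quality dimension on a 1-5 scale.
# The dimension names follow the "ON TRRAC" model described above.
responses = [
    ("Timely", 4), ("Timely", 5), ("Reliable", 3),
    ("Responsive", 4), ("Accessible", 2), ("Cost effective", 3),
    ("Reliable", 4), ("Accessible", 3),
]

def satisfaction_by_dimension(responses):
    """Average the 1-5 scores within each dimension for easy trending."""
    by_dim = defaultdict(list)
    for dimension, score in responses:
        by_dim[dimension].append(score)
    return {dim: sum(s) / len(s) for dim, s in by_dim.items()}

scores = satisfaction_by_dimension(responses)
```

Re-running the same rollup on each survey wave gives you a per-dimension trend line rather than one undifferentiated satisfaction number.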

#3 Focus initially on Client (IT User) and Practitioner (IT Provider) satisfaction with the Service Desk and Incident Management Practice. (Later, you may want to consider adding a survey of the Business-Level Client (Business Executive) to help understand their satisfaction with key elements of your overall IT Services, and of IT Executives to see how well aligned they are with their Business's satisfaction/importance ratings.)

#4 Execute an annual survey to baseline the satisfaction and inform annual improvement action plans. Client Surveys target users collected by business area. Practitioner Surveys target both the Service Desk and Technician level collected by group.

#5 Schedule “Checkpoint” surveys to gauge progress and to provide needed information to flag in-year corrective actions.

Client Surveys at targeted intervals throughout the year:

  1. Auto-generated short survey at the closure of every incident;
  2. Call-back short survey to a set # of users every month (closed tickets); and
  3. Optional – Warm call transfer short survey to users upon completion of their call to your Service Desk.

#6 Ensure that all survey results are routinely analyzed and incorporated into the Service Improvement Plan (SIP), that results are presented to primary performance stakeholders, AND that feedback is published back to survey recipients: provide them with the 'What We Heard' along with planned actions, 'What We Are Doing About It'.

We make many of our tools available for use at no charge.

Start surveying your Clients (IT Users), Practitioners (IT Providers), IT and Business Executives today with ITSM Coach: Satisfaction Surveys (included as part of ITSM Coach).

To learn more click here…

Charles Cyna

Are Service Management Tools Replaced for the Right Reasons?

December 1st, 2010

When we get engaged to be married, why do we focus so much energy on the wedding, which after all is one day, rather than the countless days we will be spending with our new significant other post ceremony? Why, when pregnant do we focus on the birth and not on the life that we will share for the rest of our time on this planet? And why when IT departments implement new tools or processes do they focus on the implementation and not much beyond?

The answer, at least in my humble opinion, is that we are slaves to the human condition of focusing on what we can control. For example, it is easy to become transfixed on making the wedding day or labour the best possible experience because there is a sense of control. On the other hand, putting a plan in place to manage the rest of our lives – well, good luck with that!

That being said, after working with organizations for over sixteen years on technology evaluations and deployments, I can safely say that far too much time is spent picking a vendor rather than ensuring the project will be a success. If you work in IT, you probably know the routine, but to make my point: you may have been involved in painstaking web research looking for a Service Desk tool vendor, narrowing the list to about 50 (after all, it would be positively rude to exclude anyone). Then you build the RFI, a document inviting said vendors to answer a series of questions (most of which are generic enough that almost all tools are designed to accomplish them).

Then there is the RFP, the shortlist, the demos, the refined shortlist, another round of demos. Then, just when a final decision is to be announced… funding is pulled, so the whole process can begin again next year. I have watched this process countless times, sometimes happening repeatedly at the same organizations for several years. Finally, a new tool is picked and work begins to implement it. Everything is great, for a while that is.

Soon enough, the enthusiasm and hope for the initiative wanes as focus is on the next project and after a few years of no provable ROI or measures of what has changed, a decision is made that ‘we need a new tool’ and the whole un-virtuous cycle begins again.

Of course, it doesn’t have to be this way. But we as IT stakeholders need to make a stand and build a ‘life plan’ for our new products.

My guide for avoiding the rip 'n' replace cycle is as follows:

#1 A tool implementation is the beginning of a journey not the end of one.

Understand that the first 6-12 months after you acquire a new tool are just the beginning, and recognize that budget and time will need to be allocated to keep the initiative running well beyond that.

#2 Reporting should always be part of Phase 1.

The number one reason organizations cite for a new tool is normally 'better reporting'. Yet the reporting phase is often relegated to Phase 2, to give everyone a chance to 'gather the data'. The problem with this approach is that if you have not defined your measurements upfront, how do you know you are gathering the right data?

#3 Never say you want ‘better reporting’.

Better reporting does not get anyone (holding the budget) excited. Decide on what you want to measure and then measure it.

For example, suppose you had a personal goal of getting healthy because the doctor told you that your blood pressure was too high. You might decide that the capabilities you need to achieve the goal are improved fitness and weight loss. The activities you perform are reducing your caloric intake and increasing the amount of exercise. The key measures that would track your progress toward your goal would be things like weight, waist size, blood pressure and resting pulse, as all of these are proven indicators of reduced blood pressure. Now if your goal was to fit into a pair of jeans you wore 20 years ago, the capabilities (diet and exercise) might be the same; however, the measures would be different.

For the IT Service Desk, the goal might be an 'Excellent' rating in customer service. The capabilities would include being able to track information consistently and being able to solicit feedback after work is performed. The measures might be MTRS, First Call Resolution Rate and Customer Satisfaction Surveys. A different goal would require different capabilities and measures.

#4 Achieving a goal is great, but you diminish its impact by not showing where you started.

The winner of last season's reality show, 'The Biggest Loser', weighed in at 265lbs, a weight that in itself meant the winner was still clinically obese. Yet there was an incredible sense of accomplishment and victory because the producers of the show meticulously took a baseline at the beginning, showing a starting weight of 526lbs.

Buying new tools provides tremendous opportunities to demonstrate the value of your Service Desk and the ability of IT to execute, and to tell a story to the people who control the budget. Yet many organizations don't effectively take a snapshot of where they currently are, so those end results, however fantastic, don't do justice to the journey you went on to achieve them.

#5 Reaching your goal is part of the journey. 

Extending my fitness analogy from above: once you have reached your goal of lower blood pressure, the whole exercise (pun intended) becomes pointless if, after it is achieved, it is no longer a focus and is not monitored.

Similarly, organizations often do a great job with `sprints` but then do not sustain. Reaching a goal on the Service Desk is not a destination but merely part of the journey. Once the goal has been met, we need to understand what is necessary to maintain that goal over time.

#6 It's not the tool, it's you.

Sometimes we just have to look in the mirror and take ownership for the situation.

In 90% of the tool replacement projects I have been involved in over the years, the IT department blames the tool for something that is not within the tool's domain to fix. There are absolutely valid reasons to rip 'n' replace a Service Desk tool; just make sure it is about cost of ownership and not 'better reporting' or 'ease of use'.

Charles Cyna

Signs that a Service Management Initiative has Stalled

November 8th, 2010

Continual Service Improvement is the fuel that provides long term momentum for any Service Management initiative. The fuel of momentum is vital; great IT Service Delivery is a journey that can take months and sometimes years to reach the productivity and cost savings that were sold at the beginning of a project. 

Many Service Management initiatives stall soon after they begin because, as we try to achieve efficiencies, we generate organizational resistance (the human condition is that most of us resist change). If we don't have the ammunition to overcome that resistance, the project stalls, inertia takes over, and the Service Management initiative never has the opportunity to reach its potential.

Signs that a Service Management Initiative has Stalled:

- You implemented a Service Management tool a few years ago and have not performed any significant customizations to it since it was implemented;

- You implemented a couple of practice areas (Incident, Problem, Change) but have not been able to move much beyond those areas due to time and cost constraints;

- You did a Process Maturity Assessment over a year ago and have not completed a re-assessment to measure the differences over time;

- You generate reports for management but there are no defined 'next steps' to improve service after looking at the information;

- You survey your customers but do little with the results of the survey;

- You are not sure what the business outcomes are for a successful ITIL or ITSM Practice;

- And if you do know the business outcomes, you are not sure whether you are meeting them.

Charles Cyna

Do you measure your incident backlog?

May 5th, 2010

Incident backlog provides an informative KPI that you should consider adding to your reporting repertoire. The KPI should measure the number of incidents outstanding that have missed an SLO or SLA. The KPI should be trended over time and should either be stable or decreasing.
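As a sketch of how this KPI could be computed, assuming hypothetical ticket records that carry a priority, an open timestamp, an SLA window in hours and an optional close timestamp (all names and data here are illustrative):

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (priority, opened, sla_hours, closed_or_None).
now = datetime(2010, 5, 1)
incidents = [
    (1, now - timedelta(hours=2),   4, None),                     # open, within SLA
    (3, now - timedelta(days=10),  72, None),                     # open, SLA missed
    (4, now - timedelta(days=30), 120, None),                     # open, SLA missed
    (2, now - timedelta(days=3),   24, now - timedelta(days=2)),  # closed
]

def backlog(incidents, as_of):
    """Open incidents whose SLO/SLA window has already elapsed."""
    return [
        (prio, opened)
        for prio, opened, sla_hours, closed in incidents
        if closed is None and as_of - opened > timedelta(hours=sla_hours)
    ]

def backlog_by_priority(incidents, as_of):
    """Break the backlog down by priority to gauge urgency."""
    counts = {}
    for prio, _ in backlog(incidents, as_of):
        counts[prio] = counts.get(prio, 0) + 1
    return counts
```

Running `backlog()` at a fixed point each month and charting the counts gives you the stable-or-decreasing trend the KPI calls for, and `backlog_by_priority()` gives the priority breakdown discussed below.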

In the example chart below, we can see the backlog rising over the last three months.

Incident Backlog

In this example the backlog increases by 50% over 3 months, which should be investigated. To determine how urgent the issue is, the first step is to break down the backlog by incident priority.

Open Incidents

It would be highly unlikely for a backlog of high priority issues to persist over a period of time, and the chart above shows that the majority of the backlog is priority 3 and 4. This is fairly common and is often systematically ignored. However, it is worthwhile to examine further.

The reasons for the rising trend could include:

- Second line support personnel are not closing tickets in your ITSM tool.

- Resourcing levels may not reflect the current volume of tickets being received on the Service Desk.

- The organization's Change and/or Release practice is causing unscheduled spikes in incident volume.

- There may be particular workgroups with particularly high backlogs, which could indicate a bottleneck.

Charles Cyna

Reporting is the soufflé of the IT Service Desk.

April 21st, 2010

For those of you who are culinarily inclined, you will know that a soufflé is made with a couple of basic ingredients, a cream sauce and egg whites, and yet the final dish remains elusive to the many who simply don't pay attention to the details that are prerequisites for success. Oh, and such success is wonderful to observe: fluffiness contained within a towering cloud of caloric goodness. It is truly an elusive culinary accomplishment.

Figure 1 – The rare objet d'art itself – a light fluffy soufflé produced at L'Atelier by Joël Robuchon.

Like the fragile soufflé, good service desk reports are easy to order but more difficult to enjoy.

Although the ingredients are simple, the execution is often questionable and the ultimate result unsatisfying. The particular reports I am thinking of are not operational in nature (the wham bam thank you ma'am of reports). The ones I am thinking of are tactical. These require a little more finesse; they are the thinking person's report, a tactical view of service desk performance that can enable service improvement and actually inform decision making. In other words, reports that provide information that is 'actionable'. How delicious!

Anyhow, so many of these failed attempts leave me wanting more. Inadequate execution reduces them to merely visually appealing: useless and perhaps even inconsequential.

So perhaps we should examine the ingredients and execution that can turn a miserable meaningless humble report into something worth consuming.

Ingredient 1: Consistency

Consistency is one of the few things that matter when generating decision support material. Everyone should be saying the same thing when answering the telephone, asking the same questions, and documenting the information received in the same way.

Ingredient 2: Track the right stuff!

Set yourself up for success and build a support model. Outside of the obvious items like impact, customer information etc. there are three things that the service desk needs to capture:

#1 - What was the customer's perception of the failure (i.e. the end-to-end service)?
#2 - What was the underlying IT reason for the failure (i.e. the provider service)?
#3 - What infrastructure item was involved in the failure (i.e. the component category)?

See the figure below for a breakdown of the critical criteria that should be captured in the incident.

Figure 2 - The essential elements of information capture for an Incident.

These items enable simple and easy information gathering from the customer, and make escalation of the issue through the IT organization easier to manage.

Ingredient 3: Focus on the WHAT, the WHY and the ACTION.

Generating reporting for reporting's sake doesn't work. It sounds obvious, but many of us get into the habit of reviewing the same reports every month and then doing nothing with the information.

If this sounds like you, STOP! 

Ask yourself three things when looking at a report:

Do I care about what this report is telling me?

If your answer is NO, move on and deal with something more important.

If your answer is YES, then you need to figure out WHY the information in the report is occurring.

Once the WHY has been determined, implement a performance tweak or involve the relevant stakeholder group and share the information with them as part of the ACTION.

In my next blog (on Monday), I will explore a real life example of how this process works.

Charles Cyna

Anatomy of a KPI – Mean Time to Restore Service (MTRS)

March 3rd, 2010

Mean Time to Restore Service is an important KPI that most help desks measure (or at least should). MTRS tells us about the average experience a user has when a service interruption is identified.

To calculate MTRS, take the total time taken to restore service across incidents and divide by the total number of incidents logged in a given time period (normally a month). I would recommend that the KPI only include the top 2 tiers of classification, as performance on lower classifications would probably reduce the usefulness of the KPI, given how most organizations service their lower priority issues.
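A minimal sketch of that calculation, restricted to the top priority tiers as suggested (the record layout, data and function name are illustrative assumptions):

```python
from datetime import datetime, timedelta

# Hypothetical closed-incident records for one month:
# (priority, opened, restored)
t0 = datetime(2010, 3, 1, 9, 0)
incidents = [
    (1, t0, t0 + timedelta(hours=2)),
    (2, t0, t0 + timedelta(hours=6)),
    (2, t0, t0 + timedelta(hours=4)),
    (4, t0, t0 + timedelta(hours=96)),  # low priority: excluded below
]

def mtrs_hours(incidents, max_priority=2):
    """Mean Time to Restore Service in hours, top priority tiers only."""
    durations = [
        (restored - opened).total_seconds() / 3600
        for prio, opened, restored in incidents
        if prio <= max_priority
    ]
    return sum(durations) / len(durations)
```

Note how including the low-priority incident would swamp the average, which is exactly why restricting the KPI to the top tiers keeps it meaningful.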

Now, the usefulness of a KPI is just that: it is 'an indicator'. If it is going in the wrong direction (i.e. up) there is no reason to panic. The most important thing is to identify whether there really is an issue and, if so, to be in a position to address it as soon as possible.

Figure - Practice indicators.

The first thing to look at in regard to MTRS is whether incident volume has spiked. When incident volume changes unexpectedly, the help desk doesn't have a chance to adjust resourcing, so the average time to restore service will often rise.

Figure - Analyze your MTRS.

The second thing Read more…

Charles Cyna

Why Service Desk Managers have the most Challenging Position in IT

February 26th, 2010

The job responsibility itself is demanding: managing the group that provides support for end users' IT-related service interruptions, as well as being the face of IT.

Below are some of the key reasons that make fulfilling this role challenging.

1. Right Staffing! How do you staff a team effectively when the volume of communications can fluctuate widely? And the causes of the fluctuations are beyond the Service Desk Manager's control (i.e. outage, change gone bad, new rollout, etc.)?

2. Higher Staff Turnover! Service desk staff, compared to other IT staff, tend to turn over at a faster rate. The primary reasons include burnout from constantly dealing with frustrated end users, as well as staff using the service desk as a stepping stone to other IT areas.

3. Unrealistic End User Expectations! Despite the magnitude and complexity of technology, end users expect that, regardless of the nature of their communication/situation, the service desk analyst should be able to resolve their issue immediately. Read more…

swaxler

Service Desk Measurement - Be careful what you wish for!

September 29th, 2009

Mini-series on measurements that can come back to bite you…..

The trouble with performance metrics is that they can actually encourage inefficiency, de-motivate resources and result in misinformed management decisions if not thoughtfully designed and carefully monitored!

Take for example an organization that has a published target for their Service Desk’s ability to resolve inquiries at the first point of contact (FPOC rate).

This measure is expected to positively impact two important facets of the service desk – customer satisfaction and cost.

Customer Satisfaction - The premise behind this measure is that customers’ satisfaction is positively impacted by quick resolutions.  After all, who doesn’t like to have their questions/issues resolved quickly and without being transferred or bounced around an organization?

Cost  - Generally organizations have tiered levels of support with resource costs increasing at each level. It makes sense that the more incidents/inquiries that can be resolved at FPOC, without handoffs to more expensive tiers of technical support, the more money that can be saved.

Also, organizations often use this measure to compare themselves to other service desk organizations and to communicate their service desk’s value proposition.

While on the surface this seems simple, practical and a no-brainer there are potential “gotchas”. Consider the following simple scenario.

An organization's service desk proudly promotes an FPOC rate of 80% or better consistently month-over-month. Remarkably, the Desk has been able to maintain this for 6 straight months! Agents' performance reports show that individually they are meeting or exceeding their assigned FPOC targets. Comparisons to other service desks' FPOC rates are favourable, and management assumes this is a very positive measure – right?

Maybe not. A high FPOC rate may actually be signalling a lot of repetitive incidents. Agents at the desk can get very skilled and efficient at resolving the same issues over and over again, and the hidden cost can be easily overlooked.
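A quick sketch of how both the FPOC rate and the repetitive incidents hiding behind it might be surfaced from ticket data. The tickets, category names and threshold here are purely illustrative:

```python
from collections import Counter

# Hypothetical resolved tickets: (category, resolved_at_first_contact).
tickets = [
    ("password reset", True), ("password reset", True),
    ("password reset", True), ("password reset", True),
    ("printer jam", True), ("vpn outage", False),
    ("email sync", False), ("password reset", True),
]

def fpoc_rate(tickets):
    """Share of tickets resolved at the first point of contact."""
    return sum(1 for _, fpoc in tickets if fpoc) / len(tickets)

def repetitive_fpoc_categories(tickets, threshold=3):
    """Categories resolved at first contact so often they may hide a
    repetitive incident worth permanently removing."""
    counts = Counter(cat for cat, fpoc in tickets if fpoc)
    return {cat: n for cat, n in counts.items() if n >= threshold}
```

Here a healthy-looking 75% FPOC rate is propped up almost entirely by one repeating category, which is precisely the hidden cost the scenario above describes.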

Points to Ponder

While initially customers will be pleased with a timely restoration of the service, their satisfaction will drop quickly if they keep having the same problems over and over again.

Resolving the incident quickly with lower cost resources is, on the surface, efficient; however, if that incident is happening repeatedly what is the total cost of the repetitive incident?

From a value perspective, which desk would you choose?  One that has an 80% FPOC but is continuously resolving the same type of incidents or one that has a 50% FPOC but continuously reviews incidents trends and permanently removes repetitive incidents from their environment?

Simple Tips

1. Don’t look at your FPOC in isolation.  Always look for correlation to satisfaction trends and hidden costs.  

2. Seek out and destroy repetitive incidents. Analyze the types of incidents that are being resolved FPOC and identify repetitive incidents for permanent resolution.

3. Use your incident data to “expose the cost” of repetitive incidents and be sure to report the cost savings/avoidance achieved by removing the incidents right along beside your new, possibly lower FPOC number.

4. Your service desk agents are a great source of information on your top repetitive incidents – tap into their knowledge & experience.  Reward them for identifying repetitive incidents and other improvement opportunities.

One Final Thought with a Bit of a New Twist on an Old Measure

Many desks set high FPOC rates but then do not give the agents the tools or permissions they need to achieve it!

If you have a lower than desired FPOC resolution rate, you may want to consider measuring “designed for FPOC”.  You may be surprised to find that your service desk is actually not able to achieve an acceptable FPOC because of the way your support delivery is designed!

Too often the service desk is not provided with the necessary information (troubleshooting scripts, configuration information, etc) or the necessary permissions to actually resolve incidents at FPOC.

This very simple measure, “FPOC by Design”, reflects an organization that has designed support specifically for each product/service, and is monitoring how well its service desk is performing against achievable targets.

Low FPOC on a product that has been designed for FPOC provides an important area to analyze agent performance and training opportunities. Conversely, high incident areas with low FPOC for products/services NOT designed for FPOC may be a great place to review the support model and look for opportunities to empower your service desk further.
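A minimal sketch of the 'FPOC by Design' idea: compute the FPOC rate separately for products whose support model was designed for first-contact resolution and those that were not. The product names and the designed-for set are hypothetical:

```python
# Hypothetical tickets: (product, resolved_at_first_contact).
# designed_for_fpoc: products whose support model (scripts, permissions)
# is supposed to enable first-contact resolution.
designed_for_fpoc = {"email", "office suite"}
tickets = [
    ("email", True), ("email", True), ("email", False),
    ("office suite", False), ("office suite", False),
    ("erp", False), ("erp", False), ("erp", True),
]

def fpoc_by_design(tickets, designed):
    """FPOC rate split by whether the product was designed for FPOC."""
    groups = {True: [0, 0], False: [0, 0]}  # [fpoc_resolutions, total]
    for product, fpoc in tickets:
        bucket = groups[product in designed]
        bucket[0] += int(fpoc)
        bucket[1] += 1
    return {k: hits / total for k, (hits, total) in groups.items()}

rates = fpoc_by_design(tickets, designed_for_fpoc)
```

A low `rates[True]` points at agent performance and training, while a comparatively healthy `rates[False]` may mean the support model could be extended to more products.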

Measurement is definitely an important part of continuously improving your Service Desk and driving up value and customer satisfaction.  Relentless review of relevant, insightful metrics will keep you at the forefront. Next time let’s look at Mean Time to Restore Service (MTRS)………

Question
Have you come across any measures that have done you more harm than good?  Do you have any really GOOD or really BAD measurement stories to share? I’d love to hear them……

Maria Ritchie

ITIL Case Study – If you want to Lose Weight, Get on the Scale.

August 25th, 2009

Like a successful weight reduction plan, service desk improvements need to be defined based on a good knowledge of where you are and how far you want/need to go. Taking a little time at the beginning of your improvement planning to baseline your service desk practice and inform your improvement priorities will provide you with a surprisingly valuable set of information!

This blog is a follow on to the “ITIL – Not a Cure for the Common Cold!” blog where I provided an overview of a large government’s service management journey and outlined their 5-Step Roadmap to improvement.  This article will focus on getting started, using the case study organization as a guide….

Where do you start with no money, no credibility and no time?

The answer is not really all that difficult.  You have to start with a solid understanding of where you are and knowledge of where you want & need to be.  The art is then to pick the combination of outcomes and activities that will generate momentum, produce useful improvements and build credibility. Read more…

Maria Ritchie

ITIL – Not a Cure for the Common Cold!

May 25th, 2009

Shockingly, ITIL is not a cure for the common cold and suffers from being overprescribed by often well-intentioned but ill-advised ITIL enthusiasts.  Doing ITIL will not necessarily generate any benefits for your organization and can consume significant resources with little return on the investment.  Fortunately the diagnosis is not all bad though.  ITIL can be a powerful tool when coupled with a clear improvement purpose and plan.

As a matter of fact, ITIL was a significant influence in the success of several large IT transformation initiatives that I was involved with as a senior manager in the Ontario Government.  Over the next weeks and months, I will share some of the insights I gained as an IT practitioner using ITIL in the hopes that some part of my experiences may be useful to you on your continual service improvement journeys.

Let’s start at the beginning…… Read more…

Maria Ritchie