Define. Measure. Achieve. Repeat.


Posts Tagged ‘Reporting’

Are Service Management Tools Replaced for the Right Reasons?

December 1st, 2010

When we get engaged to be married, why do we focus so much energy on the wedding, which after all is one day, rather than the countless days we will be spending with our new significant other post ceremony? Why, when pregnant do we focus on the birth and not on the life that we will share for the rest of our time on this planet? And why when IT departments implement new tools or processes do they focus on the implementation and not much beyond?

The answer, at least in my humble opinion, is that we are slaves to a human tendency to focus on what we can control. For example, it is easy to become transfixed on making the wedding day or the labour the best possible experience because there is a sense of being in control. On the other hand, putting a plan in place to manage the rest of our lives – well, good luck with that!

With that being said, after working with organizations for over sixteen years on technology evaluations and deployments, I can safely say that far too much time is spent picking a vendor rather than ensuring the project will be a success. If you work in IT, you probably know the routine: painstaking web research looking for a Service Desk tool vendor; narrowing the list to about 50 (after all, it would be positively rude to exclude anyone); then building the RFI, a document inviting said vendors to answer a series of questions (most of which are generic enough that almost every tool is designed to accomplish them).

Then there is the RFP, the shortlist, the demos, the refined shortlist, another round of demos. Finally, just as a decision is about to be announced… funding is pulled so the whole process can begin again next year. I have watched this cycle countless times, sometimes repeating at the same organization for several years. Eventually a new tool is picked and work begins to implement it. Everything is great, for a while that is.

Soon enough, the enthusiasm and hope for the initiative wane as focus shifts to the next project, and after a few years with no provable ROI or measures of what has changed, a decision is made that 'we need a new tool' and the whole un-virtuous cycle begins again.

Of course, it doesn’t have to be this way. But we as IT stakeholders need to make a stand and build a ‘life plan’ for our new products.

My guide for avoiding the rip 'n' replace cycle is as follows:

#1 A tool implementation is the beginning of a journey, not the end of one.

Understand that 6-12 months after you acquire a new tool, you are still only at the beginning, and recognize that budget and time will need to be allocated to keep the initiative running well beyond go-live.

#2 Reporting should always be part of Phase 1.

The number one reason organizations cite for needing a new tool is usually 'better reporting'. Yet the reporting phase is often relegated to Phase 2, to give everyone a chance to 'gather the data'. The problem with this approach is that if you have not defined your measurements upfront, how do you know you are gathering the right data?

#3 Never say you want ‘better reporting’.

Better reporting does not get anyone holding the budget excited. Decide what you want to measure, and then measure it.

For example, suppose you had a personal goal of getting healthy because the doctor told you that your blood pressure was too high. You might decide that the capabilities you need to achieve the goal are improved fitness and weight loss. The activities you perform are reducing your caloric intake and increasing the amount of exercise you get. The key measures that would track your progress toward the goal would be things like weight, waist size, blood pressure and resting pulse, all of which are proven indicators. Now, if your goal were to fit into a pair of jeans you wore 20 years ago, the capabilities (diet and exercise) might be the same; however, the measures would be different.

For the IT Service Desk, the goal might be an 'Excellent' rating in customer service. The capabilities would include being able to track information consistently and being able to solicit feedback after work is performed. The measures might be MTRS (Mean Time to Restore Service), First Call Resolution Rate and Customer Satisfaction survey scores. A different goal would require different capabilities and measures.
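To make this concrete, here is a minimal sketch in Python of how those three measures could be computed from raw ticket data. The field names (`opened`, `resolved`, `reassignments`, `csat`) are entirely hypothetical and do not correspond to any particular ITSM tool's schema:

```python
from datetime import datetime

# Hypothetical ticket records; the field names are illustrative only.
tickets = [
    {"opened": datetime(2010, 4, 1, 9), "resolved": datetime(2010, 4, 1, 11),
     "reassignments": 0, "csat": 5},
    {"opened": datetime(2010, 4, 2, 9), "resolved": datetime(2010, 4, 3, 9),
     "reassignments": 2, "csat": 3},
    {"opened": datetime(2010, 4, 3, 9), "resolved": datetime(2010, 4, 3, 10),
     "reassignments": 0, "csat": 4},
]

def mtrs_hours(tickets):
    """Mean Time to Restore Service, in hours."""
    durations = [(t["resolved"] - t["opened"]).total_seconds() / 3600
                 for t in tickets]
    return sum(durations) / len(durations)

def first_call_resolution_rate(tickets):
    """Share of tickets resolved without any reassignment."""
    return sum(1 for t in tickets if t["reassignments"] == 0) / len(tickets)

def avg_csat(tickets):
    """Average customer satisfaction survey score (1-5 scale assumed)."""
    return sum(t["csat"] for t in tickets) / len(tickets)
```

The point is not the arithmetic, which is trivial, but that each measure maps back to a stated capability: consistent tracking feeds MTRS and FCR, and post-work feedback feeds the satisfaction score.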

#4 Achieving a goal is great, but you diminish its impact by not showing where you started.

The winner of last season's reality show, 'The Biggest Loser', weighed in at 265 lbs, a weight that in itself meant the winner was still clinically obese. Yet there was an incredible sense of accomplishment and victory because the producers of the show had meticulously taken a baseline at the beginning, showing a starting weight of 526 lbs.

Buying a new tool provides a tremendous opportunity to demonstrate the value of your Service Desk, to show that IT can execute, and to tell a story to the people who control the budget. Yet many organizations don't effectively take a snapshot of where they currently are, so the end results, however fantastic, don't do justice to the journey taken to achieve them.

#5 Reaching your goal is part of the journey. 

Extending my fitness analogy from above: once you have reached your goal of lower blood pressure, the whole exercise (pun intended) becomes pointless if, once achieved, it is no longer a focus and is no longer watched.

Similarly, organizations often do a great job with 'sprints' but then fail to sustain the result. Reaching a goal on the Service Desk is not a destination but merely part of the journey. Once the goal has been met, we need to understand what is necessary to maintain it over time.

#6 It's not the tool, it's you.

Sometimes we just have to look in the mirror and take ownership for the situation.

In 90% of the tool replacement projects I have been involved in over the years, the IT department blames the tool for something that is not within the tool's domain to fix. There are absolutely valid reasons to rip 'n' replace a Service Desk tool; just make sure it is about cost of ownership and not 'better reporting' or 'ease of use'.

Charles Cyna

Do you measure your incident backlog?

May 5th, 2010

Incident backlog is an informative KPI that you should consider adding to your reporting repertoire. It measures the number of outstanding incidents that have missed an SLO or SLA. The KPI should be trended over time and should be either stable or decreasing.
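As a sketch of how such a KPI could be computed, the following Python fragment counts open incidents that have exceeded an assumed per-priority resolution target. The field names and the SLA values are illustrative only, not taken from any particular tool or standard:

```python
from datetime import datetime, timedelta

# Assumed per-priority resolution targets; substitute your own SLA/SLO values.
SLA_TARGETS = {
    1: timedelta(hours=4),
    2: timedelta(hours=8),
    3: timedelta(days=3),
    4: timedelta(days=5),
}

def backlog(incidents, now):
    """Open incidents whose age exceeds the resolution target for their priority."""
    return [
        i for i in incidents
        if i["resolved"] is None
        and now - i["opened"] > SLA_TARGETS[i["priority"]]
    ]

# Illustrative data: one breached open incident, one open but still within
# target, and one already resolved.
incidents = [
    {"opened": datetime(2010, 5, 1), "resolved": None, "priority": 3},
    {"opened": datetime(2010, 5, 4), "resolved": None, "priority": 4},
    {"opened": datetime(2010, 4, 20), "resolved": datetime(2010, 4, 21),
     "priority": 2},
]

now = datetime(2010, 5, 5)
breached = backlog(incidents, now)  # the backlog KPI is len(breached)
```

Recording `len(breached)` at a regular interval (weekly or monthly) gives you the trend line to watch.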

In the example chart below, we can see the backlog rising over the last three months.

[Chart: Incident Backlog, trended over three months]

In this example the backlog increases by 50% over three months, which should be investigated. To determine how urgent the issue is, the first thing to explore is a breakdown of the backlog by incident priority.

[Chart: Open Incidents, broken down by priority]

It would be highly unlikely for a backlog of high-priority issues to persist over time, and the chart above shows that the majority of the backlog is priority 3 and 4. This is fairly common and is often systematically ignored; however, it is worth examining further.

The reasons for the rising trend could include:

- Second-line support personnel are not closing tickets in your ITSM tool.

- Resourcing levels may not reflect the current volume of tickets being received on the Service Desk.

- The organization's Change and/or Release practice is causing unscheduled spikes in incident volume.

- Particular workgroups have especially high backlogs, which could indicate a bottleneck.
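To explore the last two possibilities, a simple breakdown of the breached incidents by priority and by assigned workgroup is often enough to point at the bottleneck. A hypothetical sketch, with illustrative record fields:

```python
from collections import Counter

# Hypothetical breached-incident records; the field names are illustrative.
breached = [
    {"priority": 3, "workgroup": "Desktop"},
    {"priority": 4, "workgroup": "Desktop"},
    {"priority": 3, "workgroup": "Network"},
    {"priority": 4, "workgroup": "Desktop"},
]

by_priority = Counter(i["priority"] for i in breached)    # urgency of the backlog
by_workgroup = Counter(i["workgroup"] for i in breached)  # candidate bottlenecks
```

In this invented sample, `by_workgroup.most_common(1)` would single out the Desktop group as the place to start asking questions.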

Charles Cyna

Reporting is the soufflé of the IT Service Desk.

April 21st, 2010

For those of you who are culinarily inclined, you will know that a soufflé is made from a couple of basic ingredients, a cream sauce and egg whites, and yet the final dish remains elusive to the many who simply don't pay attention to the details that success requires. Oh, and such success is wonderful to observe: fluffiness contained within a towering cloud of caloric goodness. It is truly an elusive culinary accomplishment.

Figure 1 - The rare objet d'art itself: a light, fluffy soufflé produced at L'Atelier de Joël Robuchon.

Like the fragile soufflé, good service desk reports are easy to order but far more difficult to enjoy.

Although the ingredients are simple, the execution is often flawed and the ultimate result unsatisfying. The particular reports I am thinking of are not operational in nature (the wham-bam-thank-you-ma'am of reports). The ones I am thinking of are tactical. These require a little more finesse: they are the thinking person's report, a tactical view of service desk performance that can enable service improvement and actually inform decision making. In other words, reports that provide information that is actionable. How delicious!

Anyhow, so many of these failed attempts leave me wanting more. Inadequate execution reduces them to something merely visually appealing: useless and perhaps even inconsequential.

So perhaps we should examine the ingredients and execution that can turn a miserable meaningless humble report into something worth consuming.

Ingredient 1: Consistency

Consistency is one of the few things that matter when generating decision support material. Everyone should be saying the same thing when answering the telephone, asking the same questions, and documenting the information received in the same way.

Ingredient 2: Track the right stuff!

Set yourself up for success and build a support model. Beyond the obvious items like impact, customer information, etc., there are three things the service desk needs to capture:

#1 - What was the customer's perception of the failure (i.e. the end-to-end service)?
#2 - What was the underlying IT reason for the failure (i.e. the provider service)?
#3 - What infrastructure item was involved in the failure (i.e. the component category)?

See the figure below for a breakdown of the critical criteria that should be captured in the incident.


Figure 2 - The essential elements of information capture for an Incident.

These items enable simple and easy information gathering from the customer and make escalation of the issue through the IT organization easier to manage.
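As an illustration, the three classification fields could be modelled as a simple record. The class and field names below are my own invention for the sketch, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """The three classification fields argued for above (names are my own)."""
    end_to_end_service: str   # 1: the customer's perception of the failure
    provider_service: str     # 2: the underlying IT reason for the failure
    component_category: str   # 3: the infrastructure item involved
    description: str = ""

# Example: a user reports that email is down; IT traces it to a messaging server.
inc = Incident(
    end_to_end_service="Email",
    provider_service="Messaging",
    component_category="Server",
    description="User cannot send or receive email",
)
```

The design point is that the customer only ever needs to supply the first field in their own words; the other two are classified by IT as the incident moves through the organization.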

Ingredient 3: Focus on the WHAT, the WHY and the ACTION.

Generating reporting for reporting's sake doesn't work. It sounds obvious, but many of us get into the habit of reviewing the same reports every month and then doing nothing with the information.

If this sounds like you, STOP! 

Ask yourself three things when looking at a report:

Do I care about what this report is telling me?

If your answer is NO, move on and deal with something more important.

If your answer is YES, then you need to figure out WHY the information in the report is occurring.

Once the WHY has been determined, implement a performance tweak or involve the relevant stakeholder group and share the information with them as part of the ACTION.
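The WHAT/WHY/ACTION flow above can be sketched as a tiny decision function. This is a hypothetical helper, purely for illustration of the order of the questions:

```python
def triage_report(matters, why=None):
    """WHAT -> WHY -> ACTION: 'matters' answers 'do I care about what this
    report is telling me?'; 'why' is the determined cause, if known."""
    if not matters:
        return "move on"            # deal with something more important
    if why is None:
        return "investigate WHY"    # figure out why the numbers look this way
    # Cause is known: tweak performance or engage the stakeholder group.
    return "ACTION: address '%s' with the relevant stakeholders" % why
```

The value of writing it down this way is seeing that ACTION is unreachable until both earlier questions have been answered.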

In my next blog (on Monday), I will explore a real-life example of how this process works.

Charles Cyna

Service Management reporting… where to start?

May 17th, 2009

A key challenge facing many ITSM practitioners is defining and producing meaningful reports. Many ITSM vendor products, designed to automate and manage process activities, come equipped with a wide range of reports. Sometimes they speak to IT managers and, more importantly, to the business, but others consistently fall short of that target. While this may be due to the messages they are attempting to convey, it may also be that the wrong reports are being shared with the wrong stakeholders. How do you avoid this hit-and-miss strategy and ensure key information is being conveyed? What we need is a way to present reports that are meaningful for specific stakeholders. Read more…

Michael Oas