Developing metrics and measures for IT

Tim Banks
Faculty IT Manager
University of Leeds

This morning I attended a session run by Martin Klubeck from the Consortium for the Establishment of Information Technology Performance Standards (CEITPS).

This group is working to establish a common set of measures and metrics across education IT. CEITPS volunteers have spent some time over the EDUCAUSE 2015 conference writing the first 21 metrics, in between attending sessions.

CEITPS have a refreshingly common-sense approach to developing standards, as follows:

  • Get some interested and enthusiastic people in a room
  • Write some standards, plagiarising as much as possible from other sources
  • Review within the group and amend as necessary
  • Don’t worry if you don’t get everything perfect first time
  • Send out to the wider CEITPS group for comment, but give them a limited time to respond (e.g. seven days). If you give them six weeks, they will take that long.

What is the difference between a measure and a metric?

This was a question asked by a member of the audience. Martin answered in the form of a tree analogy:

  1. The leaves are like data – there are a lot of them and a lot can be thrown away. Data are typically just raw numbers.
    1. NB: Never give data to a manager! Business Intelligence (BI) tools are particularly bad because not only do they give data to managers but they also make it look pretty…
  2. The twigs can be thought of as measures (e.g. ‘50%’ or ‘20 out of 30’) – these have some context.
  3. The branches are like information, which has more context around it.
  4. The trunk of the tree is your metrics, which have sufficient contextual and trend-over-time information to make them suitable for presentation to senior managers (see the sketch after this list).
  5. It is vital to find out the root (i.e. underlying) question that the person asking wants answering before you provide any metrics.
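
To make the analogy concrete, here is a minimal sketch (my own illustration using hypothetical survey scores, not CEITPS code) of how raw data might be rolled up into a measure and then a metric:

  # A minimal sketch of the data -> measure -> metric idea, using
  # hypothetical service desk survey scores (not CEITPS code).

  # Data: raw numbers with no context (the "leaves").
  raw_scores = [4, 5, 3, 5, 2, 4, 5, 4]  # individual survey responses

  # Measure: a number with some context (the "twigs"), e.g. "4.0 out of 5".
  average_score = sum(raw_scores) / len(raw_scores)
  measure = f"{average_score:.1f} out of 5"

  # Metric: the measure plus trend-over-time context (the "trunk"),
  # suitable for presenting to senior managers.
  monthly_measures = {"Aug": 3.8, "Sep": 3.9, "Oct": average_score}
  trend = "improving" if monthly_measures["Oct"] > monthly_measures["Aug"] else "declining"
  metric = f"Customer satisfaction is {measure} this month and has been {trend} since August"

  print(metric)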

Martin gave us an example of one of the metrics that they have developed this week:

Description: Rework [re-opening] service desk incidents.
Definition: Each time an incident requires further effort because it was considered resolved but had in fact been incorrectly or not fully resolved.
Presentation: Usually presented as a percentage of total incidents re-worked [re-opened] in a given timeframe.
Note: Need to cover the use case where a member of IT staff opens a new incident rather than reopening the old one (the sketch below tries to account for this).
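
As an illustration only, here is a rough sketch of how the re-work percentage might be calculated. The incident records and field names are hypothetical; the linked_to field is my own assumption for catching the case where a new incident is opened instead of the old one being reopened:

  # Hypothetical incident records (not a real service desk export).
  incidents = [
      # "reopened" covers an incident being re-opened directly; "linked_to"
      # covers staff opening a new incident instead of reopening the old one.
      {"id": 1, "resolved": True,  "reopened": False, "linked_to": None},
      {"id": 2, "resolved": True,  "reopened": True,  "linked_to": None},
      {"id": 3, "resolved": True,  "reopened": False, "linked_to": None},
      {"id": 4, "resolved": False, "reopened": False, "linked_to": 3},  # new ticket raised for incident 3
  ]

  resolved = [i for i in incidents if i["resolved"]]
  relinked_ids = {i["linked_to"] for i in incidents if i["linked_to"] is not None}
  reworked = [i for i in resolved if i["reopened"] or i["id"] in relinked_ids]

  rework_rate = 100 * len(reworked) / len(resolved)
  print(f"Re-work rate: {rework_rate:.1f}% of resolved incidents")  # 66.7% on this toy data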

Other examples of metrics which the group have developed this week are as follows (a rough sketch of the MTTR calculations follows the list):

  • Defects found during development
  • Defects found during testing
  • Top 10 categories for incidents over given time period
  • Mean time to resolve (MTTR)
  • MTTR minus customer wait time
  • Adoption rate
  • Call abandon rate
  • On-time delivery
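
As a rough illustration (hypothetical ticket data and field names, not the group's written definitions), the MTTR and MTTR-minus-customer-wait calculations might look something like this:

  from datetime import datetime, timedelta

  # Hypothetical tickets: when they were opened and resolved, plus how long
  # the clock was stopped waiting on the customer.
  tickets = [
      {"opened": datetime(2015, 10, 26, 9, 0),
       "resolved": datetime(2015, 10, 26, 17, 0),
       "customer_wait": timedelta(hours=2)},
      {"opened": datetime(2015, 10, 27, 10, 0),
       "resolved": datetime(2015, 10, 27, 14, 0),
       "customer_wait": timedelta(hours=1)},
  ]

  total_time = sum((t["resolved"] - t["opened"] for t in tickets), timedelta())
  mttr = total_time / len(tickets)

  total_wait = sum((t["customer_wait"] for t in tickets), timedelta())
  mttr_minus_wait = (total_time - total_wait) / len(tickets)

  print(f"MTTR: {mttr}")                                  # 6:00:00
  print(f"MTTR minus customer wait: {mttr_minus_wait}")   # 4:30:00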

So far they have developed 21 of a planned 42 IT service management metrics; 37 of these came from the ITIL framework and a further five were added by the group.

The EDUCAUSE Core Data Service (CDS) was mentioned several times by both Martin and those attending the session. The CDS carries out surveys of standard benchmark data across US institutions, and there has been much discussion about making sure that the CEITPS metrics could be combined with CDS information to provide an even richer information source.

The CEITPS has several member institutions from outside the USA, and they are keen to get more involvement from UK universities, especially those who are currently implementing the ITIL framework and/or developing service metrics and measures.

Additional resource:

The University of North Carolina Greensboro metrics page
