Tag Archives: data

The importance of an international view of humanities digital content

Sarah Ames
Library Learning Services Support Officer
University of Edinburgh

DHC2018 part 1: some key themes

I was fortunate to receive bursary funding this year from UCISA to attend DHC2018 (Digital Humanities Congress – not to be confused with the 16th International Symposium on District Heating and Cooling, which tops the Google results). DHC is a biennial conference organised by The Digital Humanities Institute at the University of Sheffield, exploring digital humanities research, as well as its implications for the cultural heritage sector and IT support services.
In this first blog post, I’m going to list the key themes raised at the conference and in my next post, I’ll summarise some of the papers that I found particularly interesting.

Digitisation

This one isn’t new: without digitised content (and digitised content at scale), libraries’ DH offerings begin to fall short. While DH tools and skills will become a key focus in some academic libraries, ultimately, without making collections, content, or data available to interest researchers, partnerships with digital projects become problematic.

Data

One paper (Bob Shoemaker’s ‘Lessons from the Digital Panopticon’) discussed a project bringing together 50 datasets to trace the lives of individuals convicted at the Old Bailey; another drew together four different library datasets to investigate the provenance of manuscripts; many others reflected on similar experiences. As libraries look to release collections as data, considering the most appropriate and accessible formats for these will be important. The need to bring together a mix of data types, formats and models – often ‘bespoke’ formats complying with no particular standard – is a barrier to research, requiring technical skills that most researchers don’t have.

Global DH

A number of papers raised the issue of how easily digital humanities can slip into a Western focus, to the detriment of the field itself. With the web and programming languages largely written in English, the focus of research, and particularly of text analysis, has been predominantly English-language. With papers focusing on Asia and Australasia, the global view of DH produces plenty to learn from – with much for libraries to consider, particularly in the relationship between libraries and DH in other cultures and countries.

Sustainability

A repeated issue raised in talks was the sustainability of DH projects going forwards – particularly in relation to web platforms. How are these projects to be maintained post-project completion, and who is responsible for this? What kind of documentation, languages and platforms can be used to assist with, and standardise, this? Is a website an output or a transient resource? How can library and IT services support this?

Funding

Of course, a major part of sustainability is funding: funding models need to meet the cost of web resources over time, rather than maintain their current short-term focus. The possibilities of crowdfunding to enable ongoing access to tools were raised, but ultimately this remains too fragile a source to rely on.

Digital preservation

With these exciting new platforms and tools becoming part of research outputs, the challenge of how to preserve them becomes ever more pertinent. Unusual data formats; new, innovative research using AR; and the function, importance and relevance of the front end of a website, in comparison to the data it surfaces, are all issues and challenges that need to be considered by libraries.

Publishers

Gale launched their new DH tool, which sits on top of their platforms and enables researchers to analyse their content at scale without in-depth knowledge of computational methods. Although it raises questions of ease of use – while accessibility is important, an understanding of what the tools are doing under the surface remains essential, particularly in relation to built-in biases – the platform, still in its early stages, looked good. However, this emphasises just how much work libraries have on their hands. With both the content and the tools increasingly in the domain of publishers, there’s a lot of catching up to do.
This blog first appeared in the University of Edinburgh’s Library & University Collections blog.
Interested in finding out more about a UCISA bursary? Then visit UCISA Bursary Scheme.

Making the most of a UCISA bursary award at ALT 2018

Marieke Guy
Learning Technologist
Royal Agricultural University

Planning for ALT 2018

It’s only 12 days and 17 hours till ALT 2018 – ALT’s 25th annual conference and the biggest meet-up of Learning Technologists this side of the Atlantic (possibly?).
I have been lucky enough to be funded to attend by the UCISA bursary scheme and I intend to make good use of my subsidised ticket.
There is so much on that it’s hard to know where to start, but in traditional festival fashion I have a list of potential topics and sessions, though who knows what will happen when I actually get there!
Student engagement – At the Royal Agricultural University (RAU) we really want to get better at asking the students what they think. This year we ran the Jisc digital student experience tracker and it was both enlightening and a little scary. I’d like to hear more about how other institutions have been using their data, so will be attending Rating their digital experience – what do our students really, really want? I might follow this up with What organisational variables support a positive student digital experience? – which also looks at the broader tracker data. The session on Students as partners in technology initiatives: How does the technology aspect affect partnerships, and how can we make the most of this? also looks interesting.
Staff digital skills – We also need to improve our staff digital literacy so the session on Witchcraft to Wonder – My journey empowering staff with technology sounds like a definite.
Data – I’m a big data fan and it is an area we’d like to explore at RAU. The session on Getting to grips with Learner Dashboards: a research informed critical approach to understanding their potential will be useful, as will the well-named session Honey I shrunk the data: small design steps towards a data-informed blended learning approach. I might also attend the workshop session on Using learning analytics to inform evidence-based interventions on live courses. Hopefully we can get some dashboards up and running in the next year.
VR – Virtual Reality offers so much potential. I’m hoping the Creating VR: what we learned along the way session will give some good pointers on how to get started. There is also Virtual Learning Environments: Walking in the Park or Wandering in the Jungle?. Sounds appropriate for an agricultural university!
Multimedia – Video is where it’s at. If I get time I will take a look at OSCEs at the Oscars: how video assessment has stolen the show and I like the look of the workshop Capturing Imaginations: Why it’s important to consider alternative uses of (lecture) capture technologies.
Distance learning and course design – For the Catalyst project, we need to design four blended learning programmes from scratch so any ideas are useful. I might try OSCAR: A Structured Approach to Course Design. We also know that we will be using ePortfolios for a considerable chunk of the assessments and the talk on Eportfolios in placements: unlocking the potential through collaboration could prove useful.
I’ll also be catching the keynotes from the fantastic all-female line up: Dr Tressie McMillan Cottom, Dr Maren Deepwell and Amber Thomas.

I will be presenting a poster during the poster and talk session entitled From little acorns…growing a learning technology culture. If you’d like to discuss what it’s like being part of a one-person team then please find me. As I explain in the brief, the poster is “of interest to anyone who wants to hear about how ‘more with less’ is possible if you make the most of collaborations and outside help. There will be lots of useful tips and far too many agriculture analogies!” I’ll post up my poster as soon as it’s finished.
Of course, as we all know the networking opportunities are what really make a conference. The Awards Evening and Dinner at the Midland Hotel will be great and I’m looking forward to hearing who has been voted ALT Learning Technologist of the Year.
I’ll also be catching up with my fellow UCISA bursary winner Karl Luke (Business Change Officer from Cardiff University). Karl and I bumped into each other at the recent Panopto user group meet up in Birmingham. We’ll clink glasses on behalf of UCISA!
Interested in applying for a UCISA bursary? Then visit UCISA Bursary Scheme.

Learning about the importance of customer feedback at SITS18

Mia Campbell
IT Support Services
Leeds Beckett University

The Service Desk and IT Support Show, June 2018

The seminars at SITS2018, which I was able to attend courtesy of a UCISA bursary, consisted of hour-long talks. Here and in my next blog, I have condensed information from the talks that I believe may be helpful to colleagues.

Key points included learning that:
  • A vision for a project should be: direct, clear, brief, achievable, believable
  • The mission for a project should include: what, how, from whom, why
  • In order to understand requirements, it is important to look at: processes, strategy, functionality, output, future
  • Future requirements for IT services are likely to include: shift-left testing, self-service/help/healing, AI/chatbots, business relationship management, predictive analytics
  • Effective research should include: engaging with experts, engaging with the community, demos, SDI intelligence, seminars, software showcases
  • The following inputs provide opportunities to improve: customer satisfaction surveys, complaints/compliments and suggestions, management reports, major incident and quality reviews, cross-functional meetings, corridor conversations, social media.
These foundations should help create and sustain success if applied correctly, and should remain a focus even after the initial launch date. For instance, regular performance reviews, if maintained, will help improve services. Another factor that is sometimes overlooked is the value of small, quick additions or changes: these play a big part in improving and promoting the tool.
Other areas that are important to consider include the fact that customers do not necessarily want a silent switch-out and may like to be informed of improvements being made to the system they use. It is important to advertise the product or tool being put in place, inform users why there is an improvement, and underline that it should not be problematic for users to get the service they require. Customer experience is a huge factor in whether something fails, and it should be constantly monitored.
One model that stood out was a cycle of processes from Matt Greening’s presentation, ‘The Naked Service Desk’; it is a good way to further understand satisfaction levels. Correspondingly, another speaker that day underlined that ‘user experience drives improvement’, so keeping, observing and collating this useful data can help lead to improvements.
Interested in applying for a UCISA bursary? Then visit UCISA Bursary Scheme.


Coping with research data access and security challenges

Universities and colleges harbour a great deal of sensitive data which should be protected. But they are also encouraged to be open and make maximum use of the data they hold through personalisation and open access to research data. Here, UCISA’s Executive Director Peter Tinson looks at the issues for institutions in balancing the need to be open and yet secure.


Balancing agility, openness and security

The challenges of providing effective services for the research community while supporting open access are many and varied. Researchers need access to both short-term storage and computational resources but the requirements of research funders are moving toward long-term preservation and archiving.
There is resistance to openness – researchers see the data as ‘theirs’ and there is a reluctance to place data in institutional repositories until all the research opportunities have been realised and the results published. Open access to research data requires that data be tagged with appropriate metadata in order to be discoverable. However, few researchers possess the skills to tag their data and there are few incentives for them to do so.
The demand is for easy-to-access services provided free of charge at the point of use. While a number of institutions are starting to provide high volumes of storage for their researchers, there are few, if any, effective costing models for long-term storage and preservation. The absence of a cost-effective model provides the opportunity for a shared service; it is hoped that Jisc’s embryonic Research Data Shared Service will provide an effective solution for the sector.
Where there are no centrally provided services, or where researchers find those services too difficult or too costly to use, researchers seek alternative solutions. These include free or low-cost cloud services to store and share data, cloud services for computational resource, and the use of ‘personal’ devices such as removable hard disks or memory sticks. Information security rarely features in decisions to use easily accessible cloud services – this is due in part to the ease with which such services can be purchased, but is also indicative of a lack of awareness amongst researchers. This challenge has now been recognised by many institutional IT services, which are providing supported access to cloud storage and computation.
Data management is relatively immature within institutions. There is growing recognition that the data and information an institution holds are assets, and that poor management of those assets represents an institutional risk. However, a one-size-fits-all approach is not appropriate – information and data need to be classified to determine the level of security that should be applied to them. The HESA Data Futures project, and HEDIIP before it, has surfaced the lack of maturity in this area. Although there has been some improvement, we are still some way from data management being an established discipline.
Effective support of research and research data management requires a cross-institutional approach yet this is not readily understood by senior university management. This is all the more frustrating given that a briefing paper jointly produced by UCISA, SCONUL, RLUK, RUGIT, ARMA and Jisc highlighted the need for an institutional approach over three years ago.
A lack of understanding is sometimes reflected in diktats being issued and a resultant poor take-up of services. Meeting the demands of both researchers and research funders requires resourcing, both in terms of staffing and services, and an understanding of how cloud services can be used effectively to meet storage and computational demands securely. The planning process needs to be responsive to long-term trends, but also to changes in policy, legislation and technological developments that may require a quicker response.
The threat of cyber attack is a major concern; there is growing evidence that state-sponsored attacks primarily aimed at accessing research outputs and institutions’ intellectual property are on the rise. Yet the threat often comes from within as a result of a lack of awareness and poorly maintained systems within the institutional perimeter.
It is important that all staff in the institution realise and accept that information security is their responsibility. The institution’s management needs to recognise that information security is an institutional issue and requires a coordinated and risk-based approach. Where there are policies established to mandate information security awareness training for all staff, it may be necessary for senior institutional management to oversee the enforcement of that mandate, although such enforcement may be detrimental to building understanding and acceptance of individual responsibility.
In conclusion, managing the conundrum of being open in a secure environment requires effective governance, and a central coordinated approach that supports both research and information security. There is likely to be no one solution applicable to every research discipline but shared services such as Jisc’s RDSS should have a strong role to play.

Strategic questions to consider:

  • How mature is your institution’s information management capability? Does your institution have a business classification scheme? Are records management processes embedded in normal operations?

  • How influential is your internal audit function in determining or supporting information security policy and implementation?

  • What mechanisms do you have to learn from information security incidents, whether internal to your organisation or external?

  • Do you have an institutional approach to research data management?

 

UCISA welcomes blog contributions and comment responses to blog posts from all members. If you would like to contribute a new perspective or opinion on a current topic of interest, simply contact UCISA’s marketing manager Manjit Ghattaura via manjit.ghattaura@it.ox.ac.uk

 

The views expressed on UCISA blogs are the authors’ and do not necessarily reflect those of UCISA

Breaking the ice and digital literacies at DigiPedLab 2017


Beccy Dresden
Senior TEL Designer
The Open University


DigiPedLab Vancouver 2017 – Day 1

Beccy Dresden was funded to attend this event as a 2017 UCISA bursary winner

Breaking the ice

(One minor quibble though: not enough coffee on Day 1!)

At any really cool educational event these days, there has to be Lego, right? Well, DigPedLab was no exception. As an icebreaker, each table was given a box of bricks and bits, and we were instructed to introduce ourselves to our neighbour, who then had to create an avatar for us based on what we said and the available Lego. The lovely Greg Chan gave me abundant shiny hair and a dog: what more could I ask for? NB My less-than-beaming smile below is due to horrific jetlag and a dislike of being photographed, not dissatisfaction with my avatar!


I can’t resist sharing this one with you too…

A speech and a song

To formally kick off the institute we were treated to an amazing, inspiring speech and a traditional song from a Kwantlen First Nation elder (the institute was sponsored by and held at Kwantlen Polytechnic University’s Richmond campus, just outside Vancouver).

DigPedLab co-founder Sean Michael Morris then made us laugh by commenting that this event wouldn’t have happened without Trump – the Virginia Institute, which took place a week or so after Vancouver’s, was meant to ‘bring everyone together in one place’ after three separate DigPedLabs in 2016, but the President’s travel ban made it impossible for some key participants to get to the USA in 2017.

Morning session – Literacies track

Bonnie Stewart kicked off the digital literacies track with a bit of activity: getting us to vote with our feet (Runaround style!) on a digital literacies ‘survey’ and emphasising (with reference to Lisa Simpson) that there were no ‘right’ answers.


These were my favourite questions/answers…

I need to find resources to teach/write with. I do the following:
0=nothing. Last year’s notes are fine.
1=check the library
2=Google stuff
3=crowdsource my digital network

I know what the following mean/do:
command f
404
PLN
swipe right
LMGTFY

When I Google myself I find:
0=Google myself?
1=An ax-murderer with my name
2=Vaguely embarrassing pictures my buddy tagged on FB
3=Traces of my work on the first search return page
4=A fair & cultivated representation of who I am and what I do.

As you can probably imagine, this activity caused lots of laughter and a few revelations.

We then sat down and went round the room briefly introducing ourselves and explaining our experience of, and interest in, digital literacies. The Literacies track had proved extremely popular, so rather than being a small group, there were actually nearly 30 participants for Bonnie to wrangle. Apart from me there were two other Brits – David White from The University of the Arts London, and Penny Andrews, a PhD student at the University of Sheffield (and a brilliant follow on Twitter) – plus a professor from Puerto Rico and an educator based in the Austrian Alps, with the rest from North America: a mix of librarians, academics, educational project managers, IT folk, and even a practising attorney. This diversity was one of the many things that made DigPedLab so attractive to me: I wanted my western European, middle-class, middle-aged, cis white female perspective to be thoroughly challenged. Over the course of the weekend, it certainly was.

Digital literacies defined?

Having let off some steam and started to get to know one another, the teaching began in earnest. As I write this, I’m looking at Bonnie’s PowerPoint, and wondering what I can possibly say that’s more useful/informative than just sharing her slides verbatim, but I’ll try to limit myself to just a handful, and share my observations/responses to them.

(Slide courtesy of Bonnie Stewart)

The cluster at the top left of the slide represents the institutional model, whereas the cluster towards the bottom right is the present. The shift towards education as a market is not necessarily progress, and these shifts are only loosely linked. Dealing with data/information/knowledge abundance is arguably the biggest challenge for digital literacies to overcome.


Key points to remember in the context of digital literacies:

  • (access to) content does not equal literacy
  • web does not equal digital
  • tech does not equal digital literacy.

The concept of ‘literacy’ is changing, because there’s so much more than literature now, and the goal of education is handling data, rather than just accumulating it.

Bonnie then summarised what she planned for us to explore over the next three days.


(Slide courtesy of Bonnie Stewart)

She gave us a timeline of literacy: from considering it as a threat to the knowledge of classical scholars in 400 BCE, to the control of knowledge via the spread of printing presses throughout Europe in 1500 CE, to the management and synthesis of knowledge we’re dealing with in the present day. A quote from educational researcher Doug Belshaw neatly encapsulated this:


“Digital literacies are not solely about technical proficiency but about the issues, norms, and habits of mind surrounding technologies used for a particular purpose.”

Or, as I noted it down at the time, thinking about technologies vs being a techie!

Bonnie highlighted more benefits of developing your digital literacy:

  • improving your capacity to analyse a medium’s affordances
  • identifying ‘thinking tools’ to help you manage knowledge abundance – I think this is a particular challenge for those of us working at the interface of education and technology, where abundance can all too easily become overload.

This led us on to thinking about networks…

The power of networks

(Slide courtesy of Bonnie Stewart)


…and another fun stand-up activity about one-to-one, one-to-many, and many-to-many interactions, and how we become network nodes, forming webs of visible (and invisible) connections.


(Slide courtesy of Bonnie Stewart)


Finally, we discussed the ‘price of admission’ to these networks: public identity. Bonnie’s references here ranged from Jon Ronson’s So You’ve Been Publicly Shamed to Walter Ong’s work on oral traditions vs literate traditions:

  • oral traditions – participatory, situational, social, formulaic, agonistic (conflict based), rhetorical (vs the ‘artificial memory aid’ of writing)
  • literate traditions – interiorised, abstracted, innovative, precise, analytical, indexed.

If I understood correctly, how this relates to social media is that we experience the instant message, the tweet, in an oral way – although they are textual verbal exchanges, they register psychologically as having the temporal immediacy of oral exchange (Ong, 1996). But the flipside of this is that because these ‘speech-based activities’ on social media can be captured as if they were print literature, we end up with a call-out culture that treats flippant remarks like gospel.


The takeaway from this session for me? Digital literacy is about knowing how to manage audience, visibility and publics.

Interested in finding out more about a UCISA bursary? Then visit UCISA Bursary Scheme.

Climbing the DIKW Pyramid: Applying Data, Information, Knowledge, Wisdom principles at the University of Leeds

Tim Banks
Faculty IT Manager
University of Leeds

One of the many areas of knowledge that the EDUCAUSE conference helped me to develop was the importance of metrics and monitoring. All good metrics are based upon accurate data, but data isn’t useful on its own. Here is one concrete example of how my attendance at EDUCAUSE 2015 has helped to shape my professional development and bring benefits to my institution.

The Information Technology Infrastructure Library (ITIL) framework makes reference to the DIKW pyramid (Data, Information, Knowledge, Wisdom): wisdom is based on sound knowledge, which in turn comes from useful information, which is based on accurate data.


Let’s take a typical automated monitoring system as an example. Each level of the DIKW pyramid might look as follows:

Data
09/01 18:29:45: Message from InterMapper 5.8.1

Event: Critical
Name: website-host.leeds.ac.uk Nagios Plugin
Document: Unix: Webhosting
Address: 129.11.1.1
Probe Type: Nagios Plugin
Condition: CRITICAL – Socket timeout after 10 seconds

Time since last reported down: 39 days, 3 hours, 12 minutes, 47 seconds
Device’s up time: N/A

Information
This alert relates to one of our website servers.
This is not normal behaviour.

Knowledge
There is a planned network upgrade in one of our datacentres between 18:00 – 19:00 which is expected to cause network outages.
The server is part of a clustered pair with only one node affected, so service to end users will not be interrupted.

Wisdom
No action is required.

Most systems will generate endless data records. With some careful filtering of the data, it is possible to automatically generate ‘Information’. However, in most cases, ‘Knowledge’ (and in all cases ‘Wisdom’) will need some level of human intervention.

My team have recently started using the University of Leeds IT Service Management system (ServiceNow) and, as part of this move, we have updated all of our automated monitoring systems so they now report into one shared email account. Previously, they were going to various individual and shared email accounts, so we didn’t have a single view of everything. This single shared email account is our data store in the DIKW model. We have then applied a number of rules to separate alerts from general notifications: we have defined alerts as notifications requiring human intervention. This takes us to the information level. These alerts are automatically entered into our Service Management system as incidents, where they are reviewed by a human and acted on as appropriate.
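To make the data-to-information step concrete, here is a minimal sketch of the kind of rule that promotes a notification to an alert. The field names and rule set are hypothetical illustrations, not our production configuration:

# A hypothetical rule: promote a notification to an alert (i.e. something
# requiring human intervention) based on its severity and probe type.
ALERT_CONDITIONS = ("CRITICAL", "DOWN")   # severities we treat as actionable
IGNORED_PROBES = {"Heartbeat test"}       # known-noisy sources to filter out

def is_alert(notification):
    """Return True if a notification needs human intervention (an 'alert')."""
    if notification.get("probe") in IGNORED_PROBES:
        return False
    condition = notification.get("condition", "")
    return any(level in condition for level in ALERT_CONDITIONS)

# The InterMapper message above would be promoted to an incident:
notification = {
    "host": "website-host.leeds.ac.uk",
    "probe": "Nagios Plugin",
    "condition": "CRITICAL - Socket timeout after 10 seconds",
}
if is_alert(notification):
    print("Raise incident for " + notification["host"])  # hand off to ServiceNow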

The ultimate goal is to use the configuration management database (CMDB) and change management records to automate some of the ‘Knowledge’ layer: for example, approved change X will affect the network between 07:00 and 07:30 on 5th May in Data Centre 1, in which server Y is located, so ignore any warnings from this server between those times on that date.
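A minimal sketch of that suppression logic, assuming the CMDB can tell us which hosts an approved change affects and when (the record structure, host names and dates here are hypothetical):

from datetime import datetime

# Hypothetical stand-in for approved change records drawn from the CMDB.
approved_changes = [
    {
        "id": "CHG0001",
        "affected_hosts": {"server-y.leeds.ac.uk"},
        "start": datetime(2016, 5, 5, 7, 0),
        "end": datetime(2016, 5, 5, 7, 30),
    },
]

def suppressed_by_change(host, alert_time):
    """True if an alert falls inside an approved change window for that host."""
    return any(
        host in change["affected_hosts"]
        and change["start"] <= alert_time <= change["end"]
        for change in approved_changes
    )

# A warning from server Y at 07:10 on 5th May would be ignored:
print(suppressed_by_change("server-y.leeds.ac.uk", datetime(2016, 5, 5, 7, 10)))  # True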

Accurate monitoring is the basis of building meaningful metrics. You cannot generate a useful metric on the ‘number of unplanned service outages in the last six months’ based on data alone. By ensuring that we have a model which allows us to record useful knowledge based on the raw data, we will be able to build some accurate and meaningful metrics.

The sessions I attended on data monitoring and metrics, in particular the one led by the Consortium for the Establishment of Information Technology Performance Standards (CEITPS), really helped to define this approach and stopped me from falling into the trap of generating endless metrics (of little value) based on data alone. Hearing from other institutions that are further ahead on this journey than us, and having the benefit of their advice on what approach to take and what pitfalls to avoid, has been invaluable. I am also part of a small group at the University who are responsible for defining the institution-wide IT configuration management standards for recording and managing IT assets. Again, I will be bringing information and knowledge from EDUCAUSE sessions to these discussions.

Performance management and assessing capacity


Giuseppe Sollazzo
Senior Systems Analyst
St George’s, University of London


Velocity day three – the final one – has been another mind-boggling combination of technical talks and masterful storytelling about performance improvement in a disparate set of systems. The general lesson of the day is: know your user, know your organization, know your workflows – only then will you be able to adequately plan your performance management and assess your capacity.

This was the message from the opening keynote by Eleanor Saitta. She spoke about how to design for ‘security outcomes’ or, in other words, ‘security for humans’: no threat management system works if it is isolated from an understanding of the human system where the threats emerge. We have some great examples of this in academia, and at St George’s one of the major challenges we face is securing systems and data in a context of academic sharing of knowledge. Being a medical school, the human aspect of security – and how it can affect performance – is something we have to face on a daily basis.

One of the best presentations, however, was by David Booker of IBM, who gave a live demo of the Watson system, an artificial intelligence framework able to understand informal (up to a point) questions and answer them in speech. As with every live demo, this encountered some issues. Curiously, Watson wasn’t able to understand David’s pronunciation of the simple word “yes”. “She doesn’t get when I say ‘yes’ because I’m from Brooklyn,” David said, triggering laughter in the audience.

Continuous delivery
Courtney Nash of O’Reilly spoke at length about how we should be thinking when we build IT services, with a focus on the popular strategy of continuous delivery. Continuous delivery is the idea that a system should transition from development to production very often, and it is gaining traction in both industry and academia. However, this requires trust: trusting your tools, your infrastructure, your code and, most importantly, the people who power the whole organization. Once again, then, we see the emergence of a human factor when planning for the delivery of IT services.

The importance of 2G
In another keynote with a lot of applicable ideas for academic websites, Bruce Lawson of Opera ASA focused on the ‘next billion’ users from developing countries who are starting to use internet services. Access to digital is spreading, especially in developing areas of Asia, where four billion people live. India had 190 million internet users in 2014, and this is poised to grow to 400 million by 2018.

The best piece of information in this talk was the realisation that the top 10 most visited websites in the US, India and Nigeria are the same: Facebook, Gmail, Twitter, and so on. Conversely, the top 10 devices give a very different picture: iPhones dominate in the US, cheap Androids in India, and Nokia or other regional feature phones in Nigeria. This teaches us an important lesson: regardless of hardware, people worldwide want to consume the same goods and services. It should also tell us to build our services in a 2G-compatible way if we want to reach the next billion users (91.7% of people in the world live within reach of a 2G network). This is of great importance to academia in terms of international student recruitment.
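One practical way to act on this is to audit a page’s total payload against a rough 2G budget: on a ~50 kbit/s 2G link, every 100 KB adds roughly 16 seconds of transfer time. The sketch below is illustrative only – the budget figure and URL are arbitrary assumptions, not a standard:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BUDGET_KB = 200  # arbitrary example budget for a 2G-friendly page

def page_weight_kb(url):
    """Approximate total bytes for a page's HTML plus its images, scripts and CSS."""
    page = requests.get(url, timeout=30)
    total = len(page.content)
    soup = BeautifulSoup(page.text, "html.parser")
    assets = [tag.get("src") or tag.get("href")
              for tag in soup.find_all(["img", "script", "link"])]
    for asset in filter(None, assets):
        try:
            total += len(requests.get(urljoin(url, asset), timeout=30).content)
        except requests.RequestException:
            pass  # skip assets that fail to load
    return total / 1024

weight = page_weight_kb("https://www.example.ac.uk/")
status = "within" if weight <= BUDGET_KB else "over"
print("%.0f KB (%s the %d KB budget)" % (weight, status, BUDGET_KB))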

Performance optimisation
The afternoon sessions were an intense whistle-stop tour of experiences of performance optimisation. Alex Schoof of Fugue, for example, gave an intensely technical session about secret management in large-scale systems, something that definitely applies to our context: how do we distribute keys and passwords in a secure way that allows those secrets to be changed whenever required? With security issues going mainstream, like the infamous Heartbleed bug, this is something of increasing importance. Adam Onishi of London-based dxw, a darling of public sector website development, gave an interesting talk on how performance, accessibility and technological progress in web design are interlinked, something academic website managers have too often failed to consider with websites that are published and then forgotten for years.

As someone who has developed mobile applications, I really enjoyed AT&T’s Doug Sillars’ session about ‘bad implementation of good ideas’, showing that lack of attention to the system as a whole has often killed otherwise excellent apps, which are too focused on local aspects of design.

Velocity has been a great event. I was worried it would be too ‘corporate’ or sponsor-oriented, but it has been incredibly rich, with good practical ideas that I could apply to my work immediately. It has also offered some good reflection on ‘running your systems in house’: we often perceive a dualism between the cloud and in-house services, yet much of this technology can be run in-house with no need to outsource. As IT professionals we should appreciate it, and make the case for adopting technologies that improve performance and compliance in a financially sound way. This often requires abandoning outsourcing and investing in internal resources: a good capital investment that will allow continuous improvement of the infrastructure.


Developing metrics and measures for IT

Tim Banks
Faculty IT Manager
University of Leeds

This morning I attended a session run by Martin Klubeck from the Consortium for the Establishment of Information Technology Performance Standards (CEITPS).

This group is working to establish a common set of measures and metrics across education IT. CEITPS volunteers have spent some time over the EDUCAUSE 2015 conference writing the first 21 metrics, in between attending sessions.

CEITPS have a refreshingly common-sense approach to developing standards:

  • Get some interested and enthusiastic people in a room
  • Write some standards, plagiarising as much as possible from other sources
  • Review within the group and amend as necessary
  • Don’t worry if you don’t get everything perfect first time
  • Send out to the wider CEITPS group for comment, but give them a limited time to respond (e.g. seven days). If you give them six weeks, they will take that long.

What is the difference between a measure and a metric?

This was a question asked by a member of the audience. Martin answered in the form of a tree analogy:

  1. The leaves are like data – there are a lot of them and a lot can be thrown away. Data are typically just raw numbers.
    1. NB: Never give data to a manager! Business Intelligence (BI) tools are particularly bad because not only do they give data to managers but they also make it look pretty…
  2. The twigs can be thought of as measures (e.g. ‘50%’ or ’20 out of 30′) – they have some context.
  3. The branches are like information, which has more context around it.
  4. The trunk of the tree is your metrics, which have sufficient contextual and trend-over-time information to make them suitable for presentation to senior managers.
  5. It is vital to find out the root (i.e. underlying) question that the person asking wants answered before you provide any metrics.

Martin gave us an example of one of the metrics that they have developed this week:

Description: Rework [re-opening] service desk incidents.
Definition: Each and every time any incident requires more effort after it was incorrectly or not fully resolved but was considered to be resolved.
Presentation: Usually presented as a percentage of total incidents re-worked [re-opened] in a given timeframe.
Note: This needs to cover the use case where a member of IT staff opens a new incident rather than reopening the old one.
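As a minimal sketch of how this metric might be computed (the incident records below are hypothetical, and a real implementation would also catch the new-incident-instead-of-reopen case noted above):

def rework_percentage(incidents):
    """Percentage of incidents re-worked (re-opened) in a given timeframe."""
    if not incidents:
        return 0.0
    reworked = sum(1 for incident in incidents if incident["reopened"])
    return 100.0 * reworked / len(incidents)

incidents = [
    {"id": "INC001", "reopened": False},
    {"id": "INC002", "reopened": True},   # marked resolved, then re-opened
    {"id": "INC003", "reopened": False},
    {"id": "INC004", "reopened": False},
]
print("Rework: %.1f%%" % rework_percentage(incidents))  # Rework: 25.0%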

Other examples of metrics which the group have developed this week are as follows:

  • Defects found during development
  • Defects found during testing
  • Top 10 categories for incidents over given time period
  • Mean time to resolve (MTTR)
  • MTTR minus customer wait time
  • Adoption Rate
  • Call Abandon rate
  • On-time delivery

In total they have developed 21 of a planned 42 IT service management metrics; 37 of these came from the ITIL framework and a further five were added by the group.

The EDUCAUSE Core Data Service (CDS) was mentioned several times by both Martin and those attending the session. The CDS carries out surveys of standard benchmark data across all US institutions, and there has been much discussion about making sure that the CEITPS metrics could be combined with the CDS information to provide an even richer information source.

The CEITPS has several member institutions from outside the USA, and they are keen to get some more involvement from UK Universities, especially those who are currently implementing the ITIL framework and/or developing service metrics and measures.

Additional resource:

The University of North Carolina Greensboro metrics page

Shaping the information landscape

One of UCISA’s roles is to ensure that suppliers to our sector are kept abreast of developments that may impact the software and services they deliver. The aim is to alert suppliers to potential changes in legislation or other statutory requirements so that they can plan future developments effectively. A recent example of this activity was the briefing day that UCISA and HEDIIP arranged at the end of January to bring suppliers of student records systems up to date with the work being carried out under the HEDIIP programme.

The meeting heard updates on four of the HEDIIP projects: data capability, the new subject coding system, the Unique Learner Number and the new Information Landscape. In addition we heard from HESA about the CACHED project. The aim of the HEDIIP programme is to redesign the information landscape to enhance the arrangements for the collection, sharing and dissemination of data and information about the HE system. Each of these projects will contribute to that overall goal – I won’t go into detail on them here, but if you are interested in learning more, each is outlined on the HEDIIP website.

There were a number of common themes that emerged from the day. The first was the adoption of standards. One of the challenges the sector faces currently is that the same term can mean different things to different organisations (the term course being a prime example) so standard data definitions are essential to a common understanding and data sharing. This has been a particular problem with the JACS subject coding scheme where changes and growth in JACS’ range of functions mean it is no longer consistently applied.

The second theme was managing cultural change both within higher education institutions and a number of the organisations requesting data from the sector. In some institutions, many processes are geared around producing the HESA return and the need to get it “right”. The focus on a single return suggests that these institutions may be unaware of the volume of demands made on their data and the amount of resource across the institution spent in ensuring the various returns made are correct. It is highly unlikely that there will be one version of the truth in these institutions – indeed it was noted that one institution had over 200 separate collections of student records. It goes without saying that the data management in such institutions is poor – it will take a significant change to move away from data being an input to deliver a return to a point where it is seen as an institutional asset.

Finally, the biggest challenge is governance. At an institutional level, mature data management will only be achieved with effective information governance being driven from the top table. Getting the value of data understood at senior management level is key to improving the data and information management within an institution. There are wider governance issues that the HEDIIP programme will need to address. Moving to a set of standard data definitions is one challenge – putting governance mechanisms in place to keep those standards consistently applied and understood is a league apart. Similarly with the new subject coding scheme: establishing a governance model that is supported by an appropriate selection of stakeholders, with sufficient authority and resources to manage its evolution, will be critical to the success of the new scheme.

The feedback from those suppliers present was positive. They could recognise the efficiencies in moving to a model where, for the most part, data is submitted to a single point at various points in the year and drawn down from a single repository. The HEDIIP programme is only part of achieving this goal – the institutions need to improve their data management and change their processes, those requesting data may also have to change their processes and suppliers will need to amend their systems to implement new standards and enable data to be extracted at key points in the academic year or cycle. It will be a long journey but one that offers much reward.