
What is content management, and how do we support it?

James Cox
Customer Success Analyst – Web CMS
University of Oxford

Institutional Web Management Workshop (IWMW) 2018

This summer, with the aid of the UCISA bursary scheme, I attended the Institutional Web Management Workshop (IWMW) in York. This was my first conference since I started working in HE Digital 16 months ago, when I became part of an in-house software development team in the University of Oxford’s central IT services department.
My team built and develops a University-wide platform comprising two distinct elements: a ‘toolkit’ to build and host websites, and a service which responds to users’ queries and provides resources such as live demos, documentation, and how-to guides. Ultimately, our team offers a solution for anyone in the University who needs to create engaging web content quickly and to make the administration of their website as painless as possible. No small task when you’re serving a highly devolved organisation with a wide array of use cases and user needs!

IWMW17 Ruth Mason, Matthew Castle by Kevin Mears is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License
I have the reassuringly positive title of Customer Success Analyst, which situates me somewhere between the developers and business analysts – both of whom work with project partners to move the toolkit forward – and our users, who so far in the platform’s short life (the full service became operational two days after I joined the team) have created almost every kind of website a university could expect to host: from individual academic and research group sites to new web presences for academic faculties and museums.
As a customer-facing person in a technical team, I get to see both sides of the software creation and usage coin. And, as someone new to web management in HE and working on a relatively new service, I’d like to know what challenges similarly-positioned professionals are facing. As a result, IWMW seemed like a convivial space where HE Digital folk could share their experiences wrestling with similar considerations, such as supporting the creation of engaging, on-message content within their organisations, and how to make a technical solution like a CMS useful and usable to people whose day-to-day work includes only peripheral technical engagement with systems.
So, what struck me most from my first conference since working in this new sector? Which messages resonated strongest with me? And what lessons have I tried to put into my work in the four months since?

It was my first conference whilst working in HE Digital; what struck me most?

The balance between content-focused talks and ones centring on the technical parts of institutional web management differed from what I anticipated. Although the technical and management side of maintaining web services within HE was touched upon, there was a strong emphasis on content, and how to create it in a way that strengthens an institution’s brand and ultimately establishes a space for an audience to identify with it – as showcased by this promotional video for ETH Zürich, mentioned in a talk by Dave Musson. Reflecting on this during the conference, it seemed that one reason for this balance might be that the technical offerings available to universities now often mean turning to SaaS solutions, which bring with them a reduced need for in-house technical expertise – allowing for greater resource allocation to the parts of web management where demand is now greatest: content and user experience.

Which talks did I enjoy and which prompted some lightbulb moments?

Telling the Birkbeck story: How customer journey mapping helped us develop our new approach to web

  • Brand identity through customer journey mapping: I enjoyed the unpacking of customer journey mapping, how it was used to design the UX of Birkbeck’s new website, and how this approach served as a foundation for promoting the Birkbeck brand: beginning with understanding the brand you have, and importantly recognising that “your brand is no longer what you say it is, but what your users say it is”. This means you had better give users a good experience, or they’re going to tell you about it – most likely amplified through social media.

Old school corporate identity: Blackbeard’s brand promise.
Reproduced from https://commons.wikimedia.org/wiki/File:Pirate_Flag_of_Blackbeard_(Edward_Teach).svg, CC0 1.0 Universal Public Domain Dedication.
  • Mapping customer journeys and where the experience can be improved: The mapping process was presented in detail (key events and stages in the journeys; user feelings; touchpoints, friction, opportunities for improvement), which resonated with work our team is currently undertaking with our administrative division.
  • Guidelines for the design process: Birkbeck adopted five design guidelines: simplify and clear clutter; push content up within the navigation and reduce user steps; connect content and surface related content on every page; flatten navigation hierarchy; don’t be afraid of long pages. Presenting good web design and information architecture practice is central to our team’s work so it’s interesting to see another institution’s take on what principles to follow.

Understanding invisible labour: University of Greenwich

  • Think about the cost of the ‘invisible’ work: A huge amount of time is lost during task switching. A Microsoft study of one of its development teams found that task switching caused an average increase of 226% in the time to complete a task. Think about the process a user has to undertake to complete a task using the system you support. How many steps are there? How many times does the user encounter ambiguities or increases in cognitive load, where they need to make a decision which could result in an error being made? How likely is it that a support request will be raised under these circumstances? Can a change to something within the service remove this problem for the user and reduce the support load?
  • Learn the art of nudging: some users won’t jump; you need to give them a gentle push. Make tutorials (good documentation, videos, how-to docs) so users can easily engage with the system you support and they need to operate. Turn it into a user experience exercise – ‘how would I have wanted to learn about that?’
  • Manage how users interact with your system: provide the basic config options and hide the rest. There is often a lot of advanced functionality in CMSs – features the average content editor isn’t likely to need. Keeping them all on display is at best confusing for users who will never need these features, and at worst can result in the web equivalent of ‘Leeroy Jenkins’, i.e. an editor clicking the option which makes a major adverse change to the site. Our team learnt last week that this is a real risk, when a new content editor unfamiliar with the editing options deleted their organisation’s homepage. As a result, we’re going to make a change to prevent homepages from being deleted.
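A guard like the one our team now plans could take many forms depending on the CMS. As a minimal sketch (the names, data shapes, and `role` field here are hypothetical, not our platform’s actual API), the delete action simply refuses to remove any page the site depends on:

```python
# Hypothetical sketch: guard a CMS delete action so that protected pages
# (e.g. an organisation's homepage) cannot be removed by content editors.
PROTECTED_ROLES = {"homepage"}


class ProtectedPageError(Exception):
    """Raised when an editor tries to delete a page the site depends on."""


def delete_page(site, page_id):
    """Delete a page from the site unless its role marks it as protected."""
    page = site[page_id]
    if page.get("role") in PROTECTED_ROLES:
        raise ProtectedPageError(
            f"'{page['title']}' is the site {page['role']} and cannot be deleted"
        )
    del site[page_id]


site = {
    1: {"title": "Welcome", "role": "homepage"},
    2: {"title": "News", "role": "page"},
}
delete_page(site, 2)      # ordinary page: allowed
try:
    delete_page(site, 1)  # homepage: blocked
except ProtectedPageError as err:
    print(err)
```

The point is less the code than the principle: the system, not the editor’s caution, should be what prevents the catastrophic click.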
HE Digital is a small community and IWMW does an amazing job of bringing together web management professionals into a supportive community to share experiences and lessons learned. Head over to the IWMW website to see some videos of the plenary talks this year.

Interested in applying for a UCISA bursary? Then visit UCISA Bursary Scheme.

The importance of an international view of humanities digital content

Sarah Ames
Library Learning Services Support Officer
University of Edinburgh

DHC2018 part 1: some key themes

I was fortunate to receive bursary funding this year from UCISA to attend DHC2018 (Digital Humanities Congress – not to be confused with the 16th International Symposium on District Heating and Cooling, which tops the Google results). DHC is a biennial conference organised by The Digital Humanities Institute at the University of Sheffield, exploring digital humanities research, as well as its implications for the cultural heritage sector and IT support services.
In this first blog post, I’m going to list the key themes raised at the conference and in my next post, I’ll summarise some of the papers that I found particularly interesting.

Digitisation

This one isn’t new: without digitised content (and digitised content at scale), libraries’ DH offerings begin to fall short. While DH tools and skills will become a key focus in some academic libraries, ultimately, without making collections, content, or data available to interest researchers, partnerships with digital projects become problematic.

Data

One paper (Bob Shoemaker’s ‘Lessons from the Digital Panopticon’) discussed a project bringing together 50 datasets to trace the lives of individuals convicted at the Old Bailey; another drew together four different library datasets to investigate the provenance of manuscripts; many others reflected on similar experiences. As libraries look to release collections as data, considering the most appropriate and accessible formats for these will be important. The need to bring together a mix of data types, formats and models – often ‘bespoke’ formats complying with no particular standard – is a barrier to research, requiring technical skills that most researchers don’t have.
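The practical shape of that barrier is the per-source adapter: every dataset needs its own mapping into a shared schema before records can be linked. As a purely illustrative sketch (the record formats and field names below are invented, not from the projects discussed), two datasets describing the same person might be reconciled like this:

```python
# Hypothetical sketch: two library datasets describe the same people in
# different shapes; a per-source adapter maps each into one shared schema.
def from_trial_record(rec):
    """Adapter for an invented trial-proceedings format."""
    return {"name": rec["defendant"], "year": int(rec["trial_year"])}


def from_prison_register(rec):
    """Adapter for an invented 'Surname, Forename' register format."""
    surname, forename = rec["Name"].split(", ")
    return {"name": f"{forename} {surname}", "year": int(rec["Admitted"][:4])}


records = (
    [from_trial_record(r)
     for r in [{"defendant": "John Doe", "trial_year": "1788"}]]
    + [from_prison_register(r)
       for r in [{"Name": "Doe, John", "Admitted": "1789-03-02"}]]
)
print(records)
```

Multiply this by fifty bespoke formats and the scale of the technical work – and why it excludes researchers without those skills – becomes clear.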

Global DH

A number of papers raised how easy it is to slip into a Western-focused digital humanities, to the detriment of the field itself. With the web and programming languages written largely in English, the focus of research, and particularly of text analysis, has been predominantly English-language. With papers focusing on Asia and Australasia, the global view of DH offered plenty to learn from – with much for libraries to consider, particularly in the relationship between libraries and DH in other cultures and countries.

Sustainability

A repeated issue raised in talks was the sustainability of DH projects going forwards – particularly in relation to web platforms. How are these projects to be maintained post-project completion, and who is responsible for this? What kind of documentation, languages and platforms can be used to assist with, and standardise, this? Is a website an output or a transient resource? How can library and IT services support this?

Funding

Of course, a major part of sustainability is funding: funding models need to meet the cost of web resources over time, not maintain their current short-term focus. The possibilities of crowdfunding to enable ongoing access to tools were raised, but ultimately this remains too fragile a source to rely on.

Digital preservation

With these exciting new platforms and tools becoming part of research outputs, the challenge of how to preserve them becomes ever more pertinent. Unusual data formats; new, innovative research using AR; and the function, importance and relevance of the front end of a website, in comparison to the data it surfaces, are all issues and challenges that need to be considered by libraries.

Publishers

Gale launched their new DH tool, which sits on top of their platforms and enables researchers to analyse their content at scale without in-depth knowledge of computational methods. The platform looked good and is currently in its early stages, though it raises questions of ease of use: while lowering the barrier increases accessibility, an understanding of what the tools are doing under the surface remains important, particularly in relation to built-in biases. All of this emphasises just how much work libraries have on their hands. With both the content and the tools increasingly in the domain of publishers, there’s a lot of catching up to do.
This blog first appeared in the University of Edinburgh’s Library & University Collections blog.
Interested in finding out more about a UCISA bursary? Then visit UCISA Bursary Scheme.

Benefits of receiving a UCISA bursary

Giuseppe Sollazzo
Senior Systems Analyst
St George’s, University of London

Last October I was lucky enough to be selected for a UCISA bursary to attend O’Reilly Velocity in Amsterdam. Velocity is one of the most important conferences on performance in IT systems, which is my area of work at St George’s, University of London: I lead a team of systems analysts who take care of the ongoing maintenance and development of our infrastructure. I had wanted to attend the conference for quite a while, but was always prevented from doing so by the hefty funding required, something that my institution could not readily justify.

The format of Velocity is particularly well suited to a mixture of blue-sky thinking, practical learning, and networking with other professionals. Each day ran from 8:30 till 18:30. Following this schedule for three days was intense, but extremely rewarding in terms of learning.

I wrote daily blog posts for UCISA throughout the conference. You can read about the specific sessions I followed on each day at the following links: day one, day two and day three. In summary, I learned a mixture of practical techniques and heard about experiences from a variety of sectors.

As I wrote in my first blog post ahead of the conference, a focus on performance and optimisation is important for academic IT services, and specifically for my institution: with our 300 servers and 30,000 accounts to take care of, this is not just an important consideration, but our major worry on a daily basis. Access to funding is becoming increasingly competitive, as is student and researcher recruitment; it is becoming our primary goal to provide systems that are effective, secure, scalable, fast, and at the same time manageable by constrained staff numbers.

I was interested in three types of sessions:

  • practical tutorials about established techniques and tools
  • storytelling from people who have applied techniques to certain specific situations
  • sessions about new systems, to see where the industry is heading.

Velocity was a great help in crystallising my strategy for how St George’s systems should evolve. In the past four months, this has translated into taking action on a number of aspects of our infrastructure. The most important are the following:

  • leading the team to build upon our logging systems, in order to extract metrics and improve the ability to respond to incidents
  • increasing our reliance on our ticketing system, by measuring response times and starting a project to make their ongoing monitoring part of our weekly service reviews
  • launching an investigation into researchers’ needs in terms of data storage and high-performance computing; this has so far resulted in an experimental HPC cluster, which we are testing in collaboration with genomics and statistical researchers interested in massively parallel computations, where performance is vital to delivering research results in time for publishing.
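The first of those actions – extracting metrics from logs to improve incident response – can be sketched simply. This is an illustrative example only (the log format and endpoint names are invented, not our actual systems): group response times per endpoint so spikes stand out in a service review.

```python
import re
from statistics import mean

# Hypothetical log format: "<timestamp> <endpoint> <response_ms>".
LOG_LINE = re.compile(r"^\S+ (?P<endpoint>\S+) (?P<ms>\d+)$")


def response_times(lines):
    """Group response times (in ms) by endpoint, skipping unparseable lines."""
    times = {}
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            times.setdefault(m["endpoint"], []).append(int(m["ms"]))
    return times


sample = [
    "2017-11-01T09:00:01 /login 120",
    "2017-11-01T09:00:02 /login 180",
    "2017-11-01T09:00:03 /search 450",
]
by_endpoint = response_times(sample)
print({ep: mean(ts) for ep, ts in by_endpoint.items()})
```

In practice this kind of aggregation would be fed by a proper log pipeline, but even a small script like this turns raw logs into numbers a weekly service review can track.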

I’m very grateful to UCISA for the opportunity it has given me. The knowledge and experience I gathered at Velocity have been invaluable not just for starting new projects and reviewing our current service offer, but most importantly for beginning to understand what our performance strategy should be if we are still to provide excellent, industry-standard services to our community in five to ten years’ time.

Interested in applying for a UCISA bursary? Then visit UCISA Bursary Scheme 2018.

Disruptive statistics, Linux containers, extreme web performance for mobile devices

Giuseppe Sollazzo
Senior Systems Analyst
St George’s, University of London

Day one at the Velocity conference, Amsterdam

What a first day! O’Reilly Velocity, the conference I’m attending thanks to a UCISA bursary, is off to a great start with a first day oriented towards practical activities and hands-on workshops. The general theme of these workshops is how to build and maintain large-scale IT systems while enhancing their performance. Let me provide you with a quick summary of the workshops I attended.

Statistics for Engineers
A statistics workshop at 9.30am is something that most would find soul-destroying, but this was a great introduction on how to use statistics in an engineering context – in other words, how to apply statistics to reality in order to gather information with the goal of taking action.

Statistics is, at its core, very simple maths, and its more difficult yet powerful parts allow practitioners to understand situations and predict their outcomes.

This workshop illustrated how to apply statistical methods to datasets generated by user applications: support requests, server logs, website visits. Why is this important? Very simply because service levels need to be planned and agreed upon very carefully. The speaker showed some examples of this. In fact, the title of this workshop should have been “Statistics for engineers and managers”: usage statistics help allocate resources (do we need more? can we reuse some?) and, in turn, financial budgets.

The workshop illustrated how to generate descriptive statistics and also how to use several mathematical tools for forecasting the evolution of service levels. We have had some experience with data collection and evaluation at St George’s University of London, and this workshop has definitely helped refine the tools and reasoning we will be applying.
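One descriptive technique in the spirit of the workshop (the figures below are made up for illustration, not from the session) is to summarise response times with percentiles rather than averages, since a mean can hide the slow tail that users actually feel:

```python
import math


def percentile(data, p):
    """Return the p-th percentile (0-100) of data using the nearest-rank method."""
    ordered = sorted(data)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]


# Invented latency sample (ms): mostly fast, with two slow outliers.
latencies_ms = [12, 15, 14, 200, 16, 13, 18, 17, 350, 14]
print("median:", percentile(latencies_ms, 50))  # typical experience
print("p95:   ", percentile(latencies_ms, 95))  # what the unlucky tail sees
```

Here the mean (~67 ms) describes nobody’s experience, while the median and 95th percentile separate the typical case from the tail – exactly the distinction needed when agreeing service levels.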

Makefile VPS
This talk presented itself as a super-geeky session about Linux containers. Containers are a popular way to run web services without requiring a full-fledged physical or virtual server. They can be easily built, deployed, and managed. However, they are rarely properly understood.

The engineer who presented this workshop showed how, at his company SoundCloud, they build their own containers to power a “virtual lab” in order to simulate failures and train their engineers to react. His technique, based on scripts that build and launch containers at the press of the “Enter” key, is an effective solution both for quick prototyping and for production deployment whenever Docker or other commercial/free solutions are not a viable option (due to funding or complexity).
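The speaker’s actual scripts weren’t published, but the underlying mechanism – Linux namespaces driven by plain scripting rather than Docker – can be sketched. The following is my own illustrative example, not SoundCloud’s tooling: it only assembles the `unshare` command line (actually running it needs Linux and root privileges), with the rootfs path and command being hypothetical.

```python
def unshare_command(rootfs, cmd):
    """Compose an `unshare` (util-linux) invocation that runs `cmd` in fresh
    PID, mount, UTS and network namespaces, chrooted into `rootfs`.
    This only builds the argv; executing it requires Linux and root."""
    return [
        "unshare",
        "--pid", "--fork",   # new PID namespace, fork before exec
        "--mount",           # private mount namespace
        "--uts", "--net",    # own hostname and network stack
        f"--root={rootfs}",  # use the container filesystem as /
    ] + cmd


argv = unshare_command("/srv/containers/web01", ["/bin/sh", "-c", "httpd"])
print(" ".join(argv))
```

Wrapping a handful of such commands in scripts is roughly what makes the “virtual lab” approach so cheap: no orchestration layer, just the kernel primitives that container engines themselves are built on.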

Although this was quite a hardcore session, it was good to see how services can be run in a way that makes their performance very easy to manage. This is definitely something that I will be sharing with my IT colleagues.

Extreme web performance for mobile devices
A lightweight (so to say!) finale to the day, discussing how mobile websites present a diverse range of performance issues and what techniques can be used to test and improve them. However, the major contribution from this session was to share some truly extraordinary statistics about mobile traffic and browsers.

For example, on mobile 75% of traffic comes from browsers and 25% from web views (i.e. from apps) – 40% of which is from Facebook. Of course, these stats change from country to country, and this makes it hard to launch a website with a single audience in mind. For universities, this becomes incredibly important for international student recruitment.

Similarly striking, we learnt that the combined share of Safari and Chrome, the major mobile browsers, reaches 93% on WiFi networks but only 88% on 3G networks; this suggests that connection speeds still matter to people, who may opt for different, more traffic-efficient browsers in connectivity-challenged environments (for example, Opera Mini’s share rises from 1% to 4%).

One good practical piece of advice is to adopt the RAIL Approach, promoted by Google, which is a user-centric performance model that takes into consideration four aspects of performance: response, animation, idle time and loading. The combination of these aspects, each of which has its own ‘maximum allowed time’ before the user gets frustrated or abandons the activity, requires a delicate balance.

There was also a good level of discussion around the very popular “responsive web design”, a technique that has become a goal in itself. The speaker suggested that it should be just a tool, rather than a goal: users don’t care about “responsive”, they care about “fast”. ‘Never forget the users’ is a good motto for everyone working in IT.

Summary
Velocity’s first day has been very hands-on. The overall take-home lesson is simple: managing performance requires some sound science, but with adequate tools and resources it can be done effectively even on a shoestring budget. As an advocate of keeping resource control and management in-house rather than outsourcing it, I found that today’s talks provided some great insight on how to achieve this smartly.

Aside from this summary, I’ve also been taking some technical notes, which are available here and will also contain notes from future sessions.

Heading to Velocity

Giuseppe Sollazzo
Senior Systems Analyst
St George’s, University of London

This blog is the first in a series about my participation in the O’Reilly Velocity Conference in Amsterdam, funded by a UCISA bursary.

My job is to lead a team of systems analysts who take care of the ongoing maintenance and development of our infrastructure. I have a genuine passion for my job; knowing we provide services that benefit future doctors and health professionals in training gives me a positive attitude. As I believe that expanding my horizons is vital to keeping my interest and skills alive, I also have a number of other activities outside of my 9-5 work, most notably as an Open Data activist. I have been a ministerial advisor to the Cabinet Office on Openness and Transparency policy for the past two years.

Until 2012, the academic IT community had a yearly meetup at dev8d, a Jisc-sponsored three-day conference. This event gathered developers, systems administrators, devops, digital librarians and support staff in a feast of sessions about development, new services, systems maintenance, performance, and the future-proofing of everything “digital” in academic environments. The resulting networking and experience swapping had a lasting effect on the quality of academic outcomes.

However, in the subsequent difficult financial climate, events like dev8d have become rare (with dev8d itself being cancelled). In a situation of budget cuts and increased pressure from students and staff, the IT community has had to find alternative ways to get the same level of training and thinking about the future that came from such events. In this context, receiving funding from UCISA to support attendance at a conference that my institution could not otherwise afford was welcome news.

My choice of event is O’Reilly Velocity in Amsterdam at the end of October. Velocity is an important conference – it also happens in New York, Santa Clara and Beijing – and it provides forward-looking sessions about performance and optimisation in systems and web operations. The sessions are often very practical, providing attendees with clear, pragmatic, and effective ideas on how to improve services. Engineers, developers and technology leaders share the challenges their businesses are facing and provide insight on technologies, best practices, and solutions that they have successfully employed to address those challenges.

In the situation I have described, it is evident why a focus on performance and optimisation is important for academic IT services, and specifically for my institution: with our 300 servers and 30,000 accounts to take care of, this is an important consideration.

With access to funding becoming increasingly competitive, as is student and researcher recruitment, it becomes our primary goal to provide systems that are effective, secure, scalable, fast, and at the same time manageable by constrained staff numbers.

The sessions I plan to attend focus on a single goal: understanding how to improve services and ensure our users are satisfied and engaged with our systems. Some examples of sessions I intend to follow include:

I will be reporting from the conference floor both on Twitter (from my account @puntofisso) and this blog. Stay tuned!