Performance management and assessing capacity

Giuseppe Sollazzo
Senior Systems Analyst
St George’s, University of London
Velocity day three – the final one – has been another mind-boggling combination of technical talks and masterful storytelling about performance improvement across a disparate set of systems. The general lesson of the day is: know your users, know your organisation, know your workflows – only then will you be able to plan your performance management adequately and assess your capacity.

This was the message of the opening keynote by Eleanor Saitta. She spoke about how to design for ‘security outcomes’ – in other words, ‘security for humans’: no threat management system works in isolation from an understanding of the human system in which the threats emerge. We have some great examples of this in academia, and at St George’s one of the major challenges we face is securing systems and data in a context of academic knowledge sharing. As a medical school, we have to deal with the human aspect of security – and how it can affect performance – on a daily basis.

One of the best presentations, however, was by David Booker of IBM, who gave a live demo of the Watson system, an artificial intelligence framework able to understand (reasonably) informal spoken questions and answer them aloud. As with every live demo, it hit some snags. Curiously, Watson wasn’t able to understand David’s pronunciation of the simple word “yes”. “She doesn’t get when I say ‘yes’ because I’m from Brooklyn,” David said, triggering laughter in the audience.

Continuous delivery
Courtney Nash of O’Reilly spoke at length about how we should think when we build IT services, with a focus on the popular strategy of continuous delivery. Continuous delivery is the idea that a system should move from development to production very frequently, and it is gaining traction in both industry and academia. However, this requires trust: trusting your tools, your infrastructure, your code and, most importantly, the people who power the whole organisation. Once again, then, we see a human factor emerging when planning the delivery of IT services.
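
To make the trust point concrete, here is a minimal sketch of my own (not something shown at Velocity, and with a hypothetical staging URL) of the kind of automated gate that makes frequent releases less nerve-racking: a build is only promoted to production if a basic smoke test against the staging environment passes.

```python
# Hypothetical continuous-delivery gate: promote a build to production
# only if the staging deployment answers a basic health check.
import sys
import urllib.request

STAGING_URL = "https://staging.example.ac.uk/healthcheck"  # placeholder endpoint


def smoke_test(url: str, timeout: int = 10) -> bool:
    """Return True if the staging service responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except OSError:
        # Covers connection failures, timeouts and HTTP errors alike.
        return False


if __name__ == "__main__":
    if smoke_test(STAGING_URL):
        print("Smoke test passed: safe to promote this build.")
        sys.exit(0)
    print("Smoke test failed: blocking the release.")
    sys.exit(1)
```

In a real pipeline this check would sit alongside automated tests and monitoring, but the principle is the same: the release decision is made by a repeatable check you trust, not by a person crossing their fingers.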

The importance of 2G
In another keynote with plenty of applicable ideas for academic websites, Bruce Lawson of Opera focused on the ‘next billion’ users from developing countries who are starting to use internet services. Digital access is spreading, especially in developing areas of Asia, home to four billion people. India had 190 million internet users in 2014, a figure poised to grow to 400 million by 2018.

The best piece of information in this talk was the realisation that if you take the US, India and Nigeria, the top ten most-visited websites are the same: Facebook, Gmail, Twitter and so on. Conversely, the top ten devices paint a very different picture: iPhones dominate in the US, cheap Android handsets in India, and Nokia or other regional feature phones in Nigeria. This teaches us an important lesson: regardless of hardware, people worldwide want to consume the same goods and services. It should also tell us to build our services in a 2G-compatible way if we want to reach the next billion users – 91.7% of the world’s population lives within reach of a 2G network. This is of great importance to academia in terms of international student recruitment.
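
As a rough illustration of why this matters (my own numbers, not Bruce’s), the sketch below estimates how long a page of a given weight takes to arrive over a typical 2G connection; the assumed effective throughput of about 30 KB/s is only a ballpark figure.

```python
# Back-of-the-envelope: how long does a page take to download over 2G?
# The ~30 KB/s effective throughput is an assumption for illustration only.

EFFECTIVE_2G_THROUGHPUT_KB_PER_S = 30  # assumed kilobytes per second


def download_time_seconds(page_weight_kb: float,
                          throughput_kb_per_s: float = EFFECTIVE_2G_THROUGHPUT_KB_PER_S) -> float:
    """Seconds needed to transfer a page of the given weight, ignoring latency."""
    return page_weight_kb / throughput_kb_per_s


if __name__ == "__main__":
    for weight_kb in (100, 500, 2000):  # a lean page, a typical page, a heavy one
        print(f"{weight_kb:>5} KB -> ~{download_time_seconds(weight_kb):.0f} s on 2G")
```

Even with generous assumptions, a two-megabyte prospectus page takes over a minute on 2G, which is exactly the kind of figure that should inform how we build sites aimed at international applicants.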

Performance optimisation
The afternoon sessions were an intense whistle-stop tour of experiences in performance optimisation. Alex Schoof of Fugue, for example, gave an intensely technical session on secret management in large-scale systems, something that definitely applies to our context: how do we distribute keys and passwords in a secure way that allows those secrets to be changed whenever required? With security issues such as the infamous Heartbleed bug going mainstream, this is of increasing importance. Adam Onishi of London-based dxw, a darling of public-sector website development, gave an interesting talk on how performance, accessibility and technological progress in web design are interlinked – something academic website managers have too often failed to consider, publishing websites that are then forgotten for years.
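
To go back to Alex’s point about secrets: his talk was about far larger systems than ours, but the underlying principle scales down. Code should hold a reference to a secret rather than the secret itself, so the value can be rotated without redeploying anything. Here is a minimal sketch of that idea, using environment variables as a stand-in for a proper secret store; the secret name is hypothetical.

```python
# Secrets by reference, not by value: the application asks a store for a named
# secret at runtime, so the value can be rotated without changing the code.
# Environment variables stand in for a real secret store in this sketch.
import os


class SecretNotFound(Exception):
    """Raised when a named secret cannot be found in the store."""


def get_secret(name: str) -> str:
    """Look up a secret by name; in production this would query a vault service."""
    value = os.environ.get(name)
    if value is None:
        raise SecretNotFound(f"secret {name!r} is not available")
    return value


if __name__ == "__main__":
    # 'DB_PASSWORD' is a hypothetical secret name used only for this example.
    try:
        get_secret("DB_PASSWORD")
        print("Secret retrieved (deliberately not printed).")
    except SecretNotFound as err:
        print(f"Lookup failed: {err}")
```

A dedicated secret store adds access control, audit logging and automatic rotation on top of this indirection, but even the simple version above avoids baking passwords into code or configuration that rarely gets changed.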

As someone who has developed mobile applications, I really enjoyed AT&T’s Doug Sillars’ session on ‘bad implementations of good ideas’, showing how a lack of attention to the system as a whole has often killed otherwise excellent apps that were too focused on local aspects of design.

Velocity has been a great event. I was worried it would be too ‘corporate’ or sponsor-oriented, but it has been incredibly rich, with practical ideas I could apply to my work immediately. It has also offered some good reflection on running your systems in house: we often perceive a dualism between the cloud and in-house services, but much of this technology can be run in-house with no need to outsource. As IT professionals we should appreciate that, and make the case for adopting technologies that improve performance and compliance in a financially sound way. This often means moving away from outsourcing and investing in internal resources: a good capital investment that allows continuous improvement of the infrastructure.
