
Benefits of a UCISA bursary – six months later

Allister Homes
Senior Systems Architect
University of Lincoln

I have attended a number of HE-sector EA events over the past few years, and I applied for the UCISA bursary hoping that the Gartner EA summit would help me learn more from experts outside the HE sector, and perhaps help me to consider different perspectives. I didn’t see official figures, but I estimated that there were roughly 400-600 attendees. The same summit also takes place in the USA on different dates (with, I would imagine, an even larger number of delegates). As you would expect, a lot of sessions ran in parallel, so it was impossible to get to everything; I cherry-picked the sessions I thought were likely to be the most interesting and useful.

It wasn’t surprising to find that the EA practice of universities is more modest than that of many other organisations represented at the conference. I mentioned in the reflections blog post that there was often an unvoiced assumption that delegates were part of teams with significant numbers of architects and developers, with suggestions such as “when you get back, why not assemble a small team of 5 people to go and investigate X, Y and Z”. It’s good to see how EA is being done outside the sector, but equally important to remember that we need to apply it appropriately, learning and adapting from billion-pound organisations rather than hoping to replicate them.

I found the summit helpful for developing my thinking, as an architect, on how the architecture we implement now can support the changes we will need to make in the coming years. Nobody knows exactly what those changes will be, but we still need to make the best decisions we can now so that we remain flexible for whatever comes along later.

Cloud maturity

Gartner’s views on cloud maturity were interesting and seemed sound. Several points ring true: the need to break through vendor fog and hype to get real information about what offerings are available; the fact that many vendors now release new services cloud-first; the need to update cloud strategies frequently; and the observation that it is now a question of the “degree of cloudiness” rather than whether to take a cloud approach at all.

There was useful insight into the changes Gartner analysts expect to see over the next few years. Information about strategic trends was also interesting and useful background to keep in mind when considering enterprise architectures over that period. So too was the session on making sure the architecture is flexible enough to respond to business moments as rapidly as possible; in a setting such as HE, I think getting the institution’s architecture to that point of flexibility is itself a significant undertaking that will take a long time, and it has to come about gradually, but with deliberate direction, as things are introduced, removed and changed.

Software architecture

In retrospect, I’d categorise several sessions as being about software architecture rather than enterprise architecture; for example, more than one session looked at designers splitting applications into smaller applications and using micro-services for massive web-scale SOA. Cases in point included Netflix and Facebook, but I think the enterprise architect would be more interested in the services Netflix delivers, how it interacts with other services and how people interact with it, than in the detailed software architecture of how Netflix works internally.

Networking

Unlike at many of the HE events I’ve attended, I didn’t make any useful contacts at the conference with whom I could occasionally keep in touch to share information. I mentioned in the reflections blog that conversations appeared to be largely limited to people who already worked together, and a bit of people-watching suggested that others who, like me, tried to strike up conversations with ‘strangers’ didn’t get much of a flow going. This may well be the norm for a large conference drawing people from diverse organisations, the vast majority of them profit-making entities less inclined to share openly.

Attending the summit has not fundamentally changed what I (or we at the University) do or how I think, and it’s a conference that would be useful to attend every two or three years rather than annually, but overall it was beneficial and an interesting experience.

Perhaps one of the most thought-provoking points was Gartner’s estimate that by 2017, 60% of Global 1000 organisations will execute at least one revolutionary and currently unimaginable business transformation effort. Of course, there are no universities on that list, but I wonder what proportion of universities will undergo such a transformation by 2017, and what those transformations will be.

Interested in applying for a UCISA bursary? Then visit UCISA Bursary Scheme 2018.

PaaS, bots, alerts and using analytics to improve web performance

Giuseppe Sollazzo
Senior Systems Analyst
St George’s, University of London

Storytelling at Velocity

The second day of the O’Reilly Velocity conference was definitely about storytelling: keynotes and sessions alike were descriptions of performance-improvement projects or accounts of particular experiences in systems management, and in all honesty, many of these stories resonated with our daily experience of running IT Services in an academic environment. I will give a general summary, but also mention the speakers I found most useful.

Evolution in the Internet of Things age
The first session, an attention-catching keynote by Scott Jenson, Google’s Physical Web project lead, centred on a curious observation: most attention to web performance has traditionally been focused on the “body”, the page itself, while the most interesting and performance-challenged part is actually the address bar.

Starting from this point, Scott illustrated how the web is evolving and what its characteristics will be, especially in the Internet of Things age. He advocated for this to be an “open” project, rather than Google’s.

Another excellent point he made is that control should be given back to users. He illustrated this with a comparison between a QR code and an iBeacon: the former requires the user to take action; the latter is proactive towards a passive user. Although we like the idea of proactive applications, it only takes walking into a room full of them to understand that being in control can be a good thing.

PaaS for Government as a Platform
Most of the conference talks centred on monitoring and analytics as ways to manage performance. Among the most interesting, Anna Shipman of the UK Government Digital Service (GDS) illustrated how they are choosing a Platform-as-a-Service supplier in order to implement their “Government-as-a-Platform” vision.

I’ve argued a lot in the past that UK academia will need, sooner or later, to go through a “GDS moment” to get back to innovating in a way it can control, as opposed to outsourcing in bulk, and this talk was definitely a reminder of that.

Rise of the bot
As with yesterday’s Velocity sessions, some truly mind-boggling statistics were shared today. One example is how many servers are overwhelmed by web crawlers or “bots”, the automated software agents that index websites for search engines. In his presentation From RUM to robot crawl experience!, Klaus Enzenhofer of Dynatrace told the audience that he had spoken to several companies for which two thirds of all incoming traffic comes from Google’s bots. “We need a data centre only for Google”, they say.
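Out of curiosity, it is easy to get a rough sense of this on your own servers. Below is a minimal sketch (my own, not from the talk) that estimates the share of requests coming from crawlers by scanning an access log for well-known bot User-Agent substrings; the log path and signature list are illustrative assumptions.

```python
# Rough sketch (illustrative, not from the talk): estimate what share of
# requests in an access log comes from crawlers, judging by User-Agent
# substrings. The signature list and log path are assumptions.
BOT_SIGNATURES = ("Googlebot", "bingbot", "Baiduspider", "YandexBot")

def bot_share(log_path: str) -> float:
    """Return the fraction of log lines that look like bot traffic."""
    total = bots = 0
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            total += 1
            if any(sig in line for sig in BOT_SIGNATURES):
                bots += 1
    return bots / total if total else 0.0

if __name__ == "__main__":
    print(f"Bot share: {bot_share('access.log'):.1%}")  # e.g. "Bot share: 66.0%"
```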

Analytics for web performance
There has been quite a lot of discussion around monitoring vs. analysis. In his presentation Analytics is the new monitoring: Anomaly detection applied to web performance, Bart De Vylder of CoScale argued for adopting data science techniques to build automated analysis procedures for smart, adaptive alerting on anomalies. This requires understanding the domain of the anomalies in order to plan how the monitoring should evolve, considering, for example, seasonal variations in web access.
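To make the idea of seasonality-aware alerting concrete, here is a toy sketch of my own (not CoScale’s method): it keeps a running mean and variance of a metric per hour of the week, so a Monday-morning spike is judged against other Monday mornings rather than against a quiet Sunday night. The class name, z-score threshold and warm-up count are all illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime
import math

class SeasonalDetector:
    """Toy seasonality-aware anomaly detector (illustrative sketch)."""

    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold  # z-score beyond which we flag an anomaly
        # Per (weekday, hour) bucket: [count, mean, M2] for Welford's algorithm
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])

    def observe(self, ts: datetime, value: float) -> bool:
        """Record one sample; return True if it looks anomalous for its season."""
        bucket = (ts.weekday(), ts.hour)
        n, mean, m2 = self.stats[bucket]
        anomalous = False
        if n >= 30:  # only alert once the bucket has some history
            std = math.sqrt(m2 / (n - 1))
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        # Welford's online update of the bucket's mean and variance
        n += 1
        delta = value - mean
        mean += delta / n
        m2 += delta * (value - mean)
        self.stats[bucket] = [n, mean, m2]
        return anomalous

# Usage: feed (timestamp, response time) samples and alert on True results.
detector = SeasonalDetector()
if detector.observe(datetime.now(), 0.42):
    print("Response time anomalous for this hour of the week")
```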

Using alerts
On a similar note was the most oversubscribed talk of the day, a 40-minute session by Sarah Wells of the Financial Times which drew over 200 attendees (with many trying to get a glimpse from outside the doors). Sarah told the audience how easy it is to be overwhelmed by alerts: in the FT’s case, they perform 1.5 million checks per day, generating over 400 alerts per day, and she gave an account of their experience in trimming those figures down. Very interestingly, the FT has adopted the cloud as a technology but hasn’t bought it from an external supplier: they have built it themselves, with great attention to performance, cost and compliance, which is surely a strategy I subscribe to.
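One common way to trim alert volume (my own illustrative sketch, not necessarily what the FT did) is to deduplicate: suppress repeats of the same alert key within a cool-down window, so a flapping check fires one notification instead of hundreds. The class name and the 30-minute window below are assumptions.

```python
from datetime import datetime, timedelta

class AlertDeduplicator:
    """Suppress repeated alerts for the same key within a cool-down window."""

    def __init__(self, cooldown: timedelta = timedelta(minutes=30)):
        self.cooldown = cooldown
        self.last_sent: dict[str, datetime] = {}

    def should_notify(self, key: str, now: datetime) -> bool:
        """Return True only if this key hasn't fired within the cooldown."""
        last = self.last_sent.get(key)
        if last is not None and now - last < self.cooldown:
            return False  # duplicate within the window: swallow it
        self.last_sent[key] = now
        return True

# Usage: every failing check calls should_notify with a stable alert key.
dedup = AlertDeduplicator()
for _ in range(3):
    if dedup.should_notify("checkout-api:latency", datetime.now()):
        print("page the on-call engineer")  # printed once, not three times
```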

Conference creation
I also attended an interesting non-technical session by another Financial Times employee, Mark Barnes, who explained how they conceived the idea of an internal tech conference and how they effectively run it.

The idea for the conference came from an office party; it has since been hailed as an internal success, attended by their international crowd, and has reportedly helped improve internal communications at all levels. As a conference/unconference organiser myself (OpenDataCamp, UkHealthCamp, WhereCampEU, UKGovCamp, and more), I will find this insight from the Financial Times invaluable for future events.

I’m continuing to fill in this Google doc with technical information and links from the sessions I attend, so have a look if you’re interested.