Wimbledon’s use of AI to engage fans

At one o’clock this coming Monday, Roger Federer will walk out on to Centre Court to begin the defence of his Wimbledon Championship.  I particularly remember his semi-final match last year.  I was in the bunker where I run the technology for Wimbledon, and about eight minutes after the match had finished, Wimbledon had produced a two-minute video highlights package of the match.  This was the first time that a sports highlights package had been generated automatically.

The rise of video

Wimbledon continues to extend its appeal to a time-poor, younger demographic, and sharing short videos is a key element of the strategy to drive engagement on its digital platforms.  Video views were up 75% year-on-year to 201 million in 2017, of which 14.4 million were views of such match highlights.  Automatic generation accelerates production so that Wimbledon has first-mover advantage, and it enables scale.


It is achieved using artificial intelligence (AI): learning player reactions through analysis of video, detecting crowd reactions by applying AI to audio, and fusing both with statistical analysis of the data to identify the most important points in the match.  Metadata is used to generate captions that tell the story of the match in the highlights package, which Wimbledon then shares with fans through its digital platforms and on social media.
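
To make the idea concrete, here is a minimal sketch in Python of how such a fusion might work in principle: each point receives an excitement score from weighted video, audio and statistical signals, and the top-scoring points fill a two-minute budget.  The weights, field names and structure are illustrative assumptions, not Wimbledon’s actual implementation.

```python
# Illustrative sketch only: combine per-point signals into an "excitement" score.
# The weights, field names and time budget are assumptions, not Wimbledon's system.
from dataclasses import dataclass
from typing import List

@dataclass
class Point:
    player_reaction: float     # 0..1, from video analysis of player gestures
    crowd_noise: float         # 0..1, from audio analysis of the crowd
    match_significance: float  # 0..1, from statistical analysis (e.g. break point)
    start: float               # offset into the broadcast, in seconds
    duration: float            # clip length, in seconds

def excitement(p: Point, weights=(0.35, 0.35, 0.30)) -> float:
    """Fuse the three signals into a single score for ranking points."""
    w_video, w_audio, w_stats = weights
    return w_video * p.player_reaction + w_audio * p.crowd_noise + w_stats * p.match_significance

def select_highlights(points: List[Point], target_seconds: float = 120.0) -> List[Point]:
    """Pick the highest-scoring points until the two-minute budget is filled."""
    chosen, used = [], 0.0
    for p in sorted(points, key=excitement, reverse=True):
        if used + p.duration <= target_seconds:
            chosen.append(p)
            used += p.duration
    return sorted(chosen, key=lambda p: p.start)  # replay clips in match order
```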

AI becomes the artist

This is an example of technology innovation using AI, Cloud and Data at Wimbledon – 2018 is the twenty-ninth year of IBM’s partnership – that I described yesterday at the Cloud and Data Summit held at Landing Forty Two in London.

Cloud and Data Summit

I opened my talk with a video of the poster that Wimbledon created using AI to celebrate the 150th Anniversary of the All England Lawn Tennis and Croquet Club (AELTC).  AI has become the artist to create a poster.  It looks like a water colour but is actually a mosaic made up of 9,000 images.  These were selected from over 300,000 images in the AELTC’s archive using artificial intelligence to match image recognised content and colour tone.  You too can watch how 150 years of archive photography has been used to stitch together a single beautiful image.
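
As an illustration of the colour-matching idea only (not the AELTC’s or IBM’s actual implementation), a mosaic like this can be assembled by comparing the average colour of each candidate archive image with the colour of the cell it would fill.  A minimal Python sketch, using Pillow and assumed file handling:

```python
# Illustrative sketch only: pick, for each cell of the target poster, the archive
# image whose average colour is closest.  Paths and cell sizes are assumptions.
from PIL import Image, ImageStat

def average_colour(path, size=(32, 32)):
    """Downscale the image and return its mean RGB colour."""
    with Image.open(path) as im:
        im = im.convert("RGB").resize(size)
        return tuple(ImageStat.Stat(im).mean)

def closest_tile(target_rgb, tile_colours):
    """Return the archive image whose mean colour is nearest the target cell.

    tile_colours maps an image path to its pre-computed mean RGB colour.
    """
    def distance(item):
        _, rgb = item
        return sum((a - b) ** 2 for a, b in zip(target_rgb, rgb))
    return min(tile_colours.items(), key=distance)[0]
```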

Social engagement

I told the story of data, how it is captured courtside by tennis professionals who can quickly read a match.  They aim to accurately capture all the data associated with every point within a second.  It’s about making data simple and building a trusted foundation that allows insights to be scaled on demand.

Wimbledon combines such insights with analysis of conversations and what is trending about the Championships on social media.  It uses Watson AI to exploit 23 years of articles, press and blogs – 11.2 million words have been analysed – so that it can share facts, video clips and stories with fans in the moment.

Digital resilience

IBM runs Wimbledon’s applications in the Cloud.  Four IBM public cloud and three private cloud data centres around the world are used, offering elasticity and resilience.  The software-defined operating environment allows capacity to be scaled up quickly for The Championships.  Easy access to Wimbledon’s digital platforms is sustained through huge fluctuations in demand, such as a spike in interest in an epic match.  Capacity is quickly deprovisioned when no longer required to optimise the cost of infrastructure.

Over 200 million security events were halted during The Championships in 2017.  IBM correlates and normalises security event data to prioritise events and remove false positives.  Security analysts make use of threat intelligence from IBM’s X-Force research on vulnerabilities, malicious IPs and more.  A knowledge graph is generated to help security analysts understand what is happening.  Watson for Cyber Security offers assistance through its application of AI to the corpus of security research, information on events, security notices, blog posts and more.  The result is a reduction in the time taken to analyse a threat from sixty minutes to one.

AI assistant

Wimbledon launched “Fred” last year, an AI assistant that helps visitors prepare for and make the most of The Championships.  This year, Wimbledon continues to put content where its audience is.  “Fred” powers the new Wimbledon Messenger, a service available to millions more fans through Facebook Messenger.

Wimbledon’s digital platforms provide the window into The Championships for many fans.  A fabulous experience is enabled by AI, powered by the IBM Cloud, exploiting the data.  Experience a little of this for yourself by downloading the Wimbledon app or visiting wimbledon.com/mobile.


Becoming a data-driven organisation

Organisations struggle to become data-driven if they retain traditional siloed business functions.  The hand-offs resulting from their differing business goals, and the overhead of communicating between them, create too much inertia.

The real question is:  How do you become outcome-driven?  It requires those who interact with customers to understand what is happening in context – being informed – to be empowered to make decisions, and to be equipped to act according to the business goal.


It takes an end-to-end approach to become an outcome-driven organisation


I have shown how to build a slice of a data pipeline in previous posts on my blog.  This end-to-end approach is the enabler of shared situational awareness.  Data is available from source in a shared platform, which in turn feeds information to all parts of the business.  However, vertical organisation silos also need to be dissolved in favour of outcome-driven value streams.  Those at the front line must be able to see all the way back to the start of the information cycle safely within the organisation’s information governance policies.  Everyone then has improved and timelier shared awareness.

Each area of the business that interacts with customers operates as a business value stream.  These streams enshrine the concept of bringing the work to the people, rather than shipping people to the work.  This increases quality and employee engagement and reduces internal conflict.


Consuming higher value services releases business capacity


Teams are assembled for value streams.  They are multi-disciplinary and obviate the need for traditional IT programmes and shared services.  The maintenance burden of sustaining existing IT systems is reduced because migrating workloads to the cloud means that previously highly sought-after shared technical expertise can be dedicated to each business area.  Each business area can concentrate on optimising its outcomes.

Teams are able to find and access the information they need using the data platform and configure a pipeline to produce the insights they need for decision-making.  This employs techniques including data analysis, pattern identification, algorithm development and more.  The pipeline can be augmented by AI and machine learning for greater automation and accuracy.
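
As a hedged illustration of what such augmentation might look like, the sketch below adds a simple anomaly-detection step to a pipeline using scikit-learn.  The feature columns, model choice and contamination rate are assumptions for the sake of example, not a prescribed design.

```python
# Minimal sketch: augmenting a pipeline step with a simple anomaly detector.
# The feature columns and model parameters are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_unusual_records(df: pd.DataFrame, feature_columns: list) -> pd.DataFrame:
    """Add an 'unusual' flag so downstream teams can prioritise their attention."""
    model = IsolationForest(contamination=0.01, random_state=42)
    df = df.copy()
    # fit_predict returns -1 for records the model considers anomalous
    df["unusual"] = model.fit_predict(df[feature_columns]) == -1
    return df
```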

Microservices architectures provide teams with the technical capabilities to act, but that is the subject of a future post.  Suffice it to say that this offers a step change in automation and agility.

Such automation enables business operations to react more quickly to changes.  It frees up time for people to learn new skills, for better quality engagement with each customer and to focus on tasks that rely on imagination, intuition and empathy.


The profile of technical skills an organisation needs to compete has shifted


Each business area will use the platform to easily create, maintain, grow, shrink and decommission its own systems.  It will be able to exploit automation, sophisticated analytics and machine learning.  As I have shown in previous posts, the barriers to deployment are so low that it will be able to start small, experiment and enhance capabilities in days or weeks on the platform without creating unsupportable or under-the-desk IT.

Only then can you truly become data-driven and maximise the benefits of a data pipeline.

Simplifying data science

Cloud computing is changing the way IT services are accessed and consumed.  We are seeing that dependence on infrastructure expertise diminishes as users engage higher up the stack.

In my previous post, I showed how to ingest ship positioning data into a Cloudant NoSQL database using Node-RED on the IBM Cloud.  This time I shall show you how an analyst or data scientist can find and access information quickly so that they can spend more of their time using their expertise to derive insight, discover patterns and develop algorithms.

I use the Catalog service to create a connection to and description of my Cloudant data.  The connection simplifies access to data sources for analysts, and I can associate data assets, including their descriptions and tags, with that connection so that data can be easily found.  The catalogue, connections and data assets are subject to access control, and I can implement governance policies, showing lineage, for example.

Let’s see how we create the catalogue entries on the Watson Data Platform in the IBM Cloud.

See how to share data assets using a catalogue.

Analysts are able to access the Catalog’s data assets from notebooks using the Data Science Experience (DSX).  Create a project and simply add the required assets by picking them from the catalog.  I can then create a Jupyter notebook within DSX and generate the Python code to connect to the data asset I need.  Furthermore, DSX automatically provisions a Spark instance for my analysis when I create (or reopen) the notebook.
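
The generated code varies with the data source, but a minimal sketch of the kind of connection code a notebook ends up with, assuming the cloudant Python client and a hypothetical ship_positions database, looks something like this.  In practice the credentials are injected from the catalogued connection rather than typed in by hand.

```python
# A minimal sketch of notebook connection code, assuming the `cloudant` Python
# client.  The database name and credential placeholders are assumptions.
import pandas as pd
from cloudant.client import Cloudant

credentials = {
    "username": "CLOUDANT_USERNAME",                      # placeholder
    "password": "CLOUDANT_PASSWORD",                      # placeholder
    "url": "https://CLOUDANT_USERNAME.cloudant.com",      # placeholder
}

client = Cloudant(credentials["username"], credentials["password"],
                  url=credentials["url"], connect=True)
db = client["ship_positions"]  # hypothetical database name

# Pull the documents into a DataFrame for analysis in the notebook.
docs = [dict(doc) for doc in db]
df = pd.DataFrame(docs)
client.disconnect()
```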

All this only takes a few minutes as I show here.

See how to use a data pipeline for analysis.

This degree of automation is achieved by concentrating on configuring services that make up an overall data pipeline.  The links between the services make it simpler for analysts to find and access data from environments they are familiar with.  The dependency on IT resource to provide and manage data platforms is removed because the analytics engine is provisioned as required, and released once the analysis is complete.

In addition, analysts can share their work, and teams can be built to work on problems, assets and notebooks together.  Data science has become a team sport.

I shall describe how such an end-to-end data pipeline might be implemented at scale in the next post in this series.  In the meantime, try out the Watson Data Platform services for yourself on the IBM Cloud at dataplatform.ibm.com.

This is the third in a series of posts on building an end-to-end data pipeline.  You can find my notebook and the other data pipeline artifacts on GitHub.

Ingesting IoT data without writing code

Cloud computing is changing the way IT services are accessed and consumed.  We are seeing that dependence on infrastructure expertise diminishes as users engage higher up the stack.

I described an end-to-end data pipeline in my first post.  I shall now show how to build the data capture and ingest processing as a flow in Node-RED purely through configuration without writing a line of code.  Node-RED offers flow-based programming for the Internet of Things and is available at nodered.org and on the IBM Cloud.


My flow implements a straightforward pattern.  Firstly, I have a node that reads data off an MQTT feed; then I undertake some data wrangling, which in this case lifts the JSON message payload to the top level of the document.  As we shall see in a subsequent post, this processing could be arbitrarily complex analytics and manipulation.  Finally, I write the documents into a Cloudant NoSQL database.
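
For readers who prefer to see the logic spelled out, the sketch below expresses the same pattern in Python rather than Node-RED.  The broker address, topic, database name and credentials are placeholders, not the actual feed.

```python
# The Node-RED pattern expressed in Python, as a sketch only.
# Broker address, topic, database name and credentials are all placeholders.
import json
import paho.mqtt.client as mqtt
import requests

CLOUDANT_DB_URL = "https://ACCOUNT.cloudant.com/ais_positions"   # placeholder
CLOUDANT_AUTH = ("CLOUDANT_USERNAME", "CLOUDANT_PASSWORD")       # placeholder

def on_message(client, userdata, msg):
    doc = json.loads(msg.payload)
    # Data wrangling: lift the nested payload to the top level of the document.
    doc = doc.get("payload", doc)
    # Write the document into the Cloudant database over its HTTP API.
    requests.post(CLOUDANT_DB_URL, json=doc, auth=CLOUDANT_AUTH)

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt.example.org", 1883)   # placeholder broker
client.subscribe("ais/positions")          # placeholder topic
client.loop_forever()
```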

See how on YouTube.

The video shows how I am able to provision the Node-RED flow environment and the Cloudant database as one pre-configured service, ready for immediate use.

We shall see how to access the captured data for analysis in my third post.  In the meantime, try implementing the flow for yourself using the Node-RED Starter service on IBM Cloud.

This is the second in a series of posts on building an end-to-end data pipeline.  You can find my Node-RED flow and the other data pipeline artifacts on GitHub.

Building an end-to-end data pipeline

Becoming a data-driven organisation sounds so simple.  But fulfilling the vision of making smarter decisions takes more than simply providing analysts with tools.

In this series of blog posts, I shall show you how to build an end-to-end data pipeline.  It allows information to be captured from source, processed and analysed according to business need.  I shall configure the pipeline using cloud services as a business user, thereby removing the dependency on traditional IT infrastructure and the set up and maintenance of data platforms.

My pipeline is made up of three main elements today, though I have plans to augment it with more complex processing using additional cloud services.

Ships are required to broadcast their positions.  Messages are picked up by a beacon in southern England.  This AIS data is processed at the edge by an MQTT broker and broadcast by topic.  This is the data source for my demonstration.


  1. The first step is to pick up the AIS JSON data feed published by the MQTT broker and process it for ingest into a database. I have done this without writing any code.  I configured nodes in Node-RED to construct a flow, which inserts the AIS data into a Cloudant database.  I used the Node-RED boilerplate service in the IBM Cloud, which includes a bound Cloudant service.
  2. Secondly, I used the catalog service in the Watson Data Platform on the IBM Cloud to create a connection and a data asset in my catalog. The catalog allows me to describe and share data assets so that they are easy for people to find and use subject to access controls and governance policies.
  3. Then I access the catalog from the Data Science Experience (DSX) to populate my Jupyter notebook with access to my database. DSX provisions a Spark instance automatically for my analysis, which is to plot the positions of ships on a graph (a minimal sketch of this step follows the list).
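
As a hedged illustration of that final plotting step, assuming the documents have been pulled into a pandas DataFrame with lon and lat columns (the column names are an assumption about the AIS feed):

```python
# A minimal sketch of the final step: plotting ship positions from the ingested
# AIS documents.  Column names ("lon", "lat") are assumptions about the feed.
import pandas as pd
import matplotlib.pyplot as plt

def plot_positions(df: pd.DataFrame):
    """Scatter the reported positions; longitude on x, latitude on y."""
    df = df.dropna(subset=["lon", "lat"])
    plt.scatter(df["lon"], df["lat"], s=4, alpha=0.5)
    plt.xlabel("Longitude")
    plt.ylabel("Latitude")
    plt.title("Reported ship positions")
    plt.show()
```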

Data scientists typically work individually, struggle to find the data they need and are often unaware of the assets, code and algorithms that already exist in their organisations.

These challenges are overcome by using cloud-native data pipeline services available on the IBM Cloud.  The analyst is able to get started on analysis within minutes of deciding what data is needed to tackle a business problem.  Data is easy to find and access using the catalog, and the enabling infrastructure to execute analytics on large amounts of data is provisioned automatically when a notebook is created.  (Furthermore, the Spark instance is de-provisioned when the notebook is closed.)  Data assets and notebooks can be shared so that data science becomes a team sport.

Get ready to try for yourself by signing up to the Watson Data Platform on the IBM Cloud at dataplatform.ibm.com.

Other posts in this series include:

Acknowledgements: thanks to Dave Conway-Jones, Richard Hopkins and Joe Plumb for their contributions.

Data is not the new oil

You’ve heard it many times and so have I:  “Data is the new oil”.

Well it isn’t.  At least not yet.

I don’t care how I get oil for my car or heating.  I simply decide what to cook and where to drive when I want.  I’m unconcerned which mechanism is used to refine oil or how oil is transported, so long as what comes out of the pump at the garage makes my car go.  Unless you have a professional interest or bias I suspect you’re much the same.

Why can’t it be the same with data?

Well, for a start, the consumer of data is often all too aware of the complexity of the supply chain and the multiple skills and technologies it takes to get them the data they wish to consume.  Systems take forever to create and are inflexible in the wrong places.  The ability to aggregate data is over-constrained by blanket security rules that enforce sensible policies but result in slow-moving or over-bureaucratic processes and systems.

Today’s cloud technologies have helped, but even here, data services are aimed at developers as the consumers of data, not at its end users.

The consumers of the new oil would love to be ignorant of where it came from, but they are all too aware of, and involved in, the supply chain that they try to coax into doing what they want.

Even with today’s cloud technologies, data services are predominantly created for developers, not the true consumers who understand the data.

Wouldn’t it be wonderful if those who make business decisions could find naturally described information when they wanted?  If they could use it as they wish without regard for the underlying infrastructure?  All with the confidence that access controls and data protection measures are built in.  Enforcing governance policies within the platform builds trust and helps achieve regulatory compliance, such as GDPR.

These are characteristics of a data pipeline: services that ingest data from sources, govern, enrich, store, analyse and apply it.  How data is stored is no longer of concern.  Data is available to all without aggravation.

With their latest cloud offerings, companies like IBM are delivering platforms that do precisely this.  IBM has even published a Data Science Experience that enables a data scientist to build their own pipelines with a rich palette of ingest, machine learning and storage technologies.

We take oil for granted.  Can you say the same for the data you need to drive your business forward?

Try out the Data Science Experience on the IBM Cloud.