The Grand Challenge for NLG: Making Data Accessible to Humans

By Ehud Reiter | November 16, 2021

Twenty years ago, the UK Computing Research Committee asked me what my vision and “grand challenge” was for artificial intelligence (AI) and natural language generation (NLG). I told them that I wanted NLG and AI to be used to humanise and explain data, so I gave the specific example of a tool that explained personal medical data. For example, if Mary Smith wanted to understand her health better, I would want to empower Mary with a tool that could analyse her medical records, extract key insights based on her concerns, and present those insights to her as an easy-to-read narrative, using language Mary could easily relate to.

I am convinced such tools are even more essential in 2021. For example, there is now a massive amount of medical data available to the public, including information from sensors on phones and watches, but this data isn’t useful if no one understands it. Doctors have also told me about cases where patients get stressed when viewing their health data because they spot something unusual and think there may be a problem, when what they are seeing is simply natural variation and fluctuation in their medical history.

It is time to design tools that can summarise, explain, and contextualise data in a way that makes sense to Mary and the rest of us. This should also apply to all sorts of data, not just medical.

I would love to see NLG used to develop such solutions. Arria has developed revolutionary technology that extracts key insights from data and presents them as narratives to users. But the industry needs to build on this work to achieve this vision. We need to work with massive amounts of data (such as medical records), which may include free-text comments from doctors as well as numbers, and create narratives that work for people with varying levels of domain knowledge and literacy. We also need to make NLG systems interactive, so that Mary and other users can ask the right questions and focus on what they care about. NLG systems should learn from their users, so they can get better at identifying and communicating key insights.

These are certainly challenges, but I am also seeing real progress being made – both from universities (including my own NLG research group at Aberdeen) and from companies producing products such as Arria Answers. I am convinced that we are on the right track to developing the universal system I dreamed about 20 years ago! It is exciting for me to watch this vision begin to take shape, and I am honoured to be able to contribute to it at a company that is helping to define the humanising of data in a way that works for everyone.

If you’d like to stay on top of the current state of Natural Language Generation and what our visionaries and data scientists see in the future, sign up for our blog updates.
