
The 4 key questions (plus one bonus question!) to ask when it comes to NLG

By Greg Williams | August 26, 2019

After our previous post about the groundbreaking work that data reporters at RADAR AI and BBC News Labs are doing to bring attention, context, accuracy, and timeliness to hyperlocal news stories, Gary Rogers, RADAR AI’s CEO, gave us a nice shout-out on LinkedIn. Gary called Arria NLG Studio “a great tool in [RADAR AI’s] data-driven workflow, helping us turn UK public data into hundreds of strong, relevant local news stories every week for publications across the country.”

Now, there’s a lot for Arria to like about that comment, but there’s also one key takeaway for those contemplating how best to bring Natural Language Generation into their business or organization: Notice that Gary didn’t thank us for doing any work for him . . . because we don’t do any work for him. “All” we’ve done—and don’t get me wrong, it’s quite a lot—is build the advanced NLG platform that empowers clients such as RADAR AI to achieve analytical and linguistic feats in previously unimaginable volume on their own. We supply the platform, and our clients put it to their best use.

Imagine a scenario where, instead, a group of generalist American writers at Arria received data from Gary and his UK team and then, a few days or weeks later, shipped stories back to them for review and correction, beginning an iterative process of several cycles. Well, that wouldn’t work very well for the RADAR AI use case, and not simply because of the analyse/analyze, colour/color problem. It would also fail for more substantive reasons, such as the loss of immediacy, voice, and perspective . . . which brings me to the first question you should ask any NLG technology vendor who comes knocking:

1) Is this a tool that my writers and subject matter experts can use directly, so that it reflects their language, expertise, and sensibility?

Please note that this question goes double if that NLG vendor is internal—your own technology department vying for the job! Just because they say they can build your NLG platform “in-house” doesn’t mean they’ll hand over a tool that non-technologists can use. It’s not truly “in-house” unless it’s in the hands of your writers and subject matter experts, without dependency on a technical intermediary. It’s understandable that your tech team might want to give it a try: NLG is hot stuff, and carries revolutionary implications for the flow of business information. But building an NLG platform requires a permanent dedicated team of technologists that only a specialized firm can support. So, your in-house technology team is not in the running.

The answer to question number one needs to be yes. If it is no, then your firm may be expected to ship data to the vendor, whose writers will create narratives that are sure to be inferior to those produced by your writers and subject matter experts. There is nothing to be gained from producing inferior work in massive volume. And there is no need to mention the obvious data security concerns with such a system, except to suggest that if the data isn’t important enough to protect, it might not be important enough to analyze and describe in the first place . . . which brings me to the second question:

2) Do you offer multiple deployment options, including deployment entirely on-premise?

If you’re dealing with your firm’s important, confidential data, you might choose a dedicated private cloud or an entirely on-premise installation. As soon as you hear, “Yes, we can deploy on premise,” try this follow-up question: “Great! And can you confirm that the architecture of that implementation requires absolutely no calls to external servers, even for BI dashboards?” Your CTO doesn’t want to hear, “Can we punch a hole in the firewall?” and will have no problem saying, “Nope.” At Arria, to accommodate a range of security options, we offer installations entirely on-premise, in a dedicated private cloud, or in the public cloud. No matter which option is right for you, there is still the question of data source and output format, which brings me to the third question:

3) Is your NLG platform open and API-based, so that it can make use of any source of data and output to any presentation layer?

Whatever NLG use case or data source you have in mind right now, you will probably think of new use cases and find new data sources over time. Similarly, the output is likely to be varied, perhaps taking the form of a dashboard caption, a standalone report, words spoken by a digital virtual assistant on your phone or in your car, or words and body language expressed by a hologram or digital human. Arria’s API-based system is flexible enough to cover all eventualities of data source and presentation layer.
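To make that concrete, here is a minimal Python sketch of what feeding structured data to an API-based NLG service and collecting the narrative might look like. The endpoint URL, token, payload shape, and response field are illustrative assumptions for the sketch, not Arria’s documented interface.

    # Minimal sketch: push structured data to an API-based NLG service and
    # retrieve the generated narrative. The URL, token, payload shape, and
    # "text" response field are hypothetical, not Arria's documented API.
    import requests

    SERVICE_URL = "https://nlg.example.com/projects/regional-sales/generate"  # hypothetical
    API_TOKEN = "YOUR_API_TOKEN"                                               # hypothetical

    payload = {
        "data": [
            {"region": "North East", "revenue": 1250000, "change_pct": 4.2},
            {"region": "South West", "revenue": 980000, "change_pct": -1.1},
        ]
    }

    response = requests.post(
        SERVICE_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()

    # The returned narrative can go to any presentation layer: a dashboard
    # caption, a standalone report, or a voice assistant's reply.
    narrative = response.json().get("text", "")
    print(narrative)

Because the interface is just HTTP and JSON, the data can come from any warehouse, spreadsheet, or feed that can produce a JSON payload, and the returned text can be routed to whichever presentation layer you add next.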

With the first three questions, we’ve established that the successful vendor needs to put an NLG platform into the hands of your subject matter experts and writers, must offer multiple deployment options, and must be open and API-based. But we haven’t yet contemplated the quality of the output. Remember, the end consumer of your NLG words cares nothing about questions one, two, or three. All of that is an invisible preliminary to the only thing that matters to your reader or listener: the language that is the final and, indeed, the only expression of the entire endeavor. The answers to the first three questions can all be correct, but—and this is critical for NLG project leaders to internalize as a cold, hard truth—if the output is awkward, bad, grammatically incorrect, or obviously machine-written, then the project is an absolute failure . . . which brings me to question number four:

4) Is your NLG platform smart enough to help my team with math and writing?

The winning platform should put sophisticated mathematical and linguistic functions at your writers’ fingertips. Think about the complications that arise from constructing prose based on data that has been assembled over time by a succession of technologists, each with different levels of attention to detail, different language backgrounds, and varying levels of foresight as to where the data they are storing might someday appear. Even at their best, databases are dirty and incomplete. Arria comes with a boatload of language functions to help protect the “N” part of your NLG. For example, Arria recognizes whether a country name—often the only indicator of nationality in a structured data set—should be preceded by the determiner “the.” Arria also assigns the appropriate indicator of nationality. As a result, within Arria, someone who is from “the” UK is referred to as “British,” though neither the determiner “the” nor the word “British” is found in the underlying data.
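To illustrate the kind of lookup such a language function performs, here is a rough Python sketch; the table and function are invented for illustration and are not Arria’s implementation.

    # Rough sketch of a country-name lookup: does the name take the determiner
    # "the", and what is the matching nationality adjective? Illustrative only;
    # this is not how Arria's built-in language functions are implemented.
    COUNTRY_FACTS = {
        # country name -> (takes "the", nationality adjective)
        "UK": (True, "British"),
        "United States": (True, "American"),
        "France": (False, "French"),
        "Netherlands": (True, "Dutch"),
    }

    def describe(name: str, country: str) -> str:
        takes_the, nationality = COUNTRY_FACTS[country]
        place = f"the {country}" if takes_the else country
        return f"{name} is {nationality} and is based in {place}."

    print(describe("Gary", "UK"))       # Gary is British and is based in the UK.
    print(describe("Amélie", "France")) # Amélie is French and is based in France.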

Similarly, native to Arria’s platform are mathematical functions that save your writers from having to plead for time from your technologists to calculate, for example, standard deviations, mean, median, and the like. Your technologists have more important work, and your writers should be writing.
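As a point of reference, these are the sorts of calculations in question; in plain Python with the standard library they look like this (shown for illustration only, not as Arria’s syntax):

    # Descriptive statistics of the kind an NLG platform can expose to writers,
    # computed here with Python's standard library purely for illustration.
    from statistics import mean, median, stdev

    monthly_revenue = [1250000, 980000, 1110000, 1320000, 1045000]

    print(f"Mean:    {mean(monthly_revenue):,.0f}")    # 1,141,000
    print(f"Median:  {median(monthly_revenue):,.0f}")  # 1,110,000
    print(f"Std dev: {stdev(monthly_revenue):,.0f}")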

Bonus Question

Ask any vendor who purports to compete with Arria, or any internal technology group that says they can build your NLG system:

Are you willing to have a bake-off against Arria?

If the answer is yes, then great! For obvious reasons, we love a bake-off. Let us know and we’ll be glad to accommodate.
