Context is key to making computers better conversationalists

When communicating, context is king. A breakthrough in modelling context in human communication could make computers better conversationalists, according to cognitive scientists at Stanford University.

[Michael] Frank, [head of Stanford University’s Language and Cognition Lab,] and colleague Noah Goodman, also a cognitive scientist at Stanford, have developed a mathematical encoding of what they call “common knowledge” and “informativeness” in human conversation. “We have a vastly powerful predictive model of the world,” says Goodman. “When somebody goes to understand a statement that somebody else has made, they’re making the best guess about the meaning of that statement, incorporating all these factors like informativeness and context.”

By “putting numbers to” a theory of communication that dates back to the 1960s, they have come up with a model that not only describes part of the mutual understanding between human speakers, but also lays the groundwork for the next generation of our AI interlocutors, from pocket voice assistants like Apple’s Siri and Android’s Iris to automated customer-service bots. “We’ve created a formalism for trying to predict what speakers are talking about and shown that it makes pretty good predictions,” says Frank. The developers of Iris, for instance, agree that context-based understanding will give them an edge in their field.
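The researchers’ own equations aren’t reproduced here, but a rough sketch of the kind of inference being described might look like the following Python toy. It is an assumption for illustration only: the scenario, lexicon, and priors are invented, and the “listener” simply combines how informative each word would be for each candidate object (a more specific word is a more likely choice) with a context-dependent prior, via Bayes’ rule.

```python
# Toy "pragmatic listener": given a word, guess which object the speaker
# meant by combining (a) how informative that word is for each candidate
# object and (b) a context-dependent prior over objects.
# The objects, words, and priors below are invented for illustration.

# Which words literally apply to which objects (the shared "common knowledge").
LEXICON = {
    "blue":   {"blue_circle", "blue_square"},
    "square": {"blue_square", "green_square"},
    "circle": {"blue_circle"},
}

# Context: how salient each object is before anything is said (sums to 1).
PRIOR = {"blue_circle": 1 / 3, "blue_square": 1 / 3, "green_square": 1 / 3}


def speaker_prob(word: str, obj: str) -> float:
    """Probability a speaker picks `word` to refer to `obj`:
    proportional to the word's specificity (1 / number of objects it fits),
    normalised over all words that fit `obj`."""
    if obj not in LEXICON[word]:
        return 0.0
    fitting = [w for w, objs in LEXICON.items() if obj in objs]
    weights = {w: 1.0 / len(LEXICON[w]) for w in fitting}
    return weights[word] / sum(weights.values())


def listener_prob(word: str) -> dict[str, float]:
    """Listener's best guess about the referent: Bayes' rule combining
    the speaker model (informativeness) with the contextual prior."""
    unnorm = {obj: speaker_prob(word, obj) * p for obj, p in PRIOR.items()}
    total = sum(unnorm.values())
    return {obj: v / total for obj, v in unnorm.items()}


if __name__ == "__main__":
    # Hearing "blue", this listener leans toward the blue square (0.6 vs 0.4),
    # because a speaker who meant the blue circle could have said "circle".
    for obj, p in listener_prob("blue").items():
        print(f"{obj}: {p:.2f}")
```

The point of the sketch is only that “best guess” here is a probability calculation, not a lookup: the listener reasons about what the speaker would have said for each possible meaning, then weighs that against what the context already makes likely.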
