When it comes to wordplay, computer scientist Bonnie Dorr is a master. An associate professor at the University of Maryland, she works with language — sometimes several languages at once.
Her focus these days is software that automatically summarizes assorted documents, written or spoken, as quickly and efficiently as possible, and can even do so cross-lingually. This would allow a person to take documents in Arabic, for example, machine-translate them into English and then summarize the contents in English to determine which documents should be sent on for what she calls “high-quality human translation.”
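For readers who want a concrete picture of that pipeline, the Python sketch below shows the order of operations she describes: machine-translate first, produce a rough English gist second, and reserve human translators for the documents that look relevant. The helper functions are toy stand-ins invented for illustration, not components of her lab’s actual system.

```python
# Illustrative sketch of a translate-then-summarize triage pipeline.
# The helpers are toy stand-ins, not the CLIP lab's actual components.

def translate_to_english(text, source_lang):
    # Stand-in for a machine translation system (e.g., Arabic -> English).
    # A real component would produce a rough English rendering.
    return text

def summarize(text, max_chars=72):
    # Stand-in for an automatic summarizer: truncate to a headline-length gist.
    return text[:max_chars]

def triage(documents, source_lang, keywords):
    """Flag documents whose rough English gist mentions a keyword of interest."""
    flagged = []
    for doc in documents:
        english = translate_to_english(doc, source_lang)  # rough machine translation
        gist = summarize(english)                         # very short English summary
        if any(k.lower() in gist.lower() for k in keywords):
            flagged.append((doc, gist))  # candidates for high-quality human translation
    return flagged

if __name__ == "__main__":
    docs = ["Port traffic report: shipping schedules for the coming week ...",
            "Recipe column: how to prepare lentil soup ..."]
    for doc, gist in triage(docs, source_lang="ar", keywords=["shipping", "port"]):
        print(gist)
```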
Researchers elsewhere are working on selectively grouping documents into clusters, she says. Her own expertise lies in producing easy-to-access summaries of the documents relevant to a particular need.
“Machine translations still have issues,” Ms. Dorr cautions. “If you get a general idea of the document first, you have a better idea of what you want.”
The field of multilingual machine translation has advanced rapidly in the past 20 years. Carnegie Mellon University’s School of Computer Science has a separate Center for Machine Translation within a new Language Technologies Institute. Several dozen computer systems exist for translating as many as 38 languages, but the quality of each varies.
Machine translation and summarization are different realms, Ms. Dorr says, “but each uses components of the other’s technology, so we do a lot of different things with each of those.”
Summarization — her primary focus for the past few years — is “taking documents and trying to find a very short summary for them of, say, 20 words, or about 72 characters, including punctuation and spacing.”
It has taken a long while to get to this point, with many competing methods tried along the way. The most successful, she says, is a hybrid that combines linguistic and statistical approaches. The statistical approach relies on numbers, basically counting words, while the linguistic approach works largely with symbols.
“The statistical approach is better at getting content, the linguistic at getting the form,” she says.
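To make the contrast concrete, here is a generic word-counting sketch of the statistical half of such a hybrid: score each sentence by how frequent its content words are across the document, keep the best one, and trim it to the roughly 72-character budget mentioned above. This is an assumption-laden illustration, not the CLIP system’s algorithm; a linguistic component might, for example, trim along the sentence’s grammatical structure rather than at an arbitrary character boundary.

```python
import re
from collections import Counter

# A small stop list so scoring favors content words over function words.
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are", "for", "on", "that", "it"}

def tokenize(text):
    # Lowercase word tokens; good enough for an English-only illustration.
    return re.findall(r"[a-z']+", text.lower())

def best_sentence(document):
    """Pick the sentence whose content words occur most often in the whole document."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    counts = Counter(w for w in tokenize(document) if w not in STOPWORDS)

    def score(sentence):
        words = [w for w in tokenize(sentence) if w not in STOPWORDS]
        return sum(counts[w] for w in words) / (len(words) or 1)

    return max(sentences, key=score)

def headline(document, max_chars=72):
    """Crude summary: the top-scoring sentence, cut back to a character budget."""
    sentence = best_sentence(document)
    if len(sentence) <= max_chars:
        return sentence
    return sentence[:max_chars].rsplit(" ", 1)[0]  # avoid cutting mid-word
```

On news copy, a scorer like this often settles on something close to the lead sentence; producing the fluent form of a real headline is where the linguistic side earns its keep.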
The work is being done at the University of Maryland laboratory known as CLIP (short for Computational Linguistics and Information Processing), part of the university’s Institute for Advanced Computer Studies. Ms. Dorr’s colleagues include members of the departments of linguistics and library science.
Chinese and Arabic are the top languages with which they work, but Spanish and Korean also are used. The “documents” involved include broadcast news as well as print forms of communication.
“Interestingly, Arabic and Spanish have more in common structurally than I realized,” Ms. Dorr says. “And Chinese is closer to English in some ways. It’s configurational — having a subject, verb and object the way we tend to have [in English]. … Arabic and Spanish are not in the same family, but there seem to be similarities at least lexically. In English, you would say, ‘I fear,’ but in Spanish you would say, ‘I have fear of it.’ And you might find something like that in Arabic.”
When people ask how the system handles figures of speech such as metaphors, she — metaphorically — throws up her hands. “We have a hard enough time with non-subtleties,” she says. “We have so many things to work on that are pretty difficult. Different word orders, for instance, so that in one language, you have prepositions, and in another, you have case markers.”
Needless to say, this technology and others allied with it greatly interest government agencies involved in defense, intelligence and security matters, many of which are Ms. Dorr’s sponsors, especially the Defense Advanced Research Projects Agency. She doesn’t claim to know the methods such agencies commonly use to make sense of the information flow, but she expects they are more labor-intensive than necessary.
“Having a summary would be better than any mechanisms in use now,” she says.
The system she and her team are developing could be equally useful for anyone writing lengthy reports that require a preliminary processing of varied information, she points out: “You could read a short summary of those documents online and browse through them quickly before going into detail.”
At one point in its research, the CLIP team entered a competition, known as an evaluation forum, in which its software system outperformed a human being engaged in the same task.
If the achievement sounds coldhearted, consider the enormous stream of data continuously available from sources such as the Internet and observational satellites, and the challenge of making sense of it all. Even when these applications prove successful, she doesn’t foresee a time when human beings will be out of the loop.
“Any sort of generation [of information] is going to be linked to summarization. Having the ability to do things automatically will provide significant assistance to humans,” Ms. Dorr says.
Much information collected by agencies involved in security matters “just isn’t looked at at all,” asserts Larry Davis, head of the University of Maryland’s Department of Computer Science.
“Too much comes in, and there aren’t enough people to look at it all.” The idea behind projects such as the one that engages Ms. Dorr, he states, “is to improve both efficiency and comprehension by summarizing and collecting all articles on a certain topic and then giving it to an analyst. It’s really important.”
Other academic groups work on similar goals and share funding sources. The government’s National Institute of Standards and Technology helps by setting up methods and scoring mechanisms for the evaluation forums that take place among them several times a year.
At Georgetown University, which pioneered machine translation of language in the 1950s, associate professor Inderjeet Mani in the Department of Computational Linguistics is tackling the problem of narrative progression — developing software systems that can make sense of elements such as time when a machine is charged with assessing or processing disparate material.
“It’s the whole problem of knowledge acquisition,” he says. “You can put in the linguistic matters, but you also need knowledge of the world, and you have to know how to give the computer that knowledge.”
He sees computer systems in the future being able to reason through and make sense of “narratives,” such as how patients in clinical settings cope with various kinds of therapeutic measures.