Evaluating Automatic Resolution of Ambiguity in Text

  • Ahlswede, Thomas (PI)

Grant Details

Description

A necessary part of language understanding, by people or computers, is determining the meaning of ambiguous words. Efforts at automatic disambiguation of text have so far been evaluated by ad hoc, manual checking of their results. As more disambiguation systems are developed and put to use, a rigorous method of evaluation becomes necessary. This work will study text disambiguation by human informants, to aid in developing both a formal model of the process and a metric of correctness for automatic disambiguation. The study will include the selection of representative texts and the development from them of a disambiguation test to be administered to a large sample (over 100) of informants, ranging from university students to professional lexicographers. Analysis of the test results will provide insight into the human disambiguation process that will contribute to the evaluation and refinement of current models of disambiguation, both human and computational, and eventually to the development of new models. The test results themselves will form a statistical profile of the disambiguation of English text, serving as a benchmark for efforts at automatic disambiguation.

Status: Finished
Effective start/end date: 09/1/91 – 02/28/94

Funding

  • National Science Foundation: $57,502.00
