Syntactic Processing Martin Kay Xerox Palo Alto Research Center In computational linguistics, which began in the 1950's with machine translation, systems that are based mainly on the lexicon have a longer tradition than anything else---for these purposes, twenty-five years must be allowed to count as a tradition. The bulk of many of the early translation systems was made up by a dictionary whose entries consisted of arbitrary instructions in machine language. In the early 60's, computational linguists---at least those with theoretical pretensions---abandoned this way of doing business for at least three related reasons: First, systems containing large amounts of unrestricted machine code fly in the face of all principles of good programming practice. The syntax of the language in which linguistic facts are stated is so remote from their semantics that the oppor
1979
1
Semantics of Conceptual Graphs John F. Sowa IBM Systems Research Institute 205 East 42nd Street New York, NY 10017 ABSTRACT: Conceptual graphs are both a language for representing knowledge and patterns for constructing models. They form models in the AI sense of structures that approximate some actual or possible system in the real world. They also form models in the logical sense of structures for which some set of axioms are true. When combined with recent developments in nonstandard logic and semantics, conceptual graphs can form a bridge between heuristic techniques of AI and formal techniques of model theory. I. Surface Models Semantic networks are often used in AI for representing meaning. But as Woods (1975) and McDermott (1976) observed, the semantic networks themselves have no well-defined semantics. Standard predicate calculus does have a precisely defined, model theoretic semantics; it is adequate for des
1979
10
ON THE AUTOMATIC TRANSFORMATION OF CLASS MEMBERSHIP CRITERIA Barbara C. Sangster Rutgers University This paper addresses a problem that may arise in classification tasks: the design of procedures for matching an instance with a set of criteria for class membership in such a way as to permit the intelligent handling of inexact, as well as exact, matches. An inexact match is a comparison between an instance and a set of criteria (or a second instance) which has the result that some, but not all, of the criteria described (or exemplified) in the second are found to be satisfied in the first. An exact match is such a comparison for which all of the criteria of the second are found to be satisfied in the first. The approach presented in this paper is to transform the set of criteria for class membership into an exemplary
1979
11
A SNAPSHOT OF KDS, A KNOWLEDGE DELIVERY SYSTEM James A. Moore and William C. Mann USC Information Sciences Institute Marina del Rey, CA June, 1979 SUMMARY KDS is a computer program which creates multi-paragraph, Natural Language text from a computer representation of knowledge to be delivered. We have addressed a number of issues not previously encountered in the generation of Natural Language at the multi-sentence level, viz: ordering among sentences and the scope of each, quality comparisons between alternative aggregations of sub-sentential units, the coordination of communication with non-linguistic activities by a goal-pursuing planner, and the use of dynamic models of speaker and hearer to shape the text to the task at hand. STATEMENT OF THE PROBLEM The task of KDS is to generate English text under the following constraints: 1. The source of information is a semantic net, having no a priori st
1979
12
The Use of Object-Specific Knowledge in Natural Language Processing Mark H. Burstein Department of Computer Science, Yale University 1. INTRODUCTION It is widely recognized that the process of understanding natural language texts cannot be accomplished without accessing mundane knowledge about the world [2, 4, 6, 7]. That is, in order to resolve ambiguities, form expectations, and make causal connections between events, we must make use of all sorts of episodic, stereotypic and factual knowledge. In this paper, we are concerned with the way functional knowledge of objects, and associations between objects, can be exploited in an understanding system. Consider the sentence (1) John opened the bottle so he could pour the wine. Anyone reading this sentence makes assumptions about what happened which go far beyond what is stated. For
1979
13
READING WITH A PURPOSE Michael Lebowitz Department of Computer Science, Yale University 1. INTRODUCTION A newspaper story about terrorism, war, politics or football is not likely to be read in the same way as a gothic novel, college catalog or physics textbook. Similarly, the process used to understand a casual conversation is unlikely to be the same as the process of understanding a biology lecture or TV situation comedy. One of the primary differences amongst these various types of comprehension is that the reader or listener will have different goals in each case. The reasons a person has for reading, or the goals he has when engaging in conversation, will have a strong effect on what he pays attention to, how deeply the input is processed, and what information is incorporated into memory. The computer model of understanding
1979
14
DISCOURSE: CODES AND CLUES IN CONTEXTS Jane J. Robinson Artificial Intelligence Center SRI International, Menlo Park, California Some of the meaning of a discourse is encoded in its linguistic forms. This is the truth-conditional meaning of the propositions those forms express and entail. Some of the meaning is suggested (or 'implicated', as Grice would say) by the fact that the encoder expresses just those propositions in just those linguistic forms in just the given contexts [2]. The first kind of meaning is usually labeled 'semantics'; it is decoded. The second is usually labeled 'pragmatics'; it is inferred from clues provided by code and context. Both kinds of meaning are related to syntax in ways that we are coming to understand better as work continues in analyzing language and constructing processing models for communication. We are also coming to a
1979
15
Paraphrasing Using Given and New Information in a Question-Answer System Kathleen R. McKeown Department of Computer and Information Science The Moore School University of Pennsylvania, Philadelphia, Pa. 19104 ABSTRACT: The design and implementation of a paraphrase component for a natural language question-answer system (CO-OP) is presented. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar used by the paraphraser to generate questions. 1. INTRODUCTION In a natural language interface to a database query system, a paraphraser can be used to ensure that the system has correctly understood the user. Such a paraphraser has been developed as part of the CO-OP system [KAPLAN 79]. In CO-OP,
1979
16
WHERE QUESTIONS Benny Shanon The Hebrew University of Jerusalem Consider question (1), and the answers to it, (2)-(4): (1) Where is the Empire State Building? (2) In New York. (3) In the U.S.A. (4) On 34th Street and 3rd Avenue. When (1) is posed in California, (2) is the appropriate answer to it. This is the case even though (3) and (4) are also true characterizations of the location of the Empire State Building. The pattern of appropriateness alters, however, when the locale where the question is presented changes. Thus, when (1) is asked in Israel, (3) is the appropriate answer, whereas when it is asked in Manhattan, (4) is the answer that should be given. The foregoing observations, originally made by Rumelhart (1974) and by Norman (1973), suggest the following. First, it is not enough for answers to questions to be (semantically) true, they have to be (pragmatically) appropriate as well. Second, appro
1979
17
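Shanon's observation can be read as a selection rule over a hierarchy of place descriptions: answer with the first level of the target's location that differs from where the question is asked. The sketch below is an illustrative assumption, not an algorithm from the paper; the place hierarchy and function name are invented for the example.

```python
# Toy model of pragmatically appropriate answers to "where" questions,
# following the pattern in the abstract: the answer names the target's
# location at the first level of a place hierarchy not shared with the
# asker's own location. The hierarchy below is an illustrative assumption.

EMPIRE_STATE = ["U.S.A.", "New York", "34th Street and 3rd Avenue"]

def appropriate_answer(target_path, asker_path):
    """Return the first component of target_path not shared with asker_path."""
    for level, place in enumerate(target_path):
        if level >= len(asker_path) or asker_path[level] != place:
            return place
    # Asker is already at the most specific shared level; fall back to it.
    return target_path[-1]

print(appropriate_answer(EMPIRE_STATE, ["Israel"]))                # U.S.A.
print(appropriate_answer(EMPIRE_STATE, ["U.S.A.", "California"]))  # New York
print(appropriate_answer(EMPIRE_STATE, ["U.S.A.", "New York", "Manhattan"]))
# -> 34th Street and 3rd Avenue
```

All three answers are semantically true; the rule picks the one that is pragmatically informative for the asker's context, mirroring examples (2)-(4) in the abstract.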
The Role of Focussing in Interpretation of Pronouns Candace L. Sidner Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139 and Bolt Beranek and Newman, Inc. 50 Moulton Street Cambridge, MA 02138 In this paper I discuss the formal relationship between the process of focussing and interpretation of pronominal anaphora. The discussion of focussing extends the work of Grosz [1977]. Focussing is defined algorithmically as a process which chooses a focus of attention in a discourse and moves it around as the speaker's focus changes. The paper shows how to use the focussing algorithm by an extended example given below. D1-1 Alfred and Zohar liked to play baseball. 2 They played it everyday after school before dinner. 3 After their game, the two usually went for ice cream cones. 4 They tasted really good. 5 Alfred always had the vanilla super scooper, 6 while Zohar tried t
1979
18
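The mechanism the abstract describes, a focus of attention that moves as the discourse proceeds and against which pronouns are interpreted, can be sketched with a minimal tracker. This is only an illustrative toy in the spirit of the approach; Sidner's actual algorithm is far richer (actor focus, potential focus lists, and more), and the class and method names here are assumptions.

```python
# Minimal sketch of focus-based pronoun interpretation: full noun phrases
# can shift the discourse focus (the old focus is stacked for possible
# return), and pronouns are resolved against the current focus.

class FocusTracker:
    def __init__(self):
        self.focus = None   # current discourse focus
        self.stack = []     # stacked foci, for return after a digression

    def mention(self, entity):
        """A full noun phrase shifts the focus; the old focus is stacked."""
        if self.focus is not None and self.focus != entity:
            self.stack.append(self.focus)
        self.focus = entity

    def resolve_pronoun(self):
        """Interpret a pronoun as the current focus."""
        return self.focus

    def pop_focus(self):
        """Return to a stacked focus when a digression closes."""
        if self.stack:
            self.focus = self.stack.pop()
        return self.focus

# Walking through discourse D1 from the abstract:
t = FocusTracker()
t.mention("baseball")          # D1-1 introduces baseball as focus
print(t.resolve_pronoun())     # "it" in D1-2 -> baseball
t.mention("ice cream cones")   # D1-3 shifts the focus
print(t.resolve_pronoun())     # "They" in D1-4 -> ice cream cones
```

The point of the toy is the one the abstract makes: "They" in D1-4 is resolved to the ice cream cones, not to Alfred and Zohar, because the focus has moved.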
The Structure and Process of Talking About Doing James A. Levin and Edwin L. Hutchins Center for Human Information Processing University of California, San Diego People talk about what they do, often at the same time as they are doing. This reporting has an important function in coordinating action between people working together on real everyday problems. It is also an important source of data for social scientists studying people's behavior. In this paper, we report on some studies we are doing on report dialogues. We describe two kinds of phenomena we have identified, outline a preliminary process model that integrates the report generation with the processes that are generating the actions being reported upon, and specify a systematic methodology for extracting relevant evidence bearing on these phenomena from text transcripts of talk about doing to use in evaluating the model. Reports of problm
1979
19
TOWARDS A SELF-EXTENDING PARSER Jaime G. Carbonell Department of Computer Science Carnegie-Mellon University Pittsburgh, PA 15213 Abstract This paper discusses an approach to incremental learning in natural language processing. The technique of projecting and integrating semantic constraints to learn word definitions is analyzed as implemented in the POLITICS system. Extensions and improvements of this technique are developed. The problem of generalizing existing word meanings and understanding metaphorical uses of words is addressed in terms of semantic constraint integration. 1. Introduction Natural language analysis, like most other subfields of Artificial Intelligence and Computational Linguistics, suffers from the fact that computer systems are unable to automatically better themselves. Automated learning is considered a very difficult proble
1979
2
DESIGN FOR DIALOGUE COMPREHENSION William C. Mann USC Information Sciences Institute Marina del Rey, CA April, 1979 This paper describes aspects of the design of a dialogue comprehension system, DCS, currently being implemented. It concentrates on a few design innovations rather than the description of the whole system. The three areas of innovation discussed are: 1. The relation of the DCS design to Speech Act theory and Dialogue Game theory, 2. Design assumptions about how to identify the "best" interpretation among several alternatives, and a method, called Preeminence Scheduling, for implementing those assumptions, 3. A new control structure, Hearsay-3, that extends the control structure of Hearsay-II and makes Preeminence Scheduling fairly straightforward. I. Dialogue Games, Speech Acts and DCS -- Examination of actual human dialogue reveals structure extending over several turns and
1979
20
Plans, Inference, and Indirect Speech Acts James F. Allen Computer Science Department University of Rochester Rochester, NY 14627 C. Raymond Perrault Computer Science Department University of Toronto Toronto, Canada M5S 1A7 Introduction One of the central concerns of a theory of pragmatics is to explain what actions language users perform by making utterances. This concern is also relevant to the designers of conversational language understanding systems, especially those intended to cooperate with a user in the execution of some task (e.g., the Computer Consultant task discussed in Walker [1978]). All actions have effects on the world, and may have preconditions which must obtain for them to be successfully executed. For actions whose execution causes the generation of linguistic utterances
1979
21
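The abstract's core premise, that utterances are actions with preconditions and effects just like physical actions, can be sketched in a STRIPS-style action model. This is a hedged illustration of that general idea, not Allen and Perrault's formalism; the predicates and the `Action` class are invented for the example.

```python
# Toy STRIPS-style action model: an action applies when its preconditions
# hold in the current state, and applying it adds its effects. A speech
# act (here, a REQUEST) is modeled the same way as any other action.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    preconditions: set = field(default_factory=set)
    effects: set = field(default_factory=set)

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        assert self.applicable(state), f"{self.name}: preconditions unmet"
        return state | self.effects

# An illustrative speech act: the speaker requests that the door be opened.
request_open = Action(
    "REQUEST(open-door)",
    preconditions={"speaker-wants(open-door)", "channel(speaker,hearer)"},
    effects={"hearer-believes(speaker-wants(open-door))"},
)

state = {"speaker-wants(open-door)", "channel(speaker,hearer)"}
state = request_open.apply(state)
print("hearer-believes(speaker-wants(open-door))" in state)  # True
```

Under this view, inferring an indirect speech act amounts to plan recognition: reasoning from the observed utterance-action back to the goals whose preconditions and effects it serves.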
APPLICATIONS DAVID G. HAYS HeXagram Truth, like beauty, is in the eye of the beholder. I offer a few remarks for the use of those who seek a point of view from which to see truth in the six papers assigned to this session. Linguistic computation is the fundamental and primitive branch of the art of computation, as I have remarked off and on. The insight of von Neumann, that operations and data can be represented in the same storage device, is the linguistic insight that anything can have a name in any language. (Whether anything can have a definition is a different question.) I recall surprising a couple of colleagues with this remark early in the 1960s, when I had to point out the obvious fact that compiling and interpreting are linguistic procedures and therefore that only in rare instances does a computer spend more time on mathematics than on linguistics. By now we all take the central position of our subject ma
1979
22
EUFID: A FRIENDLY AND FLEXIBLE FRONT-END FOR DATA MANAGEMENT SYSTEMS Marjorie Templeton System Development Corporation, Santa Monica, CA. EUFID is a natural language front end for data management systems. It is modular and table driven so that it can be interfaced to different applications and data management systems. It allows a user to query his data base in natural English, including sloppy syntax and misspellings. The tables contain a data management system view of the data base, a semantic/syntactic view of the application, and a mapping from the second to the first. We are entering a new era in data base access. Computers and terminals have come down in price while salaries have risen. We can no longer make users spend a week in class to learn how to get at their data in a data base. Access to the data base must be easy, but also secure. In some aspects, ease and security go together because, when we move the
1979
23
WORD EXPERT PARSING Steven L. Small Department of Computer Science University of Maryland College Park, Maryland 20742 This paper describes an approach to conceptual analysis and understanding of natural language in which linguistic knowledge centers on individual words, and the analysis mechanisms consist of interactions among distributed procedural experts representing that knowledge. Each word expert models the process of diagnosing the intended usage of a particular word in context. The Word Expert Parser performs conceptual analysis through the interactions of the individual experts, which ask questions and exchange information in converging on a single mutually acceptable sentence meaning. The Word Expert Parser models parts of the theory, and the impor
1979
3
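The interaction scheme the abstract describes, individual word experts that consult each other's findings and iterate until the senses converge, can be caricatured in a few lines. Everything below is an invented assumption for illustration (the sense inventories, the expert rules, and the convergence loop); it is not Small's actual parser.

```python
# Toy word-expert interaction: each word carries its own disambiguation
# procedure that inspects what its neighbors have decided so far; experts
# are re-run until the sense assignments stabilize.

def expert_throw(context):
    # "throw" asks what its object turned out to be.
    obj = context.get("object_sense")
    return "host-event" if obj == "social-event" else "propel"

def expert_party(context):
    # "party" defaults to its social sense in this toy inventory.
    return "social-event"

def parse(words, experts):
    """Run every expert once per round so information can propagate."""
    context = {}
    for _ in range(len(words)):
        for word in words:
            context[f"{word}_sense"] = experts[word](context)
        # Toy assumption: the verb's object is the last word's sense.
        context["object_sense"] = context[f"{words[-1]}_sense"]
    return {w: context[f"{w}_sense"] for w in words}

senses = parse(["throw", "party"], {"throw": expert_throw, "party": expert_party})
print(senses)  # {'throw': 'host-event', 'party': 'social-event'}
```

In the first round "throw" defaults to its physical sense; once "party" has announced its social-event sense, the second round lets "throw" revise itself, which is the flavor of mutual convergence the abstract claims for the full system.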
Schank/Riesbeck vs. Norman/Rumelhart: What's the Difference? Marc Eisenstadt The Open University Milton Keynes, ENGLAND This paper explores the fundamental differences between two sentence-parsers developed in the early 1970's: Riesbeck's parser for Schank's 'conceptual dependency' theory (4, 5), and the 'LNR' parser for Norman and Rumelhart's 'active semantic network' theory (3). The Riesbeck parser and the LNR parser share a common goal - that of transforming an input sentence into a canonical form for later use by memory/inference/paraphrase processes. For both parsers, this transformation is the act of 'comprehension', although they appear to go about it in very different ways. Are these differences real or apparent? Riesbeck's parser is implemented as a production system, in which input text can either satisfy the condition side of any production rule within a packet of currently-active rules, or else interrupt pro
1979
4
TOWARD A COMPUTATIONAL THEORY OF SPEECH PERCEPTION Jonathan Allen Research Laboratory of Electronics & Dept. of Electrical Engineering and Computer Science Massachusetts Institute of Technology, Cambridge, MA 02139 ABSTRACT In recent years, a great deal of evidence has been collected which gives substantially increased insight into the nature of human speech perception. It is the author's belief that such data can be effectively used to infer much of the structure of a practical speech recognition system. This paper details a new view of the role of structural constraints within the several structural domains (e.g. articulation, phonetics, phonology, syntax, semantics) that must be utilized to infer the desired percept. Each of the structural domains mentioned above has a substantial "internal theory" describing the constraints within that domain, but there are also many interactions between structural domains which must
1979
5
UNGRAMMATICALITY AND EXTRA-GRAMMATICALITY IN NATURAL LANGUAGE UNDERSTANDING SYSTEMS Stan C. Kwasny The Ohio State University Columbus, Ohio 1. Introduction Among the components included in Natural Language Understanding (NLU) systems is a grammar which specifies much of the linguistic structure of the utterances that can be expected. However, it is certain that inputs that are ill-formed with respect to the grammar will be received, both because people regularly form ungrammatical utterances and because there are a variety of forms that cannot be readily included in current grammatical models and are hence "extra-grammatical". These might be rejected, but as Wilks stresses, "...understanding requires, at the very least, ... some attempt to interpret, rather than merely reject, what seem to be ill-formed utterances." [WIL76] This paper investigates several language ph
1979
6
GENERALIZED AUGMENTED TRANSITION NETWORK GRAMMARS FOR GENERATION FROM SEMANTIC NETWORKS Stuart C. Shapiro Department of Computer Science, SUNY at Buffalo 1. INTRODUCTION Augmented transition network (ATN) grammars have, since their development by Woods [7], become the most used method of describing grammars for natural language understanding and question answering systems. The advantages of the ATN notation have been summarized as "1) perspicuity, 2) generative power, 3) efficiency of representation, 4) the ability to capture linguistic regularities and generalities, and 5) efficiency of operation" [1, p.191]. The usual method of utilizing an ATN grammar in a natural language system is to provide an interpreter which can take any ATN grammar, a lexicon, and a sentence as data and produce either a parse of a sentence or a message that the sentence does not conform to the grammar. A compiler has been written [2;3] whi
1979
7
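The interpreter scheme the abstract describes, take any ATN grammar, a lexicon, and a sentence as data, and return either a parse or a rejection, can be sketched minimally. The grammar, lexicon, and arc format below are toy assumptions for illustration (a bare recognizer with registers, without the recursive PUSH/POP arcs and arbitrary register tests of full ATNs).

```python
# Minimal ATN-style interpreter: states connected by arcs that test the
# lexical category of the next word and set a register on traversal.
# Grammar and lexicon are data, as in the interpreter scheme described.

LEXICON = {"the": "DET", "dog": "N", "cat": "N", "barks": "V"}

# Each arc: (source state, category test, destination state, register to set)
ATN = [
    ("S0", "DET", "S1", "det"),
    ("S1", "N",   "S2", "head"),
    ("S2", "V",   "S3", "verb"),
]
FINAL = {"S3"}

def parse(sentence):
    """Return the filled registers for an accepted sentence, else None."""
    state, registers = "S0", {}
    for word in sentence.split():
        cat = LEXICON.get(word)
        for src, test, dst, reg in ATN:
            if src == state and test == cat:
                registers[reg] = word
                state = dst
                break
        else:
            return None  # no arc matched: sentence does not conform
    return registers if state in FINAL else None

print(parse("the dog barks"))  # {'det': 'the', 'head': 'dog', 'verb': 'barks'}
print(parse("dog the barks"))  # None
```

Shapiro's paper generalizes this familiar analysis direction to generation from semantic networks; the sketch only shows the conventional interpreter side that the abstract takes as its starting point.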
KNOWLEDGE ORGANIZATION AND APPLICATION: BRIEF COMMENTS ON PAPERS IN THE SESSION Aravind K. Joshi Department of Computer and Information Science The Moore School University of Pennsylvania, Philadelphia, PA 19104 Comments: My brief comments on the papers in this session are based on the abstracts available to me and not on the complete papers. Hence, it is quite possible that some of the comments may turn out to be inappropriate or else they have already been taken care of in the full texts. In a couple of cases, I had the benefit of reading some earlier longer related reports, which were very helpful. All the papers (except by Sangster) deal with either knowledge representation, particular types of knowledge to be represented, or how certain types of knowledge are to be used. Brachman describes a lattice-like structured inheritance network (KLONE) as a language for explicit representation of natural language conceptual in
1979
8
Taxonomy, Descriptions, and Individuals in Natural Language Understanding Ronald J. Brachman Bolt Beranek and Newman Inc. KLONE is a general-purpose language for representing conceptual information. Several of its prominent features -- semantically clean inheritance of structured descriptions, taxonomic classification of generic knowledge, intensional structures for functional roles (including the possibility of multiple fillers), and procedural attachment (with automatic invocation) -- make it particularly useful in computer-based natural language understanding. We have implemented a prototype natural language system that uses KLONE extensively in several facets of its operation. This paper describes the system and points out some of the benefits of using KLONE for representation in natural language processing. Our system is the beneficiary of two kinds of advantage from KLONE. First, the taxonomic character of t
1979
9
ON THE SPATIAL USES OF PREPOSITIONS Annette Herskovits Linguistics Department, Stanford University At first glance, the spatial uses of prepositions seem to constitute a good semantic domain for a computational approach. One expects such uses will refer more or less strictly to a closed, explicit, and precise chunk of world knowledge. Such an attitude is expressed in the following statement: "Given descriptions of the shape of two objects, given their location (for example, by means of coordinates in some system of reference), and, in some cases, the location of an observer, one can select an appropriate preposition." This paper shows the fallacy of this claim. It addresses the problem of interpreting and generating "locative predications" (expressions made up of two noun-phrases governed by a prepos
1980
1
ON THE INDEPENDENCE OF DISCOURSE STRUCTURE AND SEMANTIC DOMAIN Charlotte Linde and J.A. Goguen I. THE STATUS OF DISCOURSE STRUCTURE Traditionally, linguistics has been concerned with units at the level of the sentence or below, but recently a body of research has emerged which demonstrates the existence and organization of linguistic units larger than the sentence. (Chafe, 1974; Goguen, Linde, and Weiner, to appear; Grosz, 1977; Halliday and Hasan, 1976; Labov, 1972; Linde, 1974, 1979, 1980a, 1980b; Linde and Goguen, 1978; Linde and Labov, 1975; Polanyi, 1978; Weiner, 1979.) Each such study raises a question about whether the structure discovered is a property of the organization of language or whether it is entirely a property of the semantic domain. That is, are we discovering general facts about the structure of language at a level beyond the sentence, or are we discovering particular facts about apartment layouts
1980
10
The Parameters of Conversational Style Deborah Tannen Georgetown University There are several dimensions along which verbalization responds to context, resulting in individual and social differences in conversational style. Style, as I use the term, is not something extra added on, like decoration. Anything that is said must be said in some way; co-occurrence expectations of that "way" constitute style. The dimensions of style I will discuss are: 1. Fixity vs. novelty 2. Cohesiveness vs. expressiveness 3. Focus on content vs. interpersonal involvement. Fixity vs. novelty Any utterance or sequence must be identified (rightly or wrongly, in terms of interlocutor's intentions) with a recognizable frame, as it conforms more or less to a familiar pattern. Every utterance and interaction is formulaic, or conventionalized, to some degree. There is a continuum of formulaicness from utterly fixed strings of words (situati
1980
11
PHRASE STRUCTURE TREES BEAR MORE FRUIT THAN YOU WOULD HAVE THOUGHT* Aravind K. Joshi and Leon "S." Levy Department of Computer and Information Science The Moore School/D2 University of Pennsylvania Philadelphia, PA 19104 and Bell Telephone Laboratories Whippany, NJ 07981 EXTENDED ABSTRACT** There is renewed interest in examining the descriptive as well as generative power of phrase structure grammars. The primary motivation has come from the recent investigations in alternatives to transformational grammars [e.g., 1, 2, 3, 4]. We will present several results and ideas related to phrase structure trees which have significant relevance to computational linguistics. We want to accomplish several objectives in this paper. 1. We will give a brief survey of some recent results and approaches by various investigators including, of course, our own work, indicating their interrelationships. Here we will review
1980
12
CAPTURING LINGUISTIC GENERALIZATIONS WITH METARULES IN AN ANNOTATED PHRASE-STRUCTURE GRAMMAR Kurt Konolige SRI International 1. Introduction Computational models employed by current natural language understanding systems rely on phrase-structure representations of syntax. Whether implemented as augmented transition nets, BNF grammars, annotated phrase-structure grammars, or similar methods, a phrase-structure representation makes the parsing problem computationally tractable [7]. However, phrase-structure representations have been open to the criticism that they do not capture linguistic generalizations that are easily expressed in transformational grammars. This paper describes a formalism for specifying syntactic and semantic generalizations across the rules of a phrase-structure grammar (PSG). The for
1980
13
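The metarule idea the abstract alludes to can be shown concretely: a metarule is a pattern over phrase-structure rules that derives new rules, so a generalization is stated once rather than repeated per rule. The sketch below is an illustrative assumption (a GPSG-flavored passive metarule with an invented rule format), not Konolige's formalism.

```python
# A metarule as a function from phrase-structure rules to derived rules.
# Closing the base grammar under the metarule states the passive
# generalization once, instead of writing a passive variant of every rule.

BASE_RULES = [
    ("VP", ["V", "NP"]),
    ("VP", ["V", "NP", "PP"]),
    ("VP", ["V", "S"]),
]

def passive_metarule(lhs, rhs):
    """VP -> V NP X  ==>  VP[pass] -> V[pass] X PP[by]  (toy pattern)."""
    if lhs == "VP" and len(rhs) >= 2 and rhs[0] == "V" and rhs[1] == "NP":
        return ("VP[pass]", ["V[pass]"] + rhs[2:] + ["PP[by]"])
    return None  # metarule does not apply to this rule

def close_under(rules, metarule):
    derived = [metarule(lhs, rhs) for lhs, rhs in rules]
    return rules + [r for r in derived if r is not None]

for lhs, rhs in close_under(BASE_RULES, passive_metarule):
    print(lhs, "->", " ".join(rhs))
```

The transitive rules each yield a passive counterpart while the sentential-complement rule is untouched, which is exactly the kind of cross-rule generalization the abstract says plain phrase-structure grammars fail to capture.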
Computational Analogues of Constraints on Grammars: A Model of Syntactic Acquisition Robert Cregar Berwick MIT Artificial Intelligence Laboratory, Cambridge, MA 1. Introduction: Constraints and Language Acquisition A principal goal of modern linguistics is to account for the apparently rapid and uniform acquisition of syntactic knowledge, given the relatively impoverished input that evidently serves as the basis for the induction of that knowledge - the so-called projection problem. At least since Chomsky, the usual response to the projection problem has been to characterize knowledge of language as a grammar, and then proceed by restricting so severely the class of grammars available for acquisition that the induction task is greatly simplified - perhaps trivialized, "consistent with our knowledge of what language is and of which stages the child passes through in learning it." [2, page 218] In particular, although the final psycholinguis
1980
14
A Linear-time Model of Language Production: some psychological implications (extended abstract) David D. McDonald MIT Artificial Intelligence Laboratory Cambridge, Massachusetts Traditional psycholinguistic studies of language production, using evidence from naturally occurring errors in speech [1][2] and from real-time studies of hesitations and reaction time [3][4], have resulted in models of the levels at which different linguistic units are represented and the constraints on their scope. This kind of evidence by itself, however, can tell us nothing about the character of the process that manipulates these units, as there are many a priori alternative computational devices that are equally capable of implementing the observed behavior. It will be the thesis of this paper that if principled, non-trivial models of the language production process are to be constructed, they must be informed by comput
1980
15
PROBLEM SOLVING APPLIED TO LANGUAGE GENERATION Douglas E. Appelt Stanford University, Stanford, California and SRI International, Menlo Park, California This research was supported at SRI International by the Defense Advanced Research Projects Agency under contract N00039-79-C-0118 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government. The author is grateful to Barbara Grosz, Gary Hendrix and Terry Winograd for comments on an earlier draft of this paper. 1. Introduction Previous approaches to designing language understanding systems have considered language generation to be the activity of a highly specialized linguistic facility that is largely independent of other co
1980
16
Interactive Discourse: Influence of the Social Context Panel Chair's Introduction Jerry R. Hobbs SRI International Progress on natural language interfaces can perhaps be stimulated or directed by imagining the ideal natural language system of the future. What features (or even design philosophies) should such a system have in order to become an integral part of our work environments? What scaled-down versions of these features might be possible in the near future in "simple service systems" [2]? These issues can be broken down into the following four questions: 1. What are the significant features of the environment in which the system will reside? The system will be one participant in an intricate information network, depending on a continually reinforced shared complex of knowledge [9]. To be an integral part of this environment, the system must possess some of the shared knowledge and perhaps must partici
1980
17
PARALANGUAGE IN COMPUTER-MEDIATED COMMUNICATION John Carey Alternate Media Center New York University This paper reports on some of the components of person-to-person communication mediated by computer conferencing systems. Transcripts from two systems were analysed: the Electronic Information and Exchange System (EIES), based at the New Jersey Institute of Technology; and Planet, based at Infomedia Inc. in Palo Alto, California. The research focused upon the ways in which expressive communication is encoded by users of the medium. 1. INTRODUCTION The term paralanguage is used broadly in this report. It includes those vocal features outlined by Trager (1964) as well as the prosodic system of Crystal (1969). Both are concerned with the investigation of linguistic phenomena which generally fall outside the boundaries of phonology, morphology and lexical analysis. These phenomena are the voice qualities and tones wh
1980
18
Expanding the Horizons of Natural Language Interfaces Phil Hayes Computer Science Department, Carnegie-Mellon University Pittsburgh, PA 15213, USA Abstract Current natural language interfaces have concentrated largely on determining the literal "meaning" of input from their users. While such decoding is an essential underpinning, much recent work suggests that natural language interfaces will never appear cooperative or graceful unless they also incorporate numerous non-literal aspects of communication, such as robust communication procedures. This paper defends that view, but claims that direct imitation of human performance is not the best way to implement many of these non-literal aspects of communication; that the new technology of powerful personal computers with integral graphics displays offers techniques superior to those of humans for these aspects, while still satisfying human communication needs. Th
1980
19
UNDERSTANDING SCENE DESCRIPTIONS AS EVENT SIMULATIONS David L. Waltz University of Illinois at Urbana-Champaign The language of scene descriptions must allow a hearer to build structures of schemas similar (to some level of detail) to those the speaker has built via perceptual processes. The understanding process in general requires a hearer to create and run "event simulations" to check the consistency and plausibility of a "picture" constructed from a speaker's description. A speaker must also run similar event simulations on his own descriptions in order to be able to judge when the hearer has been given sufficient information to construct an appropriate "picture", and to be able to respond appropriately to the hearer's questions about or responses to the scene description. In this paper I explore some simple scen
1980
2
THE PROCESS OF COMMUNICATION IN FACE TO FACE VS. COMPUTERIZED CONFERENCES: A CONTROLLED EXPERIMENT USING BALES INTERACTION PROCESS ANALYSIS Starr Roxanne Hiltz, Kenneth Johnson, and Ann Marie Rabke Upsala College INTRODUCTION A computerized conference (CC) is a form of communication in which participants type into and read from a computer terminal. The participants may be on line at the same time--termed a "synchronous" conference--or may interact asynchronously. The conversation is stored and mediated by the computer. How does this form of communication change the process and outcome of group discussions, as compared to the "normal" face to face (FtF) medium of group discussion, where participants communicate by talking, listening and observing non-verbal behavior, and where there is no lag between the sending and receipt of communication signals? This paper briefly summarizes the results of a controlled laboratory ex
1980
20
WHAT TYPE OF INTERACTION IS IT TO BE Emanuel A. Schegloff Department of Sociology, U.C.L.A. For one, like myself, who knows something about human interaction, but next to nothing about computers and human/machine interaction, the most useful role at a meeting such as this is to listen, to hear the troubles of those who work actively in the area, and to respond when some problem comes up for whose solution the practices of human interactants seems relevant. Here, therefore, I will merely mention some areas in which such exchanges may be useful. There appear to be two sorts of status for machine/technology under consideration here. In one, the interactants themselves are humans, but the interaction between them is carried by some technology. We have had the telephone for about 100 years now, and letter writing much longer, so there is a history here; to it are to be added video technology, as in some of the work reporte
1980
21
THE COMPUTER AS AN ACTIVE COMMUNICATION MEDIUM John C. Thomas IBM T. J. Watson Research Center PO Box 218 Yorktown Heights, New York 10598 I. THE NATURE OF COMMUNICATION Communication is often conceived of in basically the following terms. A person has some idea which he or she wants to communicate to a second person. The first person translates that idea into some symbol system which is transmitted through some medium to the receiver. The receiver receives the transmission and translates it into some internal idea. Communication, in this view, is considered good to the extent that there is an isomorphism between the idea in the head of the sender before sending the message and the idea in the receiver's head after receiving the message. A good medium of communication, in this view, is one that adds minimal noise to the signal. Messages are considered good
1980
22
WHAT DISCOURSE FEATURES AREN'T NEEDED IN ON-LINE DIALOGUE Eleanor Wynn Xerox Office Products Division Palo Alto, California It is very interesting as a social observer to track the development of computer scientists involved in AI and natural language-related research in theoretical issues of mutual concern to computer science and the social study of language use. The necessity of writing programs that demonstrate the validity or invalidity of conceptualizations and assumptions has caused computer scientists to cover a lot of theoretical ground in a very short time, or at least to arrive at a problem area, and to see the problem fairly clearly, that is very contemporary in social theory. There is in fact a discrepancy between the level of sophistication exhibited in locating the problem area (forced by the specific constraints of programming work) and in the theorizations concocted to solve the problem. Thus we find comp
1980
23
Parsing W. A. Martin Laboratory for Computer Science Massachusetts Institute of Technology Cambridge, Massachusetts 02139 Looking at the Proceedings of last year's Annual Meeting, one sees that the session most closely paralleling this one was entitled Language Structure and Parsing. In a very nice presentation, Martin Kay was able to unite the papers of that session under a single theme. As he stated it, "There has been a shift of emphasis away from highly structured systems of complex rules as the principal repository of information about the syntax of a language towards a view in which the responsibility is distributed among the lexicon, semantic parts of the linguistic description, and a cognitive or strategic component. Concomitantly, interest has shifted from algorithms for syntactic analysis and generation, in which the central structure and the exact sequence of events are paramount, to systems in which
1980
24
"If The Parser Fails" Ralph M. Weischedel University of Delaware and John E. Black W. L. Gore & Associates, Inc. The unforgiving nature of natural language components when someone uses an unexpected input has recently been a concern of several projects. For instance, Carbonell (1979) discusses inferring the meaning of new words. Hendrix, et al. (1978) describe a system that provides a means for naive users to define personalized paraphrases and that lists the items expected next at a point where the parser blocks. Weischedel, et al. (1978) show how to relax both syntactic and semantic constraints such that some classes of ungrammatical or semantically inappropriate input are understood. Kwasny and Sondheimer (1979) present techniques for understanding several classes of syntactically ill-formed input. Codd, et al. (1978) and Lebowitz (1979) present alternatives to top-down, left-to-right parsers as a means of deali
1980
25
Flexible Parsing Phil Hayes and George Mouradian Computer Science Department, Carnegie-Mellon University Pittsburgh, PA 15213, USA Abstract When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking off and restarting, speaking in fragments, etc. Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP, a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system. 1. The Importance of Flexible Parsing When people use natural lang
1980
26
Parsing in the Absence of a Complete Lexicon Jim Davidson and S. Jerrold Kaplan Computer Science Department, Stanford University Stanford, CA 94305 I. Introduction It is impractical for natural language parsers which serve as front ends to large or changing databases to maintain a complete in-core lexicon of words and meanings. This note discusses a practical approach to using alternative sources of lexical knowledge by postponing word categorization decisions until the parse is complete, and resolving remaining lexical ambiguities using a variety of information available at that time. II. The Problem A natural language parser working with a database query system (e.g., PLANES [Waltz et al., 1976], LADDER [Hendrix, 1977], ROBOT [Harris, 1977], CO-OP [Kaplan, 1979]) encounters lexical difficulties not present in simpler applications. In particular, the description of the domain of discourse may be quite large (millions of words), and vari
1980
27
On Parsing Strategies and Closure Kenneth Church MIT Cambridge, MA 02139 This paper proposes a welcome hypothesis: a computationally simple device is sufficient for processing natural language. Traditionally it has been argued that processing natural language syntax requires very powerful machinery. Many engineers have come to this rather grim conclusion; almost all working parsers are actually Turing Machines (TM). For example, Woods believed that a parser should have TM complexity and specifically designed his Augmented Transition Networks (ATNs) to be Turing Equivalent. (1) "It is well known (cf. [Chomsky64]) that the strict context-free grammar model is not an adequate mechanism for characterizing the subtleties of natural languages." [Woods70] If the problem is really as hard as it appears, then the only solution is to grin and bear it. Our own position is that parsing acceptable sentences is simpler because there
1980
28
STRATEGY SELECTION FOR AN ATN SYNTACTIC PARSER Giacomo Ferrari and Oliviero Stock Istituto di Linguistica Computazionale - CNR, Pisa Performance evaluation in the field of natural language processing is generally recognised as being extremely complex. There are, so far, no pre-established criteria to deal with this problem. 1. It is impossible to measure the merits of a grammar, seen as the component of an analyser, in absolute terms. An "ad hoc" grammar, constructed for a limited set of sentences is, without doubt, more efficient in dealing with those particular sentences than a grammar constructed for a larger set. Therefore, the first rudimentary criterion, when evaluating the relationship between a grammar and a set of sentences, should be to establish whether this grammar is capable of analysing these s
1980
29
ON THE EXISTENCE OF PRIMITIVE MEANING UNITS Sharon C. Salveter Computer Science Department SUNY Stony Brook Stony Brook, N.Y. 11794 ABSTRACT Knowledge representation schemes are either based on a set of primitives or not. The decision of whether or not to have a primitive-based scheme is crucial since it affects the knowledge that is stored and how that knowledge may be processed. We suggest that a knowledge representation scheme may not initially have primitives, but may evolve into a primitive-based scheme by inferring a set of primitive meaning units based on previous experience. We describe a program that infers its own primitive set and discuss how the inferred primitives may affect the organization of existing information and the subsequent incorporation of new information. 1. DECIDING HOW TO REPRESENT KNOWLEDGE A crucial decision in the design of a knowledge representation is whether to base it on primi
1980
3
PHRAN - A Knowledge-Based Natural Language Understander Robert Wilensky and Yigal Arens University of California at Berkeley Abstract We have developed an approach to natural language processing in which the natural language processor is viewed as a knowledge-based system whose knowledge is about the meanings of the utterances of its language. The approach is oriented around the phrase rather than the word as the basic unit. We believe that this paradigm for language processing not only extends the capabilities of other natural language systems, but handles those tasks that previous systems could perform in a more systematic and extensible manner. We have constructed a natural language analysis program called PHRAN (PHRasal ANalyzer) based on this approach. This model has a number of advantages over existing systems, including the ability to understand a wider
1980
30
ATN GRAMMAR MODELING IN APPLIED LINGUISTICS ABSTRACT: Augmented Transition Network grammars have significant areas of unexplored application as a simulation tool for grammar designers. The intent of this paper is to discuss some current efforts in developing a grammar testing tool for the specialist in linguistics. The scope of the system under discussion is to display structures based on the modeled grammar. Full language definition with facilitation of semantic interpretation is not within the scope of the systems described in this paper. Application of grammar testing to an applied linguistics research environment is emphasized. Extensions to the teaching of linguistics principles and to refinement of the primitive ATN functions are also considered. 1. Using ATN Models in Experimental Grammar Design Application of the ATN to general grammar modeling for simulation and comparative purposes was first suggested by
1980
31
Interactive Discourse: Looking to the Future Panel Chair's Introduction Bonnie Lynn Webber University of Pennsylvania In any technological field, both short-term and long-term research can be aided by considering where that technology might be ten, twenty, fifty years down the pike. In the field of natural language interactive systems, a 21 year vision is particularly apt to consider, since it brings us to the year 2001. One well-known vision [1] of 2001 includes the famous computer named Hal - one offspring, so to speak, of the major theoretical and engineering breakthrough in computers that Clarke records as having occurred in the early 1980's. This computer Hal is able to understand and converse in perfect idiomatic English (written and spoken) with the crew of the spacecraft Discovery. And not just task-oriented dialogues, mind you! Hal is a far cry from today's prototype natural language query systems, int
1980
32
PROSPECTS FOR PRACTICAL NATURAL LANGUAGE SYSTEMS Larry R. Harris Artificial Intelligence Corporation Newton Centre, Mass. 02159 As the author of a "practical" NL data base query system, one of the suggested topics for this panel is of particular interest to me. The issue of what hurdles remain before NL systems become practical strikes particularly close to home. As someone with a more pragmatic view of NL processing, my feeling is, not surprisingly, that we already have the capability to construct practical NL systems. Significant enhancement of existing man-machine communication is possible within the current NL technology if we set our sights appropriately and are willing to take the additional effort to craft systems actually worthy of being used. The missing link isn't a utopian parsin
1980
33
FUTURE PROSPECTS FOR COMPUTATIONAL LINGUISTICS Gary G. Hendrix SRI International Preparation of this paper was supported by the Defense Advance Research Projects Agency under contract N00039-79-C-0118 with the Naval Electronic Systems Command. The views expressed are those of the author. A. Introduction For over two decades, researchers in artificial intelligence and computational linguistics have sought to discover principles that would allow computer systems to process natural languages such as English. This work has been pursued both to further the scientific goals of providing a framework for a computational theory of natural-language communication and to further the engineering goals of creating computer-based systems that can communicate with their human users in human terms. Although the goal of fluent machine-based natural-language understanding remains elusive, considerable progress has been made and futu
1980
34
NATURAL LANGUAGE INTERACTION WITH MACHINES: A PASSING FAD? OR THE WAY OF THE FUTURE? A. Michael Noll American Telephone and Telegraph Company Basking Ridge, New Jersey 07920 People communicate primarily by two modes: acoustic -- the spoken word; and visual -- the written word. It is therefore natural that people would expect their communications with machines to likewise use these two modes. To a considerable extent, speech is probably the most natural of the natural-language modes. Hence, a fascination exists with machines that respond to spoken commands with synthetic speech responses to create a natural-language interactive discourse. However, although vast amounts of research and development effort have been expended in the search for systems that understand human speech and respond with synthetic speech, the goal of the perfect system remains as elusive as ever. Systems for produ
1980
35
NATURAL VS. PRECISE CONCISE LANGUAGES FOR HUMAN OPERATION OF COMPUTERS: RESEARCH ISSUES AND EXPERIMENTAL APPROACHES Ben Shneiderman, Department of Computer Science University of Maryland, College Park, MD. This paper raises concerns that natural language front ends for computer systems can limit a researcher's scope of thinking, yield inappropriately complex systems, and exaggerate public fear of computers. Alternative modes of computer use are suggested and the role of psychologically oriented controlled experimentation is emphasized. Research methods and recent experimental results are briefly reviewed. 1. INTRODUCTION The capacity of sophisticated modern computers to manipulate and display symbols offers remarkable opportunities for natural language communication among people. Text editing systems are used to generate business or personal letters, scientific research papers, newspaper articles, or other textual data
1980
36
NATURAL LANGUAGE AND COMPUTER INTERFACE DESIGN MURRAY TUROFF DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE NEW JERSEY INSTITUTE OF TECHNOLOGY SOME ICONOCLASTIC ASSERTIONS Considering the problems we have in communicating with other humans using natural language, it is not clear that we want to recreate these problems in dealing with the computer. While there is some evidence that natural language is useful in communications among humans, there is also considerable evidence that it is neither perfect nor ideal. Natural language is wordy (redundant) and imprecise. Most human groups who have a need to communicate quickly and accurately tend to develop a rather well specified subset of natural language that is highly coded and precise in nature. Pilots and police are good examples of this. Even working groups within a field or discipline tend over time to develop a jargon that minimizes the effort of communica
1980
37
WORD, PHRASE AND SENTENCE Rob't F. Simmons Univ. of Texas, Austin Among the relative verities of natural language processing are the facts that morphemes and words are primary semantic units, and that their co-occurrence in phrases and sentences provides cues for selecting sense meanings. In this session, two psycholinguistic studies show some aspects of how human subjects process words while reading. A study of medical vocabulary shows that medical words are highly associated by co-occurrence in medical definitions. Another report shows the effectiveness of keyword identification and selection of prominent sentences to organize abstracts for retrieval. A fifth study argues that analysis of existing natural language dictionaries can be expected to contribute importantly to what is needed for text understanding programs. The
1980
38
REPRESENTATION OF TEXTS FOR INFORMATION RETRIEVAL N.J. Belkin, B.G. Michell, and D.G. Kuehner University of Western Ontario The representation of whole texts is a major concern of the field known as information retrieval (IR), an important aspect of which might more precisely be called 'document retrieval' (DR). The DR situation, with which we will be concerned, is, in general, the following: a. A user, recognizing an information need, presents to an IR mechanism (i.e., a collection of texts, with a set of associated activities for representing, storing, matching, etc.) a request, based upon that need, hoping that the mechanism will be able to satisfy that need. b. The task of the IR mechanism is to present the user with the text(s) that it judges to be most likely to satisfy the user's need, based upon the request. c. The user examines the text(s) and her/his need is satisfied completely or partially or not at all.
1980
39
Metaphor - A Key to Extensible Semantic Analysis Jaime G. Carbonell Carnegie-Mellon University Pittsburgh, PA 15213 Abstract Interpreting metaphors is an integral and inescapable process in human understanding of natural language. This paper discusses a method of analyzing metaphors based on the existence of a small number of generalized metaphor mappings. Each generalized metaphor contains a recognition network, a basic mapping, additional transfer mappings, and an implicit intention component. It is argued that the method reduces metaphor interpretation from a reconstruction to a recognition task. Implications towards automating certain aspects of language learning are also discussed. 1. An Opening Argument A dream of many computational linguists is to produce a natural language analyzer that tries its best to process language that "almost but not quite" corresponds to the system's grammar, dictionary
1980
4
WORD AND OBJECT IN DISEASE DESCRIPTIONS* M.S. Blois, D.D. Sherertz, M.S. Tuttle Section on Medical Information Science University of California, San Francisco Experiments were conducted on a book, Current Medical Information and Terminology (AMA, Chicago, 1971, edited by Burgess Gordon, M.D.), which is a compendium of 3262 diseases, each of which is defined by a collection of attributes. The original purpose of the book was to introduce a standard nomenclature of disease names, and the attributes are organized in conventional medical form: a definition consists of a brief description of the relevant symptoms, signs, laboratory findings, and the like. Each disease is, in addition, assigned to one (or at most two) of eleven disease categories which enumerate physiological systems (skin, respiratory, cardiovascular, etc.). While the editorial style of the book is highly telegraphic, with many attributes being expressed as
1980
40
REQUIREMENTS OF TEXT PROCESSING LEXICONS Kenneth C. Litkowski 16729 Shea Lane, Gaithersburg, Md. 20760 Five years ago, Dwight Bolinger [1] wrote that efforts to represent meaning had not yet made use of the insights of lexicography. The few substantial efforts, such as those spearheaded by Olney [2,3], Mel'cuk [4], Smith [5], and Simmons [6,7], made some progress, but never came to fruition. Today, lexicography and its products, the dictionaries, remain an untapped resource of uncertain value. Indeed, many who have analyzed the contents of a dictionary have concluded that it is of little value to linguistics or artificial intelligence. Because of the size and complexity of a dictionary, perhaps such a conclusion is inevitable, but I believe it is wrong. To avoid becoming
1980
41
Chronometric Studies of Lexical Ambiguity Resolution Mark S. Seidenberg University of Illinois at Urbana-Champaign Bolt, Beranek and Newman, Inc. Michael K. Tanenhaus Wayne State University Languages such as English contain a large number of words with multiple meanings. These words are commonly termed "lexical ambiguities", although it is probably more accurate to speak of them as potentially ambiguous. Determining how the contextually appropriate reading of a word is identified presents an important and unavoidable problem for persons developing theories of natural language processing. A large body of psycholinguistic research on ambiguity resolution has failed to yield a consistent set of findings or a general, non-controversial theory. In this paper, we review the results of six experiments which form the basis of a model of ambiguity resolution in context, and at th
1980
42
Real Reading Behavior Robert Thibadeau, Marcel Just, and Patricia Carpenter Carnegie-Mellon University Pittsburgh, PA 15213 Abstract The most obvious observable activities that accompany reading are the eye fixations on various parts of the text. Our laboratory has now developed the technology for automatically measuring and recording the sequence and duration of eye fixations that readers make in a fairly natural reading situation. This paper reports on research in progress to use our observations of this real reading behavior to construct computational models of the cognitive processes involved in natural reading. In the first part of this paper we consider some constraints placed on models of human language comprehension imposed by the eye fixation data. In the second part we propose a particular model whose processing time on each word of the text is proportional to human readers' fixation durations. Some Observations
1980
43
An Experiment in Machine Translation INTRODUCTION Although funding for Machine Translation (MT) research virtually ended in the U.S. with the release of the ALPAC report [1] in 1966, there has been a continuing interest in this field. Rapid evolution of science and technology, coupled with increased world-wide exposure of their products, demands more and more speed in translation (e.g., in the case of operation and maintenance manuals). Unfortunately, this rapid evolution has made translation an even more difficult and time-consuming task. The large surplus of (presumably qualified) translators cited by the ALPAC report simply does not exist in many technical areas; the current state of affairs finds instead a critical shortage. In addition, the proportion of scientific and technical literature published in English is diminishing. As qualified human translators becom
1980
44
Metaphor Comprehension - A special mode of language processing? (Extended Abstract) Jon M. Slack Open University, U.K. The paper addresses the question of whether a complete language understanding system requires special procedures in order to comprehend metaphorical language. To answer this question it is necessary to delineate the processes involved in metaphor comprehension and to determine the uniqueness of such processes in the context of existing language understanding systems. I. DEFINING THE PROBLEM For the purposes of this paper a metaphor is defined as a linguistic input containing elements which result in a mismatch at the semantic level which the language understanding system attempts to interpret. For example, the sentence "Billboards are warts on the landscape" results in a semantic mismatch represented by the sentence Billboards are not a member of the category
1980
5
Interactive Discourse: Influence of Problem Context Panel Chair's Introduction Barbara Grosz SRI International The purpose of the special parasession on "Interactive Man/Machine Discourse" is to discuss some critical issues in the design of (computer-based) interactive natural language processing systems. This panel will be addressing the question of how the purpose of the interaction, or "problem context", affects what is said and how it is interpreted. Each of the panel members brings a different orientation toward the study of language to this question. My hope is that looking at the question from these different perspectives will expose issues critical to the study of language in general, and to the construction of computer systems that can communicate with people in particular. Of course, the issue of the influence of "problem context" is separable from the issue of how one might get a computer system to ta
1980
6
SHOULD COMPUTERS WRITE SPOKEN LANGUAGE? Wallace L. Chafe University of California, Berkeley Recently there has developed a great deal of interest in the differences between written and spoken language. I joined this trend a little more than a year ago, and have been exploring not only what the specific differences are, but also the reasons why they might exist. The approach I have taken has been to look for differences between the situations and processes involved in speaking on the one hand and writing on the other, and to speculate on how those differences might be responsible for the observable differences in the output. What happens when we write and what happens when we speak are different things, both psychologically and socially, and I have been trying to see how what we do in the two situations leads to the specific things that we find in writing and speaking. I occasionally interact with the UNIX computer system at
1980
7
Signalling the Interpretation of Indirect Speech Acts Philip R. Cohen Center for the Study of Reading University of Illinois, & Bolt, Beranek and Newman, Inc. Cambridge, Mass. This panel was asked to consider how various "problem contexts" (e.g., cooperatively assembling a pump, or Socratically teaching law) influence the use of language. As a starting point, I shall regard the problem context as establishing a set of expectations and assumptions about the shared beliefs, goals, and social roles of those participants. Just how people negotiate that they are in a given problem context and what they know about those contexts are interesting questions, but not ones I shall address here. Rather, I shall outline a theory of language use that is sensitive to those beliefs, goals, and expectations. The theory is being applied to charact
1980
8
PARASESSION ON TOPICS IN INTERACTIVE DISCOURSE INFLUENCE OF THE PROBLEM CONTEXT* Aravind K. Joshi Department of Computer and Information Science Room 268 Moore School University of Pennsylvania Philadelphia, PA 19104 My comments are organized within the framework suggested by the Panel Chair, Barbara Grosz, which I find very appropriate. All of my comments pertain to the various issues raised by her; however, wherever possible I will discuss these issues more in the context of the "information seeking" interaction and the data base domain. The primary question is how the purpose of the interaction or "the problem context" affects what is said and how it is interpreted. The two separate aspects of this question that must be considered are the function and the domain of the discourse. 1. Types of interactions (functions): 1.1 We are concerned here about a computer system participating in a restricted kind of
1980
9
A Practical Comparison of Parsing Strategies Jonathan Slocum Siemens Corporation INTRODUCTION Although the literature dealing with formal and natural languages abounds with theoretical arguments of worst-case performance by various parsing strategies [e.g., Griffiths & Petrick, 1965; Aho & Ullman, 1972; Graham, Harrison & Ruzzo, 1980], there is little discussion of comparative performance based on actual practice in understanding natural language. Yet important practical considerations do arise when writing programs to understand one aspect or another of natural language utterances. Where, for example, a theorist will characterize a parsing strategy according to its space and/or time requirements in attempting to analyze the worst possible input according to an arbitrary grammar strictly limited in expressive power, the researcher studying Natural Language Processing can be justified in concerning himself more with issues of
1981
1
What Makes Evaluation Hard? 1.0 THE GOAL OF EVALUATION Ideally, an evaluation technique should describe an algorithm that an evaluator could use that would result in a score or a vector of scores that depict the level of performance of the natural language system under test. The scores should mirror the subjective evaluation of the system that a qualified judge would make. The evaluation technique should yield consistent scores for multiple tests of one system, and the scores for several systems should serve as a means for comparison among systems. Unfortunately, there is no such evaluation technique for natural language understanding systems. In the following sections, I will attempt to highlight some of the difficulties. 2.0 PERSPECTIVE OF THE EVALUATION The first problem is to determine who the "qualifie
1981
10
EVALUATION OF NATURAL LANGUAGE INTERFACES TO DATA BASE SYSTEMS Bozena Henisz Thompson California Institute of Technology INTRODUCTION Is evaluation, like beauty, in the eye of the beholder? The answer is far from simple because it depends on who is considered to be the proper beholder. Evaluators may range from casual users to society as a whole, with system builders, sophisticated users, linguists, grant providers, system buyers, and others in between. The members of this panel are system builders and linguists -- or rather the two fused into one -- but, I believe, interested in all or almost all actual or potential bodies of evaluators. One of our colleagues expressed a forceful opinion while being a member of a similar panel at last year's ACL conference: "Those of us on this panel and other researchers in the field simply don't have the right t
1981
11
TWO DISCOURSE GENERATORS William C. Mann USC Information Sciences Institute WHAT IS DISCOURSE GENERATION? The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text. Because even little children can produce multisentential text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subject matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independent
1981
12
A GRAMMAR AND A LEXICON FOR A TEXT-PRODUCTION SYSTEM Christian M.I.M. Matthiessen USC/Information Sciences Institute ABSTRACT In a text-production system high and special demands are placed on the grammar and the lexicon. This paper will view these components in such a system (overview in section 1). First, the subcomponents dealing with semantic information and with syntactic information will be presented separately (section 2). The problems of relating these two types of information are then identified (section 3). Finally, strategies designed to meet the problems are proposed and discussed (section 4). One of the issues that will be illustrated is what happens when a systemic linguistic approach is combined with a KL-ONE-like knowledge representation - a novel and hitherto unexplored combination. 1. THE PLACE OF A GRAMMAR AND A LEXICON IN PENMAN This paper will view a grammar and a lexicon as integral parts of a text produc
1981
13
Language Production: the Source of the Dictionary David D. McDonald University of Massachusetts at Amherst April 1980 Abstract Ultimately in any natural language production system the largest amount of human effort will go into the construction of the dictionary: the data base that associates objects and relations in the program's domain with the words and phrases that could be used to describe them. This paper describes a technique for basing the dictionary directly on the semantic abstraction network used for the domain knowledge itself, taking advantage of the inheritance and specialization mechanisms of a network formalism such as KL-ONE. The technique creates considerable economies of scale, and makes possible the automatic description of individual objects according to their position in the semantic net. Furthermore, because the process of deciding what properties to use in an object's description is now given over to a common proced
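The inheritance idea this abstract describes can be illustrated with a toy example (all node names and entries below are invented, and real KL-ONE networks are far richer): dictionary entries hang off nodes of an abstraction network and are inherited down specialization links unless a more specific node overrides them.

```python
# Minimal sketch (invented data): dictionary entries attached to nodes of
# an abstraction network, inherited down the specialization hierarchy
# unless a more specific node supplies its own entry.

network = {  # node -> (parent node, local dictionary entry or None)
    "thing":     (None,      "thing"),
    "vehicle":   ("thing",   "vehicle"),
    "truck":     ("vehicle", None),          # no local entry: inherits
    "firetruck": ("truck",   "fire engine"), # overrides the inherited word
}

def describe(node):
    """Walk up the specialization links until an entry is found."""
    while node is not None:
        parent, entry = network[node]
        if entry is not None:
            return entry
        node = parent
    return None

word = describe("truck")  # inherited from "vehicle"
```

The economy of scale the abstract mentions falls out directly: only nodes whose wording differs from their abstraction need their own dictionary entry.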
1981
14
Analogies in Spontaneous Discourse 1 Rachel Reichman Harvard University and Bolt Beranek and Newman Inc. Abstract This paper presents an analysis of analogies based on observations of natural conversations. People's spontaneous use of analogies provides insight into their implicit evaluation procedures for analogies. The treatment here, therefore, reveals aspects of analogical processing that are somewhat more difficult to see in an experimental context. The work involves explicit treatment of the discourse context in which analogy occurs. A major focus here is the formalization of the effects of analogy on discourse development. There is much rule-like behavior in this process, both in underlying thematic development of the discourse and in the surface linguistic forms used in this development. Both these forms of regular behavior are discussed in terms of a
1981
15
INVESTIGATION OF PROCESSING STRATEGIES FOR THE STRUCTURAL ANALYSIS OF ARGUMENTS Robin Cohen Department of Computer Science University of Toronto Toronto, Canada M5S 1A7 2. THE UNDERSTANDING PROCESS This paper outlines research on processing strategies being developed for a language understanding system, designed to interpret the structure of arguments. For the system, arguments are viewed as trees, with claims as fathers to their evidence. Then understanding becomes a problem of developing a representative argument tree, by locating each proposition of the argument at its appropriate place. The processing strategies we develop for the hearer are based on expectations that the speaker will use particular coherent transmission strategies and are designed to be fairly efficient (work in linear time). We also comment on the use by the speaker of linguistic clues to indicate structure, illustrating how the hearer can int
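The claims-as-fathers-to-evidence view described above can be sketched as a simple tree structure (the class and the example propositions are invented for illustration, not taken from the paper):

```python
# Toy sketch: an argument as a tree in which each claim is the parent of
# the propositions offered as evidence for it. Understanding then amounts
# to placing each incoming proposition at its appropriate node.

class Proposition:
    def __init__(self, text):
        self.text = text
        self.evidence = []  # child propositions supporting this claim

    def attach_evidence(self, prop):
        """Place a proposition under the claim it supports."""
        self.evidence.append(prop)
        return prop

# Build a tiny argument tree: a claim, two pieces of evidence,
# and one sub-piece of evidence supporting the second.
claim = Proposition("The proposal should be adopted")
e1 = claim.attach_evidence(Proposition("It reduces cost"))
e2 = claim.attach_evidence(Proposition("It has worked elsewhere"))
e2.attach_evidence(Proposition("A nearby city adopted it earlier"))

def size(node):
    """Count propositions in the argument tree (linear in tree size)."""
    return 1 + sum(size(child) for child in node.evidence)
```

A traversal like `size` visits each proposition once, consistent with the linear-time processing goal the abstract states.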
1981
16
What's Necessary to Hide?: Modeling Action Verbs James F. Allen Computer Science Department University of Rochester Rochester, NY 14627 Abstract This paper considers what types of knowledge one must possess in order to reason about actions. Rather than concentrating on how actions are performed, as is done in the problem-solving literature, it examines the set of conditions under which an action can be said to have occurred. In other words, if one is told that action A occurred, what can be inferred about the state of the world? In particular, if the representation can define such conditions, it must have good models of time, belief, and intention. This paper discusses these issues and suggests a formalism in which general actions and events can be defined. Throughout, the action of hiding a book from someone is used as a motivating example. 1. Introduction This paper suggests a formulation of events and actions that
1981
17
A Rule-based Conversation Participant Robert E. Frederking Computer Science Department, Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 Abstract The problem of modeling human understanding and generation of a coherent dialog is investigated by simulating a conversation participant. The rule-based system currently under development attempts to capture the intuitive concept of "topic" using data structures consisting of declarative representations of the subjects under discussion linked to the utterances and rules that generated them. Scripts, goal trees, and a semantic network are brought to bear by general, domain-independent conversational rules to understand and generate coherent topic transitions and specific output utterances. 1. Rules, topics, and utterances Numerous systems have been proposed to model human use of language in conversation (speech acts [1], MICS [3], Grosz [5]). They have attacked the pro
1981
18
SEARCH AND INFERENCE STRATEGIES IN PRONOUN RESOLUTION: AN EXPERIMENTAL STUDY Kate Ehrlich Department of Psychology University of Massachusetts Amherst, MA 01003 The question of how people resolve pronouns has been of interest to language theorists for a long time because so much of what goes on when people find referents for pronouns seems to lie at the heart of comprehension. However, despite the relevance of pronouns for comprehension and language theory, the processes that contribute to pronoun resolution have proved notoriously difficult to pin down. Part of the difficulty arises from the wide range of factors that can affect which antecedent noun phrase in a text is understood to be co-referential with a particular pronoun. These factors can range from simple number/gender agreement through selectional restrictions to quite complex knowledge that has been acquired from the text (see
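The simplest of the factors the abstract lists, number/gender agreement, can be sketched as a filter over candidate antecedents (the feature dictionaries and values below are illustrative assumptions, not the study's materials):

```python
# Toy sketch of one pronoun-resolution factor: eliminate antecedent
# candidates whose number or gender features clash with the pronoun.
# Feature representation is an invented simplification.

def agrees(pronoun, candidate):
    return (pronoun["number"] == candidate["number"]
            and candidate["gender"] in (pronoun["gender"], "any"))

def filter_antecedents(pronoun, noun_phrases):
    return [np for np in noun_phrases if agrees(pronoun, np)]

noun_phrases = [
    {"text": "the engineers", "number": "plural",   "gender": "any"},
    {"text": "Mary",          "number": "singular", "gender": "fem"},
    {"text": "the report",    "number": "singular", "gender": "neut"},
]
she = {"number": "singular", "gender": "fem"}
candidates = filter_antecedents(she, noun_phrases)  # only "Mary" survives
```

As the abstract notes, agreement is only the cheapest end of the range; selectional restrictions and text-derived knowledge would need much richer machinery than this filter.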
1981
19
COMPUTATIONAL COMPLEXITY AND LEXICAL FUNCTIONAL GRAMMAR Robert C. Berwick MIT Artificial Intelligence Laboratory, Cambridge, MA 1. INTRODUCTION An important goal of modern linguistic theory is to characterize as narrowly as possible the class of natural languages. An adequate linguistic theory should be broad enough to cover observed variation in human languages, and yet narrow enough to account for what might be dubbed "cognitive demands" -- among these, perhaps, the demands of learnability and parsability. If cognitive demands are to carry any real theoretical weight, then presumably a language may be a (theoretically) possible human language, and yet be "inaccessible" because it is not learnable or parsable. Formal results along these lines have already been obtained for certain kinds of Transformational Generative Grammars: for example, Peters and Ritchie [1] showed that Aspects-style unrestricted transformational grammars can gene
1981
2
PERSPECTIVES ON PARSING ISSUES Jane J. Robinson, Chair Artificial Intelligence Center SRI International Nowhere is the tension between the two areas of our field--computation and linguistics--more apparent than in the issues that arise in connection with parsing natural language input. This panel addresses those issues from both computational and linguistic perspectives. Each panelist has submitted a position paper on some of the questions that appear below. The questions are loosely grouped in three sections. The first concentrates on the computational aspect, the second on the linguistic aspect, and the third on their interactions. A preliminary definition: For purposes of providing common ground or possibly a common point of departure at the outset, I will define parsing as the assigning of labelled syn
1981
20
SOME ISSUES IN PARSING AND NATURAL LANGUAGE UNDERSTANDING Robert J. Bobrow Bolt Beranek and Newman Inc. Bonnie L. Webber Department of Computer & Information Science University of Pennsylvania Language is a system for encoding and transmitting ideas. A theory that seeks to explain linguistic phenomena in terms of this fact is a functional theory. One that does not misses the point. [10] PREAMBLE Our response to the questions posed to this panel is influenced by a number of beliefs (or biases!) which we have developed in the course of building and analyzing the operation of several natural language understanding (NLU) systems. [1, 2, 3, 12] While the emphasis of the panel is on parsing, we feel that the recovery of the syntactic structure of a natural language utterance must be viewed as part of a larger process of recovering the meaning, intentions and goals under
1981
21
PARSING Ralph Grishman Dept. of Computer Science New York University New York, N. Y. One reason for the wide variety of views on many subjects in computational linguistics (such as parsing) is the diversity of objectives which lead people to do research in this area. Some researchers are motivated primarily by potential applications - the development of natural language interfaces for computer systems. Others are primarily concerned with the psychological processes which underlie human language, and view the computer as a tool for modeling and thus improving our understanding of these processes. Since, as is often observed, man is our best example of a natural language processor, these two groups do have a strong commonality of research interest. Nonetheless, their divergence of objective must lead to differences in the way they regard the component processes of natural language understanding. (If - when human proc
1981
22
A View of Parsing Ronald M. Kaplan Xerox Palo Alto Research Center The questions before this panel presuppose a distinction between parsing and interpretation. There are two other simple and obvious distinctions that I think are necessary for a reasonable discussion of the issues. First, we must clearly distinguish between the static specification of a process and its dynamic execution. Second, we must clearly distinguish two purposes that a natural language processing system might serve: one legitimate goal of a system is to perform some practical task efficiently and well, while a second goal is to assist in developing a scientific understanding of the cognitive operations that underlie human language processing. I will refer to parsers primarily oriented towards the former goal as Practical Parsers (PP) and refer to the others as Performance Model Parsers (PMP). With these distinctions in mind, let me now turn to the questions at hand. 1.
1981
23
PERSPECTIVES ON PARSING ISSUES Christopher K. Riesbeck Yale University COMPUTATIONAL PERSPECTIVE IS IT USEFUL TO DISTINGUISH PARSING FROM INTERPRETATION? Since most of this position paper will be attacking the separation of parsing from interpretation, let me first make it clear that I do believe in syntactic knowledge. In this I am more conservative than other researchers in interpretation at Berkeley, Carnegie-Mellon, Columbia, the universities of Connecticut and Maryland, and Yale. But believing in syntactic knowledge is not the same as believing in parsers! The search for a way to assign a syntactic structure to a sentence largely independent of the meaning of that sentence has led to a terrible misdirection of labor. And this effect has been felt on both sides of the fence. We find ourselves looking for ways to red
1981
24
PRESUPPOSITION AND IMPLICATURE IN MODEL-THEORETIC PRAGMATICS Douglas B. Moran Oregon State University Model-theoretic pragmatics is an attempt to provide a formal description of the pragmatics of natural language as effects arising from using model-theoretic semantics in a dynamic environment. The pragmatic phenomena considered here have been variously labeled presupposition [1] and conventional implicature [6]. The models used in traditional model-theoretic semantics provide a complete and static representation of knowledge about the world. However, this is not the environment in which language is used. Language is used in a dynamic environment - the participants have incomplete knowledge of the world and the understanding of a sentence can add to the knowledge of the listener. A formalism which allows models to contain incomplete knowledge and to which knowledge can be added has been developed [2, 3, 12]. In model-
1981
25
SOME COMPUTATIONAL ASPECTS OF SITUATION SEMANTICS Jon Barwise Philosophy Department Stanford University, Stanford, California Departments of Mathematics and Computer Science University of Wisconsin, Madison, Wisconsin Can a realist model theory of natural language be computationally plausible? Or, to put it another way, is the view of linguistic meaning as a relation between expressions of a natural language and things (objects, properties, etc.) in the world, as opposed to a relation between expressions and procedures in the head, consistent with a computational approach to understanding natural language? The model theorist must either claim that the answer is yes, or be willing to admit that humans transcend the computationally feasible in their use of language. Until recently the only model theory of natural language that was at all well developed was Montague Grammar. Unfortunately,
1981
26
A SITUATION SEMANTICS APPROACH TO THE ANALYSIS OF SPEECH ACTS 1 David Andreoff Evans Stanford University 1. INTRODUCTION During the past two decades, much work in linguistics has focused on sentences as minimal units of communication, and the project of rigorously characterizing the structure of sentences in natural language has met with some success. Not surprisingly, however, sentence grammars have contributed little to the analysis of discourse. Human discourse consists not just of words in sequences, but of words in sequences directed by a speaker to an addressee, used to represent situations and to reveal intentions. Only when the addressee has apprehended both these aspects of the message communicated can the message be interpreted. The analysis of discourse that emerges from Austin (1962), grounded in a theory of action, takes this view as central, and the concept of the speech act follows naturally. An utterance may have a c
1981
27
PROBLEMS IN LOGICAL FORM Robert C. Moore SRI International, Menlo Park, CA 94025 I INTRODUCTION Decomposition of the problem of "language understanding" into manageable subproblems has always posed a major challenge to the development of theories of, and systems for, natural-language processing. More or less distinct components are conventionally proposed for handling syntax, semantics, pragmatics, and inference. While disagreement exists as to what phenomena properly belong in each area, and how much or what kinds of interaction there are among these components, there is fairly widespread concurrence as to the overall organization of linguistic processing. Central to this approach is the idea that the processing of an utterance involves producing an expression or structure that is in some sense a repre
1981
28
0.0 INTRODUCTION A CASE FOR RULE-DRIVEN SEMANTIC PROCESSING Martha Palmer Department of Computer and Information Science University of Pennsylvania The primary task of semantic processing is to provide an appropriate mapping between the syntactic constituents of a parsed sentence and the arguments of the semantic predicates implied by the verb. This is known as the Alignment Problem. [Levin] Section One of this paper gives an overview of a generally accepted approach to semantic processing that goes through several levels of representation to achieve this mapping. Although somewhat inflexible and cumbersome, the different levels succeed in preserving the context sensitive information provided by verb semantics. Section Two presents the author's rule-driven approach which is mor
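The alignment problem as stated above, mapping syntactic constituents onto the argument slots of a verb's semantic predicate, might be sketched roughly as follows (the verb frame and the syntax-to-role mapping are invented toy data, not the paper's representation):

```python
# Toy sketch of the alignment problem: map parsed syntactic constituents
# onto the semantic roles of a verb's predicate frame. Frame contents and
# the role mapping are invented for illustration.

frames = {"give": ["agent", "patient", "recipient"]}

syntax_to_role = {
    "subject": "agent",
    "object": "patient",
    "indirect_object": "recipient",
}

def align(verb, constituents):
    """Fill the verb's semantic argument slots from syntactic roles."""
    frame = frames[verb]
    args = {}
    for syn_role, phrase in constituents.items():
        sem_role = syntax_to_role.get(syn_role)
        if sem_role in frame:
            args[sem_role] = phrase
    return args

args = align("give", {"subject": "John", "object": "a book",
                      "indirect_object": "Mary"})
```

A fixed table like this is exactly the inflexibility the abstract criticizes; the paper's point is that context-sensitive rules are needed where the mapping varies with the verb and construction.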
1981
29
Corepresentational Grammar and Parsing English Comparatives Karen Ryan University of Minnesota SEC. 1 INTRODUCTION SEC. 3 COREPRESENTATIONAL GRAMMAR (CORG) Marcus [3] notes that the syntax of English comparative constructions is highly complex, and claims that both syntactic and semantic information must be available for them to be parsed. This paper argues that comparatives can be structurally analyzed on the basis of syntactic information alone via a strictly surface-based grammar. Such a grammar is given in Ryan [5], based on the corepresentational model of Kac [1]. While the grammar does not define a parsing algorithm per se, it nonetheless expresses regularities of surface organization and its relationship to semantic interpretation that an adequate parser would be expected to incorporate. This paper will discuss four problem areas in the description of comparatives and will outline the sections of the grammar of [
1981
3
A TAXONOMY FOR ENGLISH NOUNS AND VERBS Robert A. Amsler Computer Sciences Department University of Texas, Austin, TX 78712 ABSTRACT: The definition texts of a machine-readable pocket dictionary were analyzed to determine the disambiguated word sense of the kernel terms of each word sense being defined. The resultant sets of word pairs of defined and defining words were then computationally connected into two taxonomic semi-lattices ("tangled hierarchies") representing some 24,000 noun nodes and 11,000 verb nodes. The study of the nature of the "topmost" nodes in these hierarchies, and the structure of the trees, reveals information about the nature of the dictionary's organization of the language, the concept of semantic primitives and other aspects of lexical semantics. The data proves that the di
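The construction the abstract describes can be miniaturized: connect (defined, defining-kernel) word pairs into a tangled hierarchy and inspect its topmost nodes. The word pairs below are invented, not drawn from the dictionary used in the study:

```python
# Toy sketch: build a tangled hierarchy from (defined sense, defining
# kernel) pairs and find the "topmost" nodes -- senses that define others
# but are never themselves defined within the set.

from collections import defaultdict

pairs = [  # (defined sense, kernel term of its definition) -- invented
    ("oak", "tree"), ("tree", "plant"), ("shrub", "plant"),
    ("plant", "organism"), ("dog", "animal"), ("animal", "organism"),
]

parents = defaultdict(set)   # defined sense -> its defining kernels
children = defaultdict(set)  # kernel -> senses it helps define
for defined, kernel in pairs:
    parents[defined].add(kernel)
    children[kernel].add(defined)

all_nodes = set(parents) | set(children)
topmost = {n for n in all_nodes if not parents[n]}  # no defining kernel
```

Because a sense may have several defining kernels, the result is a semi-lattice ("tangled hierarchy") rather than a strict tree, which matches the terminology in the abstract.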
1981
30
1. Introduction INTERPRETING NATURAL LANGUAGE DATABASE UPDATES S. Jerrold Kaplan Jim Davidson Computer Science Dept. Stanford University Stanford, Ca. 94305 Although the problem of querying a database in natural language has been studied extensively, there has been relatively little work on processing database updates expressed in natural language. To interpret update requests, several linguistic issues must be addressed that do not typically pose difficulties when dealing exclusively with queries. This paper briefly examines some of the linguistic problems encountered, and describes an implemented system that performs simple natural language database updates. The primary difficulty with interpreting natural language updates is that there may be several ways in which a particular update can be performed in the underlying database. Many of these options, while literally correct and semantically meaningful, may correspond to bizarre
1981
31
Dynamic Strategy Selection in Flexible Parsing Jaime G. Carbonell and Philip J. Hayes Carnegie-Mellon University Pittsburgh, PA 15213 Abstract Robust natural language interpretation requires strong semantic domain models, "fail-soft" recovery heuristics, and very flexible control structures. Although single-strategy parsers have met with a measure of success, a multi-strategy approach is shown to provide a much higher degree of flexibility, redundancy, and ability to bring task-specific domain knowledge (in addition to general linguistic knowledge) to bear on both grammatical and ungrammatical input. A parsing algorithm is presented that integrates several different parsing strategies, with case-frame instantiation dominating. Each of these parsing strategies exploits different types of knowledge; and their combination provides a strong framework in which to process conjunctions, fragmentary input, and ungrammatical structures, as well
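The control structure described above, several strategies with case-frame instantiation dominating and fail-soft fallback, can be caricatured in a few lines (the strategy bodies are trivial stand-ins, not the paper's algorithms):

```python
# Toy sketch of multi-strategy parsing with fallback: try the dominant
# case-frame strategy first, then fall back to a fragment-recovery
# strategy when it fails. Strategy internals are invented placeholders.

def case_frame_strategy(tokens):
    # Succeeds only when the input matches a known verb frame (toy test).
    return {"frame": "move"} if "move" in tokens else None

def fragment_strategy(tokens):
    # Fail-soft recovery: accept any recognizable fragment.
    return {"fragments": tokens} if tokens else None

STRATEGIES = [case_frame_strategy, fragment_strategy]  # dominance order

def parse(tokens):
    for strategy in STRATEGIES:
        result = strategy(tokens)
        if result is not None:
            return result
    return None  # total failure

full = parse(["move", "the", "block"])  # case-frame strategy succeeds
partial = parse(["the", "block"])       # falls back to fragment recovery
```

The redundancy the abstract claims comes precisely from the fallback: ungrammatical or fragmentary input that defeats the dominant strategy still yields a partial analysis.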
1981
32
A Construction-Specific Approach to Focused Interaction in Flexible Parsing Philip J. Hayes Carnegie-Mellon University Pittsburgh, PA 15213 Abstract A flexible parser can deal with input that deviates from its grammar, in addition to input that conforms to it. Ideally, such a parser will correct the deviant input: sometimes, it will be unable to correct it at all; at other times, correction will be possible, but only to within a range of ambiguous possibilities. This paper is concerned with such ambiguous situations, and with making it as easy as possible for the ambiguity to be resolved through consultation with the user of the parser - we presume interactive use. We show the importance of asking the user for clarification in as focused a way as possible. Focused interaction of this kind is facilitated by a construction-specific approach to flexible parsing, with specialized parsing techniques for each type of construction, and
1981
33
CONTROLLED TRANSFORMATIONAL SENTENCE GENERATION Madeleine Bates Bolt Beranek and Newman, Inc. Robert Ingria Department of Linguistics, MIT I. INTRODUCTION This paper describes a sentence generator that was built primarily to focus on syntactic form and syntactic relationships. Our main goal was to produce a tutorial system for the English language; the intended users of the system are people with language delaying handicaps such as deafness, and people learning English as a foreign language. For these populations, extensive exposure to standard English constructions (negatives, questions, relativization, etc.) and their interactions is necessary. The purpose of the generator was to serve as a powerful resource for tutorial programs that need examples of particular constructions and/or related sentences to embed in exercises or examples for the student
1981
34
TRANSPORTABLE NATURAL-LANGUAGE INTERFACES TO DATABASES by Gary G. Hendrix and William H. Lewis SRI International 333 Ravenswood Avenue Menlo Park, California 94025 I INTRODUCTION Over the last few years a number of application systems have been constructed that allow users to access databases by posing questions in natural languages, such as English. When used in the restricted domains for which they have been especially designed, these systems have achieved reasonably high levels of performance. Such systems as LADDER [2], PLANES [10], ROBOT [1], and REL [9] require the encoding of knowledge about the domain of application in such constructs as database schemata, lexicons, pragmatic grammars, and the like. The creation of these data structures typically requires considerable effort on the part of a computer profes
1981
35
Chart Parsing and Rule Schemata in PSG Henry Thompson Dept. of Artificial Intelligence, Univ. of Edinburgh, Hope Park Square, Meadow Lane, Edinburgh, EH8 9NW INTRODUCTION MCHART is a flexible, modular chart parsing framework I have been developing (in Lisp) at Edinburgh, whose initial design characteristics were largely determined by pedagogical needs. PSG is a grammatical theory developed by Gerald Gazdar at Sussex, in collaboration with others in both the US and Britain, most notably Ivan Sag, Geoff Pullum, and Ewan Klein. It is a notationally rich context free phrase structure grammar, incorporating meta-rules and rule schemata to capture generalisations. (Gazdar 1980a, 1980b, 1981; Gazdar & Sag 1980; Gazdar, Sag, Pullum & Klein to appear) In this paper I want to describe how I have used MCHART in beginning to construct a parser for grammars expressed in PSG, and how aspects of the chart parsing approach in gen
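For readers unfamiliar with chart parsing, here is a minimal sketch of the core mechanism a framework like MCHART generalizes: active and inactive edges combined by the fundamental rule. The grammar and sentence are toy inventions, and MCHART's actual (Lisp) architecture is considerably richer:

```python
# Toy bottom-up chart parser. An edge is (start, end, label, needed);
# needed == () means the edge is complete (inactive). The fundamental
# rule combines an active edge with an inactive edge of the category
# it needs next. Grammar and input are invented examples.

grammar = {"S": [["NP", "VP"]], "NP": [["she"]], "VP": [["runs"]]}
words = ["she", "runs"]

chart = [(i, i + 1, w, ()) for i, w in enumerate(words)]
agenda = list(chart)

while agenda:
    start, end, label, needed = agenda.pop()
    new = []
    if not needed:  # inactive edge: seed rules, extend waiting active edges
        for lhs, rhss in grammar.items():
            for rhs in rhss:
                if rhs[0] == label:
                    new.append((start, end, lhs, tuple(rhs[1:])))
        for s, e, lab, need in chart:
            if need and e == start and need[0] == label:
                new.append((s, end, lab, need[1:]))
    else:  # active edge: look for inactive edges supplying what it needs
        for s, e, lab, need in chart:
            if not need and s == end and needed[0] == lab:
                new.append((start, e, label, needed[1:]))
    for edge in new:
        if edge not in chart:
            chart.append(edge)
            agenda.append(edge)

parsed = (0, len(words), "S", ()) in chart
```

Because every edge is recorded in the chart exactly once, no sub-analysis is ever rebuilt, which is the property that makes the chart a natural host for PSG's rule schemata.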
1981
36
PERFORMANCE COMPARISON OF COMPONENT ALGORITHMS FOR THE PHONEMICIZATION OF ORTHOGRAPHY Jared Bernstein, Telesensory Speech Systems, Palo Alto, CA 94304; Larry Nessly, University of North Carolina, Chapel Hill, NC 27514 A system for converting English text into synthetic speech can be divided into two processes that operate in series: 1) a text-to-phoneme converter, and 2) a phonemic-input speech synthesizer. The conversion of orthographic text into a phonemic form may itself comprise several processes in series, for instance, formatting text to expand abbreviations and non-alphabetic expressions, parsing and word class determination, segmental phonemicization of words, word and clause level stress assignment, word internal and word boundary allophonic adjustments, and duration and fundamental frequency settings for phonological units. Comparing the accuracy of different algorithms for text-to-phoneme conve
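The serial organization described in this abstract can be sketched as a composition of stages, each feeding the next (the stages below are trivial stand-ins for the real formatting, word-class, and phonemicization components):

```python
# Toy sketch of a text-to-phoneme pipeline: stages applied in series.
# The stage bodies and the letter-to-sound table are invented stand-ins.

def expand_abbreviations(text):
    return text.replace("Dr.", "Doctor")

def tokenize(text):
    return text.lower().split()

def phonemicize(tokens):
    # toy whole-word lookup standing in for segmental rules
    table = {"doctor": "D AA K T ER", "smith": "S M IH TH"}
    return [table.get(token, token) for token in tokens]

def pipeline(data, stages):
    """Run each stage in series on the output of the previous one."""
    for stage in stages:
        data = stage(data)
    return data

phonemes = pipeline("Dr. Smith", [expand_abbreviations, tokenize, phonemicize])
```

Organizing the converter this way makes the comparison the paper undertakes possible: each component algorithm can be swapped out and scored independently of the others.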
1981
4
PHONY: A Heuristic Phonological Analyzer* Lee A. Becker Indiana University DOMAIN AND TASK PHONY is a program to do phonological analysis. Within the generative model of grammar the function of the phonological component is to assign a phonetic representation to an utterance by modifying the underlying representations (URs) of its constituent morphemes. Morphemes are the minimal meaning units of language, i.e. the smallest units in the expression system which can be correlated with any part of the content system, e.g. un+tir+ing+ly. URs are abstract entities which contain the idiosyncratic information about pronunciations of morphemes. (1) Underlying Representations (URs) ----PHONOLOGICAL COMPONENT (rules)----> Phonetic Representations Phonological analysis attempts to determine the nature of the URs and to discover the general principles or rules that relate them to the phoneti
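The mapping in (1) can be illustrated with ordered rewrite rules applied to an invented UR (the rules below are toy examples for illustration, not claims about any language or about PHONY's rule format):

```python
# Toy sketch of the forward direction of (1): the phonological component
# derives a phonetic form by applying ordered rewrite rules to a UR.
# Both rules and the UR are invented examples.

import re

rules = [
    (r"t(?=i)", "ts"),  # toy palatalization: t -> ts before i
    (r"n(?=$)", ""),    # toy word-final n deletion
]

def apply_rules(underlying):
    """Apply each rule, in order, across the whole form."""
    form = underlying
    for pattern, replacement in rules:
        form = re.sub(pattern, replacement, form)
    return form

surface = apply_rules("tintin")
```

Phonological analysis, the task PHONY addresses, is the inverse and much harder problem: given only surface forms like the output above, recover plausible URs and rules.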
1981
5
EVALUATION OF NATURAL LANGUAGE INTERFACES TO DATABASE SYSTEMS: A PANEL DISCUSSION Norman K. Sondheimer, Chair Sperry Univac Blue Bell, PA For a natural language access to database system to be practical it must achieve a good match between the capabilities of the user and the requirements of the task. The user brings his own natural language and his own style of interaction to the system. The task brings the questions that must be answered and the database domain's semantics. All natural language access systems achieve some degree of success. But to make progress as a field, we need to be able to evaluate the degree of this success. For too long, the best we have managed has been to produce a list of typical questions or linguistic phenomena that a system correctly processed. Missing has been a discussion of the
1981
6

Dataset Card for "ACL-papers-truncated"
