23/11/11
GENERATIVE LINGUISTICS
Semantics is the branch of linguistics and logic concerned with meaning: the meaning of a word, phrase, sentence, or text.
Generative semantics is a description of a language emphasizing a semantic deep structure that is logical in form, that provides syntactic structure, and that is related to surface structure by transformations.
(Merriam Webster)
Generative semantics is the name of a research program within linguistics, initiated by the work of various early students of Noam Chomsky:
- John R. Ross
- Paul Postal
- and later James McCawley.
- George Lakoff was also instrumental in developing and advocating the theory
Generative semanticists took Chomsky's concept of Deep Structure and ran with it, assuming that deep structures were the sole input to semantic interpretation. This assumption, combined with a tendency to consider a wider range of empirical evidence than Chomskian linguists, led generative semanticists to develop considerably more abstract and complex theories of deep structure than those advocated by Chomsky and his students — and indeed to abandon altogether the notion of “deep structure” as a locus of lexical insertion.
Throughout the late 1960s and 1970s, there were heated debates between generative semanticists and more orthodox Chomskians. The generative semanticists lost the debate, insofar as their research program ground to a halt by the 1980s. However, this was in part because the interests of key generative semanticists such as George Lakoff had gradually shifted away from the narrow study of syntax and semantics.
A number of ideas from later work in generative semantics have been incorporated into:
- Cognitive linguistics
- Head-Driven Phrase Structure Grammar (HPSG)
- Construction Grammar
- and indeed into Mainstream Chomskian linguistics
In the 1960s, Chomsky introduced two central ideas relevant to the construction and evaluation of grammatical theories.
The first was the distinction between competence and performance. Chomsky noted the obvious fact that people, when speaking in the real world, often make linguistic errors. He argued that these errors in linguistic performance were irrelevant to the study of linguistic competence (the knowledge that allows people to construct and understand grammatical sentences).
The second idea related directly to the evaluation of theories of grammar. Chomsky distinguished between grammars that achieve descriptive adequacy and those that go further and achieved explanatory adequacy. A descriptively adequate grammar for a particular language defines the (infinite) set of grammatical sentences in that language; that is, it describes the language in its entirety. A grammar that achieves explanatory adequacy has the additional property that it gives an insight into the underlying linguistic structures in the human mind; that is, it does not merely describe the grammar of a language, but makes predictions about how linguistic knowledge is mentally represented.
A transformational grammar has three major kinds of rules:
- Syntactic rules: which specify the deep structure of the sentence and then transform that deep structure into a surface structure.
- Semantic rules: which provide an interpretation for the sentence.
- Phonological rules: which specify information necessary in pronouncing the sentence.
Phrase structure rules and lexicon
If we wanted to divide the sentence "the astronaut can walk" into its constituent parts, it would be: [NP the astronaut] + [AUX can] + [VP walk].
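The division into constituents can be sketched with a small set of phrase structure rules and a lexicon. This is a minimal Python sketch; the rule inventory and lexicon are assumptions covering only this one sentence, not a full grammar of English:

```python
# Hypothetical phrase structure rules for "the astronaut can walk":
#   S   -> NP AUX VP
#   NP  -> DET N
#   AUX -> M   (modal)
#   VP  -> V
RULES = {
    "S":   ["NP", "AUX", "VP"],
    "NP":  ["DET", "N"],
    "AUX": ["M"],
    "VP":  ["V"],
}
LEXICON = {
    "DET": "the",
    "N":   "astronaut",
    "M":   "can",
    "V":   "walk",
}

def expand(symbol):
    """Recursively expand a symbol into a bracketed constituent structure."""
    if symbol in LEXICON:
        return f"[{symbol} {LEXICON[symbol]}]"
    children = " ".join(expand(s) for s in RULES[symbol])
    return f"[{symbol} {children}]"

print(expand("S"))
# [S [NP [DET the] [N astronaut]] [AUX [M can]] [VP [V walk]]]
```

Starting from S and expanding top-down reproduces the constituent division given above, with each word grouped under its phrase-level node.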
Transformations and the structure of the auxiliary
The structure of the auxiliary: in the sentence "Someone has eaten the garlic toast", the auxiliary word ("has") is a form of the verb "to have".
AUX -> T + (M) + (HAVE + EN) + (BE + ING)
where T = tense, M = modal, EN = the past-participle affix, ING = the present-participle affix, and parenthesized elements are optional.
For example, "should have been walking": should is the tensed modal (T + M), have realizes HAVE, been is BE carrying the EN affix, and walking is the main verb carrying the ING affix.
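The expansion of the AUX rule can be sketched in Python, under the assumption that each affix (EN, ING) attaches to the verbal element that follows it, which is the classic "affix hopping" transformation. The morpheme list and the table of irregular forms are assumptions for this illustration:

```python
def build_aux(modal=None, perfect=False, progressive=False, verb="walk"):
    # Expand AUX -> T (M) (HAVE + EN) (BE + ING), then the main verb.
    # When a modal is present, it carries the tense (should, can, ...).
    morphemes = []
    if modal:
        morphemes.append(modal)
    if perfect:
        morphemes += ["have", "EN"]
    if progressive:
        morphemes += ["be", "ING"]
    morphemes.append(verb)
    # Affix hopping: EN and ING attach to the element immediately after them.
    irregular = {("be", "EN"): "been", ("walk", "ING"): "walking"}  # assumed forms
    out, i = [], 0
    while i < len(morphemes):
        m = morphemes[i]
        if m in ("EN", "ING"):
            host = morphemes[i + 1]
            out.append(irregular.get((host, m), host + m.lower()))
            i += 2
        else:
            out.append(m)
            i += 1
    return " ".join(out)

print(build_aux(modal="should", perfect=True, progressive=True))
# should have been walking
```

With all three optional slots filled, the rule yields modal + perfect + progressive morphemes, and affix hopping turns the sequence should + have + EN + be + ING + walk into "should have been walking".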
COGNITIVE LINGUISTICS
Dr. Fillmore has been extremely influential in the areas of syntax and lexical semantics. He was a proponent of Noam Chomsky's theory of generative grammar during its earliest transformational grammar phase. He was one of the founders of cognitive linguistics, and developed the theories of Case Grammar (Fillmore 1968), and Frame Semantics (1976).
He was one of the first linguists to introduce a representation of linguistic knowledge that blurred the strong distinction between syntactic and semantic knowledge of a language. He introduced what was termed case structure grammar, and this representation subsequently had considerable influence on psychologists as well as computational linguists.
Case grammar is a system of linguistic analysis focusing on the link between the valence (the number of subjects, objects, etc.) of a verb and the grammatical context it requires.
The system was created by the American linguist Charles J. Fillmore in 1968, in the context of transformational grammar. The theory analyzes the surface syntactic structure of sentences by studying the combination of deep cases (i.e. semantic roles) -- Agent, Object, Beneficiary, Location, or Instrument -- which are required by a specific verb.
For instance, the verb "give" in English requires an Agent (A), an Object (O), and a Beneficiary (B); e.g. "Jones (A) gave money (O) to the school (B)."
According to Fillmore, each verb selects a certain number of deep cases which form its case frame. Thus, a case frame describes important aspects of the semantic valency of verbs, adjectives, and nouns. Case frames are subject to certain constraints, such as that a deep case can occur only once per sentence. Some of the cases are obligatory and others are optional. Obligatory cases may not be deleted without producing ungrammatical sentences.
A fundamental hypothesis of case grammar is that grammatical functions, such as subject or object, are determined by the deep, semantic valence of the verb, which finds its syntactic correlate in such grammatical categories as Subject and Object, and in grammatical cases such as Nominative, Accusative, etc.
Fillmore puts forward the following hierarchy for a universal subject selection rule:
Agent > Instrumental > Objective
That is, if the case frame of a verb contains an Agent, the Agent is realized as the subject of an active sentence; if there is no Agent but there is an Instrumental, the Instrumental becomes the subject; otherwise the Objective does.
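The subject selection rule amounts to walking down the hierarchy and taking the first deep case present in the verb's case frame. A minimal sketch in Python (the dictionary representation of a case frame is an assumption of this illustration):

```python
# Fillmore's subject selection hierarchy, highest-priority case first.
HIERARCHY = ["Agent", "Instrumental", "Objective"]

def select_subject(case_frame):
    """Return the deep case realized as subject of an active sentence."""
    for case in HIERARCHY:
        if case in case_frame:
            return case
    return None

# The "open" sentences discussed below, as hypothetical case frames:
print(select_subject({"Agent": "Mary", "Instrumental": "a key",
                      "Objective": "the door"}))              # Agent
print(select_subject({"Instrumental": "a key",
                      "Objective": "the door"}))              # Instrumental
print(select_subject({"Objective": "the door"}))              # Objective
```

The three calls mirror why "Mary", "a key", and "the door" each surface as the subject depending on which cases the sentence realizes.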
Case grammar is an attempt to establish a semantic grammar. (Most grammars by linguists take syntax as the starting-point).
Using a modified form of valency theory, Fillmore suggests that the verb establishes a set of cases in a sentence: these are like slots, which need not all be filled. For example, consider these sentences:
1. Mary opened the door with a key.
2. Mary opened the door.
3. A key opened the door.
4. The door opened.
In (1) the semantic cases are: Mary - agent; the door - object; a key - instrument.
In (2) they are as in (1), except that there is no instrument.
In (3) the cases are: a key - instrument; the door - object.
In (4) the only case is the door - object.
In other words, to open requires at the minimum that the object be specified in a sentence.
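The case frame of "open" and the four sentences above can be sketched as a small validity check. This is a simplified model under the assumptions that Objective is the only obligatory case for "open" and that a set representation enforces the one-occurrence-per-sentence constraint:

```python
# Hypothetical case frame for "open": Objective is obligatory;
# Agent and Instrumental are optional slots.
OPEN_FRAME = {"obligatory": {"Objective"}, "optional": {"Agent", "Instrumental"}}

def is_well_formed(frame, cases):
    """Check whether a set of realized deep cases satisfies a case frame."""
    cases = set(cases)  # each deep case occurs at most once per sentence
    # Every obligatory case must be present...
    if not frame["obligatory"] <= cases:
        return False
    # ...and no case outside the frame may appear.
    return cases <= frame["obligatory"] | frame["optional"]

print(is_well_formed(OPEN_FRAME, ["Agent", "Objective", "Instrumental"]))  # (1) True
print(is_well_formed(OPEN_FRAME, ["Agent", "Objective"]))                  # (2) True
print(is_well_formed(OPEN_FRAME, ["Instrumental", "Objective"]))           # (3) True
print(is_well_formed(OPEN_FRAME, ["Objective"]))                           # (4) True
print(is_well_formed(OPEN_FRAME, ["Agent", "Instrumental"]))   # False: no Objective
```

Sentences (1)-(4) all satisfy the frame because the Objective slot is filled in each, while dropping the Objective ("Mary opened with a key") fails the check, matching the ungrammaticality that results from deleting an obligatory case.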