This article examines a type of argument for linguistic nativism that takes the following form: (i) a fact about some natural language is exhibited that allegedly could not be learned from experience without access to a certain kind of (positive) data; (ii) it is claimed that data of the type in question are not found in normal linguistic experience; hence (iii) it is concluded that people cannot be learning the language from mere exposure to language use. We analyze the components of this sort of argument carefully, and examine four exemplars, none of which hold up. We conclude that linguists have some additional work to do if they wish to sustain their claims about having provided support for linguistic nativism, and we offer some reasons for thinking that the relevant kind of future work on this issue is likely to further undermine the linguistic nativist position.
Only for the stout-hearted:
Everaert, M. B., Huybregts, M. A., Chomsky, N., Berwick, R. C., & Bolhuis, J. J. (2015). Structures, not strings: linguistics as part of the cognitive sciences. Trends in Cognitive Sciences, 19(12), 729-743.
(And a critique, but is it valid?):
Dąbrowska, E. (2015). What exactly is Universal Grammar, and has anyone seen it? Frontiers in Psychology, 6, 852.
From my understanding of this article, there is a misinterpretation on the authors' part, as they often confound UG and OG. Indeed, the authors state that the poverty of the stimulus (POS) amounts to a lack of positive evidence. However, the issue is rather the lack of negative evidence for UG: children could not learn UG without producing or hearing UG errors and corrections. Additionally, the authors fail to properly draw a distinction between OG and UG, and fall short of pointing out that OG is not innate.
Melika, that's correct. This was surprising to me, because Geoff Pullum is quite smart.
(He was the one who debunked the myth of the "Eskimo Snow Words," which is connected with the Strong Whorf-Sapir Hypothesis. But he may not have been quite right about that either. Ask me about it if interested - or ask GPT!)
Pullum's conflation of UG and OG (or his failure to distinguish them) is a symptom of the hugely divisive effect Universal Grammar has had on linguistics. This is still not resolved. Read the reading on Chomsky's Universe. And, alas, Chomsky's health may be failing now.
It's worth pointing out that Pullum's mini-LLM-like database already missed the mark about the POS, because no child hears so many words when learning language. But now ChatGPT has come along to show that even that was wrong, because even after the Big Gulp ChatGPT makes no UG errors (and almost no OG errors).
Hello! I asked ChatGPT about the controversy surrounding the Eskimo snow words and it does not want to give me its opinion. However, I do think that Pullum is 100% right. The structure and vocabulary of a language can shape and influence the way its speakers perceive and think about the world.
Was quite disappointed Pullum also ignored the difference between UG and OG! Once again, the failure to distinguish the two leads to misunderstanding. However, I'm trying to put my disappointment aside and think about why Prof Harnad would assign this reading -- likely it is intended to prompt a discussion about the misconceptions surrounding UG, although it feels like we've already addressed most of them in the last few readings.
That being said, it is quite shocking that these educated and informed authors seem to keep getting it wrong. At the same time, it is nice to know we would be able to explain it to these authors in a pretty simple way: There is a distinction between OG and UG. UG differs from OG in that UG cannot be learned, because of the poverty of the stimulus. Since kids don't make errors in UG, and so can't learn from them, it can be assumed that UG is innate. On the other hand, children learn OG from their environment and from the feedback they receive. Unless we study syntax, we don't know what the rules of UG are. OG can be learned by unsupervised learning, but UG cannot, since UG errors would have to be made and corrected.
Linguists have learned UG by discussing UG violations and trying to figure out why they are wrong. Their own UG tells them they are wrong, but all they can conclude is that these violations cannot be explained by OG. We don't really know UG since we haven't studied Chomskyan linguistics. After the last few readings, the question remains: How and why did UG evolve? And if we think about gradualism, how could UG evolve gradually (since you can't have language with only part of UG)?
Marine, how did you query GPT? It seems to give a reasonable response about Pullum's Snow Terms paper, but that was in Week 6b, not 9b. This Pullum paper, Week 9b, is not about Snow Terms, it's about POS: What's that?
Week 6b was about the Whorf-Sapir Hypothesis that language influences perception. The strong version of that Hypothesis is wrong, but the weak version, Learned CP (what's that?), is right. But Pullum's Snow Terms paper in 6b didn't say that either; it argued against the Whorf-Sapir Hypothesis altogether. Please sort out the two Pullum papers and set the record straight.
Miriam, an excellent response. The reason I still assign Pullum is not just to show how linguists keep getting UG wrong, but because of ChatGPT: Before GPT the answer to Pullum would have been (1) kids don't read all the texts Pullum cites; they just say and hear what they say and hear in a couple of years of home and school chat; and (2) Pullum's many texts don't contain UG errors either (except maybe those written by learners of English as a second language)! Today, it's evident from GPT's Big Gulp that GPT does not make UG errors either (and GPT has swallowed Pullum's texts in its Big Gulp plus a lot more).
So not only is Pullum wrong, but wrong too are those who think GPT has learned UG from its Big Gulp: It certainly hasn't. It's just aping what others do and don't say, from a "stimulus" that's bigger than what any person could read or hear in thousands of lifetimes, let alone a child, in a few years of home and school talk.
In a sense GPT does learn OG from its Big Gulp, because it makes almost no OG errors either (and there are plenty of OG errors in its Big Gulp). But there would have to be a lot more OG errors to make them re-appear in what GPT says, the way racism and aggression do re-appear in what GPT says (because there's enough of those in there for GPT to parrot, or ape -- enough so that OpenAI has to train GPT to suppress it!).
While I don't disagree with the sentiments above, I find it hard to wholeheartedly disagree with Pullum, because I don't think the distinction between UG and OG has been clearly enough defined in arguments for the POS. I also think what Pullum is arguing for is being misunderstood: "What we claim will not be found is a construction that is learned early and occurs centrally and productively in adult speech but slumps to zero frequency in performance when young children are around." Pullum et al. stated from the get-go that they were not against nativist accounts per se, but against the extent of the claims that have been made on the basis of the evidence provided. In this article, they make four specific counterarguments against the APS. Although I cannot confirm that these are the best and clearest candidates, as the authors say, I do agree with the points they have made about how the linguistic constructions used to push the APS are not as rare in the input as its supporters claim, and how more definitive guidelines need to be drawn to state how many instances are enough (for a child to pay attention vs. mere exposure). Without this, "it is simply not clear what would support and what would undermine" these claims. This also causes epistemological problems like that of Baker and Chomsky (if instances are so rare, how will it ever be known whether someone has the correct generalisation), which also undermines an argument that could otherwise seem more convincing (at least to skeptics like myself). I also agree with the paper's conclusion that research into data-driven learning procedures is bound to undercut the APS, simply by revealing that there is a lot more that can be learnt than the APS claims. This is not to say that I don't believe in UG, but that I am skeptical of the extent of its innateness (like Pullum). This is supported by literature on "unsupervised learning" in computational linguistics, which I have also encountered in other courses, in which patterns in language were detectable that were greater both in number and in complexity.
Considering I am in the minority, I would love to hear people's input on this, especially if that would include a clear definition of / distinction between UG and OG.
The way I understand it, the commenters in this thread seem to be saying that Pullum and Scholz have misunderstood Universal Grammar because the POS arguments they cite have nothing to do with universal grammar. But I'm confused - if UG is a set of grammatical rules that we always follow, even when learning a language as a baby, then don't the arguments they cite actually have to do with UG after all? For example, children don't appear to make mistakes in choosing which auxiliary verb to move to the front of a sentence in order to make it interrogative. True, this auxiliary-initial interrogative form only exists in certain languages, but if we think of this property as a parameter of UG which can be switched on and off depending on the target language (as Pinker theorized in this week's other reading), then isn't this a POS argument about UG?
If I'm being candid, like Pullum, I think I have long conflated UG with OG, but primarily because I haven't been introduced to the distinction before. Only in these last few weeks have I drawn a distinction between the two, and in light of this new understanding I think much of what Pullum argues is correct. I will caveat that by saying: much of what Pullum argues is correct, with respect to OG. Pullum essentially argues that there is enough information in the linguistic environment for a child to learn its native/mother tongue, and he shows that much of the evidence used to support the APS can actually be refuted by looking at the corpus of data a language learner has access to. Well, of course there is enough information in the linguistic environment to learn your native/mother tongue. Otherwise, there would be no language speakers at all! However, that is OG, and there is no question that OG is learned from experience and unsupervised learning, because OG is necessarily language-specific. It would be ridiculous to imagine that the innateness of language, that is, UG, encompasses every facet of every language on Earth (i.e., OG). No: OG must be learned, because it is language-specific. So, in this sense, Pullum's argument is correct: the conclusion from the APS that OG is innate is wrong. However, as Pullum and I have both done, that is the incorrect conclusion to take from the APS. The correct conclusion to take from the APS is that UG, which encompasses all and only the truths and rules of language as a whole and not the truths and rules of any specific language, is innate, because there is not enough information in the linguistic environment to afford learning it. Binding Theory, X-Bar Theory, Merge and Move operations, etc. are all things that cannot be abstracted as rules from the linguistic environment because they are impoverished in that environment. This paper and this discussion thread have really cleared up the distinction between OG and UG, and where the APS should be properly applied. That being said, I am of the same mind as Jocelyn. I think there is still a lot to be gained from computational linguistics and data-driven learning procedures, and their utility should not be squandered merely because they don't consider innate UG (which I am still somewhat skeptical about).
Jocelyn, please read the other replies, about the OG/UG distinction, how generative linguists piece together the structure of UG, and the difference between OG violations/corrections (which are unproblematic) and UG violations/corrections (which do not happen at all, except for adult generative linguists), and why.
Aya, the POS argument is that no one makes UG errors, only OG errors. Not "not enough" UG errors: no UG errors at all. The "not enough" notion comes from mixing up OG errors with UG errors. Parameter-setting is learned, and, like most learning, that learning is not instantaneous; and parameter-setting errors are OG, even though the parameters are being set on UG. On the analogy with phonology, Japanese and Chinese children do not lose the R/L distinction overnight.
Stevan V, you seem to have gotten the point about no POS for OG but POS for UG. That does not imply that learning-theory or computational linguistics are wrong for OG.
But you haven't explained the grounds for your doubts about the unlearnability and hence the innateness of UG. If UG is learnable after all, what is the distinction between OG and UG? And what is "POS for UG"?
I'm intrigued by the juxtaposition of the lack of positive evidence against the necessity for negative evidence in the acquisition of UG. The authors delineate a compelling case for empirical scrutiny of linguistic nativism. Yet, as you point out, there's a subtle yet profound difference between the insufficiency of positive evidence and the absence of negative evidence. Such a distinction is not just semantic but foundational to our understanding of language acquisition. It raises the question of how we reconcile these evidence types in the framework of UG, and what implications this has for linguistic theories that rely heavily on innate structures. Perhaps we're not as impoverished in stimulus as previously thought, but rather in methods to discern what constitutes sufficient evidence for UG.
The authors argue that empirical work is necessary to support linguistic claims. They suggest that this empirical work should involve studying the primary linguistic data and providing evidence that cannot be easily dismissed. They emphasize the need for concrete evidence rather than self-congratulatory assurances or vague promissory notes. The authors also mention the importance of corpus linguistics, which involves analyzing large collections of language data, to determine if crucial evidence is present. They suggest that defenders of linguistic nativism should embrace both corpus linguistics and data-driven learning procedures to accomplish their task. However, they caution that such research may be self-undercutting, as it could reveal that more can be learned through data-driven learning than previously thought, potentially weakening arguments for stimulus poverty.
Marie-Elise, has ChatGPT just confirmed, refuted, or bypassed Pullum's challenge?
I have lots of questions regarding ChatGPT and its potential implications for OG/UG. Pullum argues that addressing the POS argument -- which he defines as insufficient positive evidence -- necessitates establishing the entire corpus of words heard by an infant. ChatGPT appears to bypass Pullum's interpretation of the POS, as it is fed a massive amount of data incomparable to what a child typically hears, potentially negating the notion of a lack of positive evidence, since it has access to an extensive "big gulp" of information.
However, if we consider the proper interpretation of the POS argument, which involves the absence of negative examples of UG grammar, ChatGPT seems perplexing to me. It makes no UG errors, even though all the data in the big gulp adhere to UG grammar and contain no negative examples of UG. Does this suggest that through an unrealistic volume of data, one could deduce the rules of UG, effectively making it OG? In other words, since LLMs learn through deep learning, back-propagation and statistical relevance, without "innate" mechanisms comparable to humans', does it imply that, through a big gulp, UG can be learned? Are those questions even relevant, given that no child has access to such a big input of data? Additionally, since there are certain examples of what is not UG written by linguists that would have been included in the big gulp, could they have been useful in showing ChatGPT what is not UG?
Natasha, another insightful series of observations. It gets to the heart of what GPT is doing, which we don't fully understand. No, GPT has not learned UG. There is something about the way it manages to squeeze out of the Big Gulp so much that makes sense to us, even though it lacks both UG and grounding, that is extremely interesting, though still perplexing. Yes, it's something about what can be gotten out of such a Big Gulp that no human could ever swallow -- but that many grounded humans with their innate UGs produced. "Stevan Says" that this tells us something about the nature of language itself. But it's too early to say just what that something is. Maybe you can figure it out!
I hope I am not straying too far into fantasy land with this comment, but I have been inspired after reading the other replies to posit another way to conceptualize UG.
As many others have pointed out, there is a conflation of OG and UG in this paper. OG is learned, in an unsupervised and/or trial-and-error fashion, whereas UG cannot be learned because of an absence of any negative examples of UG that a child learning a language would feasibly encounter (APS). Despite this, GPT, through the big gulp, somehow also follows UG. As Dr. Harnad points out, GPT is not "learning" UG since it, equally, does not receive any negative examples from the big gulp.
Here is where I posit another conceptualization of UG. If OG is learned and UG is never violated, then maybe it is best to conceptualize UG as the “MinSet” for grammar rules. It seems to me that just like the MinSet constitutes the minimum number of words needed to be grounded to define all others in a dictionary, UG might constitute a minimal set of grammar rules that serve as a necessary foundation on which to build all other grammar rules (OGs).
In this conceptualization, it is conceivable that GPT would follow UG since UG (the foundation of OGs) could have been an emergent consequence of swallowing the big gulp, the equivalent of the dictionary in my MinSet analogy. Therefore, it is not possible for humans to learn UG due to the POS, but an LLM, given enough data, could learn to ape UG in the same way one could learn a MinSet given an entire dictionary.
This is what I thought of when Dr. Harnad said that GPTs ability to follow UG through the big gulp reveals something about the nature of language.
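To make the MinSet analogy above more concrete, here is a minimal sketch (in Python) of how a small "grounding set" can be pulled out of a toy dictionary in which every word is defined only in terms of other words. Everything here is invented for illustration: the toy dictionary, the greedy heuristic, and the idea that "learnable" means "definable from words already known." Real minimal grounding sets are computed over full dictionaries, and finding a truly minimal one is a hard (feedback-vertex-set) problem, so a greedy search like this is only an approximation.

```python
# Toy dictionary: each word's definition is just the set of words it uses.
# The circular definitions (thing<->object, move<->change, animal<->living)
# are what force a nonempty grounding set.
toy_dictionary = {
    "animal": {"living", "thing"},
    "living": {"animal"},
    "thing":  {"object"},
    "object": {"thing"},
    "bird":   {"animal", "fly"},
    "fly":    {"move", "air"},
    "air":    {"thing"},
    "move":   {"change"},
    "change": {"move"},
    "place":  {"thing", "object"},
}

def learnable_from(grounded, dictionary):
    """Words reachable by repeatedly 'learning' any word whose definition
    uses only already-known words."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, defn in dictionary.items():
            if word not in known and defn <= known:
                known.add(word)
                changed = True
    return known

def greedy_grounding_set(dictionary):
    """Greedily add the word that unlocks the most others until everything is learnable."""
    grounded = set()
    while learnable_from(grounded, dictionary) != set(dictionary):
        best = max(dictionary,
                   key=lambda w: len(learnable_from(grounded | {w}, dictionary)))
        grounded.add(best)
    return grounded

if __name__ == "__main__":
    g = greedy_grounding_set(toy_dictionary)
    print("grounding set:", g)                      # a small core, e.g. {'thing', 'animal', 'move'}
    print("learnable from it:", learnable_from(g, toy_dictionary))
```

On this toy example the greedy search grounds a handful of words and everything else becomes definable from them, which is the sense in which the comment above imagines UG as a minimal foundation on which the rest (the OGs) can be built.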
From reading this week's article, and also skimming Dąbrowska's (2015) "What exactly is Universal Grammar, and has anyone seen it?", it seems as though there is a lot of confusion surrounding what UG actually is, which, as Professor Harnad has said above, can lead to the conflation of UG and OG. Many of the arguments made in this week's paper against the POS and the universality of grammar rules really just capture OG learning. For example, the authors provide the example of how British dialects of English pluralize non-head elements more than American dialects. This example simply captures how speakers of a language will learn OG for their specific dialect, and does not negate the presence of universal generalizations of language more generally.
I agree there's a conflation of UG with OG, but what I did find convincing about Dąbrowska's argument against universality is that for all proposed universals, there is a linguistic counterexample. This includes phrase structure rules, major phrasal categories and ways of distinguishing between subjects and objects. If this critique is correct in saying there are absolutely no universal rules across all languages, this undermines the strength of a nativist argument.
Jessica, good points.
Adrienne, I am not technically competent to weigh Dąbrowska's "counterexamples", but I find something (veggie-)fishy about them. I doubt that a few inexplicable exceptions (if they really are exceptions, and really inexplicable) would invalidate UG any more than a few "broke"s instead of "breaked"s, or "mice"s instead of "mouses", would invalidate English OG past-tense or pluralization rules. It would just show that both innate, all-language UG rules and learned one-language OG rules can "satisfice" (i.e., tolerate a few exceptions). (But I think it is much more likely that Dabrowska lacks the technical expertise (just as I do) to assess whether the exceptions are really exceptions. Don't forget that it is not just non-linguists (like Pinker) from other disciplines, but linguists (like Pullum) lacking UG expertise, who keep making the OG/UG conflation.)
Jessica, I really appreciate your comment, but I still think that we can distinguish OG from UG.
From my understanding, the poverty of the stimulus argument rests on the absence of mistakes and of corrections. Since we don't clearly know, in UG, what the rules are, it becomes complicated to reinforce and correct them. Therefore, this POS is solely applicable to OG.
Additionally, UG and OG have distinct characteristics. UG can be defined as the grammatical rules that we don't know about (hidden rules), that we've always followed, and that are generalized to all languages. UG is not fixable, which is probably the main reason why the POS is not an argument that can take account of UG. However, we can notice UG violations even if we are incapable of explaining why they are wrong. OG, by contrast, is the grammar that we've learned, the rules we know about, and it can be fixed through supervised and unsupervised learning, imitation, and grammar learning. In terms of structure, OG is the surface structure, the sentence level, while UG is the hidden structure, the hierarchy level.
I don't know if all these aspects of UG and OG are right, but this is what I understood from them, and it seems not that hard to understand and distinguish the two. However, I'm still curious about whether there is a critical period for UG, if we argue that UG can't be fixed...?
This part of the course makes me think of something we learn in our developmental psychology classes about how children learn language. Research shows that babies can learn the patterns of language based on the statistical probability that a specific sound is likely to be heard immediately after another sound. In a study by Saffran et al. (1996), they showed that infants can tell the difference between nonsense words they'd heard before, versus new words, based on the transitional probabilities between syllables (i.e., the likelihood that a certain syllable will be followed by another). This showed that infants can segment speech based on statistical information to figure out language (i.e., a data-driven theory of language), and thus shows that there is NOT a special learning structure required to learn language. In my view, this evidence would disprove the presence of a UG -- does anyone else have another take?
Hi Kristi, I also took a development class and read the Saffran et al. (1996) paper. Interestingly, in 2001 Saffran et al. also found (using preferential looking paradigms) that infants of similar age could accurately predict musical-tone pairs (similar to the language study, where they could predict which tone came next). Other studies have shown the benefits of music in language learning and acquisition too, with one (a 1993 study by Trehub & Trainor) showing that temporal patterning, pitch range and contour were all encoded by infants when shown specific musical sequences. I agree with you, and to me this again shows that it is not just one specific structure for language learning, as it can be helped by music, which activates all sorts of different parts of the brain and involves structures that are different from the structures activated when hearing non-musical sounds. Open to hearing other views though, so please let me know if you disagree!
Kristi and Josie, these are questions worth pondering, but be wary about statistical and spandrel (side-effect) claims to invalidate UG based on a few fragments of UG, especially if they come from critics in fields lacking expertise in generative linguistics. History keeps showing that these claims tend to turn out to be based on oversimplifications or technical misunderstandings. (But I can't substantiate this, because I too lack the technical UG expertise. But these arguments do call to mind the "angle-trisectors" -- amateur mathematicians who keep coming up with methods to trisect angles even though it has been proved to be impossible. Generative linguistics, however, being an empirical discipline rather than a branch of mathematics, cannot prove the existence of UG; it can only provide evidence supporting it. Empirical science, including cogsci, is always underdetermined, not certain.)
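For anyone who wants to see the transitional-probability idea Kristi describes in concrete form, here is a toy sketch in Python. The four nonsense "words", the concatenated stream, and the 0.8 boundary threshold are all invented for illustration; they are not the actual stimuli or procedure from Saffran et al. (1996).

```python
# Toy statistical segmentation: compute P(next syllable | current syllable)
# from a continuous syllable stream and posit word boundaries where it dips.
import random
from collections import Counter

words = ["tupiro", "golabu", "bidaku", "padoti"]        # invented nonsense words

def syllabify(word):
    return [word[i:i + 2] for i in range(0, len(word), 2)]  # CV syllables

random.seed(0)
stream = []
for _ in range(300):                                     # continuous stream, no pauses
    stream.extend(syllabify(random.choice(words)))

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """Estimated probability that syllable b follows syllable a."""
    return pair_counts[(a, b)] / first_counts[a]

print(transitional_probability("tu", "pi"))   # within-word: ~1.0
print(transitional_probability("ro", "go"))   # across words: ~0.25

# Segment the stream: insert a boundary wherever the forward TP is low.
segments, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if transitional_probability(a, b) < 0.8:
        segments.append("".join(current))
        current = []
    current.append(b)
segments.append("".join(current))
print(segments[:10])   # mostly recovers the original "words"
```

The point of the sketch is only that word-like units can be recovered from positive data alone when the statistics are this clean; it says nothing, one way or the other, about the structural (UG) facts discussed in the replies above.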
If I understood correctly from this reading, the idea about universal grammar (UG) is that it is inborn and thus it would never be possible for a child to make an error or to be corrected when they are following the UG grammar rules, because no learning was necessary for the child to acquire this perfect UG. So, does this mean that children never hear UG errors because they were born and sort of programmed to have perfect UG? What about adults? Can we, as adults who have learned OG and been exposed to different environments and situations throughout our life, where we have received feedback, hear UG errors? Is it even possible to make or detect UG errors since this is something innate?
From what I understand about universal grammar, children don't hear UG mistakes because adults don't make UG mistakes in their first language. This relates back to how we know that UG is innate, since there would be no way for children to detect errors and make corrections, since these mistakes are not made. While we do not make UG errors, it is still possible for adults to detect them. These situations can sometimes be seen with speakers who are using a second language, because you are not able to fully develop perfect UG for languages after your first. UG errors that are made can in this way be detected intuitively by most adults. Thus, while these mistakes are almost never made, there are many situations where, if they are made, we would be able to tell that something was fundamentally incorrect with the sentence but would potentially not be able to identify that it came from a UG error.
Hi! I am a bit confused by the UG errors you are talking about. From my understanding, UG is a structure, an algorithm, I think, that does not carry any information itself. It is inborn, part of the essence of being human (sorry to use such a metaphysical interpretation). Errors and corrections only contribute while we are learning OG (Prof clarified this in my 9a skywritings).
Also, from my experience, the way adults learn a language (a second language in this case, as we are barely able to master a first language if we don't start in the first few years of our lives) has quite a different entry point: memorizing the alphabet and simple words and grammar. For this reason, I think it is not a topic we need to pay much attention to. If I misunderstood anything, please don't hesitate to point it out.
Valentina, Jenny & Evelyn, all good points. But there is one set of adults you are forgetting: Generative linguists, whose job is to reverse-engineer UG! They are deliberately testing hypotheses about what might be among the rules of UG (which are mostly structural), and they have made quite a lot of progress, even though the reverse-engineering is not complete yet -- and there are rival hypotheses (as there are in all scientific fields). What is unique in the case of generative linguistics is that the linguists' own ears and brains are their guides to what is and is not a UG violation when they are testing their UG hypotheses. This is what makes people in other fields sceptical: How can your ears guide you? Aren't they biased by your hypotheses?
But the ears of (adult, first-language) non-linguists also detect UG violations, even though they don't know why they sound wrong. The perception is the same as when we hear an OG error -- often not knowing why it sounds wrong. But it sounds wrong. And this is also part of the reason for the conflation of UG and OG.
(And, yes, there is sometimes disagreement about whether something sounds ok or wrong. But here, too, the disagreements are the exceptions. On most UG violations both linguists and nonlinguists agree that they sound wrong.)
Strange field, linguistics. But language itself is stranger still...
This reading discussed the nature of language acquisition, drawing a distinction between the 'poverty of the stimulus', which is when a child does not naturally gain exposure to linguistic rules but rather is taught them explicitly, and the 'absence of stimulus', although this distinction seemed fairly unnecessary to me. However, the paper failed to make a distinction between OG and UG. Valentina, from class and other readings, I have gathered that it is not impossible for children to make UG errors; rather, it is just very, very uncommon. Universal grammar has to do with the fundamental structure of language; children are born with a capacity to adopt this system nearly flawlessly, thus they rarely make structural UG errors. Furthermore, the rare UG error is not undetectable to adult humans. As Professor Harnad stated, adults are able to notice that something is off but then cannot understand exactly what, because UG rules are not explicitly taught but rather embedded in the structure of language.
It seems to me that the poverty of the stimulus is similar to a categorization problem. Experience plays a huge role in the learning of a language, and the question appears to be how children can know what is ungrammatical when they are only exposed to what is supposed to be the "right" way to speak a specific language. In fact, what enables children to know what is / isn't grammatically correct relies on their experience in the world, what they hear, and how they interpret the feedback that we give them. Nevertheless, the idea that children would receive enough feedback and corrections to improve their grammatical skills is against the principle of Universal Grammar, so sensorimotor capacities can't be enough to address this issue.
Adiaen, yes, grammar learning is not just LIKE category learning, it IS category learning. And that's exactly what OG learning is: There are 4 ways to learn categories (what are they?). The most important of them is "supervised" learning: trial, error, feedback as to whether you have DONE the correct thing or the incorrect thing (as on the mushroom island).
But, except in learning trivially obvious categories, you need to be able to do both the correct thing and the incorrect thing (producing positive and negative evidence), so that corrective feedback can signal to your neural nets when you are right or wrong, so they can detect which features distinguish right from wrong, till you are making no more mistakes, because you are recognizing and eating only the edible mushrooms. With OG, you are not only able to recognize when someone else has said something ungrammatical, you can also learn to produce only grammatical utterances (hence this is a mirror capacity).
But that's all OG. With UG, there's something missing that makes the category learning impossible: What is it?
An example of grammar learning as category learning that stood out to me in particular was that of Pullum's response to Kimball's argument regarding the sequencing of auxiliaries. He states that subcategorisation of heads for certain complement types might be learnable from positive examples, in the case that "phrases can be classified into types and heads can be identified as selecting certain types".
To answer Dr Harnad's question above, I believe the thing missing from UG that makes category learning impossible is the fact that every single instance of language we come across is a member of the category we are trying to define, that category being "grammatical language". You would expect that this would lead kids to make overestimations of which utterances are grammatical and which are not, but they never seem to do so (at least not when it comes to UG, as far as I'm aware). How do you learn to do the right thing without knowing what the wrong thing to do is?
Jocelyn, I don't know the technical details, but even in OG learning, some rules can be learned from imitating (positive) examples even if negative examples are never heard or corrected. In nonlinguistic category learning, some features may be simpler and more salient. I'd guess that for every feature that can be learned from positive examples alone, there are a huge number of features that cannot.
Stephen, yes, you can't learn what's "the right thing to do with what" when you can't do the wrong thing with anything. If our eyes could not see the inedible mushrooms, or our arms could not reach them, that would mean the edible/inedible category was innate too.
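Here is a minimal sketch of the supervised ("mushroom island") category learning discussed in the replies above, with entirely made-up mushroom features: a simple perceptron converges when it gets corrective feedback on both edible and inedible examples, but when it sees positive examples only, there is no error signal left to learn the distinguishing feature from. This is only an illustration of the point, not anyone's actual model.

```python
import random
random.seed(1)

def make_mushroom():
    # invented features: [spotted, flat_cap, white_gills]; edible iff not spotted (by stipulation)
    f = [random.randint(0, 1) for _ in range(3)]
    label = 1 if f[0] == 0 else -1            # +1 = edible, -1 = inedible
    return f, label

data = [make_mushroom() for _ in range(200)]

def train(examples, epochs=20):
    """Perceptron: weights change only on errors, i.e. only on corrective feedback."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for f, label in examples:
            guess = 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else -1
            if guess != label:
                w = [wi + label * fi for wi, fi in zip(w, f)]
                b += label
    return w, b

def accuracy(model, examples):
    w, b = model
    hits = sum((1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else -1) == y
               for f, y in examples)
    return hits / len(examples)

both = train(data)                                      # positive AND negative examples
pos_only = train([(f, y) for f, y in data if y == 1])   # positive examples only

print("trained on both classes:   ", accuracy(both, data))      # ~1.0
print("trained on positives only: ", accuracy(pos_only, data))   # ~0.5: it just calls everything edible
```

With both classes, the error signal isolates the "spotted" feature; with positives only, every guess of "edible" counts as correct, so the net never finds out what distinguishes the members from the non-members -- which is the point being made about OG versus UG above.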
Pullum & Scholz offer a critique of a few authors' works that support the Argument from the Poverty of the Stimulus, such as Chomsky's. They do not argue that the APS is false and unfounded, but rather that the evidence presently offered by APS supporters is refutable or weak. For example, the claim of unusualness states that some sentences with unusual (but correct) grammar don't occur with enough frequency for a child to be able to learn them correctly. Many children wouldn't receive enough relevant input from their environment to learn this unusual grammar. Therefore, APS supporters conclude there must be some innate knowledge of grammar -- some UG. However, Pullum & Scholz argue that analyses of small corpora, such as the CHILDES database and transcripts from TV programs that many children watch, refute this claim: there is in fact evidence that children receive relevant input of unusual sentences. This means that instances of sentences with unusual grammar could have been learned, and therefore could fall into the OG category, as opposed to UG.
Anais, I'm not sure what point you are making, or attributing to P&S: What is at issue is not "unusual" sentences (either heard by or produced by the child). What is at issue is the complete absence of UG errors, either heard or produced. Plus a conflation of UG with OG (which DOES have heard, produced, and corrected OG errors). What is POS?
Pullum and Scholz's paper was a joy to read as they talked about my two favourite streams of Cogsci (psyc and ling) and I loved their witty remarks (ex: the Eminem reference and their joke about APS standing for 'the Argument selected by Pullum and Scholz'). I really appreciated that they pointed out how people have essentially been using the POS argument as a weasel word; they have been citing it (i.e., the conclusion) without knowing exactly what Chomsky proposed (i.e., "the structure of the reasoning that is supposed to get us to this conclusion" (p.12)). I think this happens a lot in fields like Cogsci and am always pleasantly surprised when someone dares to say something about it.
I also liked how they broke the APS down as a proof and identified the premise that has to be proven true for the POS to hold (p.18). I enjoy reading literature reviews and criticisms of past research as I believe that there is much to be learned from other people's mistakes.
I agree with Pullum and Scholz that we need to investigate data-driven (language) learning in detail; it is certainly easier to do now with the advances in the domain of AI and language learning algorithms. We could maybe use Google Home (which is surely always listening to us) to see what children are exposed to and look into it. In other words, we have the means to construct a full input corpus of a child, so we could do it and see where it leads us (other than a lawsuit).
Aashiha, I don't want to spoil your enjoyment, but P&S, despite their wit, got it mostly wrong. Please read the other replies and come back and tell us whether you get it now. What did they get wrong?
I would argue that while Pullum and Scholz offer insightful critiques, they may have overlooked some complexities of Chomsky's theory. Chomsky's notion of Universal Grammar (UG) is not simply a set of specific linguistic rules, but a deeper, more abstract set of principles governing the structure of languages. Pullum and Scholz's focus on empirical data and their argument against the poverty of the stimulus might downplay the nuanced, abstract nature of UG that Chomsky proposes. It's not just about observable data or specific linguistic rules but about the underlying principles that make language acquisition possible. I think their paper brings valuable perspective by challenging the empirical basis of certain linguistic theories, but it might miss the broader, more theoretical aspects of Chomsky's arguments on UG and language acquisition.
In the supplementary reading, "What exactly is Universal Grammar, and has anyone seen it?", I was quite surprised by how strongly the author argued against UG. For example, it is mentioned that one of the counter-arguments is that languages differ from each other in profound ways and are very diverse. Firstly, the author contradicts herself by stating that "deep universals" may exist and that it is only at a more surface level that we observe differences. It seems that Dabrowska is actually giving arguments in favor of UG. Indeed, that we, billions of humans, have some rules in common is already enough to acknowledge the existence of UG.
Garance, yes, Dabrowska doesn't get it, and is not alone. And she is using "universal" as a weasel-word (probably out of non-understanding). Feed her paper to GPT. Ask for a summary. Ask GPT what, if anything, is wrong with it; and then teach GPT what really is wrong with it, as on the mid-term.
This article clearly defines the Argument from the Poverty of the Stimulus (APS). It lays out what criteria would have to be met in order to make the claim that people have innate knowledge about the structure of language which they could not have attained through the linguistic data they were exposed to as children. Four prominent examples of such possibly un-learned linguistic structures are given, and then systematically deconstructed.
The authors encourage using mathematical learning theory to try to uncover the limits of data-driven learning. If the structure of all natural languages can fall beneath the mathematical limits of data-driven learning, I believe there would be grounds to claim there is no linguistic knowledge that couldn't be obtained via data-driven learning. The authors note how surprisingly powerful unsupervised learning in computational linguistics has proven to be. This suggests that as we come to better understand the limits of data-driven learning, we might find that the structure of all natural languages falls under its scope.
This paper clarified my conception of the APS from my exposure to it in past classes, and made me reconsider my (generally very nativist) view of language…
Dani, first explain how you could learn which mushrooms on the island are edible through unsupervised learning. Then explain how you could learn UG without ever hearing or producing or being corrected for UG errors.
The former would have to occur through evolution, as one can only make a mistake once in a lifetime. The learning would have to occur over generations, with information being stored internally (as external exchange of information would be supervised). It follows that the same process would be necessary for UG (if it is unsupervised).
Nico, kid-sib did not understand your reply. Evolution is supervised learning: if you have a genetic trait and it kills you or prevents your reproduction, then that trait will not be passed on. So the answer to how you could learn mushrooms unsupervised is that you couldn't (unless the inedible mushrooms all grew out of reach -- but then there would be nothing to learn). What is "information being stored internally"?
The authors clearly have some misconceptions about the difference between UG and OG, as many have pointed out above, as well as about the nature of the POS. I am confused as to why they feel the need to make a distinction between a poverty and an absence (surely an absence of a stimulus is also a poverty?). I also despair at their attack on UG for a lack of empirical data. There are many areas of generative linguistics which I feel suffer from a case of studying what the researchers would prefer language to be rather than what it is, but UG research is soundly based on observations of how children actually use language. In many cases they seem to want to have their cake and eat it too. They want empirical evidence, but anything that could have an alternative explanation is not good enough. They want data-driven explanations, but are unsatisfied with the data that have been collected. I find that arguments against UG often, as this one does, bend themselves out of shape to avoid the simplest explanation.
Marie, you're right, of course. Curious how these misunderstandings and non-understandings persist. The idea of innate language rules must rattle or rile people for some reason...
Pullum calls for more evidence of the absence of negative evidence. Reminds one of the joke about the optimistic kid digging for the pony under the pile of poop: "There must be a pony here somewhere..."
I was confused by section 4.3 on the anaphoric "one", particularly towards the end in which they discuss how the context is used to determine the rules of where the word can occur. I was wondering if the mechanism that uses the context to derive the rule is UG, since it seems as though we innately end up using some process that takes in context to arrive at the rule and they don't seem to explain what that might be.
Hi Omar,
I think the point of section 4.3 was to demonstrate how Baker's argument -- that the anaphoric "one" is an example of UG -- is wrong. Baker's argument is that the referent of the anaphoric "one" can never be a single noun: he gives the example (11), "John has a blue glass but Alice doesn't have one," where the anaphoric "one" refers not only to the single noun "glass" but rather to the entire phrase "blue glass." This is what Baker uses as support for UG: he claims that it should be extremely improbable for a child to encounter the knowledge that "one" can stand for sequences consisting of more than just a single noun.
Pullum critiques from several different points:
First, Pullum says that it would be incorrect to claim that the referent of the anaphoric "one" can never be a single noun: he refers to a number of different corpora to demonstrate this, as shown in sentences like (14b), "(The bid ...) is higher than the one for Delta II," where "one" refers to "bid," which is clearly a single noun.
Second, Pullum refers to a number of different corpora to demonstrate the prevalence of cases where “one” stands for a sequence containing more than just one noun. Thus, he says, Baker’s claim that such uses of “one” would be too rare for a child to encounter enough to learn it is most likely incorrect.
Omar, I think it's not so much context but the underlying (UG) structure that's at issue with anaphora, which involves long-distance relations between words in linear surface structure.
And Ohrie's analysis sounds right. Pullum is disputing one specific UG hypothesis -- of one linguist -- not UG.
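For anyone curious what the corpus side of this looks like in practice, here is a rough sketch of the kind of search Pullum's corpus argument relies on: pulling out candidate sentences containing anaphoric-looking "one" so a human can judge whether its antecedent is a bare noun (like "bid") or a larger phrase (like "blue glass"). The file path and the regular expression are hypothetical and crude; this is not the procedure from the paper, just an illustration of the general recipe.

```python
import re

CORPUS_PATH = "corpus.txt"      # hypothetical plain-text corpus file

# crude pattern for anaphoric-looking "one" ("the one", "a blue one", ...)
pattern = re.compile(r"\b(the|a|another)\s+(new|blue|bigger|other)?\s*one\b",
                     re.IGNORECASE)

with open(CORPUS_PATH, encoding="utf-8") as f:
    text = f.read()

# crude sentence split; a real study would use a proper tokenizer
sentences = re.split(r"(?<=[.!?])\s+", text)

hits = [s for s in sentences if pattern.search(s)]
print(f"{len(hits)} candidate sentences out of {len(sentences)}")
for s in hits[:20]:
    print("-", s.strip())
```

A search like this only finds candidates and says how frequent they are; deciding what each "one" actually refers to, and whether that bears on Baker's generalization, still takes a human (ideally a linguist's) judgment.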
This might be a bad hypothesis, but this whole topic this week -- how could we possibly get UG? -- has made me dig back into previous classes to find the answer. We know that UG is not evolutionarily advantageous. But could it be a spandrel??
ReplyDeleteThis is inspired by my discussion with Adrienne last week. What if UG is a spandrel of categorization? I'm not sure exactly how to support this idea but I feel as though it could be. Hopefully responses might help me to understand my own intuition!
***I think my comment was deleted, so I’m reposting it***
That's an interesting idea about UG possibly being a spandrel of categorization. The idea that UG could be a byproduct of categorization aligns with the possibility that evolutionary processes might have shaped cognitive abilities in unexpected ways. However, I am not sure we can apply the spandrel metaphor to UG.
From my understanding of the concepts presented in this reading and other skywritings, the reason we believe UG is innate is that a child only ever produces UG-positive output, never negative, so there is never anything to correct. Additionally, the basic differentiation between OG and UG is that OG is learned throughout childhood whereas UG is innate, for the reason mentioned above. This also explains why OG is easier to provide theories for than UG. While examples of UG- can be created, they are not heard in the daily conversation where a child would pick them up; however, there are constant examples of OG+ and OG-. Since OG is not innate, positive and negative examples are both needed in order to learn it, because if someone is only presented with negative examples, or only with positive ones, they won't be able to fully distinguish between them. Looking at positive and negative examples side by side is vital for learning new things.
ReplyDeleteAs you said, children predominantly produce UG-positive output, thereby never providing evidence for UG-negative instances. This shows the notion that UG is an innate linguistic framework distinct from OG, which is acquired gradually during childhood. Your explanation rightly emphasizes the necessity of both UG-positive and UG-negative examples for a comprehensive understanding of UG, and the critical role such examples play in the learning process. I agree with the significance of contrasting positive and negative examples side by side for effective learning. Exposure to both types of examples allows for a more nuanced comprehension of language structures.
Emma, if you mean that a spandrel, B, is a byproduct or side-effect of A, and, in this case, that UG is a byproduct or side-effect of categorization (by which I assume you mean category learning), you need to explain how UG is a side-effect (just as you would have to explain how UG could be a side-effect of yawning!)
Delaney, you're right, but the reasoning is not that UG rules are unlearnable by trial and error (the way OG rules are learned) because UG rules are innate, but that UG rules are innate because they are unlearnable by trial and error -- because no one makes UG errors. So our genes and brains must somehow know them already. The hard question is: how and why did that capacity evolve?
Julide, we need both positive and negative examples (members and non-members) to be able to learn a category, because we need to detect and abstract the features (or rules) that distinguish the members from the non-members.
Hi Professor, that's true: the contrast between members and non-members provides a crucial foundation for detecting and abstracting the underlying features/rules that define a category. That's why we need both positive and negative examples in the learning process. This double exposure enables a deeper comprehension of the complex laws guiding language. I forgot to mention that in my skywriting.
Despite their conflation of UG and OG, and their reduction of the POS to the lack of positive evidence for certain syntactic structures (inaccessibility claims), I agree with them that the POS couldn't serve as definitive proof of the existence of UG. Hence, the APS suggests the existence of UG, but it remains unproven. I feel that additional work is indeed needed to corroborate linguistic nativism, especially regarding the success of LLMs in acquiring OG without UG and feedback (unsupervised learning). Their existence proves that acquiring language through data-driven learning is possible, even though they are cheating by relying on a massive amount of data, which is not the case for children.
Hi Joann, it's true that the POS can't serve as definitive proof of the existence of UG -- further, UG cannot be proven! This is natural for anything in the realm of empirical science, including linguistics, such that any theory will remain underdetermined. It is important, as you mentioned, that linguistic work is continued so that UG can be better corroborated and understood, since it isn't universally accepted. This is especially true now with the confounding evidence that UG rules are adhered to quite well by ChatGPT, which is data-driven. To me personally, this fact about ChatGPT isn't surprising -- it certainly doesn't suffer from a poverty of (positive) stimulus, so it doesn't need, per se, to understand the rules of UG in order to follow them by mimicry. What would be more interesting is if an LLM could be trained on far less data, perhaps of a similar type and quantity as a language-learning child receives, and still produce language adhering to UG, without errors and without receiving negative feedback. If our intuition about nativism is true, we expect that data alone would not be enough.
Adam, I think that there are other differences between the way that ChatGPT uses statistics and probabilities to produce language and the way that children acquire language that go beyond just the presence of UG. The minimal grounding set necessary for a T2 to be able to produce coherent responses is very interesting, but the lack of sensorimotor grounding completely separates language acquisition from UG. In my opinion, the ability to ground allows for the successful use of UG.
Joann, does ChatGPT speak, or just remix? Has it learned language, as a child does, and passed T2, or just learned to remix the Big Gulp and interact remarkably (and inexplicably) well with users? But in any case, couldn't the complete lack of UG errors (and almost complete lack of OG errors) be because of a "Spandrel", namely, the Big Gulp?
Adam, good points, but even much smaller LLMs seem to be protected from making UG (and OG) errors by the statistics of their text database. And is it irrelevant to "language learning" that children's language is T3-grounded, whereas LLMs' "language" is not? I'm amazed (and curious) about GPT's capacity to interact verbally and interpretably with people, but, apart from effects -- on my perception -- arising from my mind-reading mirror-neurons (effects that would be even more enhanced if GPT had facial expressions to accompany its texts), I am not ready to believe that GPT understands language: Are you?
(Megan, I agree!)
I do not believe that ChatGPT can speak, because language requires feeling. ChatGPT just spits out remixed propositions from the Big Gulp. It appears to "speak" correctly because it uses the natural languages that have been produced by humans (OGs learned through UG capacity); its correct use of UG is a spandrel of obtaining the Big Gulp. If you ask ChatGPT to create propositions that are not logical, it cannot produce more than a few; it sticks to the subject/predicate form. This appears to be UG, but it is not, because ChatGPT did not have the capacity to learn. Learning requires categorization and symbol grounding for the meaning of referents. Learning through sensorimotor capacities that birds can fly allows the child to have the feeling that birds can fly, and then the child can produce propositions that are UG-correct.
Hi everyone,
Since one of the main goals of cognitive science is to reverse-engineer cognition, I was wondering how nativist and non-nativist theories of language help reach this goal. Language is essential for T3 to do everything we can do, as we've discussed in class, but do we really have to understand how we acquire language to understand its underlying mechanism? And what would be the implications for our conceptualization of T3 if we discover that some language abilities are innate? I guess I don't really understand what it would change. How would knowing whether or not language is innate bring us closer to, or further from, our goal?
Hi Lili! I had the same question as you after finishing the reading. It reminds me a bit of the mirror neurons, and how we discussed whether it was enough to know that we have mirroring capacity without needing to know about the specific neurons. Whenever we have these discussions in class I tend to think CogSci benefits from knowing as much as possible about the brain, even if it seems extraneous, like mirror neurons, so I would argue that knowing if the nativist or non-nativist view is correct would be very helpful. In the case of language, I think it comes down to importance. The Turing Test is founded on a robot's capacity for language, so in my opinion this is one of the most important functions to understand in reverse-engineering consciousness, whereas a robot's capacity to mirror is slightly less integral to consciousness. I believe that knowing which theory of language is correct would allow us to better grant linguistic ability to the robot. I don't really have an answer for how knowing if language is innate or not would change our understanding of T3 robots, but I think that the importance of understanding language acquisition lies in being able to build linguistic ability into our hypothetical robot, rather than just understanding language.
Hi Lili and Lillian! I have also had some similar thoughts/questions surrounding the importance of language, and the existence of UG, for solving the easy problem of cognitive science. I think that an understanding of language, and how it has developed and is acquired, is essential in order to understand how we, as thinkers, are able to do what we are able to do. I think that an understanding of the underlying mechanisms is beneficial to the construction of a T3 robot because if we understand the innate starting point for language acquisition in humans, we can begin to develop a T3 robot that can learn language through these underlying rules and mechanisms.
I also had a similar question about what we would get out of understanding the mechanism and evolutionary benefits of different aspects of human language. To me, a particularly striking aspect of UG is the fact that we develop it without any external feedback or input; we never make UG mistakes, and therefore no feedback on UG is ever necessary or prompted. However, comprehensible language is still produced by LLMs like ChatGPT, which not only rely on external feedback but also would simply not be able to produce any sentences without an extremely large amount of language input. LLMs may then still not answer the easy question of how humans do what we do, if we know that how LLMs do things is not how humans do things. Yet could this give some insight into the hard problem? Since GPT does not have an innate UG, perhaps comparing the other elements of cognition that it also lacks will help identify the role or necessity of UG in humans being able to do what they do, and why.
Lili, first, what generative linguists are doing is reverse-engineering UG. If the capacity to acquire language requires that the structural constraints of UG be built in rather than learned, then they would have to be built into the reverse-engineering of a T3 or T4 too. That in itself is no threat to cogsci; it's just a part of the ("Easy") job.
Lillian, reverse-engineering (human) T3 capacity will require UG (language-acquisition capacity) whether it's built in or acquired. So it's part of cogsci's job description either way. Reverse-engineering mirror-capacity is part of it too (especially since language perception and production is a mirror-capacity too). How much T4 will help is yet to be seen.
Shona, there's also still a lot of pre-linguistic reverse-engineering to be done on cognitive capacities we share with nonhuman species.
Rebecah, I'm not sure how UG, or whatever else GPT cannot do, is likely to cast light on HP; but it might cast some light on EP, in the way Searle's CRA did in showing that cognition cannot be just computation.
The reading discusses the poverty of the stimulus argument. It critiques the argument's claims and the lack of empirical evidence supporting it. Chomsky's APS suggests that children do not receive enough linguistic input to learn complex grammatical rules and that they must therefore have innate linguistic knowledge. The authors remain open to the possibility of innate language acquisition mechanisms but still express the need for empirical research and data-driven learning investigations. The part I had questions about is where the authors provide counterexamples to the APS, demonstrating that children are exposed to enough linguistic data to learn complex grammar rules through data-driven learning. But I had a hard time understanding how a child could be exposed to all of the data, and considering that every child masters at least one language, does every single child get exposed to all of the data?
ReplyDeleteSelin, please read the other replies so you clearly understand the OG/UG distinction. How does that answer your question?
DeleteHi Selin, according to my understanding, children are in fact not exposed to all the data necessary to acquire language. The OG/UG distinction is that OG is the language that people actually produce, and we may make mistakes in OG, whereas UG is the language that is programmed in us genetically, and we do not see speakers make UG mistakes. The most important thing is that because speakers do not make UG mistakes, children are not given any negative evidence, so they would have no way of learning UG (you can't learn from positive feedback alone, think of the mushrooms). So children definitely do not have all of the data necessary to learn a language.
DeleteChildren don't have all of the data, of course, but do they really make no UG mistakes? In another psychology class, I learned that no matter how much you correct a child's grammar mistakes, the child will keep making the same mistake and will eventually fix it on their own; the takeaway was that whether you correct a kid or not, they all eventually learn to speak correctly, or at least without mistakes. So children may not get any negative evidence from the outside, but they might notice that they are saying something differently from the people around them.
DeleteI was wondering whether, since neural networks can 'learn' in a fashion somewhat similar to humans, we could learn about the poverty of the stimulus by using NNs. We could feed a neural net an environment we deem similar to that of a child learning a language and see what it is able to learn.
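Here is a rough, self-contained sketch (mine, not from the reading or the comment above) of what such an experiment might look like: train a tiny next-word network on a toy corpus standing in for child-directed speech, then probe which continuations it expects. The corpus, model size, and probe word are all illustrative assumptions; a real study would use something on the scale of the CHILDES corpora.

```python
# Minimal sketch: a small next-word LSTM trained on a toy "child-sized" corpus.
# Everything here (corpus, dimensions, probe) is a stand-in for illustration.
import torch
import torch.nn as nn

corpus = "the dog sees the cat . the cat sees the dog . the dog runs .".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
data = torch.tensor([idx[w] for w in corpus])

class NextWordLSTM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)
    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

model = NextWordLSTM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)  # predict each next word

for step in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x).view(-1, len(vocab)), y.view(-1))
    loss.backward()
    opt.step()

# Probe: what does the model expect after "the"? A crude stand-in for asking
# which continuations the learner treats as (un)acceptable.
probs = model(torch.tensor([[idx["the"]]]))[0, -1].softmax(-1)
print({w: round(probs[idx[w]].item(), 2) for w in vocab})
```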
ReplyDeleteAimée, language is not just passive hearing; it requires production too, and grounding. NNs are the way LLMs like GPT learn. But even if you cut the Big Gulp down to the size of what a child hears and says, the LLM treats only words, and not the referents of the words in the world, so it misses the grounding unless it is at least a T3 robot. (That, of course, can have NNs learning to detect the distinguishing features in its sensorimotor interactions with the referents of its words.)
DeleteFrom what I understood, UG is an innate capacity to learn language, not an account of how language is learned. Pullum speaks a lot about OG and linguistics, but says nothing about the genetic basis of UG that the APS points to. I was looking into genetic factors that lead to Specific Language Impairment (SLI), which is associated with the CNTNAP2 gene. CNTNAP2 is involved in neural development, synaptic function and axon guidance. We also know from Pinker's article (9a) that UG is associated with L1 and is only properly set in early childhood, and that infants have more neuronal connections than adults and undergo synaptic pruning. To me this suggests a connection: a default (encoded in all human DNA) excess of synaptic connections could be the physiological basis of UG, and humans lose that capacity as children undergo synaptic pruning and experience-dependent learning (shaped by whichever natural language(s) are in the environment). Children may shift to data-driven learning, and the brain decides that keeping the UG-related neuronal connections is not a priority: a kind of use-it-or-lose-it phenomenon.
ReplyDeleteKaitlin, I think your synaptic-pruning hunch has a better chance of explaining the Chinese/Japanese R/L effect (what is that?) than UG. It might possibly also be relevant to the critical period for L1 parameter-setting on UG (or rather, to what happens afterward, with L2).
DeleteMy understanding of POS is that it consists of 3 factors: 1. We come across ungrammatical sentences but still learn what is correct. 2. We experience a finite amount of stimuli but acquire infinite ability. 3. We gain knowledge for which we have no overt evidence.
ReplyDeleteGiving 2 to a T3 is possible if it is also given the ability to learn and generate new connections. However, 1 and 3 seem trickier to me. How would T3 learn what is correct in these conditions? Answering this would require reverse engineering UG.
I also found the argumentation in this article interesting. The POS assumes children learn things without evidence, but this may very well not be the case. If they use evidence, this would make reverse engineering slightly simpler.
Nicole, please always read the replies to the other skywritings before posting your own.
DeleteSo from what I understand, the "poverty of the stimulus" argument refers to the idea that children are able to acquire complex language skills despite the limited input they receive during language learning. This means that the input children receive from their environment is not sufficient to account for the complexity of language that they are able to acquire. In other words, children are exposed to limited and fragmented language samples, yet they are able to acquire a rich and grammatically complex language system. If they are not explicitly taught these rules, then how do they come to possess them? One possible explanation is that children have an innate language acquisition device, as proposed by Noam Chomsky's theory of Universal Grammar. According to this theory, children are born with a set of linguistic principles and structures that guide their language acquisition process. These innate principles allow children to fill in the gaps in their input and construct a coherent and grammatically correct language system. However, the poverty of the stimulus argument challenges the idea of an innate language acquisition device. If children are only exposed to a limited and fragmented language input, then it becomes difficult to explain how they are able to acquire complex grammatical structures that are not present in their input. I was wondering, what if a part of our UG relates to identifying and extracting patterns and regularities from the linguistic input we receive to then in turn create generalizations about the structure of language?
ReplyDeleteYour observation about the "poverty of the stimulus" argument and its implications for the concept of an innate language acquisition device is perceptive. In this week's reading, we explore the idea that children actively engage with their linguistic environment, extracting patterns to create generalizations about language structure. It aligns with the learned aspect of grammar, emphasizing the role of environmental input in shaping linguistic competence. The interplay between OG and UG becomes crucial in addressing how children acquire complex language skills from limited input. It suggests that while UG provides a foundational framework, OG contributes significantly by allowing individuals to discern patterns and regularities, facilitating the construction of a grammatically rich language system - though I would consider UG a pre-grounding mechanism hence the question of how it works under POS. This perspective underscores the dynamic interaction between innate principles and learned grammar in the language acquisition process.
DeleteMallak, please read the prior replies.
DeleteKristi, what are parameter-settings on UG, and what is their relation to OG?
DeleteHi Kirstie, thank you so much for pointing out a few errors in my comment! It really did help, because I now realize that I completely dismissed the interplay between OG and UG, and it clarified why statistical learning of language is based on OG. I just want a bit more clarification: could you expand on what you mean when you say "though I would consider UG a pre-grounding mechanism hence the question of how it works under POS"? Do you mean that it aids in grounding learned grammar?
With the debate between statistical language learning and the poverty-of-the-stimulus argument, I was wondering about people with language-processing disorders or more severe degrees of autism. Although they may struggle with forming grammatical sentences or with syntax, or may even be non-verbal, they still have their ways of communicating. I tried searching for the causes of these disorders to see whether they support either side of the debate; there does not seem to be enough research to pin down an exact cause, but it is most probably due to differences in connectivity and brain structure, or to genetics. If we fully understood the reason behind this, what could it tell us about language acquisition and development?
ReplyDeleteAndrae, the clinical results on language disorders have so far told us nothing concerning UG, but some do handicap OG learning.
DeleteReading Pullum's article, I see a clearer distinction between Universal Grammar (UG) and Object Grammar (OG). Pullum's arguments, which lean towards the richness of the linguistic environment (OG), seem valid in that context. However, they don't fully address the innateness of UG. This innate aspect of language, as discussed by Chomsky, suggests a built-in set of grammatical structures that transcends specific languages. Pullum's focus on empirical evidence of language acquisition and skepticism of POS seems to overlook the inherent aspects of UG, which, according to Harnad, are critical to understanding languages' universal features.
ReplyDeleteDaniel, "OG" refers to Ordinary Grammar (not Object Grammar). Explain OG and UG to kid-sib, and the role POS plays in the distinction. And please read the other replies.
DeleteIf I'm understanding UG correctly, a way to describe it in terms of mushroom island would be as follows. With UG, there's a structure which is reflected in things that you do and don't do. If you were to have a universal ability to detect mushrooms, you'd get to mushroom island and you'd know what mushrooms to eat and what mushrooms not to eat. You have innate knowledge about mushrooms such that you would simply not eat the inedible ones and you would eat the edible ones. Similarly, with UG, you know that if you say certain things you won't be understood properly so you never say them. There's no negative evidence, but we just know with our innate grammar what would be said vs what would not be said.
ReplyDeleteFiona, please see the replies about the mushroom analogy. (It is not related to knowing that "you won't be understood properly." What you know is that UG errors sound wrong if you hear them (but you don't know why), and you never say them.)
DeleteIn our daily life, we only have positive and negative evidence when it comes to OG. The positive evidence comes from any input that is grammatical, and the negative evidence from being corrected when we make a grammatical mistake, or from seeing someone else corrected for incorrect grammatical usage; this reflects OG. Some linguists have identified a UG shared across all languages, which is supposedly innate. To support this argument, the absence of negative evidence is crucial: we make no UG mistakes, so there is no correction or reinforcement. That is the poverty of the stimulus: if there is no instance in which we would be exposed to ungrammatical sentences at the UG level, how do we know that an ungrammatical sentence at that level would be ungrammatical? The answer would be that it is an innate capacity. This contrasts with learning categories, which requires negative evidence in order to detect the features that distinguish the categories, especially in supervised learning, where we need external reinforcement that an object is not a member of a category in order to extract the features relevant to that category.
ReplyDeleteMitia, that's it.
DeleteBoth the assigned reading by Pullum and Scholz and the optional critique by Dabrowska were puzzling, if not outright fundamentally wrong, in their discussions of UG and the blurring of lines between UG and OG. I found it quite comical that Dabrowska’s article dismisses UG on the basis of a lack of empirical evidence for its existence, and wrongfully draws the conclusion that there is little universality in language. It is true that UG is the hard problem of language, which opens doors to a whole lot of misinterpretations. However, as prof. Harnad wrote in “Chomsky’s Universe”, many people have attempted to prove Chomsky wrong, but unless a rival theory can adequately explain all that is explained by UG, except with rules that are learnable or evolvable, no theory is as complete as Chomsky’s. It is quite baffling to me why so many people are possessed to challenge Chomsky, but so far all I have seen is a lot of words that don’t mean much, and that have a flawed understanding of what Chomsky was trying to get at.
ReplyDeletePaniz, good synthesis. One reason there is so much misunderstanding may be that so many people feel "I've learned my language, including its grammar; there's no need for me to study generative linguistics -- unlike with studying mathematics, which I don't already know."
DeleteI think it's always important to critique and question established theories. It can keep researchers from getting too excited about one research avenue and diverting resources from other potentially fruitful ones. While cautionary papers are necessary in science, this one veils a lack of understanding of Chomsky's concepts of UG and OG under an apparently very procedural logical argument. The paper is logically sound in its critiques of several (perhaps) over-zealous arguments that claim to support the POS without investing energy into what it entails. Still, it is impossible to know from this paper how much the original studies Pullum is critiquing depended on the OG/UG distinction; that is all lost here. This is a shame, because it wrongfully discredits potentially valuable studies, especially since, from what I've gathered from previous replies, Pullum's name is well established in cogsci.
DeleteOne way to bridge OG and UG is by looking at one example of how languages develop.
ReplyDeleteIf the OG of an original language changes enough, and the changes spread widely enough, it becomes a dialect; and if further changes keep accumulating, the dialect becomes a new language.
Through this process, a new parameter setting could potentially develop, which would still remain within the range of UG and could be picked up by learners.
Additionally, languages like Chinese and Romanian are considered to have shallow orthographies, meaning that writing and pronunciation correspond closely (as you said, like minimal grounding pairs). In this sense, they are more common among the world's writing systems, both today and historically.
Alphabetic languages like English and French show little such correspondence; these are considered deep orthographies, and they are less common.
Nowadays Chinese speakers use Pinyin as an alphabetic tool for learning and predicting the pronunciation of new words, but in ancient times people learned Chinese by comparing different characters with the same pronunciation.
The other way was to cut the sound of the new character into two parts and use two already-known characters to supply them.
You take the first part of the sound from the first known character and the second part from the second known character, then combine the two as indicated for the new character.
For example, to learn TAN, you cut the T from TAO and the AN from GAN, so TAN = T + AN. As said, this is another form of grounding.
Eugene, these are very interesting features of writing systems, and writing systems may have a relation to minimal grounding sets and the ease of learning the meaning of new words from their written form. But they come far too late in the day to give insight into the nature, the origin or the evolution of language capacity itself, which, whether gestural or vocal, began long before writing.
DeleteThis paper argues that there is not enough empirical evidence to support the poverty of stimulus argument. The poverty of stimulus argument essentially entails that because children are still able to learn language and intricate grammatical rules without being explicitly taught or exposed to all of them, there must be an innate set of structures responsible for this, called UG. As stated by others and Prof. Harnad, the authors fail to distinguish between UG and OG and use examples to counter POS and UG, but they really are talking about a completely different set of processes encapsulated by OG. Whenever I have learned about language acquisition, whether it be in linguistics or psychology, I have only ever heard about UG and never OG. I guess I’m just confused as to why it is not more talked about because the failure to distinguish between the two leads to a complete misinterpretation of UG as a whole.
ReplyDeleteIn the mushroom island you learn categories (edible vs. inedible) by supervised learning. UG is the idea that there exist only edible mushrooms (accessible to us); therefore it cannot be learned, because learning would require contrasting categories, and here there is only one kind of thing, so it is not even a kind. We can "artificially" create inedible mushrooms, i.e. examples that violate UG, but "naturally" they don't exist? It seems so strange; maybe I have it wrong. (What do I mean by "naturally"? The world of language, with all its innate, environmental and learned properties?)
ReplyDeleteIn Section 4 (Empirical linguistic testing of inaccessibility claims), I found the discussion of estimating the frequency of terms and phrases very interesting. The authors mention the rarity of the double-negative structure in written text (the 'n't not' token), which would cause us to underestimate its actual usage in language as a whole (including speech). To me, this suggests that there are important differences between the way we speak and the way we write, which could be lost when trying to understand (or indeed reverse-engineer) a language from the perspective of someone who already speaks it. I'm not sure I understand how generative linguists are able to construct UG without the bias of their preexisting knowledge of syntactic structure. Is this even what it means to reverse-engineer a grammar? Any insight people have is appreciated!
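For what it's worth, here is a minimal sketch (mine, not the authors') of the kind of frequency check Section 4 describes: counting how often a construction such as "n't not" occurs in a corpus. The file name and regex are placeholders; a real estimate would need a large spoken corpus, precisely because writing under-represents the construction.

```python
# Minimal sketch: estimate how often a construction (e.g. "n't not") occurs
# in a text file. The path and pattern below are illustrative placeholders.
import re

def construction_rate(path, pattern=r"n't\s+not\b"):
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    hits = len(re.findall(pattern, text))
    words = len(text.split())
    return hits, hits / max(words, 1)  # raw count and per-word rate

# Example (hypothetical file):
# count, rate = construction_rate("spoken_corpus.txt")
```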
ReplyDeleteConsidering new findings that indicate children are exposed to more complex language structures than initially assumed, what implications might this have for our understanding of how the brain adapts to learning languages at various stages of life? Could this lead to a reevaluation of the critical period hypothesis, particularly in terms of its implications for second language acquisition and education? In my opinion, given this is the case, it implies that the human brain is much more adaptable and capable of language learning at varying stages of life than previously believed, ultimately debunking the myth that language learning is more difficult in adulthood. Moreover, what makes me think even more, is what if this is not just the case for language acquisition. I’m sure we’ve all heard the phrase “you can’t teach an old dog new tricks”, but what if it is just dependent on the environment. What if the brain is also much more adaptable to learning new skills at varying stages of life as well.
ReplyDeleteThis is such a great discussion topic. I think the critical-period hypothesis is still valid, mostly because that is the period when our brains are developing fastest in terms of neural connections. As we grow up, because we already know most general things, we start learning at a slower pace, not because anything is wrong with our brains but because of the environment. This is just my theory, and maybe there's research on it: if you keep putting someone in new environments until they learn about them, and then move them somewhere they can't use most of their previous learning, they will be motivated to learn, survive and adapt again. In more realistic terms, if you keep trying to learn new things and exercise your brain, you also train your brain's adaptability.
DeleteI understand the authors' frustration with the seemingly immediate acceptance of Chomsky's APS throughout the academic world, especially in non-linguistics circles that could not assess the argument critically in full. I agree with the class consensus that they missed the distinction between UG and OG, and I don't have much of substance to add to what has already been said above. In recent years, with the success of LLMs, even ones smaller than ChatGPT trained on more of a big swallow than a big gulp, it looks increasingly likely (to my eyes) that the POS discussion is not done. That is, it cannot just be accepted outright; perhaps it is not our brains that require some unheard-of innate structure to deal with language, but rather something about language that is more fundamental to our world than we can see. I'm not sure. I disagree with Pullum and Scholz's take on the POS, but I can appreciate their apprehension and can perhaps use it to temper the direction cognitive linguistics takes, and to make sure we don't go down a fruitless path.
ReplyDeleteAfter reading this article and questioning some of its arguments, I was reminded that there is some language-like communication between animals, but humans cannot decipher or understand it. Trying to infer each person's language, and the knowledge and skill involved in mastering it, from an algorithm alone seems quite abstract, and from a humanistic standpoint I have some doubts about it. In fact, when taking this class, or when thinking about how such an algorithm reasons about something, I wonder whether this violates some humanistic principle, or whether it could have bad effects on the human world as it is. In speculating about an algorithm for linguistics, and about the cultures and tens of thousands of years of human history behind language, might this approach violate some principles that we humans do not even know about in the first place?
ReplyDeleteVocabulary tests where people have to determine whether a word is real or not somewhat show both data-driven learning and innately primed learning. If a word is grounded in one’s vocabulary or they know they have come across it before, the person would know it is a real word. In cases where they are uncertain, the person would have to determine if it sounds or looks like a real word. Then, in some other cases, people just know that it is a made-up word.
ReplyDeleteseeing language firstly as a communicative tool*
ReplyDelete"What exactly is Universal Grammar, and has anyone seen it?" is a reading that goes over ten arguments for UG and rejects them, focusing in particular on the Poverty of the Stimulus, the most powerful and famous one. The POS states that children have linguistic knowledge that could not have been acquired from the input available to them, but Dabrowska argues that the claim is more imaginative than empirical: linguists can't imagine how a given grammatical rule could be learned from input, so they conclude it must be innate. Instead, she offers a constructionist approach that accounts for both structural regularities and idiosyncrasies in languages and focuses on the development of grammar in children based on form-meaning pairings. This development is progressive, going from lexically specific patterns to more abstract, generalized schematic patterns. Her account emphasizes empirical research and contributions from multiple disciplines.
ReplyDeletePullum and Scholz emphasize how the Poverty of the Stimulus argument has been misrepresented in the past (though I believe they misrepresent it as well, since they seem to have mixed up UG and OG, as others have pointed out). They highlight the essential point that individuals acquire knowledge about their language's structure even in the absence of supporting evidence in the data they encounter. Children learn aspects of their language without receiving negative evidence: they grasp the ungrammatical nature of certain sentences without ever being told that those sentences are impossible in the language.
ReplyDeleteI was hoping to draw a parallel between Universal Grammar (UG) and the concept of feeling. UG lacks identified evolutionary advantages, and the causal mechanism in the brain enabling children to learn ungrammaticality without negative evidence remains a complete mystery to us. Similarly, the 'hard problem' regarding the nature of consciousness remains unsolved, perhaps indefinitely (will we ever know if anything else but us feels?). It’s possible that further insight into one concept can help us solve the other. If we can reverse engineer the mechanism that allows UG principles to function, maybe we can reverse engineer the mechanism that allows organisms to feel.
I am still interested in the intimate relationship between age and language development. I wonder what life is like for children who never develop language. This makes me wonder about the function of language, given that the critical period is only a few years after birth. Again, it seems to fit somewhat with the theory of cognition as a result of living in social groups. I don't think having a society caused the development of cognition, but, taking language as an example, if children are not raised in the group they lose the connection, since they won't be able to use language. If language allows for the grounding of some categories, and it has a critical period at a young age, then some categorization is exclusive to childhood experience, which I think could potentially be confused with innate categories. I wonder how scientists differentiate innate categories from those learned at a very young age.
ReplyDeleteChomsky Normal Form (CNF) is a way of structuring context-free grammars that is convenient for various algorithms. A context-free grammar (CFG) is composed of production rules defining how symbols can be replaced by sequences of symbols or terminals, thereby specifying how strings can be generated. It appears that ChatGPT's left-to-right sentence generation might naturally align with a CNF-style framework, suggesting that its strategy for predicting subsequent words could also be reflective of CNF. Consequently, it might not be that ChatGPT has inherently grasped the underlying grammatical principles; rather, because of the presence of Universal Grammar (UG) in its training data, its output seems to adhere to CNF-style structure, which may in turn follow UG principles. It is important to note, however, that CNF is not the methodology Chomsky employed to reverse-engineer UG. My observation is that ChatGPT engages in a statistical form of reflection that discerns UG-compatible patterns; it did not actually learn the language, but maybe this can be considered mathematical evidence concerning UG.
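To make the CNF point concrete, here is a minimal sketch (my own, not from the comment or from Chomsky) of a toy grammar in Chomsky Normal Form, where every rule is either A -> B C or A -> terminal, together with a CKY recognizer, one of the algorithms that CNF makes convenient. The grammar, lexicon, and test sentences are invented for illustration.

```python
# Minimal sketch: a toy CFG in Chomsky Normal Form plus a CKY recognizer.
from itertools import product

RULES = {                      # binary rules: (B, C) -> set of parents A
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
LEXICON = {                    # unary rules: terminal -> set of preterminals
    "the": {"Det"}, "a": {"Det"},
    "dog": {"N"}, "cat": {"N"},
    "chased": {"V"}, "saw": {"V"},
}

def cky_recognize(words):
    """True iff the word sequence is derivable from S under the toy grammar."""
    n = len(words)
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        table[i][i + 1] = set(LEXICON.get(w, set()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b, c in product(table[i][k], table[k][j]):
                    table[i][j] |= RULES.get((b, c), set())
    return "S" in table[0][n]

print(cky_recognize("the dog chased a cat".split()))  # True
print(cky_recognize("dog the chased cat a".split()))  # False
```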
ReplyDeleteChomsky’s idea of the “poverty of the stimulus” concerns the question of how children come to know things about their language without being given the supporting information. Chomsky’s answer is Universal Grammar: an innate language structure, or innate linguistic knowledge. At its core, this article argues that the poverty-of-the-stimulus arguments offered in favor of linguistic nativism lack the necessary empirical support, against the nativist claim that the structures underlying UG cannot be learned from mere exposure alone.
ReplyDeleteWhen asked how it learned grammar, ChatGPT (on its 1st birthday!) said: “I learned grammar through a vast and diverse dataset that includes a wide range of texts and linguistic structures. My training involved exposure to patterns, rules, and context, enabling me to generate coherent and contextually relevant language based on the information gleaned from the data.” It is referring to the big gulp. ChatGPT’s exposure to a huge amount of language allows it to use UG and OG without mistakes. It is possible that children learn them in the same way (through a large amount of input). Pullum argues that we are exposed to enough language never to make UG mistakes, and ChatGPT’s abilities might seem to support this. However, when asked to choose a side, ChatGPT argues FOR UG, because of the remarkable consistency of language acquisition across linguistic environments. On the other hand, we have actually never heard UG broken, and it may not even be possible (there is no counter-category). Nonetheless, it seems more likely that we have an innate disposition toward the structures of UG.
ReplyDeleteChatGPT's big gulp is in many ways cheating, and comparing the amount of stimulus a child receives to the big gulp is like comparing a glass of water to the Amazon River. Also, you said GPT said it had exposure to "rules," which likely means English OG (and later other OGs), which itself doesn't contain UG mistakes (UG = the mistakes we don't make). I believe ChatGPT received the rules of language syntax alongside the big gulp; they didn't just dump data into a rule-less black box.
DeleteThe supplementary reading for this week, "What exactly is Universal Grammar, and has anyone seen it?", treats universal grammar as a suspect concept. It states that there are many different conceptions of what is meant by UG, that despite the widespread belief in UG there is “little evidence” to support its existence, and that the argument for its innateness rests on false premises. Mainly it argues that UG is not a well-defined whole, so any predictions it makes about language cannot be falsified.
ReplyDeleteIn our discussions on categorization and learning, it appears that the counterarguments to the POS as presented in the paper may have missed the mark. The necessity for UG to be innate arises from the fact that children are not exposed to, nor do they produce, sentences that violate UG rules. This lack of exposure makes supervised learning unfeasible. It's similar to learning which mushrooms are safe to eat - without encountering both edible and non-edible types, distinguishing their characteristics is impossible. Similarly, without hearing UG non-compliant sentences, children can't learn to differentiate them from compliant ones. Thus, POS is more about the absence of this 'negative evidence' rather than a scarcity of certain positive evidences, rendering the paper's argument less relevant to our understanding of language acquisition.
DeleteThe Everaert, M. B. et al. article very nicely summarized (probably the best I have seen, I feel at last this may be able to explain linguistics to my parents) what linguists study. One of the most controversial topics addressed was that language might have evolved as an instrument of thought rather than communication, as communicative efficiency is sacrificed for computational efficiency. Warning: this is not kid-sibly because I can't concisely (in just one skywriting) make claims about linguistics without referring to some complex concepts. In phonology, morphology, syntax and semantics, there is a hierarchical structure that is distinct from the linear, acoustic (or gestural) signal that we perceive once the linguistic expression has been externalized: "what reaches the ear is ordered, but what reaches the mind is unordered". Dependencies in a hierarchical structure are necessary to explain the phenomena that we observe in language, so the linear order that we hear must not be available to "the mapping to the conceptual-intentional system". As a result, externalization of language is only a secondary property. The job of phonology is to eventually get from hierarchical structures to linear strings (as such, a phonology runs computations on hierarchical structures—some people used to think phonology only dealt with strings), so the hierarchical structures had to have been there all along, generated by an intentional-conceptual engine and syntax. Otherwise, you would either posit that A) some exclusively-linear phonology (sometimes called "cherology" for sign languages) came first, but what would a linear phonology even look like, and how would we get from there to the complex, hierarchical phonological, syntactic, etc. structures that we now have? or B) all modules of grammar came as a package deal, they evolved with one another, but from this how could you explain the evident hierarchical structure when linear strings would have been just fine—in fact, better—for communicative purposes? The communication aspect of language really seems to me to have come after the internal, hierarchical syntax. I have yet to see an alternative explanation as to why communicative efficiency is sacrificed for computational efficiency.
ReplyDeleteThis reading was fascinating, since the arguments regarding the POS advanced by Pullum and Scholz allowed me to reflect on the MINERVA 2 language-acquisition model we recently studied in my cognition class. It operates by pattern recognition, matching new inputs against stored memory traces, which aligns with the article's perspective on data-driven learning through contextual information and repeated exposure for OG (thus, they claim, refuting the POS). However, one thing I realized is that both P&S and the MINERVA model fail to account for the importance of negative evidence in language acquisition. Modelling the association of word exposures across experience is not enough, as this only accounts for positive instances of both OG and UG. Only OG is learnable; UG lacks the negative examples that would have to be found through experience. The absence of negative evidence is crucial to grasping the innateness of UG, so for these models to accurately emulate human language capabilities, they would have to incorporate this aspect into their framework.
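For anyone curious, here is a minimal sketch (mine, not from the reading or the cognition course) of the MINERVA 2 retrieval step: every experience is stored as a feature-vector trace, and a probe activates each trace in proportion to the cube of their similarity, yielding a familiarity signal and a blended "echo." The feature coding and toy vectors are illustrative assumptions.

```python
# Minimal sketch of MINERVA 2-style retrieval (features coded as -1, 0, +1).
import numpy as np

def echo(probe, traces):
    """Return (echo_intensity, echo_content) for a probe against stored traces."""
    relevant = (traces != 0) | (probe != 0)      # features that count, per trace
    n_relevant = relevant.sum(axis=1).clip(min=1)
    sim = (traces * probe).sum(axis=1) / n_relevant
    act = sim ** 3                               # cubing keeps sign, sharpens best matches
    return act.sum(), act @ traces               # familiarity signal, blended content

# Toy usage: two stored "experiences" and one probe.
traces = np.array([[ 1, 1, -1,  1, 0, -1],
                   [-1, 1,  1, -1, 1,  1]])
probe = np.array([1, 1, -1, 1, 0, 0])
print(echo(probe, traces))
```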
ReplyDeleteThe optional reading mentions the importance of recursion in language, which “served as the formal grounding for generative grammar and the solution to the finite–infinite puzzle” and is a universal feature that differentiates human language from non-human communication systems. I remember reading about Pirahã, a language spoken by an Amazonian tribe, which apparently lacks grammatical recursion; this would go against the claim that recursion is a universal property of human languages. I wonder whether this language is simply not yet understood or studied well enough for it to stand as a significant counterargument.
ReplyDeleteThe poverty of the stimulus is at the core of the case for UG: there is an absence of UG mistakes, and thus an absence of corrections of them. The reading mostly focuses on the lack of positive evidence when discussing the POS, yet the main issue with UG is the lack of negative evidence: we never hear or produce UG-noncompliant sentences, so Skinnerian feedback or reinforcement for them does not exist. I was left wondering about the evolutionary perspective on UG. If we view evolution as a form of supervised learning, how can we examine the adaptive advantages of UG, given that it is not learnable? If linguistic capacity is the core of our cognition (since it gives us the capacity to think), how can UG be a candidate for answering the “why” question?
ReplyDeleteThe text argues that claims about children's insufficient language input need to be checked against real language samples, examined thoroughly. This is very challenging, since the input can vary and be unstable. Generative linguists defending the APS should engage both in corpus linguistics, analyzing natural language use, and in experiments exploring children's learning abilities. Considering both the nature of the input and children's learning capabilities is what a comprehensive evaluation of claims about input poverty requires, and it would give a better overview of how language input affects children.
ReplyDeleteThis reading has me really confused on how the poverty of stimulus argument relates to Universal Grammar and Ordinary Grammar. From what I can understand, the POS argument is trying to argue the existence of UG due to the fact that children often are able to show grammatical competence beyond their mere linguistic exposures. These linguistic inputs are often ambiguous and can even be incorrect, but children are still able to acquire the highly complex and systematic understanding of their language. Therefore, it has to be said that an innate component like UG must exist in order to fill this discrepancy. My confusion mostly comes from the fact that the Pullum reading conflates UG and OG and thus does not address the problem correctly.
ReplyDeleteProfessor Harnad, in many of his replies above, relates the Pullum reading to ChatGPT: both Pullum and those who believe GPT learned UG from the Big Gulp are wrong. GPT makes no UG errors, and I can see why Pullum was wrong, but I don't understand why we cannot say that GPT learned UG. What is the difference between learning UG and learning OG for GPT? How is one merely aping what others do and don't say, while the other counts as "learned"? Just because OG errors appear plenty in the Big Gulp and are not re-created by GPT, can we say that it learned OG? Could it not be that the OG errors are simply overshadowed by the portions of the Big Gulp that do not contain them?
DeleteThis comment has been removed by the author.
ReplyDeleteOne thing I've noticed when trying to express my ideas in a second language is that I can never get them across in the shortest form that still contains all the key information I want to convey; I end up avoiding the kind of incomplete sentences that the POS literature describes. Even though POS situations play out daily, it is still harder for me than for a native speaker to catch the meaning of incomplete sentences and weaker stimuli when listening to second-language speech. Therefore, besides blurring the line between OG and UG, a point this article also fails to make is that most of the POS scenarios it considers are better suited to native speakers than to second- or third-language speakers, even though the APS does state that the first language is its focus.
ReplyDeleteOG is grammar within a humanly prescribed and modifiable language system, and as such it can be learned and changed. But UG, as the basis of OG and the root of all language, does not need to be, and cannot be, learned. In a sense, hypotheses offered in support of UG, such as the rapid growth of neural connections underlying the remarkable ability to learn language at an early age, are proposals about the mechanism by which UG supports the acquisition of language ability.
ReplyDeleteAt its core, this reading is about the nature vs. nurture of language acquisition. From the reading, and from my own prior opinion, I think the APS is a theory worth continuing to research in order to find what might be left out of data-driven learning; whatever is left out must be the nature side of language acquisition. I think every cognitive ability comes from both nature and nurture, because our brains start developing using only the data in our genetic code (the nature aspect), and after we're born it is mostly nurture, i.e., data-driven learning. I do think that most research testing the APS will yield negative results, as the article suggests, which would itself provide valuable insight into language acquisition and its specifics.
ReplyDeleteThis seems to be a level-headed article that pushes back on invalid arguments for the “poverty of the stimulus,” which holds that humans possess innate linguistic knowledge because certain linguistic facts cannot be, or are overwhelmingly unlikely to be, learned from personal experience alone.
ReplyDeleteOne particularly insightful aspect of the paper is its emphasis on empirical verification. The authors propose an approach to evaluate the arguments for linguistic nativism. They suggest five steps in a logical argument:
1. Identify the language skill someone learned.
2. List sentences that usually teach this skill.
3. Explain why these sentences are needed to learn the skill.
4. Show that the person didn't hear or read these sentences.
5. Prove the person still learned the skill without them.
The authors then argue thoroughly that, for many examples in the literature, this argument is not empirically justified. One thought I had during the reading is how incredibly exciting the “poverty of the stimulus” hypothesis would be, if confirmed. It seems plausible that this was part of the reason it was adopted so quickly!
Pullum argues in this paper against linguistic nativism and attempts to show how the APS is open to criticism. One criticism he offers is Sampson’s example of a linguist: Sampson argues that if the linguist Angela can come to know fact F about language L, then anyone else can as well, that this is a vicious circle, and that nativism is therefore self-contradictory. However, I cannot see why this is the case. Sampson claims that Angela would be using presupposed nativism to argue for nativism, but I don’t think knowing L natively presupposes nativism. Other comments have discussed how Pullum confounds UG and OG, and I think that mistake is present in this example as well: the “innate linguistically-specific information” is UG, and fact F, learned with its aid, is OG. The two are not the same thing, so this example does not show that nativism is self-contradictory.
ReplyDeleteOne of the points made by Pullum and Scholz is that linguists have to specify a criterion for how much evidence is necessary to learn a specific rule of grammar. One method that could be used to test whether infants have an innate or a learned understanding of a rule would be to assess them at several points in time and compare how well they apply the rule at each point. We could then relate that curve to the corpus they have had access to at each point, as well as to brain maturation. Essentially, we should find a stronger correlation with corpus evolution if the rule is learned than if it is not. A rough sketch of this comparison is below.
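Here is that sketch (mine, not from the article): correlate accuracy on a rule, measured at several ages, both with the cumulative number of relevant exemplars in the input up to each age and with age itself as a maturation baseline. All numbers are made up for illustration.

```python
# Minimal sketch: does rule accuracy track input exposure or just maturation?
# All values below are invented for illustration.
import numpy as np

ages_months = np.array([24, 30, 36, 42, 48])
accuracy = np.array([0.35, 0.55, 0.70, 0.85, 0.92])          # hypothetical scores
cumulative_exemplars = np.array([40, 120, 310, 620, 1100])   # hypothetical input counts

r_input = np.corrcoef(accuracy, cumulative_exemplars)[0, 1]
r_age = np.corrcoef(accuracy, ages_months)[0, 1]
print(f"correlation with input: {r_input:.2f}, with age: {r_age:.2f}")
```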
ReplyDeleteThe article "Empirical assessment of stimulus poverty arguments" by Geoffrey K. Pullum and Barbara C. Scholz and the case of Genie intersect intriguingly in their examination of language acquisition. Pullum and Scholz challenge the linguistic nativism theory, which posits that certain aspects of language are innate and not entirely dependent on environmental stimuli. Genie's case, on the other hand, provides a somber real-world example of the effects of environmental deprivation on language development.
ReplyDeleteGenie, who suffered extreme social isolation and abuse, missed the critical period for language development, resulting in her impaired speech and language skills. This aligns with the nativist viewpoint, which asserts the existence of a critical period after which language acquisition becomes significantly more challenging. However, Pullum and Scholz argue for a more nuanced view, suggesting that environmental stimuli play a larger role in language development than nativist theories traditionally acknowledge.
This comparison highlights the complexity of language acquisition. While Genie's case seems to support the critical period hypothesis, Pullum and Scholz's critique encourages a broader exploration of how children learn language, emphasizing the need for a balance between innate abilities and environmental experiences. Their analysis prompts a reevaluation of cases like Genie's, suggesting that her language deficiencies might also reflect the lack of linguistic stimuli during her formative years, rather than purely being a result of missing the critical period.