Monday, August 28, 2023

9a. Pinker, S. Language Acquisition

Pinker, S. Language Acquisition. In L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.),
An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press.
Alternative sites: 1, 2.



The topic of language acquisition implicates the most profound questions about our understanding of the human mind, and its subject matter, the speech of children, is endlessly fascinating. But the attempt to understand it scientifically is guaranteed to bring on a certain degree of frustration. Languages are complex combinations of elegant principles and historical accidents. We cannot design new ones with independent properties; we are stuck with the confounded ones entrenched in communities. Children, too, were not designed for the benefit of psychologists: their cognitive, social, perceptual, and motor skills are all developing at the same time as their linguistic systems are maturing and their knowledge of a particular language is increasing, and none of their behavior reflects one of these components acting in isolation.
        Given these problems, it may be surprising that we have learned anything about language acquisition at all, but we have. When we have, I believe, it is only because a diverse set of conceptual and methodological tools has been used to trap the elusive answers to our questions: neurobiology, ethology, linguistic theory, naturalistic and experimental child psychology, cognitive psychology, philosophy of induction, theoretical and applied computer science. Language acquisition, then, is one of the best examples of the indispensability of the multidisciplinary approach called cognitive science.

Harnad, S. (2008) Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard. Behavioral and Brain Sciences 31: 524-525.

Harnad, S. (2014) Chomsky's Universe -- L'Univers de Chomsky. À bâbord: Revue sociale et politique 52.

144 comments:

  1. Pinker argues that throughout evolution, special adaptations have occurred in the human mind and body to be able to use and acquire languages. The vocal tract has changed shape, leading to disadvantages in feeding and breathing. However, those disadvantages are outweighed by the advantages of communicating through language, which permits learning through instruction (“tell” rather than “show”). It is also argued that it’s easier to learn languages in childhood, as the child’s brain is more plastic. With cognitive decline in adulthood, it becomes more difficult to learn, memorize, and apply new vocabulary, especially when it cannot be used in a daily context. For example, my mother sometimes teases me about the fact that I used to speak fluent Russian with no accent as a child, but that now I struggle to find simple words and grammar. Since I never lived in a Russian-speaking country, I didn’t have the opportunity to use the language as much as French and English. Also, although I always practiced the same amount of Russian daily (ie: at home), I gradually lost fluency after my 10th birthday. This reflects the existence of an optimal language acquisition period, as argued by Pinker in his study.

    ReplyDelete
    Replies
    1. Anaïs, yes, and all of this is Baldwinian (lazy) evolution: the critical periods are evolved to make you more motivated and capable of learning and retaining a language when it is most needed, and to lose or loosen it if you no longer use it as your main language. Compare that with the duckling's imprinting to follow the first moving thing when it has hatched, not 4 days later, and not when it has grown up and just given birth.

      But that's not the main thing the article is about: It's about language learnability: pronunciation is learnable, vocabulary is learnable, and Ordinary Grammar (OG) is learnable. But Universal Grammar (UG) is not.

      Ask GPT (preferably GPT4, if you have access, because it does a much better job) the following questions:

      1. Short answers, please: What is Universal Grammar (UG)? How does it differ from Ordinary Grammar (OG)? Why did Chomsky conclude that OG is learnable but UG is not?

      2. Give an example, in English, of a violation of OG and a violation of UG, and what OG rule and UG rule each example violates.

      3. Why is UG not learnable? (What is the Poverty of the Stimulus?)

      4. In particular, what are the 4 ways OG is learned? What is it about UG that makes it unlearnable in those same ways? (That is the POS.)

      5. Can you explain POS in terms of errors and error correction for UG?

      6. But do children hear or make UG errors at all?

      7. Please give a few examples of UG principles.

      8. What would be positive evidence and negative evidence for a rule or principle? What does that have to do with POS?

      9. How do adults detect OG errors? How do linguists detect UG errors? Can non-linguist adults detect that UG errors are errors?

      Here's how GPT4 answered these questions.

      Delete
    2. Hi Anais, thank you for sharing your story. I think your experience with Russian is a great example showing that the plasticity and maturation of the brain during infancy provide the necessary conditions for language acquisition.
      As mentioned in 2.3, the number of synapses peaks around 9 months to 2 years, and synapses keep withering during childhood until adolescence. The plasticity allows you to build more synapses and master Russian in your childhood. Since Russian is not your main language, these synapses are less frequently used, and keeping them would become a burden. Adults are significantly slower to form new synapses, so it’s difficult to reach a native level again.

      Delete
  2. According to the Learnability Theory explained in part 4, children learn their first language starting from "hypotheses" about the target language. This means that language acquisition at an early age must be more than mimicking the caregiver or unsupervised learning. With a vague impression of the target language from what they have heard and seen in the environment, children make it more concrete through errors and corrections. This is crucial for the UG, right? Otherwise, it would be problematic to explain how children could have the capacity to grasp the critical parts (subjects, verbs, etc.), build a rough framework of the target language, and then produce possible speech. I hope I have not simplified the problem too much, and I look forward to hearing further thoughts or corrections.

    ReplyDelete
    Replies
    1. I also found the portion of this reading on the “learnability theory” especially interesting. From my understanding, this theory aims to answer the fundamental question of what language acquisition consists of, highlighting four main elements of learning: the class of languages, the environment, the learning strategy, and the success criterion. Furthermore, the learnability theory focuses on negative evidence, described with examples from OG. However, I do not believe that this theory alone can explain how we develop language, since we know that UG is innate and cannot be learned. Indeed, the unlearnable quality of UG stems from the fact that UG errors are not made by children or adults, which means there is no way to correct them, due to the absence of negative evidence.

      Delete
    2. Evelyn, "errors and corrections" apply to OG, not UG, and Pinker does not make the distinction.

      Melika, you're right. That's the distinction.

      Delete
    3. The role (or lack thereof) of negative evidence in language acquisition reminded me of the limitations of unsupervised learning. When Pinker et al. broke down the four logical possibilities for how a child’s language could differ from the adult language, case (d), where the child’s hypothesised language is a superset of the target language, is the only one that cannot be corrected from positive evidence. Without negative evidence, there is a lack of feedback, so the child can never be sure that what they hypothesise is actually the adult language. However, while imitation is not a way to learn categories, it has been demonstrated that children are able to successfully learn the adult language even in the absence of negative evidence. I wonder if this (i.e., the Poverty of the Stimulus) is a strong enough argument for the innateness of UG.
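
      A tiny toy sketch in Python of that logical point (my own illustration, not from Pinker; the "languages" are just made-up sets of strings): positive evidence can only ever add sentences, so a superset hypothesis is never contradicted.

        target = {"we eat", "they eat"}                    # the adult ("target") language, as a toy set of sentences
        hypothesis = target | {"we eats", "they eats"}     # case (d): the child's hypothesis is a superset

        def update_on_positive(hyp, sentence):
            # Positive evidence can only add sentences the child may have missed;
            # it never licenses removing an overgeneralized one.
            return hyp | {sentence}

        for s in sorted(target):           # the child only ever hears grammatical (positive) input
            hypothesis = update_on_positive(hypothesis, s)

        print(target <= hypothesis)        # True: the input never contradicts the superset
        print("we eats" in hypothesis)     # True: the error survives, absent negative evidence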

      Delete
    4. Jocelyn, please read the other replies about the UG/OG distinction.

      Delete
    5. Melika, I appreciate the distinction you are presenting: it helps me understand better the difference between UG and OG. It also made me think about a comparison we could make with music, another form of language theorized by Darwin as being a protolanguage for all types of languages. Indeed, the acquisition of absolute pitch seems to be innate (like the acquisition of UG), while the acquisition of relative pitch is learned, through biological evolutions of the ear, of the basilar membrane and of some rules implicitly learned by the listener. Therefore, maybe these innate/acquired forms of language can be generalized more broadly to all forms of language, even non-spoken languages (such as music).

      Delete
    6. Another difference with music that I would like to add: infant-directed speech is comparable to infant-directed music, and in both cases infants are not necessarily exposed to the same proportion of music/speech as adults are, which can, once again, affect the development of their UG.

      Delete
  3. “Children with Japanese genes do not find Japanese any easier than English…” (p.13).
    I found this point particularly interesting. While it emphasizes the obvious fact that a child’s environment will determine the language that they learn (as they learn OG), it also emphasizes the fact that the set of linguistic rules we are born with is truly universal (hence why we call it UG), and not modified by our lineage at all. I would have thought that throughout many generations we would start to be born with some linguistic rules more attuned to our ancestors’ language so that we have less OG to learn for our specific language (although we could no longer call the innate set of linguistic principles universal).

    ReplyDelete
    Replies
    1. In response to myself, one thing I can think of is that language is constantly changing, and in modern times, it is common for successive generations to learn a different first language than their parents, thus it isn’t as useful to be equipped to learn a specific language.

      Delete
    2. Jessica, OG (and pronunciation, and vocabulary) are equally easy for babies to learn, for any 1st-language. UG is the same for all languages and it is innate.

      To launch UG the only thing you need to learn is the "flavor" of the OG of your first language. This amounts to "setting" a few of the parameters on UG: "Is my OG Subject/Verb/Object (S/V/O) order or S/O/V order?" "Does my OG drop pronouns?", etc., for a finite number of parameters.

      It is a bit like this for "Universal Phonology" too: Every baby is born able to hear and produce all phoneme contrasts, such as R/L, but Japanese babies lose L, because Japanese does not use L, and Chinese babies lose R because Chinese does not use R.

      You can't learn a perfect UG for second and later languages, because it produces some conflicts between parameter settings. Something similar is true for phonemes. The exception seems to be when you learn several languages simultaneously as 1st-languages. But the critical period for 1st-language learning is soon over, and after that later languages are not quite perfect.

      Delete
    3. What do you mean by parameter settings for UG; if UG is an innate mechanism that does not distinguish between languages, shouldn’t it also help in acquiring a second language? Does it imply that UG also has a critical period, as it would only be useful for the language learned in early life? And if two languages were learned simultaneously, would we also benefit from UG for only one of them?

      Delete
    4. Natasha, I was trying to convey what parameter settings are with the analogy to "flavor": Vegan ice-cream is vegan ice-cream, and UG is UG, but you can have vanilla vegan ice-cream or chocolate vegan ice-cream, and you can have SVO UG or SOV UG. The UG is innate, the flavor is picked by learning. Chomsky uses the analogy with a radio: The structure of UG is the radio. The parameters are just like dials, tuned to different stations. All the different languages are just different parameter-settings on UG (different channels that UG can tune to). The analogies are imperfect.

      Second and third languages are also potential parameters on UG, but there is a critical period for the setting of those parameters. It has to happen very young, and only the first-language setting is perfect; the only kind of UG error children ever produce or hear is when UG has been set to, say, German as first language and then, a few years later, you learn Japanese. You can speak Japanese, but your parameter settings are still more German, so you make a few kinds of UG errors in Japanese that you would not make in German. There could be 2nd-language UG errors in a later language even if you have not used German for years, and have almost forgotten it.

      (Don't mix this up with OG errors, which also occur in later or weaker languages, but they are just a matter of learning. In English, the possessive is apostrophe-s, and in German it is just s. If a German speaker is just learning English, they might leave out the apostrophe, but then they learn to put it in.)

      About multiple simultaneous 1st languages, I think the answer is that UG adapts to all of them, so you can switch, but I'm not sure. This too has a critical period (and I'm not sure how many different languages can fit within the 1st language parameter "window.")
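
      If it helps, here is a toy Python sketch of the dial-setting analogy (my own illustration, not Chomsky's or Pinker's actual model; the "detector" heuristics and example sentences are invented). The point is only that the structure is fixed and innate, while learning just picks among a finite set of parameter values.

        UG_PARAMETERS = {"word_order": {"SVO", "SOV"}, "pro_drop": {True, False}}   # the innate "dials" (for illustration)

        def set_parameters(sample_sentences):
            # Hypothetical, wildly oversimplified parameter-setting from OG input:
            # detect the basic word order, and whether subjects can be dropped.
            return {
                "word_order": "SVO" if any("verb object" in s for s in sample_sentences) else "SOV",
                "pro_drop": any(not s.startswith("subject") for s in sample_sentences),
            }

        english_like = ["subject verb object", "subject verb object"]
        japanese_like = ["subject object verb", "object verb"]

        print(set_parameters(english_like))    # {'word_order': 'SVO', 'pro_drop': False}
        print(set_parameters(japanese_like))   # {'word_order': 'SOV', 'pro_drop': True}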

      Delete
    5. Hi Natasha! Learning the OG of a second language is done by using positive and negative examples in our environment. Its rules are known, which allows us to provide explanations when errors are made. However, UG doesn't have clear rules like OG, due to the POS (as we spoke about last week), and is therefore unlearnable. UG errors can't be made by those who are using their first language, but these errors are more likely to be made by those speaking their second language.

      The UG settings set by your first language don't change with the corrective feedback you receive when learning a second language. UG helps you learn any first language during the critical period, but after that, people are more likely to make UG errors in their second language.

      Delete
    6. Hi Natasha! From my understanding, I think that Professor Harnad is referring to the parameter-setting that takes place as children learn languages (discussed in 9.3 of the reading), where the language input a child gets from their surroundings (such as their parents) influences the parameters (or rules) they use in order to learn what is, and what is not, a proper use of language. Through this process the child learns what is grammatical, and deduces what is not, during a period of rapid language development. I think that UG has a critical period in that the parameters a child learns during their early life guide how they learn and understand language, and later attempts to learn another language will be hindered by these earlier parameters. I also think that if a child is raised bilingually their UG would benefit their language acquisition for both languages. In previous courses I've learned that children that are raised bilingual from birth acquire the two languages simultaneously, and in parallel. I think that in a similar way the parameters set by UG would not be in conflict if they are learned at the same time in early life.

      Delete
    7. Thank you for your replies. If I understood correctly: there is a broad structure to UG, with different possible rules. Different languages are tuned to different rules of UG, which can be referred to as the language's parameters. These parameters are different from OG because, even though they differ between languages, they are still not learned by the child but are innate. Is that an over-simplification?

      Delete
    8. Miriam, it's not that UG doesn't have rules! It's that, unlike for OG, children never violate them -- so they must already know them.

      Shona, all language learning is just OG learning, in the form of parameter-settings on UG (like SVO vs. SOV); UG itself is not learned but innate.

      Languages differ in their learned UG parameter-settings, and in second and later languages there can be a little uncertainty about some of the later-language parameter-settings (just as there can be some errors in later-language OG rules). But with more use, these errors become minimal.

      (A similar sort of thing happens with later-language pronunciation. And the pronunciation of a first language can even be lost almost completely if it is not used.) But speakers manage to "satisfice" with the later languages they use, even if they never speak them quite perfectly.

      Natasha, it's the parameter-settings -- "Is it SVO or SOV?" -- that are learned through the learning of OG. Chomsky says UG is like a radio, which you are born with, and all you learn, in learning OG, is which station to tune the dials (parameters) to: SVO or SOV.

      Similarly, Japanese and Chinese children, like all children, are born with the perceptual and vocal capacity to hear and produce R or L, but Japanese children lose the L and Chinese children lose the R after the critical period, because their 1st language does not use it. If they learn the other language later, they can, but their respective R's or L's are never quite perfect.

      Delete
  4. I found the experiment done by Michael Maratsos to be quite interesting, where he conducted an experiment in which he gave children passive sentences with missing embedded subjects and asked them who was leaving. The children had to demonstrate comprehension of simple passive sentences in a separate test to participate in the experiment. The results showed that the children interpreted the sentences in accordance with the Structural Principle, where the subject of the sentence controlled the missing subject of the embedded verb. This interpretation was in violation of the Minimal Distance Principle, which suggests that the closest noun should be the controller. The experiment demonstrated that young children have an understanding of the abstract structural relations in language, which I found very interesting.

    ReplyDelete
    Replies
    1. Marie-Elise, yes, this is a typical example of a UG effect (dependence on the tree structure of UG rather than the serial order of the words in the sentence).

      Delete
  5. Firstly, I really like the fact that linguists and psycholinguists consider "sentences that sound natural in colloquial speech, not necessarily those that would be deemed 'proper English' in formal written prose" as grammatical, as I think it's an important thing to keep in mind, since a big part of language use is colloquial in nature. I was very surprised to learn that parents do not provide feedback to their children through means of approval or disapproval when they speak incorrectly. I know that UG's poverty-of-the-stimulus argument accounts for the feedback that children may get, but I didn't think that the parents' feedback factor would be statistically insignificant.

    ReplyDelete
    Replies
    1. I also found this to be a surprising finding at first. Contrary to what I would have expected, it's been concluded that "parents do not understand their children's well-formed questions better than their badly-formed ones". It's true that adults tend to speak colloquially, so in some sense it's understandable that children would develop colloquial speech patterns in their early years. For example, we tend to use "gonna" more often than "going to", and I expect that many children would go on to use "gonna" as if it's a real word. They probably wouldn't learn that they're speaking incorrectly until they are corrected in school by a schoolteacher, or in some similar scenario, and by that point they'd be far beyond the early stages of language acquisition. A parent who uses formally incorrect grammar would be unlikely to correct their toddler-age child when they make the same formal errors.

      Delete
    2. To add to this, it makes sense to consider sentences that sound natural as grammatical even if they are not "proper English", because if we want to understand how humans developed the capacity for language, it is more important to understand how we learn and use language in our day to day lives than the formal structures we create to teach it to each other. This paper gives the example of British English using colloquial sentence structures like "he might do" that don't appear in American English, and aren't necessarily instructed directly to children (because it is not "proper English") but rather modelled for them in the speech around them.

      Delete
    3. Aashiha, some OG/UG confusion again: OG errors are all eventually corrected by four means (what are they?).

      The fact that parents don't bother to correct some OG errors doesn't change that. A language's OG just has to "satisfice" (Herbert Simon's term for aligning just enough) so that people can communicate. No need for a stern schoolmarm enforcing slavish conformity.

      If OG divergences are shared widely enough, you have a dialect, eventually even a different language if the dialects become mutually unintelligible.

      But UG does not change; it has to subserve (or enable) all possible languages, and it does not get -- or need -- any correction at all, because it is innate, not learned or learnable, and never violated (in a 1st-language (L1)).

      Adam, you're right. But the fact that OG is learnable and learned is undisputed, and the speed and rigor of the feedback is simply irrelevant; it just has to satisfice. The pseudoproblem only arises if the OG/UG distinction is not made and POS is misunderstood (what is POS?) and applied, mistakenly, to OG errors.

      Adrienne, correct, but again, slang is strictly an OG matter, with people satisficing, not a UG matter: Can you explain this difference now, and the underlying misunderstanding?

      Delete
    4. To my understanding, the four means by which OG errors are eventually corrected are induction, imitation, trial and error with corrections from others, and formal instruction. Within these four, I am not sure if I fully understand how induction is different from trial and error with corrections. I would think that trial and error would make use of induction as someone would attempt to follow a rule they saw used in a different context and then would be corrected by another speaker if they had made a mistake. Therefore, I am not sure what the distinction is.

      Delete
    5. Megan, induction is the same as trial/error/correction (i.e., supervised or reinforced learning). The four are imitation, unsupervised learning, supervised learning, and instruction. The only difference with unsupervised learning is that language is a mirror capacity (so learning can be both sensory [hearing] and motor [imitating]). But as long as UG is not trivial (and it isn't) it can't be learned without mistakes and correction -- and no UG mistakes are made, or corrected. None of this makes sense unless the distinction is made between UG and OG (in which there are mistakes and corrections).
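
      A minimal code sketch of that last point (my illustration, not from the reading; the example strings are invented): supervised learning can only adjust a rule when an error is produced and corrected, so a constraint that is never violated yields no learning signal at all.

        def corrections(utterances, is_error):
            # Supervised (trial-and-error) learning can only update when an error occurs and gets corrected.
            return sum(1 for u in utterances if is_error(u))

        child_output = ["broke", "breaked", "broke"]              # OG errors ("breaked") do get produced
        is_og_error = lambda u: u == "breaked"
        is_ug_error = lambda u: u == "some-UG-violating-string"   # hypothetical: children never actually produce these

        print(corrections(child_output, is_og_error))   # 1 -> an error signal exists, so this is learnable by correction
        print(corrections(child_output, is_ug_error))   # 0 -> no error signal, so UG could not have been learned this way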

      Delete
    6. Slang is an OG matter because it is related to the superficial rules of grammar in these dialects that have changed over time, not the more fundamental aspects of grammar which we inherently know and do not make mistakes in. ChatGPT misunderstands these two as well: I asked it to give me a sentence with a UG error, and it came up with "He don't like to eat vegetables", which is an OG error (and is understood perfectly well as slang).

      Delete
    7. Adrienne, good point. Slang becomes slang if so many people make an OG error that it becomes widespread enough that some people think it's correct. And eventually errors can become accepted as correct in the standard language itself, either as an idiom or a change in grammar. (Plenty of examples in English, such as "It's me" (why was that once wrong?) and maybe one day even "between you and I" (shame on you if you are an English L1-speaker and you think that's already correct!). French example: « viens-t'en » already accepted in Québec, but not in France...)

      Delete
  6. From what I’ve understood, Universal Grammar (UG) is the basic understanding of language structures that is inborn and automatically picked up during development which allows us to acquire an Ordinary Grammar (OG) as our first language. If UG is true, innate, and automatic, wouldn’t this mean that we cannot, in fact, reverse engineer cognition (or a T3 thinking robot) because we cannot place this innate processing module into a robot since we don’t know HOW it works? If cognitive science is 1) learning and 2) language, then the T3 robot would need to be grounded in sensorimotor experience AND ALSO have the UG module.

    ReplyDelete
    Replies
    1. Kristi, why would you think that a T3-robot (or an organism) can only have learning and learned components and capacities, rather than inborn ones, built in by genetic change through evolution (including the capacity to learn)? And why could the products of evolution not be reverse-engineered? UG linguists are busy reverse-engineering the rules and structures of UG.

      Delete
    2. I believe that it is possible to reverse engineer UG such that it can be given to a T3. We do not understand the mechanisms of UG or how it came to be, but that does not mean they do not exist, and cannot be replicated. However, prior to this week I assumed T3 would be given the ability to use language, but now I am considering whether we need to instead give it the tools to learn language by itself, using the four learning methods. This would require exposure to speaking, motherese, prosody and more. It may also take a couple of years to acquire OG(s), the same as for a human infant. I wonder how similar of an experience the T3 needs to have to be similar enough to a person to pass the TT. Regardless, the largest obstacle is solving the means of acquisition of UG.

      Delete
    3. Nicole, Turing's criterion is clear: Indistinguishable cognitive performance capacity, including learning capacity. Eventually that would have to include the child's learning capacity (but embryological growth and development is probably mostly vegetative...)

      Delete
  7. One section I found to be particularly interesting was section 6.3 on “motherese”. Motherese describes how in some cultures and situations, parents or caregivers will speak to children in a very specific style. As described in the paper, it is often slower, more exaggerated, higher pitch, and reflects what is happening in the present moment. Pinker points out that this specific way of child-directed talking does not however allow children to reach language milestones any faster than children not exposed to this kind of speech. Thus, learning of a language is not attributed to any specific grammar properties that are different in motherese compared to adult-adult speech. While motherese might have some benefits in other areas, it is interesting to note that it does not account for how children are learning language.

    ReplyDelete
    Replies
    1. Hi Jenny, I also find Motherese fascinating, as there not only seems to be an innate capacity to develop language, but an innate reflex to speak in this way to infants (or anything that is 'cute', as we see this with animals as well). Motherese has a sort of song/singing-like quality, which is also interesting to me, as learning and listening to music has been shown to aid cognitive abilities in children (and adults), and is very effective at getting and maintaining infant attention. However, I did also note that the article said it does not help infants reach their language milestones earlier, suggesting that it developed for other adaptive reasons (perhaps social bonding between the mother and child).

      Delete
    2. Hi Jenny, that was also the most interesting bit for me, and I had no idea that speaking slower didn’t help a kid learn a language. But surely it helps capture their attention and maintain their interest. Don’t we learn faster in a more interactive and fun environment? I’m guessing it also has to do with building an emotional connection, and maybe helps reassure the kid?

      Delete
    3. Jenny, you are forgetting the OG/UG distinction (partly Pinker's fault). If motherese were relevant to anything, it would be to OG not UG. So insofar as UG is concerned, motherese is a red herring.

      Josie & Marine, if we focus on the only thing that makes language learning a problem -- UG and POS -- motherese is not relevant. Why?

      Delete
    4. Motherese is irrelevant in regards to UG and POS because motherese is something that is learned and differs from language to language (grammatically - maybe it's the case that tone and pitch are similar). Because motherese is learned, it is an aspect of OG. UG on the other hand is innate and cannot be learned. This innateness is demonstrated through the POS. Because of the POS, a speaker could not possibly process enough stimuli and rules to perfectly construct the rules of UG purely by learning.
      Reverse engineering a learned OG capacity like motherese is a simpler task than reverse engineering UG, which is innate and has an elusive evolutionary path; thus, making UG and the POS the true barriers for reverse engineering language learning.

      Delete
  8. I found the section about context very interesting. This section underscores the need for context and interaction during language exposure. As the author says: “Children do not hear sentences in isolation, but in a context”. It makes sense that children cannot learn just by listening to the radio or by watching TV because there is no sort of feedback coming from these media. However, through live human interaction children can use the context in which they’re in to understand the meaning of what is being said. Yet, in this section, the authors make a distinction that was particularly interesting to me. In fact, they say that children who were exposed only to radio or TV broadcasts did not learn any language from this, but children who are exposed to human interactions do manage to learn language. The authors even say that these children can be “mind-readers” because they can guess what they speaker might have meant during their speech. This is interesting because it makes me wonder how is watching an interaction of two characters on TV so different from passively witnessing a conversation between two humans (or a human speaking)? I understand that to learn a language children need feedback to correct their mistakes (like in supervised learning). But then, if a child is just watching a human interaction between two adults without receiving feedback, how is this better than watching an interaction between two characters on TV?

    ReplyDelete
    Replies
    1. I think the reason that observing human interaction is better than hearing language through the radio is that when observing two adults speak, there is more information being afforded to the infant than in the case of hearing language through a radio, where the language is somewhat removed from the "here-and-now". It's about grounding. How could you ground words you hear over the radio without having sensorimotor interactions with their referents? When observing language in person, the language tends to refer to the "here-and-now", and thus the infant who hears it can directly ground it. Otherwise it'd be akin to trying to learn language in the same way ChatGPT does, just by counting the co-occurrences of words you hear through the radio.

      Delete
    2. Hi Valentina! I think Stephen addressed your question very well in his response. I can’t speak to infant language acquisition, but I just wanted to add from my own experience learning a second language both as a child and as an adult. I think what Pinker said about context and what Stephen said about grounding are the main important factors in learning language. I recently spent a couple months in Italy, and I tried to passively learn Italian in a few different ways, like watching TV, listening to podcasts, and other similar methods. It wasn’t until I started working as an au pair with a local family and got to hear actual adult conversations that I was able to pick up the content of words, mostly because they would relate to things in my surroundings, or events that I was present for. For example, they would discuss what we were eating for dinner, what I did with the kids that day, and other such topics. I think this is what Pinker was referring to in section 6.5 when he discusses how children hear, and subsequently learn, sentences in context rather than isolation. By being present for the discussed events, I was able to connect the words to their meanings, like at dinner I was able to understand what words meant what food. Importantly, my host family wasn't always monitoring and providing feedback for my understanding, or quizzing me to see if I could correctly name each item, it was just something I learned over time. As Stephen explained, it came down to the sensorimotor interactions. For instance, when I watched a TV show and they talked about a “medusa” (jellyfish), I didn’t have any real context for what that might be, but when I was at the beach and heard kids yelling that there was a “medusa” in the water, I was able to start to build an understanding of what the word meant (and I fully understood once I got stung). Obviously my experience with second language learning is very different from babies’ first exposure to words (especially since I already have an idea of what a jellyfish is) but at least for me it was helpful to connect this adult experience to Pinker’s description of context’s role in language learning.

      Delete
    3. Valentina, about language learning, think of (1) the difference between unsupervised and supervised learning (what is it?), (2) the active and passive kitten and (3) trying to learn edible/inedible from watching which mushrooms the natives do or don't eat (if there had been natives).

      To learn by trial and error, you need to try, and make errors, and get corrected. (How is POS relevant here?) If it's all passive, all you get is correlations.

      (Context is fine; look at how much ChatGPT can get from it, in its Big Gulp of words, in context! But to learn what to DO in the world -- including what to call what kind (category) of thing in the world, we need to DO and not just to gawk, passively. We need to try to DO, actively and sometimes succeed and sometimes fail, so that our neural nets can learn to correlate the correct features with DOing the correct thing. With unsupervised learning they can only correlate features with features, not features with DOing.)

      Stephen, good points. But whether on radio, video, or live, it's less about "here-and-now" than "DO-or-die." (What is categorization?)

      Lillian, to connect words to their referents in a 1st language you need direct, active trial-and-error interaction, not just passive observation of pairings between words and referents (how do you know what the referent is unless you have grounded the category?). In a 2nd language it's possible (since you already have the category and its name in your 1st language and all you need is its name in your 2nd language), but even there, you must try to speak and not just to listen; otherwise all you get is a glossary rather than a language.

      Delete
  9. This reading gives us an in-depth exposé of language acquisition and what it reveals about the human mind. I really like the part about prosody. In fact, the words of a mom speaking to her young child have a melody to them. It is also true that the same tone comes up when overhearing strangers speaking a foreign language to babies, for example. A good experiment would be to ask participants to listen to tapes of people speaking and decide whether the person was talking to an adult or a child (not text-based, obviously). This text also made me think more in-depth about the critical period during which children are able to learn any language, followed by a loss of this ability once they only “focus” on the ones they were taught. As we studied in Affective Neuroscience, this has to do with neuroplasticity. It also seems evolutionarily plausible that once a language has been targeted, it is no longer in our best interest to remain able to learn every other one: that takes too much brain power (and is not very useful, as pretty much every culture communicates in only one language).

    ReplyDelete
    Replies
    1. Garance, about motherese and critical periods, see the other replies. In the current environment, knowing more languages may mean money in the bank, but in the ancestral environment, when language first began, the main adaptive challenge was that everyone should be able to learn any language.

      Delete
  10. I was particularly interested by the discussion on abstract words in section 9.1 of the paper. Pinker writes, “Abstract words, such as nouns that do not refer to objects and people, -- can be learned by paying attention to where they sit inside a sentence.” Specifically, I thought about this in the context of our discussions on word grounding in class. If Pinker is right and that we learn this category of words only by where they are situated in a sentence, could we say that abstract words are not grounded at all (since the location of a word within a sentence is not sensorimotor?)
    I also wondered if this was why Professor Harnad often warns us against using weasel words - they are abstract words that, according to Pinker, are not grounded: our understanding of their meanings does not come from sensorimotor interactions with the words’ referents, but only from the words they are used alongside.

    ReplyDelete
    Replies
    1. Hi Ohrie,

      I don’t think saying that abstract words are not grounded at all is correct. As we’ve seen in class, grounding can either be direct through sensorimotor learning or more indirect via word of mouth/communication by building on what we already know (is already grounded by sensorimotor experience or not). So I think that in this case, even though abstract words cannot be grounded through direct sensorimotor experience, they can still be grounded indirectly and it doesn't make them less grounded. Maybe I misunderstood something so let me know what you think!

      Delete
    2. Ohrie, the weasel-word here is "abstract". What does it mean? "This-Apple-Charlie-Now" is concrete. (Think of Funes, who says that in the next instant there's something completely different.) "Apple" abstracts some of the features of apples (red-or-green, round) and calls whatever has them "apple". "Fruit" abstracts other features. "Peekaboo Unicorn" (what's that?) abstracts others. "Tachyonic" abstracts other features (which?)

      So it's not "the place in the sentence" that grounds the referent of "abstract" words but the fact that they're in a proposition that describes their features (grounded directly or indirectly). To abstract is to select some features and ignore the rest. Try that with a proposition that defines "democracy."
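
      Kid-sib version in code (my own toy illustration; the features are invented): to abstract is just to select some features of the concrete instance and ignore the rest, and different category names select different subsets.

        this_apple_charlie_now = {"red": True, "round": True, "edible": True, "grows_on_trees": True}

        def abstract(instance, selected_features):
            # Keep only the selected features; ignore everything else about the concrete particular.
            return {f: instance[f] for f in selected_features}

        print(abstract(this_apple_charlie_now, ["red", "round"]))              # the "apple"-relevant features
        print(abstract(this_apple_charlie_now, ["edible", "grows_on_trees"]))  # the "fruit"-relevant features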

      Lili, you're exactly right.

      Delete
  11. Many aspects concerning the acquisition of language in children struck me in this paper. First of all, the complexity of language increases with age, but it’s not simply a matter of becoming capable of understanding what adults are saying: speech complexity seems to evolve with brain maturation, as synaptic pruning and myelination take place. Secondly, as we spend more and more time practicing our main language, our ability to learn another one decreases. As explained in this paper, this is due to the language circuitry’s plasticity being stronger at a young age. But what is even more impressive is the ability of children to recover from brain damage. And finally, the mechanisms involved in the production of sounds, as well as the distinction between phonemes, are the first language features to arise. Since the production and understanding of words do not appear before these, we can conclude that language, or the ability to develop a form of verbal communication, is pre-wired in the brain.

    ReplyDelete
    Replies
    1. Adrien, there are those evolved pre-wirings, for learning and for speech, but there's one you left out, and it's the most important and controversial: What is it?

      Delete
    2. I was also really interested in the ability of children to recover from brain damage! Section 2.3 about the maturation of the language system discussed how children with damage to their left hemisphere could still learn and recover language as opposed to adults with left hemisphere damage, which almost always results in permanent aphasia. In all of my other classes that discussed the effects of left hemisphere damage on language, it was never discussed that there is such a large discrepancy in recovery rates depending on age, but it makes sense due to plasticity!

      Delete
  12. "Language Acquisition” explores how kids naturally learn to speak and understand language in a way no other animals can. It seems like every child can do this easily, and so the question of where this skill comes from arises. Is there something in our brains from birth that helps us learn language, like the scientist Chomsky thinks? Or do we just get smart enough to handle language as we grow and learn from what's around us? The article leans a bit towards the first idea, suggesting that both our genes and how we're raised work together to help us become good at using language.

    Now, if language is indeed a separate faculty, if it is not just something that comes from intelligence, would it be the foundation that helps us develop awareness and deeper thinking? Or is our ability to be aware and think complex thoughts something that comes first, with language just making it richer? If language works on its own, maybe it's a key part of how we went from just surviving to creating things like art and culture.


    ReplyDelete
    Replies
    1. Amélie, as P&B showed, many components of language are learned, others are evolved and inborn, but only one of them is controversial (which and why?).

      Chomsky thinks language evolved as a way to communicate thought in words.

      Delete
    2. If I am correct, I believe UG is the controversial component of language because it posits that certain structural elements of language are innate and hardwired into the human brain, which challenges the idea that language acquisition is solely a product of environmental interaction and learning. The debate centers on whether this specialized, innate language faculty exists exclusively in humans, separate from general intelligence. Critics argue that language can be explained through general learning mechanisms and social interactions, while proponents of UG, like Chomsky, contend that the rapid and uniform way children acquire complex linguistic structures across cultures suggests the presence of an inborn linguistic framework. This debate is interesting to me as it goes into deep questions about human language and cognition, highlighting the complexities of how we acquire and process language.

      Delete
  13. I found the concept of Motherese and its possible impact on language acquisition quite interesting. Motherese, also known as infant-directed speech, is a type of speech that parents and caregivers use when talking to young children. It is characterized by a slower pace, higher pitch, exaggerated intonation, and simplified grammar compared to adult-directed speech. Parents also tend to use more repetition and provide more feedback to children when using Motherese. Research has shown that Motherese has a positive impact on language acquisition in children. By using simplified language and emphasizing key words, parents make it easier for children to understand and learn new words and grammatical structures. The exaggerated intonation and repetition also help children identify important information in the speech stream.

    ReplyDelete
  14. I found the discussions in section 9.2 to be very interesting, in particular the experiments by Gropen and Naigles that illustrated the relationship between context, syntax, and the development of ordinary grammar from universal grammar principles. In particular, I understood the Naigles experiment in which children learned a new word's meaning based on if it was used in a transitive or intransitive context to show that there is a UG principle in which transitive verbs have an object while intransitives do not, resulting in the understanding of these sentences and the construction of OG. This ties into section 8.3, which I think ties into the idea of grounding using language after having a set number of categories already grounded in a sensorimotor manner. In this case, it's a set of certain rules that are grounded by selecting certain UG parameters, and then the rest of the rules of OG are codified from there.
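
    Here is a tiny sketch of the syntactic-bootstrapping logic in that experiment (my own toy reconstruction in Python, not Naigles's actual stimuli or procedure; the nonsense verb and the frame test are invented for illustration). The idea is just that the syntactic frame biases the guessed meaning of a novel verb.

      def guess_meaning(sentence, novel_verb):
          words = sentence.split()
          i = words.index(novel_verb)
          has_object = i < len(words) - 1      # something follows the verb -> transitive frame
          return "causative (X is doing something to Y)" if has_object else "non-causative (X is acting on its own)"

      print(guess_meaning("the duck is gorping the bunny", "gorping"))        # transitive frame -> causative guess
      print(guess_meaning("the duck and the bunny are gorping", "gorping"))   # intransitive frame -> non-causative guess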

    ReplyDelete
    Replies
    1. Omar, I think the transitive/intransitive distinction itself is in UG, but the way any language uses it is OG.

      Delete
  15. I found the idea that children of deaf parents do not acquire spoken language through radio or television very interesting. Pinker points out that dealing with speakers in the child's immediate surroundings allows them to better “read their minds”, and understand what is being talked about. It’s fascinating that having a strong understanding of other minds appears to be necessary for language acquisition. It is almost as though the child’s ability to reference the mental states of others as they generate language is critical to the children’s ability to ground words in meaning. This is a stretch, but could poorer mentalization capacities in autistic children be linked to language deficits in autism? Maybe the same reason autistic children struggle to acquire language is the same reason why children can’t acquire language through television - an inability to adequately mentalize the speaker…

    ReplyDelete
    Replies
    1. Dani, see the replies about passive learning vs active learning (DOing). Pinker's remarks about mind-reading are conjectures. Not clear how this may be connected with autism.

      Delete
    2. I think what Daniel is getting at, which I am partial to, is that the grounding of others is incredibly important to the child's own grounding, and that mentalizing others is one of the important doings in acquiring language: not just "what do I do with this thing" but also "what do they do with this thing". In autism, a deficit in understanding what others' internal states are might weaken some of this "grounding by other".
      Although yes, your replies on passive vs active (DOing) learning are also very useful, and I agree they are likely the *primary* means of grounding.

      Delete
  16. Pronunciation, vocabulary, and Ordinary Grammar are learnable. These are best learned at a young age, and they’re responsible for variation in languages and dialects. Universal Grammar (UG) is not learnable. But couldn’t the ‘genes’ for UG be the same genes as those for motivation to learn and for pattern recognition? The patterns, like hierarchical structure and standard word order (subject-verb-object in English), are so common and so unlikely to be heard in error that children seem to catch on very quickly and naturally, as if it were innate. It seems like common sense to me, a consequence of language, environment, learning strategies and success criteria, not something special like UG. Am I missing something? Perhaps it’s difficult for me to understand without having studied linguistics.

    ReplyDelete
    Replies
    1. Well, this paper outlines a few reasons to posit an innate Universal Grammar as opposed to just saying that we learn the Ordinary Grammar of a language by recognizing patterns in the sentences we hear. First, Pinker argues that you can't analyze the syntax of a sentence using word order alone - you need some concept of phrases, because it is phrases that fill grammatical positions (for example the subject of a sentence is not necessarily a single word, but all the words that form a noun phrase). These phrase structures are purely syntactic phenomena and can't be inferred from context the way categories like "noun" or "verb" can, so Pinker says that we must have an innate sense of them. Also, learning grammatical rules through pattern recognition (unsupervised learning) can lead you to overgeneralize. The example Pinker mentions is a child who picks up that you can add -ed to a verb to form its past tense and applies this rule to "break" to make "breaked." Once the kid hears the irregular past tense "broke" they might fold it into their grammar as well, but there would be no reason to eliminate "breaked" unless someone told them it was ungrammatical, which Pinker argues does not happen reliably enough. The only way to explain, then, why children do end up dropping "breaked" is that they have some innate rule that tells them that irregular forms of verbs should supplant their corresponding regular forms.
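
      A quick toy sketch of that "breaked"/"broke" point (my own illustration, not Pinker's model): a productive "-ed" rule overgeneralizes, and a blocking principle lets a memorized irregular form supplant the regular one using only positive evidence.

        irregular_past = {}                                   # memorized from positive evidence only

        def past_tense(verb):
            # Blocking: a stored irregular form pre-empts the productive "-ed" rule.
            return irregular_past.get(verb, verb + "ed")

        print(past_tense("break"))          # "breaked" -- overgeneralization, before "broke" has been stored
        irregular_past["break"] = "broke"   # the child hears "broke" (positive evidence) and stores it
        print(past_tense("break"))          # "broke" -- the regular form is supplanted, no correction needed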

      Delete
    2. Aya, good reply to Csenge, except "breaked" is OG and can be learned and unlearned. This is Pinker conflating OG and UG.

      Delete
    3. (all the asterisks around "would not" came from me copy and pasting italics)

      Delete
    4. William, exactly. There's no way neuroscience or experimental psychology could have reverse-engineered UG and figured out what it is and what it's for in the way they might find the neural correlates and functional mechanisms of motor-coordination or even visual feature-detectors. In this sense, generative linguistics is reverse-engineering one of the core capacities of language, hence cognition; and it's not as "easy" as reverse-engineering the capacity to learn categories or OG.

      And this is one of the reasons why Chomsky's 1959 critique of Skinner's "Verbal Behavior" is considered to be one of the formative steps in the birth of CogSci.

      Delete
  17. Fiona, words come out in linear order when spoken or written, but in UG they have a tree structure, so words close to or connected to others may not be close and connected in a spoken sentence. (To understand this fully you need to take a linguistics course.)

    ReplyDelete
  18. My understanding is that a lot of the problems with language acquisition boil down to a few things we have talked about in class, the difference between supervised and unsupervised learning being the largest. Children must have some UG because their language learning is not supervised in the way we might expect and cannot be simple unsupervised learning. Most importantly, some mistakes are simply not made at all (UG mistakes), so they cannot be corrected in any direct or indirect way. Unsupervised learning is not strong enough for structures as complex as language because of over-generalization problems. This means that there must be some structures that have evolved through evolution that lead children to make certain assumptions they could not get just from context. These assumptions are then coded and modified by data from a first language. The linguistics of how we can find out which mistakes are never made is particularly interesting to me.

    ReplyDelete
    Replies
    1. Marie, correct. And POS is much more than "over-generalization" (which is a relic of the debates about Gold's theorem about the unlearnability of "context-free grammars"): it's not mistaking toadstools for mushrooms because you are born knowing the difference.

      Delete
    2. Hi Marie,
      Your comment helped me contextualize what Pinker’s saying here about UG with the learning algorithms we use to train LLMs, and more broadly, the rules we use for supervised learning across AI systems. I think the fact that Pinker doesn’t clearly separate UG and OG is especially interesting in this context – the idea of OG and UG not being completely separate, while wrong for humans, who demonstrate an innate grammar from birth, seems plausible for LLMs, which are driven only by learning, without any innate understanding of the structure of language. Trying to develop a form of UG that could be implemented in LLMs would be an interesting avenue for understanding more about how it develops, and the ways that an artificial UG would differ from a natural one.

      Delete
  19. Here is a summary of my understanding of Pinker’s article about Language Acquisition. Did I miss something important?

    Problem: How can children acquire language, despite the constraints of the Poverty of the Stimulus (POS)?
    - Caregivers typically do not provide explicit negative evidence, leaving children to navigate language acquisition without corrective feedback.
    - The widely-used "motherese" has limited impact on language learning.
    - Prosody, while influenced by syntactic structure, is also shaped by a myriad of factors such as emotional state, intent, word frequency, and syllabic structures. This complexity renders prosody alone insufficient for deducing syntactic structure.
    - Extracting correlations between words and their positions is hindered by the challenge of distinguishing spoken words and the vast number of potential intercorrelations without prior constraints.

    Solutions:
    - There must be innate principles that constrain the grammar of human language, encapsulated in Universal Grammar. Differences between languages represent different settings of the parameters contained in UG.
    - Context plays a pivotal role, enabling children to infer the syntactic category of unfamiliar words (and thus their meaning) based on their knowledge of an initial set of rules and words. Conversely, the meaning of an unfamiliar word can also be inferred from its syntactic properties.
    - Finally, a combination of cues provided by prosody, correlation, and some parental feedback allows children to learn new rules and words from previous syntactic and semantic knowledge.

    ReplyDelete
    Replies
    1. Joann, that's about it. But you leave out Pinker's failure to clearly distinguish UG from OG and the fact that it all boils down to POS (no UG errors produced, heard, or corrected [except by MIT generative linguists!]).

      I think Pinker also mixed up "context-dependence", which is really the structure dependence of UG, not reflected in the serial ordering of speech or writing. (I'd also say it's only now, in the era of LLMs and GPT that we are getting a sense of what "context" really means when it comes to natural language syntax.)

      Delete
    2. Hi Joann,
      Your summary makes a lot of sense and helped me better grasp what Pinker is saying here. I think one of the especially interesting thoughts your comment (and Prof Harnad's reply) raised for me is the idea of how prosody may guide human learning but is lacking in the learning that LLMs like GPT are doing. I wonder how this absence, as a missing aspect of language learning for these models, may shape the way these programs interact with language. I would imagine that it would render the language learned a bit closer to following UG (rather than OG rules) in terms of lacking culture-specific prosody and intonation, but may also limit the model's comprehension of spoken language.

      Delete
    3. Madeleine, I'm not competent to say, but I think prosody is mostly learned, hence OG, but perhaps some UG structural rules may also influence prosody, and those rules would not be learned, but "tacit" or "implicit", the way all UG rules are. (At some point, searching for what word to call it, generative linguists preferred to call them structural "constraints" rather than "rules," although it comes to much the same thing...)

      Delete
    4. Hi Madeleine,

      I agree with prof Harnad about prosody falling under OG, I want to expand as to why I think so! First, it is important to acknowledge that prosody involves the patterns and intonation of speech that are specific to a *particular language*. It encompasses elements such as rhythm, stress, pitch, and tone, which are used to convey meaning, emotion, and emphasis in speech. I wouldn’t say they are innate (UG) because these aspects of prosody vary across different languages and play a crucial role in communication and understanding within a specific linguistic community. Therefore, prosody would better be classified as part of the grammatical rules and structures that are specific to a particular language.

      Delete
    5. Since UG can be seen as the hardware for acquiring language and OG as the software that facilitates the acquisition of particular languages, children can still acquire language even under the poverty of the stimulus, since the operation of the hardware is not affected. However, as Pinker points out, the language acquired by children in that case cannot be a true language, i.e. a language that follows a shared grammar, because they are more likely to invent their own complex grammar in order to express themselves and achieve a certain level of understanding.

      Delete
  20. One idea that I think summarizes well the role of cognitive science in language acquisition is that we are not trying to decide whether it is the environment or an already existing system dedicated to language acquisition that is responsible; rather, we want to figure out how the two interact. An interesting experiment that highlights this question is the one where children exposed to only a limited set of words create their own complex grammatical language. But they needed to have some sort of input, even if limited. This experiment favors the Universal Grammar hypothesis. Languages vary in their grammar rules, but they all have grammar. Accepting UG doesn’t necessarily equate to saying that the environment has no impact on language acquisition. It might require both a pre-existing internal system and some type of interaction with the environment to be functional and lead to language production. In the example of children creating their own complex system, I think it’s worth asking about the role of negative input in language acquisition or creation.

    ReplyDelete
    Replies
    1. Mitia, some of that is right, but it is a bit vague and not really explanatory; the punchline should be: "If you want to know what has to be innate in language, and why, study some generative linguistics."

      Kid-sib did not understand your last point about the role of negative input in the collective invention of Nicaraguan sign language by deaf children... (All children today are born with UG in their genes and brain, but language still needs a "triggering" stimulus, and it seems practical home-signing with their hearing family is not enough for deaf children, whereas communicating with one another in a signing community is.)

      Delete
  21. If UG is innate and the parameters need to be set early in childhood for native fluency, I find the part about maturation of language circuits very interesting. Pinker explains that babies are born with almost all of the neurons that they will ever have, and these neurons have already migrated to their appropriate cortical locations. The brain maturation that occurs after birth is synapse strengthening and formation; UG is produced by the neurons, and the UG parameters are fine-tuned by the synapses. This tells me that UG is coded in the 1.2% difference between human DNA and chimpanzee DNA. Obviously we can’t use knockout mutations on humans to find the exact sequence for UG, but maybe with advanced machine learning we can find it. It would be interesting to see how much human behaviour would change without UG.

    ReplyDelete
    Replies
    1. Kaitlin, it's not just gene knockout that we can't do: we also can't rear children without communicating with them with a vocal or gestural language (except when they are deaf and we treat them as if they were cognitively handicapped). But neither of these "experiments" would help us reverse-engineer UG. Only generative linguistics is doing that.

      (Review Week 1 on reverse engineering and Week 4 on "where/when" vs "how" in the brain -- or, for that matter, the genome -- when it comes to reverse-engineering cognitive capacities.)

      Delete
    2. Genes won't tell us how the brain is using UG but I still think it is important to know the genetic component if possible because generative linguistics won't answer the whole question.

      For the part about rearing children, isn't that referring to OG? Or are mirror neurons part of UG?

      Why can't we use machine learning to reverse engineer cognitive functions? I thought that TTs use machine learning and that is what we want to use to reverse-engineer cognition?

      Delete
    3. Kaitlin, you're right that generative linguistics so far tells us only a small part of what we want to know about language and language capacity, including its clinical and developmental aspects.

      Language input is described as "triggering" language and setting UG parameters through OG learning; but language is a mirror capacity, so what is heard "mirrors" what is said and vice versa. Not only would it be unethical, but I don't even know whether it would be possible, to expose and interact with a child using input that is language-like but not language-like enough to trigger some default form of UG parameter-settings.

      Machine-learning models (including deep-learning neural nets) are being used to try to reverse engineer cognitive functions, including language and language-learning (and category-learning). Generative linguists are also trying to reverse-engineer UG (and succeeding); machine-learning has been of some help (via optimality theory (especially in generative phonology) and Charles Yang's statistical learning theory, which deepens the analysis of OG learning).

      Delete
    4. SH:
      Does Charles Yang make a distinction between UG principles (unlearned and unlearnable because of POS, and unchanging) and OG (Ordinary Grammar: learned, learnable, changing, and not bound by POS)?

      ChatGPT_4+: (Big Gulp updated to April 2023)
      Yes, Charles Yang does make a distinction that aligns with what you've described. His distinction between Universal Grammar (UG) and what could be termed "Ordinary Grammar" (OG) is a central aspect of Yang's challenge to the strong version of the innateness hypothesis.

      Universal Grammar (UG): In Yang's framework, UG represents a set of core principles or mechanisms that are innate and unlearnable. These principles are thought to be universal across all human languages and are not subject to change. They serve as the foundational cognitive structures that make language acquisition possible. The role of UG is crucial in addressing the "poverty of the stimulus" (POS) argument, which posits that the linguistic input children receive is insufficient to acquire language solely through learning. UG is seen as providing the necessary innate structure to overcome this deficit.

      Ordinary Grammar (OG): The OG concept refers to the specific grammatical rules and structures of individual languages that are learned through exposure and experience. Ordinary Grammar is dynamic, subject to change, and varies from one language to another. It encompasses the aspects of language that are shaped by cultural, historical, and environmental factors. In Yang's view, the acquisition of OG is heavily influenced by statistical learning and the analysis of linguistic input.

      UG provides the foundational cognitive capabilities for language acquisition, while OG represents the actual linguistic knowledge that individuals acquire and use.

      Yang's distinction is significant because it suggests a more nuanced view of language acquisition and the innate-endowed language faculty. While acknowledging the role of innate cognitive structures (UG), his work emphasizes the importance of learning and environmental input in shaping the specific grammatical knowledge (OG) that individuals acquire. This perspective maintains the necessity of some form of UG to address the POS problem but also highlights the adaptability and variability of language as a learned system.
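
      To make the statistical-learning side of this concrete, here is a toy sketch (my own illustration, loosely in the spirit of a reward-penalty "variational" learner, not Yang's actual model): the learner keeps a probability for one binary word-order parameter and nudges it on every sentence it hears, so the parameter gets set from positive evidence alone. The toy "parser", the learning rate, and the grammar labels are all invented for the example.

```python
import random

# Toy sketch (not Yang's actual model): the learner keeps a probability
# p_svo of using an SVO-order grammar vs. an SOV-order one, and nudges it
# with a linear reward-penalty update on each input sentence.

GAMMA = 0.02   # learning rate -- an assumed value, just for illustration
p_svo = 0.5    # initial probability of the SVO parameter setting

def parses(sentence, grammar):
    """Pretend parser: a grammar 'parses' a sentence iff the orders match."""
    return sentence == grammar

def update(sentence, p):
    """Sample a grammar, reward it if it parses the sentence, else penalize it."""
    grammar = "SVO" if random.random() < p else "SOV"
    success = parses(sentence, grammar)
    if grammar == "SVO":
        return p + GAMMA * (1 - p) if success else p * (1 - GAMMA)
    else:
        return p * (1 - GAMMA) if success else p + GAMMA * (1 - p)

# Exposure to an SVO language: positive evidence alone drives p_svo toward 1.
for _ in range(2000):
    p_svo = update("SVO", p_svo)
print(round(p_svo, 3))   # close to 1.0
```

      The only point of the sketch is that a parameter can be "set" by accumulating positive evidence, without any explicit correction.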

      Delete
  22. I understand Learnability Theory as the idea that learning is a situation with four components, applied in the example given to language. While it makes sense on a surface reading, after thinking about it further I think stage 3 contains some conflicting ideas. While things outside the concept of universal grammar, such as sound patterns and expressions like idioms, can be learned using these “rules,” this conflicts with the idea of universal grammar, since UG is largely believed to be innate. The theory suggests that language is learned by trying out a hypothesis and seeing whether it matches the surrounding environment. Is this in tune with universal grammar, with the child just seeking confirmation, or is it suggesting that universal grammar does not exist and that a child must learn it from negative and positive feedback from their surroundings?

    ReplyDelete
    Replies
    1. Hi Delaney. I think that the point Pinker is trying to get at with Learnability Theory and UG is as follows.

      Language acquisition cannot be a matter of purely supervised learning, and this is because children don’t have access to negative instances of language or feedback for their language use in the way we might expect (Pinker gives some evidence for this). On the other hand, unsupervised learning cannot be the sole source of language acquisition either because it leads to overgeneralization, which can be detrimental to correctly learning a grammar and all of its intricacies and dependencies (Pinker highlights several examples for this). I think the argument Pinker is making, even if not explicitly mentioned or hinted at, is that language acquisition and learning is a matter of an interplay between UG and unsupervised learning, and this interplay is exemplified by Learnability Theory.

      To be more specific, Pinker proposes various mechanistic/algorithmic theories as to how OG can be learned through the interaction of UG and generalization from positive instances in the linguistic environment. In other words, UG guides language learning, which is necessarily unsupervised because we only have access to positive instances (where negative instances and feedback are requisites for supervised learning, as mentioned above) and we can only learn the rules of a grammar by abstracting them from those positive instances (in accordance with UG). By guiding unsupervised language learning using UG, we avoid the overgeneralization problem described above.

      To finally answer your question, this relationship is exactly what Step 3 of Learnability Theory attempts to capture. Step 3 says that the learning strategy, which tests out hypotheses on the positive instances of language in the child’s environment, is their “language acquisition device”. Well, a LAD is a cognitive mechanism that applies and leverages UG in order to learn OG, which is exactly the interplay/relationship I attempted to describe above between UG and unsupervised learning (where the LAD is the unsupervised learning mechanism guided by UG). Thus, succinctly, the candidate hypotheses generated in Step 3 of Learnability Theory are actually deeply informed by UG, and a process of unsupervised learning through the language acquisition device tests whether the hypotheses align with the positive instances of language from the environment.
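
      A minimal sketch of what I mean by Step 3 (my own toy illustration, not Pinker's algorithm): the innate constraint is just the stipulation that the only candidate grammars are the six basic word orders, and the "LAD" simply discards any candidate that is inconsistent with the positive instances it hears. Everything here (the hypothesis set, the toy sentences) is invented for the example.

```python
# Toy sketch: Step 3 as hypothesis testing over an innately constrained space.
# "UG" here is just the stipulation that the only candidate grammars are the
# six basic word orders; the learner discards candidates that clash with input.

UG_HYPOTHESES = {"SVO", "SOV", "VSO", "VOS", "OSV", "OVS"}   # the innate constraint

def order(sentence):
    """Map a toy (role-labelled) sentence to its word-order pattern."""
    roles = {"subject": "S", "verb": "V", "object": "O"}
    return "".join(roles[word] for word in sentence)

def learn(positive_instances, hypotheses=UG_HYPOTHESES):
    """Keep only the candidate grammars consistent with every heard sentence."""
    surviving = set(hypotheses)
    for sentence in positive_instances:
        surviving = {h for h in surviving if h == order(sentence)}
    return surviving

# The child only ever hears positive instances, e.g. subject-verb-object input:
heard = [("subject", "verb", "object"), ("subject", "verb", "object")]
print(learn(heard))   # {'SVO'} -- the only hypothesis consistent with the input
```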

      Prof. Harnad, please correct me wherever I am wrong! I’m sure this is riddled with errors.

      Delete
    2. Stevan, the only mistakes a language learner ever makes are OG rule-violations (which are supervised, i.e., corrected). The "rules" of UG are obeyed without errors. This is not unsupervised learning -- it is not learning at all. That's why it is inferred that UG is innate.

      [Some of the confusion about this may be because language is a mirror-capacity: it is not just seeing mushrooms and learning, by trial and error, which ones are edible. It is also "producing" mushrooms -- yet never producing an inedible one. So there are two kinds of inedibility: OG-inedibility (learned, under supervision by trial and error) and UG-inedibility (obeyed without error, hence not learned but innate).]

      Delete
    3. Yes, I think I understand that. Maybe my answer was confusing because I was using language and grammar interchangeably... all the learning I was talking about had to do with learning OG.

      Delete
  23. From Harnad's Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard, it seems that the hard problem of UG is not the poverty of the stimulus but rather how and why we have UG, considering the poverty of the stimulus and the fact that UG has no evolutionary advantage. In the final paragraph, a hypothesis posited by Chomsky is mentioned: perhaps UG is necessary for thought, or for cognition at all. This is an interesting theory, but I would ask exactly what type of relationship we are establishing here. I don't believe you could have UG without cognition, but does UG produce cognition? If your response is that when we say thought, we mean something smaller than cognition, just voiced speech in the head for instance, then I will ask: what is the evolutionary advantage of THAT?

    ReplyDelete
    Replies
    1. Emma, good questions, and I can't answer. But I think Chomsky's idea is that the reason UG errors make no sense is that language evolved to convey thoughts, and that UG violations don't convey thoughts.

      (Hard to think of thinking as "smaller" than "cognition," since presumably cognition IS the same thing as thinking. But this is still a very fuzzy area. Language "fits" cognition. Having a means of conveying any and every proposition is a kind of universality in itself. So if UG evolved as a vehicle for conveying thoughts from head to head, this universality, the "structure" of thought, might have imposed itself unto language, as a syntactic constraint on its form. But it feels as if I'm trying to do hermeneutics here, like interpreting scripture...)

      Delete
    2. Hi Emma, this was something I was also quite intrigued by, as it puzzles me how Chomsky's notion that language is what makes it possible for anyone to think at all would explain those with language deficits who are otherwise intellectually intact. It is hard to imagine what thinking would be if we didn't have language as a way to convey it, but it was discussed several times in the reading that language is separate from general intelligence, and there are well-documented cases of people with severe language deficits who are otherwise cognitively capable. How would Chomsky explain this? Can there be no thought if there is no language?

      Delete
    3. Paniz, there's no doubt that there can be cognition, and thought, without language: Nonhuman species think; and so do humans, even before they master language. But a human with a cognitive deficit so severe that they cannot learn a language at all is unlikely to be able to pass T3.

      For Chomsky, language is the means of expressing thought. I can't imagine what it would mean to "have" the capacity to think propositionally, but be unable to express the thoughts. Remember we are not talking about someone who is paralyzed, or cannot talk. Nor are we talking about deaf children who have not yet been taught sign language (as in the Nicaragua report). We are talking about whether it's possible to have the thought that the cat is on the mat, but be unable to express (or understand) it verbally in any modality (including gesture). (Maybe it's possible, but it sounds to me as contrived and speculative as the notion of a zombie.)

      ("Stevan Says" there can be (and is) thought without language, but not propositional thought. The reason we find it so hard to conceive of thought without propositionality is because our genes have evolved to make our brains so language-imbued that we cannot help "subtitling" everything we see and hear and think with a verbal caption that describes it. We are a "narratizing" species; our thinking is highly narrative. But we can still think non-propositionally -- or pre-propositionally -- too. Challenge: why is there no such thing as a "protolanguage"?)

      Delete
  24. The text discusses how children acquire language and presents different theories about language acquisition. It emphasizes that children do not simply imitate their parents' language but actively learn and internalize the rules of their mother tongue. It is so interesting to me that a child can master a language in such a short period. The text then provides examples illustrating how young children seem to follow innate rules of language, specifically regarding word structure and sentence structure. It discusses the concept of derivational and inflectional affixes in word formation and the ordering of rules in language. In the following section, “Acquisition in Action”, the author describes how children acquire language rules from parental speech, but then qualifies this process by highlighting the importance of innate guidance. If one is a native speaker of a language, talking is automatic: some utterances just sound right and others don't, even though we cannot state all of the rules. Is this gut feeling related to language acquisition being innate, or is it just the result of being used to speaking a certain language, built up through trial and error rather than innate?

    ReplyDelete
    Replies
    1. Selin, you seem to be forgetting the UG/OG distinction. The rules of OG are learnable, and learned, because mistakes are made, and corrected. The rules of UG are unlearned and unlearnable, because UG errors are not made. So UG must be inborn.

      Delete
  25. This paper argues that language acquisition cannot be all learned and that humans must have some sort of built-in understanding of the basic grammar of human languages (Universal Grammar) to be able to recognize the patterns in language and pick up on the syntax. It also mentions that context is important in categorization and in deriving the semantics of certain words. It is also better for kids to learn actively so that they are able to make connections between words and referents, as well as receive feedback. This makes sense, as the best way to learn a language is to actually engage in conversations and practice in real life rather than just reading a textbook. I guess that also explains the phenomenon of receptive bilingualism, where someone is able to understand a language because they naturally picked it up from their surroundings, but they cannot speak the language.

    ReplyDelete
    Replies
    1. Andrae, although learning to distinguish well-formed utterances from ill-formed ones is a form of category learning, what makes it important to speak as well as to hear language is the fact that it is a mirror capacity. Please read the other replies to understand why this matters. And don't forget to distinguish UG-compliance from OG compliance, as well as 1st-language learning from 2nd-language learning.

      Delete
  26. In the Child's Language-Learning Algorithm section, Pinker addresses the challenges in children's language acquisition, focusing on the complexities during the early stages of learning. He shows the difficulty of the "bootstrapping problem," where children must navigate from raw linguistic input to the construction of a coherent grammar.

    Pinker suggests that children might create correlations between words based on their positions, affixes, and surroundings. However, he points out the limitations of this idea due to the absence of easily identifiable features in parental speech for recognizing grammatical categories. Another idea involves relying on prosody, where children use pauses and intonation changes to identify phrase boundaries. Pinker argues that the complex interplay of emotion, stress, and word frequency complicates the connection between prosody and syntax, making it unreliable as the primary method for language acquisition. Pinker then suggests that children may use the connection between syntax and semantics to understand language. By identifying semantic traits in words and phrases within context, children can infer their syntactic categories. This strategy enables them to build phrase structure trees and gradually grasp more complex linguistic ideas.

    ReplyDelete
    Replies
    1. Julide, tree-structure is UG, and not learned -- except by generative linguists, who, because they too are humans, have learned a 1st-language's OG, and already have UG in their heads, guiding what they say and don't say, and (important for generative linguists) also guiding them in what sounds right and wrong. That's how those linguists manage to figure out the structure of UG.

      Some people (including me) have thought that meaning might be part of what guides our compliance with UG syntax. (Meaning is certainly part, but not all, of what guides our compliance with OG syntax.) But no one has worked out yet what the causal process underlying that guidance would be (if it's true).

      Delete
    2. Hi Professor, your comment touches upon the intriguing proposition that meaning could play a role in directing adherence to UG syntax. It’s interesting that while it's acknowledged that meaning is a factor in adhering to OG syntax, the exact causes underpinning this possible guidance are still unclear. According to recent studies, there is increasing interest in examining the complex interplay between syntax and semantics in an effort to understand the ways that meaning affects our adherence to these patterns.

      Delete
  27. This paper by Pinker explores language acquisition, emphasizing that children do not merely imitate their parents but internalize complex grammatical rules. It introduces the idea of rule ordering in word formation, using derivational and inflectional affixes as examples. The discussion then shifts to sentence structures, particularly the application of control principles in embedded clauses. A section I find particularly interesting is an experiment by Michael Maratsos suggesting that children follow a Structural Principle, considering grammatical structure over linear order which challenges the Minimal Distance Principle. It addresses Chomsky's argument on UG, as the text suggests that children may have an innate capacity to grasp and apply abstract grammatical rules, aligning with the idea of a universal linguistic structure inherent in the human mind.

    ReplyDelete
  28. What I found most interesting about UG is that, 1) if a child grows up away from human society, they are always mute, showing that there is a critical period in developing language; 2) intellectually impaired individuals can also understand languages, showing that language is not purely a matter of general intellectual development. Moreover, adults are almost never able to master a language other than their mother tongue, and they can also make UG errors, showing that UG also has a critical period not purely tied to general intellect (if it were, then adults with intact intellects should have no problem mastering another language).

    ReplyDelete
    Replies
    1. I am always so caught up in thinking about how aging plays a role in language development. If someone has been exposed to two languages at a very young age, do they still make UG mistakes? And what is the difference between learning a new language before age 3 and learning a second language as an adult?

      Delete
    2. Hi Tina, I also find the role of age in language acquisition to be an interesting topic. From my understanding, someone who learns two languages at a very young age will make close to no UG errors, however this depends on the degree of their exposure. I think that the difference between learning a language before the critical period of language acquisition and after, is brain plasticity. I believe adults have a more difficult time learning second languages as their brains are less plastic and this involves much more effort.

      Delete
  29. One part I found particularly interesting in the paper was the separation between general intelligence and language abilities. For example, with spina bifida, children have intact language but may be talking about events from their imagination; or, inversely, people can have very high or at least intact intelligence yet struggle with language. I am curious about whether this may be related to deficits in an innate UG or a learned OG; stroke patients, for example, may be unable to produce any sentences at all, so both OG and UG are impaired (or at least some connection between them and motor coordination/speech production and our thoughts is impaired). However, especially with recent novel treatments such as those targeting brain plasticity, stroke patients are able to regain speech skills. Does this mean they are able to rebuild connections with their UG? If so, could this tell us something about the processes involved in why we have UG (knowing which connections in the brain are affected in a stroke that leads to the inability to verbalize thoughts)?

    ReplyDelete
    Replies
    1. I think viewing language deficits from an OG or a UG perspective is a super interesting lens. I went back to some videos of different types of aphasia I've seen in some of my past classes to try to figure out whether people affected by either Wernicke's or Broca's aphasia violate the rules of UG, and while I'm not completely sure, I actually don't think they do! I had expected that Wernicke's aphasia, where patients speak fluently but without any real meaning to their sentences, would be the most likely case of disconnection from UG, but it actually seems like patients with WA still mostly avoid UG errors.

      Delete
    2. Rebekah and Elliot, clinical neuropsychological research is still too far from generative linguistic findings to yield coherent cogsci conclusions.

      Delete
    3. Tina, please read the other replies.

      Maria, that's correct.

      Delete
  30. A nice summary of what UG is would be:
    UG is a complete set of innate structures that lets kids master languages.
    During the critical period, by being exposed to either one or several languages, kids "favor" certain parameter settings: on the analogy of having a complete radio, you create certain channels corresponding to the languages you are exposed to. This is also comparable in phonology: kids who learn Chinese first will not be able to pronounce the R sound naturally. Specifically, those exposed to two languages may create sentences in both SVO and SOV order (as parameter settings), whereas those exposed to only one language create sentences only in SVO and will make UG mistakes in SOV structure in their second language.
    In conclusion, none of the parameter settings are lost or dropped from UG even after the critical period; exposure to a given language merely favors the corresponding parameter settings through the process. And the evidence is that you can still learn any language at any time.

    ReplyDelete
    Replies
    1. And people who learn a second language can access and use more parameter settings from UG (having more radio channels). The difference is that those who learn their languages within the critical period sometimes make OG mistakes but do not make UG mistakes, and they can tell when a UG mistake appears; whereas those who learn a language as a second language make both OG and UG mistakes and are not aware of UG mistakes when they appear.

      Delete
    2. Eugene, that's approximately correct.

      Delete
  31. This week’s paper discussed how children acquire language and how that process changes throughout life. It is my understanding that UG is an innate set of deep structures that allows any child to learn any language despite the poverty of stimulus, and OG is learned and accounts for the mistakes that people make in their grammar. There is a critical period for language acquisition, as it is not possible to reach native-like competency after this period. When I think about how language is taught in school, it usually consists of memorizing grammatical rules and vocabulary words. I wonder if language acquisition for second languages would improve if teachers implemented a more naturalistic and inviting way of learning language that mimics how children learn their first language.

    ReplyDelete
    Replies
    1. Megan, good summary. You are right that school instruction in second languages is not as effective as immersion in a language through the daily life experience to which language evolved as an adaptation.

      Delete
    2. This is quite an interesting point: both the environment and the learning strategy change with the second-language curriculum. Also, at least in my experience in the US, second-language learning is pushed back until you are "older and more mature," missing major critical/sensitive periods. I think it's a good hypothesis that keeping the environment and strategies as close as possible to the way children learn their mother tongue may yield greater success.

      Delete
    3. That’s very interesting! This reminds me of having French immersion that starts in grade 6. As someone who was in French immersion starting in kindergarten, I am confused why it is still pushed on students to start French immersion in grade 6 because at that point it would be much harder to learn the language. I remember those students always had a tough time with the accent and pronunciation and that never really went away as there was always a difference between late and early French immersion students even after years and years of practicing.

      Delete
  32. I found the part about the relation between language and thought interesting, because I do think language’s main function is to communicate with others and that thinking is not a subset of language. I realized when I was a kid that my conscious dreams were language-less; I understood what was happening by mental imagery only, such that dialogue was done without the use of a specific language. This also relates to people who visualize, have an inner monologue, and/or use self-talk aloud. In all these cases, language is able to solve the other minds’ problem.

    ReplyDelete
  33. I was wondering if a comparison could be made between Chomsky's concept of UG and Gestalt psychology. Gestalt psychology is a theory of perception that studies the way humans perceive groups of visual stimuli. We make certain automatic judgments about objects such as distinguishing a darker object as a figure and a lighter one as a ground. So certain innate assumptions structure the way we perceive and interact with the sensory world around us. Chomsky's UG refers to an innate set of functions that help humans learn about languages. All languages share the UG principles, because all human languages are arbitrary applications (OG) of UG principles. To me, it looks as though the innateness of Gestalt principles could translate well, we could call it "universal perception", or something like that. This, in turn, structures the way we perceive objects in the world, which are delineated by strong edges, etc.

    ReplyDelete
  34. I find it absolutely fascinating how young children, without formal teaching, intuitively grasp and apply the rules of language. This in itself shows that there is some sort of inborn predisposition towards language acquisition which is shaped by surroundings and exposure to a specific spoken language. One aspect that really stood out to me was the ‘bootstrapping’ method that children utilize. By using the context and meanings of words, children begin to form hypotheses about language structure. It led me to the following question regarding language acquisition in children. How could the language acquisition process in children be influenced by the digital age? With increasing daily exposure to media and tv, would this change traditionally observed language learning dynamics? If so, what could the potential implications be for cognitive development and linguistic skills in children? I think it could make it somewhat easier with a large variety and exposure to a given language, but could there be any downsides?

    ReplyDelete
    Replies
    1. Your insight on intentionally breaking UG rules in language exposure to children presents a thought-provoking scenario. It raises important questions about the intrinsic nature of UG in language acquisition. If children are exposed to language structures that deviate from UG norms, it might initially pose a challenge in language acquisition. However, the resilience and adaptability of the human brain, especially in young children, could potentially lead them to adapt and even integrate these non-UG structures into their linguistic repertoire. This situation highlights the dynamic interplay between innate language faculties and the environment's influence. It also poses a broader question: How flexible is the human language acquisition process in adapting to atypical linguistic inputs, and what does this tell us about the limits or extents of UG in shaping our linguistic capabilities? Fascinating stuff.

      Delete
    2. Hi Nicolas, what do you mean by "non-UG" structures? UG is all and only the genetic features of humans that specifically give us the capacity for language. Non-UG structures cannot be integrated into "linguistic repertoire" because by definition everything outside of UG is outside of the human capacity for language. Non-UG sentences only appear in the linguistics literature and are consciously thought up by non-language processing parts of the brain to exemplify a point about a specific linguistic phenomenon. UG doesn't "shape" our linguistic capabilities so much as it completely bounds them.

      Delete
    3. This is a great point Stefan! Access to TV shows, media and movies in different languages opens up a whole new realm of language acquisition because people can get constant exposure to language even if they aren't necessarily being taught it. This is interesting when considering language acquisition in children but also when considering those who have passed the critical period. I remember once my friend said that she picked up a bit of Spanish by simply watching the Spanish show “Elite”. I always thought this was cool and it brings me back to my point earlier about second language teaching in school. Watching TV in a different language, in a way, mimics how we learn our first language and this concept could be applied to teaching.

      Delete
  35. The introduction of the child’s language-learning algorithm in the article reminds me of another course I am taking, PSYC 304 (Child Development), which covers children learning languages. When children learn a language, they have a sensitive period. Learning a language during this period produces a qualitative leap in language mastery; if they miss this sensitive period, it becomes difficult for them to learn and master aspects of the language, such as grammar, later on. This is related to the structure of the brain, because it takes time for certain brain areas to form. If there is frequent guidance from parents or caregivers during this sensitive period, children learn more directly and pick up the correct grammar both consciously and unconsciously.

    ReplyDelete
  36. I found Pinker's paper extremely fascinating, especially as someone who knew extremely little about language coming into this week and the course as a whole. I was surprised when he noted that by the age of three, children tend to adhere closely to grammar rules, despite their speech not always being deemed "correct". He emphasizes the essential role of interaction with others in language acquisition, noting that a child isolated in the woods would likely not develop language (relating, as far as I am concerned, to the poverty of the stimulus), while a child among peers might even create a new language such as a creole or a sign language. Pinker also intriguingly argues that genetics do not predispose us to learn specific languages, an argument I had never thought about but that I believe makes sense (especially under the assumption that UG is inborn).
    However, I noticed some disparities between Pinker's depiction and application of Universal Grammar (UG) compared to our discussions. Pinker links UG to "the mental mechanisms children use in acquiring language," suggesting these mechanisms prevent individuals from making rational but false generalizations about language. Yet, his example of Darwinism vs. Darwinsism doesn't seem directly akin to the violations of UG we explored in class; it might be more aligned with violations of Ordinary Grammar (OG) rather than UG (although I may be making the same error).

    ReplyDelete
  37. In this reading, Pinker discusses the acquisition of language in infants and children. This process is truly incredible, as children are able to rapidly learn new words and understand unspoken grammatical rules which govern their use. One part of the reading that particularly interested me was the experiment Pinker discussed in which babies who only spoke in single words were shown two screens, one with big bird tickling cookie monster and the other with cookie monster tickling big bird. Despite only using single words, when directed to look at big bird tickling cookie monster, children tended to look at the correct screen. Given that the same characters and action were involved in both screens, this suggests that the children understood the structure of the sentence such that the character doing the tickling came first and the one being tickled came second. This is baffling to me because I would have expected that only children who already spoke in simple phrases, and thus had learned syntax first hand, would be able to do this. This experiment also got me thinking how much children are able to pick up from their environment even at very young ages. If babies are able to understand the rules of universal grammar at such a young age, are they already beginning to form categories? Are there other aspects of human ability which begin to develop this early?

    ReplyDelete
  38. Like others, I, too, have developed an interest in the "context" part 6.5, particularly with how it relates to ChatGPT. The language learning process of children involves a considerable degree of word guessing, which is often predicated on specific contexts. Hence, if the context here is completely ungrounded, without understanding the actual meaning associated with sensory or multimodal categories, then any assumption or guess is imprecise. However, generative models such as ChatGPT can use statistical features to predict potential words that can complete phrases. This is a distinguishing characteristic of contemporary deep learning based on stats.

    Therefore, my intuition is that GPT operates like a blind and deaf child. Its learning process doesn't significantly diverge from that of a child; the significant difference is grounding. Suppose we substituted statistical word prediction over the widely searched "Big Gulp" with a more limited prediction over category combinations grounded in sensorimotor features: could GPT then replicate how a child learns language?

    Moreover, language acquisition is a process of continuous learning through trial and error. But doesn't the reinforcement-learning component of GPT also learn biases and correct mistakes from human responses (the technique called RLHF)? So, is grounding the only problem we encounter?
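
    Just to illustrate what I mean by purely statistical, ungrounded prediction, here is a toy bigram predictor (nothing like GPT's actual architecture, and the tiny corpus is made up): it picks the next word by co-occurrence counts alone, so the "meaning" of a word plays no role.

```python
from collections import Counter, defaultdict

# Toy bigram predictor: next-word prediction from co-occurrence counts alone.
# The corpus is made up; no grounding, no meaning, just statistics.

corpus = "the cat sat on the mat and the cat ate the fish".split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def predict_next(word):
    """Return the statistically most frequent continuation of `word`."""
    return bigrams[word].most_common(1)[0][0] if bigrams[word] else None

print(predict_next("the"))   # 'cat' -- chosen by frequency, not by meaning
```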

    ReplyDelete
  39. I've read many articles like Pinker's that summarize overall very well how children acquire language and POS, but almost all of them (like this one) make the mistake of not including anything about second language acquisition (especially in adults). As adults reading articles like these, especially if you do not have a background in linguistics or cognitive science, a lot of people don't understand the critical period for LA. I've explained the critical period as evidence for POS to many of my friends who are immigrants and learned English very close to puberty. Although this article does state that successful LA occurs by 4-6 and is compromised until puberty and rare after, I think that it's important to show in what different ways exactly, after 6 years old, we begin to learn other languages (differences in readily observable learning strategies, differences in functional localization in the brain, etc.). This way, people can see how exactly, even if they learned a language after the critical period, they learned that language in a much different way and will not fully acquire the grammar the way that they would had they begun learning it as a child.

    ReplyDelete
  40. About 5 pages into this reading I began to search for the word “mute”. What I found was an example involving mute children: children who have been severely neglected or abandoned were found to be mute, and this was used to show that verbal language has to be observed to be learned. Even if there is an underlying structure of UG, those rules are too abstract on their own to generate vocal language. The reading then went on to describe how children placed in communities where there is communication are able to generate their own new languages, whether verbal or not. I've spent some time working at a therapy center for autistic children. Many of the kids I worked with were mute, so I would do activities with them where I had cards with animals on them: I would say the name of an animal and they would match the card to whatever word I said. They also often generated their own sort of babble.

    ReplyDelete
    Replies
    1. Thank you so much for sharing this information. It definitely adds on to the discussion of the paper. Since this is suggesting that language is learned, it begs the question of how language originated. Social creatures need to have a way to communicate whether it be verbal or not. Also you're awesome. Just wanted to say that.

      Delete
  41. This comment has been removed by the author.

    ReplyDelete
  42. I thought an interesting part of the reading was the part about grammatical gender. Specifically, children have no trouble with different gender markings when acquiring languages such as French and Hebrew. This is no surprise, since it’s established that they are extremely adept at language acquisition (rapid language bloom into fluent grammatical conversation from mid-2 to 3 years, successful LA from 4 to 6 years, brain plasticity). The Whorf hypothesis asserts that speakers of different languages conceptualize the world in different ways. I know that this hypothesis is not favored in modern cognitive science, but in terms of gender assigned to nouns, I wonder how (if at all) it would influence a child’s perception of or attitude toward objects or people. Would learning grammatical gender as an adult in a different language pose a challenge not only in its structure but also to previously learned categories linking gender and a specific object? Are children just as susceptible to these linguistic stereotypes (given that they have no difficulties with gender markings)?

    ReplyDelete
    Replies
    1. I thought of the same thing too. I personally find it hard to learn French largely because of the gendered nouns, since this concept doesn’t exist in any of the languages I know. My friend, who is a native French speaker, says that she learned grammatical gender through mimicking and receiving input/feedback from others when she was young. She still finds it confusing from time to time, despite it being ‘all she’s known,’ and she has noticed that she definitely genders or "re-genders" things that don't need to be gendered or “re-gendered,” e.g. referring to specific bus lines as feminine despite “bus” being masculine/gender-neutral in French.

      Delete
  43. Pinker provides a comprehensive overview of learned language acquisition, innate linguistic capacity, and especially how brain maturation is critical for language acquisition as we age. Yet he does not really draw the distinction between UG and OG, in which UG is immune to negative evidence since no UG mistakes can be made (either by children or adults). We discussed in class that language is “a mirror capacity,” in which we both hear and produce mistakes. This is clear in OG, where most ordinary grammar rules are conventions invented by us to facilitate efficient communication, and negative evidence comes in the form of corrective feedback. With UG, however, we don’t know these mistakes/violations, since they are neither produced nor heard. The part that confuses me is that, because of the POS, UG is unlearnable (innate) and we do not produce UG mistakes; yet we mentioned in class that we recognize UG violations even though we "don’t know" exactly what they are. I’m struggling to understand how we can recognize violations but have no way of pinpointing the "recognized violations"; how can these co-occur?

    ReplyDelete
    Replies
    1. Hi Can, an example sentence where you can see this (which comes from an answer to an assignment I had to do in my syntax class) is "Wilbert might have to the barn but they didn't sleep". Essentially, it is so wrong that you don't even know what to point out as wrong about it unless you study generative syntax. Another example that some people might be familiar with: "What would you do if when you okay so he said yes would go?". An OG error could be something like "saw John Mary", because a second-language English speaker could say this if their native language has VSO structure (such as Irish). To pinpoint the problem here, we would only need to think a little before saying "you just have the order wrong; say 'John saw Mary' instead". And you don't even need examples like these to know the difference between UG and OG; they are very different concepts by definition. UG: whatever is in our DNA that specifically gives us the capacity for language. OG: the features of each language that make them different from one another.

      Delete
  44. In addition, I was also wondering about another aspect of UG: we discussed that only generative linguists who study advanced syntax have a clue about the UG rules. Therefore, I was curious about whether UG is all about syntactic structure, or whether there could be a semantic element inherent in our UG rules. Or is it limited to syntax?

    ReplyDelete
    Replies
    1. To linguists, generative grammar is all syntax (formal symbol manipulation). There is "phonology", which is formal symbol manipulation that takes an input "sound" (phonemes: whatever the mind's representations of "sound" categories are) together with the "sounds" in its environment and changes the "sound" based on that environment. There is "syntax", which is formal symbol manipulation of constituents (words and groups of words) that takes an input constituent structure and either moves a smaller constituent to another part of the larger constituent or adds a new constituent. Then there is "semantics": in terms of generative grammar, it is formal symbol manipulation that shows, through formal rules, how you can judge a sentence true or false based on the meanings of its parts. There is another part of semantics, outside of generative grammar, which deals with the question of how a proposition can be true at all on the basis of what it refers to in physical reality. Obviously that is all simplified so it doesn't go into detail, but I hope it makes sense.
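
      If it helps, here is a toy illustration of what "formal symbol manipulation" of constituents looks like (the rules and lexicon are invented solely for the example, not taken from any particular generative analysis): a few rewrite rules expand symbols into a bracketed constituent structure, with no reference to meaning at any point.

```python
# Toy rewrite rules: expand symbols into a bracketed constituent structure.
# The grammar and lexicon below are invented solely for this illustration.

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"]],
}
LEXICON = {"Det": "the", "N": "cat", "V": "saw"}

def expand(symbol):
    """Recursively rewrite a symbol; meaning never enters into the process."""
    if symbol in LEXICON:                      # terminal symbol: insert a word
        return LEXICON[symbol]
    children = RULES[symbol][0]                # apply the (only) rule for it
    return "[" + symbol + " " + " ".join(expand(c) for c in children) + "]"

print(expand("S"))   # [S [NP the cat] [VP saw [NP the cat]]]
```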

      Delete
  45. The text prompts a significant philosophical inquiry into the distinct traits or capacities that set humans apart from other animals. Traditionally, language ability has been considered a unique human attribute.

    Yet, the text wisely highlights the fallibility of past assertions of human exclusivity, such as tool use, as our understanding of animal cognition has evolved. This serves as a valuable reminder.
    Even though we've long seen language as this superhuman thing, the text points out that past claims of our uniqueness, like saying only humans can use tools, have been proven wrong. Regarding language, humans can do some fancy tricks with words, creating endless meanings and slang. Different languages exist in different races, neighbourhoods, or friend zones.

    ReplyDelete
  46. In PSYC 304 (Child Development), we encountered a study testing whether bilingual learning begins in the womb. It used a preferential high-amplitude sucking procedure with two groups of newborn infants (one with bilingual English-Tagalog mothers and the other with monolingual English mothers). Infants were exposed to Tagalog and English sentences, and rates of sucking on a pacifier were measured, with more intense sucking indicating a preference for one language. The results indicated that the English monolingual newborns preferred English, whereas English-Tagalog bilingual newborns had no preference for either language. The researchers also tested, using a habituation paradigm, whether bilingual infants could differentiate between their two native languages, and found that they could. I think it is already clear that UG is innate, but with this study in mind, I wanted to know whether OG could be learned already in the womb as well. Having no preference, or being able to differentiate between languages, does not really indicate “learning grammar,” but it is still the very basis of language acquisition. Could OG be learned in the womb? Or is it just UG applying in this instance?

    ReplyDelete
    Replies
    1. Hi Hyeokbin, OG begins to be learned in the womb, but obviously most of OG is learned postpartum and in the following 6 years. The reason why babies show these preferences is because they can hear the stress patterns of the mother's speech. It doesn't have anything directly to do with UG (other than every OG is a subset of UG).

      Delete
  47. When Pinker says that children can make good guesses about the meaning of their parents' speech based on their knowledge of the order in which words are spoken and the meanings of many other words, this follows the bottom-up process that occurs when listeners process the speech of others. Could this process, which is in a sense innate because it is not learned from anyone or through trials and corrective feedback, and is an ability that everyone possesses, be considered an example of universal grammar?

    ReplyDelete
    Replies
    1. I believe Pinker’s discussion suggests that kids figuring out what their parents mean from context and known words does show an innate ability, but it’s not exactly UG. Children can make good guesses about speech based on context, which is impressive and maybe part of an inborn skill set. However, UG is more of a built-in grammar framework, a kind of internal guide that lets kids learn grammar rules without being taught. For instance, in Gordon’s experiment, children demonstrated knowledge of language structures that they could not have learned from the environment, suggesting an innate understanding of certain grammatical rules. The “bootstrapping” process, by which children correlate certain properties of words and their usage to form syntactic categories, might also align with the concept of UG as an innate framework that guides language acquisition.

      Delete
  48. The more I read about language, the more I realize how computational it is. Language acquisition, the way it’s described in this paper, is very similar to computer programming, which corresponds to learnability theory. First you define classes such as people, animals, objects, etc. Then you learn more specific names within them, like “a cat is an animal” and, more specific still, “Garfield is a cat,” and you start to put things in the right categories. While you’re doing this you are also assigning attributes to them. There are actions too, for example “cats eat”: eating is now something you know cats do, defined as a method in the cat class. But wait, people and other animals also eat, so now eating is a method in those superclasses. Then you learn that cats eat fish, so you associate fish with food for cats. Even trying to explain it with the simplest examples starts getting complicated, which is one of the biggest reasons why, in my opinion, language is so hard to dissect and reverse-engineer. I also really admire how psychologists and linguists have been able to do so much research despite all the drawbacks that come naturally with the field, as expressed in the conclusion.
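
    Here is a minimal sketch of the class analogy I have in mind (just an illustration of the analogy, not a model of how children actually learn): categories as classes, shared behaviour as a method on a superclass, and a specific fact as a class attribute.

```python
# Categories as classes, shared behaviour as a superclass method, and a
# specific fact ("cats eat fish") as a class attribute. Purely illustrative.

class Animal:
    def eat(self):                    # "animals eat" -- inherited by all animals
        return f"{self.name} eats"

class Cat(Animal):
    diet = "fish"                     # "cats eat fish"
    def __init__(self, name):
        self.name = name              # "Garfield is a cat" gives us an instance

garfield = Cat("Garfield")
print(garfield.eat(), garfield.diet)  # Garfield eats fish
```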

    ReplyDelete
  49. I apologize for the late skywriting.

    This paper by Pinker was fascinating! It went into detail about possible mechanisms for children’s universal language learning abilities.

    An element that particularly surprised me is that children, at some peak that occurs between 9 months and 2 years old, have 50% more synapses in their brain than an adult human does. Over time, this number of synapses goes down, and the remaining synapses increase in speed. Though the parallel is clearly imperfect, the training of modern large language models indicates a similarity: it is often advantageous to train a model with more parameters, analogous to brain synapses, so that it can learn faster, and later to use various methods to shrink the model down and make it faster.

    An element of children’s language learning process discussed in the paper is the presence of positive evidence, and the frequent absence of negative evidence. Positive evidence is the set of all grammatical sentences a child hears, that informs the child about some minimal set of sentences that make up the language, with the obvious hope that they would generalize this. However, they may overgeneralize, if they do not receive information about what sentences **are not** grammatical sentences of the language; this is referred to as negative evidence. Though this distinction is insightful, I think it ought to make sense that absence of evidence is weak evidence of absence: the “holes” in the space of possible sentences they could hear, every time they don’t in fact hear a specific sentence, is weak evidence that that sentence is not valid in the language; and over time these add up, and plausibly explain the fact that negative evidence seems less necessary for language learning.
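
    Here is a toy sketch of that "implicit negative evidence" idea (my own illustration, not from Pinker; the numbers are made up): if a construction were grammatical, assume it would appear in any given utterance with some small probability q; never hearing it over many utterances then makes "it is grammatical" less and less probable, even though no explicit correction ever occurs.

```python
# If a construction were grammatical, assume it would appear in any utterance
# with small probability q. Silence over n utterances is then weak but
# accumulating evidence against it (a simple Bayes calculation; numbers made up).

q = 0.001       # assumed chance of hearing the construction in one utterance
prior = 0.5     # prior probability that the construction is grammatical

def p_grammatical_after_silence(n):
    """P(grammatical | never heard in n utterances)."""
    like_if_grammatical = (1 - q) ** n     # silence is merely unlikely
    like_if_ungrammatical = 1.0            # silence is guaranteed
    return (like_if_grammatical * prior) / (
        like_if_grammatical * prior + like_if_ungrammatical * (1 - prior))

for n in (0, 1_000, 10_000):
    print(n, round(p_grammatical_after_silence(n), 3))
# 0 -> 0.5 | 1000 -> 0.269 | 10000 -> 0.0 : absence of evidence slowly adds up
```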

    ReplyDelete
  50. I found Section 5: What is Learned as informative as it was entertaining. Pinker explains that to understand how language is learned, we must understand what language is. Part of this examination is Universal Grammar, an innate set of language rules that every child is born with. As Professor Harnad puts it in the skies above, UG is flavoured by the Ordinary Grammar of the language learner’s first language. This then allows UG to take on some characteristics and drop others, as they relate to that language. Chomsky essentially says that language is used to communicate thoughts; thus, UG is the framework through which these thoughts are structured and expressed. The examples that Pinker provides (which were the source of my entertainment) help illustrate how certain grammatical constructions are universally avoided despite seeming logical.

    ReplyDelete
  51. In section 7, when Pinker talks about Gordon's experiment, he claims that since children were willing to say "mice-eater" but not "rats-eater," that is good evidence to show that the constraint on level-ordering is innate. I disagree with that. I believe it is possible that they inferred that rule from other rules. Consider a 9-piece square puzzle of a scenery. It might take a bit of effort, but you might be able to recreate the image without ever looking at it, simply by using the fact that each piece should fit into another and create a coherent scenery. Now, let us assume that you managed to connect 8 pieces, but due to a manufacturing error, the central piece of the puzzle is missing. Without ever seeing that piece or the completed picture, you would still be able to infer with high accuracy what the missing piece is supposed to look like. I believe the same thing can be happening here with children and language. They might not have access to the missing piece but are still capable of using the correct rule because it might make sense within the overall grammatical context.

    ReplyDelete
