Blondin-Massé, A., Harnad, S., Picard, O., & St-Louis, B. (2013). Symbol grounding and the origin of language: From show to tell. In Lefebvre, C., Cohen, H., & Comrie, B. (Eds.), New Perspectives on the Origins of Language. John Benjamins.
Arbib, M. A. (2018). In support of the role of pantomime in language evolution. Journal of Language Evolution, 3(1), 41-44.
Vincent-Lamarre, P., Blondin Massé, A., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625-659.
Organisms’ adaptive success depends on being able to do the right thing with the right kind of thing. This is categorization. Most species can learn categories by direct experience (induction). Only human beings can acquire categories by word of mouth (instruction). Artificial-life simulations show the evolutionary advantage of instruction over induction, human electrophysiology experiments show that the two ways of acquiring categories still share some common features, and graph-theoretic analyses show that dictionaries consist of a core of more concrete words that are learned earlier, from direct experience, and the meanings of the rest of the dictionary can be learned from definition alone, by combining the core words into subject/predicate propositions with truth values. Language began when purposive miming became conventionalized into arbitrary sequences of shared category names describing and defining new categories via propositions.
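The graph-theoretic claim in the abstract can be made concrete with a toy example. In Vincent-Lamarre et al. (2016), a dictionary is treated as a directed graph in which each word points to the words used in its definition: recursively discarding words that are not used to define any remaining word leaves the Kernel, and a Minimal Grounding Set (MinSet) is a smallest set of words that, once grounded, lets every other word be learned from definitions alone. The sketch below is only a rough illustration of that idea on an invented five-word "dictionary", with a brute-force MinSet search; the words, the data structure, and the search procedure are assumptions for illustration, not the paper's actual algorithm or data.

```python
from itertools import combinations

# Invented toy dictionary: each word maps to the set of words used in its definition.
definitions = {
    "animal": {"living", "thing"},
    "living": {"thing"},
    "thing":  {"living"},           # a tiny definitional cycle: thing <-> living
    "cat":    {"animal", "furry"},
    "furry":  {"thing"},
}

def kernel(defs):
    """Recursively drop words that are not used in any remaining definition."""
    remaining = set(defs)
    while True:
        used = set().union(*(defs[w] & remaining for w in remaining)) if remaining else set()
        satellites = remaining - used
        if not satellites:
            return remaining
        remaining -= satellites

def learnable_closure(grounded, defs):
    """All words reachable by definition alone, starting from a grounded set."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, definition in defs.items():
            if word not in known and definition <= known:
                known.add(word)
                changed = True
    return known

def minset(defs):
    """Smallest grounding set, by brute force (only sensible for toy dictionaries)."""
    words = sorted(defs)
    for size in range(len(words) + 1):
        for candidate in combinations(words, size):
            if learnable_closure(candidate, defs) == set(words):
                return set(candidate)

print("Kernel:", kernel(definitions))   # {'living', 'thing'}
print("MinSet:", minset(definitions))   # {'living'} (grounding 'thing' would work too)
```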
Categorization, doing the right thing with the right kind of thing, is the foundation of thinking (everything we can do). Evolution has expedited category learning by genetically providing humans with the nuclear power of language (along with UG). The nuclear power of language is indirect learning through a teacher, which is much faster than direct learning through trial and error, as long as enough symbol grounding has been done beforehand. Language allows humans to be more efficient with our time, which is obviously limited, and to do more things with the right kinds of things (living). I really liked this excerpt, “After all, direct sensorimotor experience -- rather than just indirect verbal hearsay -- is, at bottom, still what living is all about,” because it shows that language is powerful for evolution; but I don’t really care about what it’ll do 100 generations from now: it gives us the ability to do more with what little time we have in our short lifetime.
Kaitlin, all good points, but there are things that language makes possible in the here and now that would be impossible without it. (Science, technology; What are some other examples?) And although living is about direct sensorimotor experience, not words, and we evolved to survive and reproduce, we can see that the things language made possible certainly have effects on our survival and reproduction, as well as our sensorimotor experience -- effects that can be both positive and catastrophically negative.
This reading mentioned some of the notions we saw in previous lectures (symbol grounding, kernel-core dictionary, to name a few) and used them to argue that language went from "show" to "tell". I found the part regarding chimps missing "not the intelligence but the motivation" to use language especially interesting; it revealed to me that I have been subconsciously carrying the biased opinion that natural language is specific to humans because other organisms are incapable of producing and using it as we do. Given that chimps have the ability (and intelligence) to use language as we do, I wonder why they aren't doing so; why aren't they "motivated"?
Hi! From my understanding, the reason would be that they don't have a strong will to express anything other than hunger. I heard some hearsay from my cousin, whose kid was born three years ago, that if you satisfy a baby's every need at the age when they are learning to speak, they will be even later to open their mouth and say something meaningful. Humans are social animals who cannot live without communicating with others, so language is mandatory and is also the only way to express our will. Chimps have no such need, so I think this is why they aren't motivated.
Aashiha & Evelyn, chimps seem to have the cognitive capacity to learn language, but they have not evolved the Baldwinian motivation to use it. The Baldwinian motivation (what is that?), too, would have been shaped by adaptive challenges and advantages that chimps, unlike hominids, did not face. But don't forget UG: If there can be no language without UG, and UG is an innate genetic trait, then that too is missing in chimps. But I would say that explaining the evolution of UG is a harder problem than explaining the non-evolution of the Baldwinian language-motivation in chimps. (Why?)
As I understand it, Baldwinian evolution is when a behaviour which was learnt in order to respond to or cope with environmental pressures is then selected for and becomes innate through Darwinian evolution. In the case of chimps, this would mean that there has not been an environmental pressure which has required chimps to learn language. This is easier to explain than the evolution of universal grammar, because it is understandable that chimps did not face limitations or pressures which required them to develop a complex language system. However, are we to assume that human beings acquired language through Baldwinian evolution? If so, universal grammar is harder to explain because it is an innate genetic trait which must have superseded the Baldwinian evolution of language, and why, in that case, would humans have that innate genetic trait if chimps do not?
I agree with Zoe and feel that this phenomenon could be likened to cultures that don't have conceptions of time or numbers. It is not that they don't have the cognitive capacity to understand time passing or perform arithmetic, but more so that they have not experienced the need to develop or integrate these concepts in their daily lives (most likely due to a lack of challenges to which such concepts would have been solutions).
Explaining the evolution of UG is a harder problem than explaining the non-evolution of the Baldwinian language-motivation in chimps because the former presupposes UG, which is itself debated. If the existence of UG itself is not proven, how is one supposed to explain its evolution if we don't even know what it was and what it is now (or if it's even real)? For the latter, we know how Baldwinian language-motivation (or at least what we have defined it as) has evolved in humans, so we at least have a starting point on the basis of comparing humans to chimps. This could be in terms of the essential cognitive capacities or their respective environments, which evolutionary psychology could provide insight on.
I was struck by the line in the second section saying that it is unlikely that language began with word of mouth, and it made me wonder what word of mouth really means -- whether this is just another term for vocal language. If so, then when we talk about language more generally, the limits of what can be considered language broaden quite a bit. Language, to my understanding, is anything that conveys something meaningful to another person and could reasonably be expected to be understood by members of a given group (the limits of this group could vary quite a bit as well, as some meaning is only meant to be conveyed to those in our in-group, while some bits of language, verbal or otherwise, are almost universally understood). The narrower definition described in Blondin-Massé et al., that it also requires a universal grammar, is a challenging one for me to wrap my head around (as it seems to have been for many folks over the years), as I don't quite understand the basis on which Chomsky established that it must be innate. To this end, when we issue commands to a pet, or to a robot, is this language because we as the issuer understand it? I would be interested to hear other people's thoughts on this!
Hi! I think "word of mouth" is language without symbols. Maybe to illustrate: cats communicate with other cats by sounds. However, without physical recording through a proper symbol system, language could not spread widely, last long, or improve continuously (maybe off topic...)
Also, I think pets should be considered a kind of induction-only learner of our commands. They observe what happens after we give the commands, and then learn it. Maybe UG is what is crucial for being an instruction learner, which is why Chomsky and others argue that UG is innate. If I misunderstood anything, please don't hesitate to correct me.
I would say POS is like trying to learn the difference between edible and inedible mushrooms from a sample of only edible mushrooms, because we are trying to understand UG while having only OG examples as a basis for any findings. This analogy can also refer to the fact that we never hear errors of UG, only correct uses. We are never exposed to the ‘inedible’ version. This may be implying that there is nothing to learn because for all we know there is no alternative to UG (meaning it is inevitable for us to use it), but I may be going too far.
Overall, this reading explained that learning language requires ability and motivation, and that language must be an all-encompassing symbol system that allows for the communication of categories. Communicating categories through language provides an evolutionary advantage over learning by induction alone (as seen in the mushroom simulation).
Poverty of the stimulus is like trying to learn the difference between edible and inedible mushrooms from a sample consisting only of edible mushrooms because you would not be able to categorize the nonmembers (in this case inedible mushrooms). If you're only exposed to edible mushrooms in your environment, you have no feel for what an inedible mushroom is or that one even exists. This is how the POS is used as possible evidence for the existence of UG, because it goes against the logic of categorizing mushrooms that you are not exposed to. Children learn and differentiate between members (correct propositions) and nonmembers (incorrect propositions) without being exposed to the nonmembers in the environment.
This reading is about the origin and development of language, and emphasizes the role of categorization and the difference between ways of learning categories. It talks about learning categories through direct experience (induction) and through word of mouth (instruction). It presents a hypothesis which I found really interesting: it suggests that language's emergence is tied to the acquisition of categories through instruction, creating new categories with propositions and definitions. The reading treats categorization as one of the fundamental aspects of language, and I found the artificial-life simulation really cool; it shows the advantages of instruction over induction in category acquisition.
Hi Selin,
I hope you are doing well. I found the artificial-life simulation very cool as well.
They created a mushroom world where creatures could learn either by trial and error or through instruction. What caught my attention was how the creatures that could learn through instruction performed better than those relying only on trial-and-error methods. It was interesting to observe how social learning and knowledge sharing through communication gave these creatures a significant edge in terms of survival. This simulation shows the significance of communal learning and the potential advantages of sharing information. Honestly, it changed my understanding of how instruction can influence cognitive abilities.
Selin & Julide, there was a bit of cheating -- or rather a short-cut -- in that simulation: it had to do with the difference between learning by imitation and learning by instruction: what was it?
To my understanding, the shortcut was that the simulation creatures could overhear other creatures' findings through vocalization (i.e., a creature could learn from hearing another creature's observations). This is a shortcut because we don't necessarily think that language began vocally, but we could choose to view it in the simulation as witnessing "observable actions". It makes sense that instruction would be a mutualistic advantage, as it helps each creature minimize its need for dangerous trial and error, so long as it is reciprocal.
Exactly. The shortcut involved how they learned. In this case, the creatures could quickly learn by overhearing vocalizations from others. This speeds up the spread of knowledge, making learning more efficient. They gained insights by listening to each other.
The discussion of proper names as a type of category made me reconsider the way that I thought about the idea of perceptual constancy. The way that I had learned perceptual constancy was as the ability to recognize a single specific object as the same object despite changes in the retinal image that affect the way we perceive size, color, shape, etc. With this definition of category, where we are able to recognize Noam Chomsky as Noam Chomsky regardless of age, clothing, and position, the way that I think about perceptual constancy has changed. I think that this relation between perceptual constancy and categorization is very helpful when thinking about what it means to categorize as a whole (that is, to do the right thing with the right kind of thing).
Megan, perceptual constancy is a short-term thing, while you have the object in view. It's not about identifying it but about whether it has changed (size or shape). Recognizing an individual at another time is about identifying (categorizing) it despite changes in features.
I found the passage about simulating the origin of languages fascinating! An artificial life simulation was conducted in which creatures learned to categorize three types of mushrooms, A, B, and C, in a "toy" world. Two of the categories, A and B, could only be learned through trial and error induction. The third category, C, could be learned through induction or by observing creatures that already knew the categories and acted accordingly. The creatures performed actions such as watering, marking, and eating specific mushrooms to categorize them, rather than vocalizing names. While the WATER and MARK categories could only be learned through induction, the EAT category could be learned through symbolic instruction by observing others. When induction-only learners were pitted against instruction learners, the instruction learners outperformed and outlasted the induction learners within a few generations.
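For readers who want to see the logic of that toy world spelled out, here is a deliberately stripped-down sketch (not the original neural-net simulation, which used foraging creatures with feature-learning networks). It keeps only the payoff structure described above: every creature must pay a trial-and-error cost to ground WATER and MARK, but "instruction" learners can get the composite EAT category almost for free by overhearing a knowledgeable neighbour, so they have more lifetime left for foraging and out-reproduce induction-only learners within a few generations. All the numbers, names, and the reproduction rule are invented assumptions.

```python
import random

TRIALS_PER_INDUCED_CATEGORY = 20   # invented cost of grounding one category by toil
LIFETIME_TRIALS = 100              # invented time budget per creature

def lifetime_fitness(can_take_instruction, teacher_available):
    """Foraging time left after learning WATER, MARK (toil only) and EAT
    (toil, or nearly free by hearsay if a teacher can be overheard)."""
    cost = 2 * TRIALS_PER_INDUCED_CATEGORY            # WATER and MARK: induction only
    if can_take_instruction and teacher_available:
        cost += 1                                      # EAT by "theft": one overheard telling
    else:
        cost += TRIALS_PER_INDUCED_CATEGORY            # EAT by induction
    return LIFETIME_TRIALS - cost

def run(generations=30, pop_size=200, seed=0):
    random.seed(seed)
    # half the founding population carries the heritable "instruction" trait (True)
    pop = [True] * (pop_size // 2) + [False] * (pop_size // 2)
    for gen in range(generations):
        teacher_available = any(pop)   # someone who already knows EAT can be overheard
        fitness = [max(lifetime_fitness(t, teacher_available), 1) for t in pop]
        # fitness-proportional reproduction of the learning style
        pop = random.choices(pop, weights=fitness, k=pop_size)
        share = sum(pop) / pop_size
        print(f"generation {gen:2d}: instruction-learner share = {share:.2f}")
        if share in (0.0, 1.0):
            break

run()
```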
Yes! I loved this passage as well. It demonstrated, in a very engaging way, how the "theft" of categories -- learning through hearsay instead of having to go through the process of learning the category by induction -- can have mutualistic and evolutionarily adaptive value. I'm still amazed at the prospect that developing the ability to acquire new composite categories from other people who already knew the old categories out of which the new one was composed may be why we survived as a species, since it helped us become more social, cooperative, and kin-dependent. To be honest, I also liked the last section discussing some advantages of induction that we don't get from instruction, like the reaction time for categorization and the fact that we don't need to have acquired a bunch of other categories to form a new one. Direct sensorimotor experience intuitively comes across to me as more powerful in terms of grounding; it feels different to learn something experientially by yourself than by indirect verbal hearsay and having to apply it. As week 10 is approaching, I'm wondering if we'll be able to address that through the lens of the Hard Problem.
I also found it interesting how learning was faster through instruction than through trial-and-error induction, particularly in how this aligns with the strong social aspects and needs of humans; for example, research in social neuroscience has shown that when humans have a shared goal they will have better task coordination and more effective task completion than when they're working alone. Further, beyond just having the communicative power to express something, language can affect both how we perceive the world and our own internal states (for example the placebo effect, where we hear or tell ourselves something that may not be true, but the strength of our belief in the words we hear makes it true, in that our internal states physically respond to those words). This placebo effect is something that human language can do that perhaps other forms of communication like gestures cannot. Further, the words said in the placebo effect are not related, or may even be contradictory, to sensorimotor input (like if a doctor tells you that you're healthy when in reality you have a cold) and may be produced with just a UG (like when a parent tells their kid that they're OK after hitting their head, and the kid stops crying because of what their parents said rather than because the pain actually went away).
Aimee, how would you do a modern toy robotic version of that (toy) simulation? And how would you begin scaling it to T3?
Mamoune, there's no doubt that even if we could, in principle, learn just one MinSet of categories and then go into a dark room with a computer and learn all the rest by being told them by ChatGPT2500, we do not and would not do it that way (through indirect learning by instruction alone): We keep supplementing with direct sensorimotor learning too. Why?
And why (and how) do you think any of this "Easy" stuff would cast light on the Hard Problem?
And what did you mean about UG? What is UG, and how is it related?
Rebecca, that was an insightful reflection, to link language evolution to hypnotic suggestibility! For lazy evolution, a Baldwinian default assumption -- that when you are told something, it's probably true -- would have been adaptive for our species when language first evolved.
(But, like paleolithic candy and contemporary tooth decay, that default assumption has since come back to haunt us today. How? Think of cheating male peacocks, false advertising, fake news, antivaxers, antitaxers, the climate crisis -- and the lie that the food industry and our credulous parents have been telling themselves, and us, about the biological necessity to eat meat to survive and be healthy...)
You did make three mistakes, though:
1. A gestural language is just as much a language as a vocal one (with the same "hypnotic" power: there is surely a placebo effect for deaf speakers of ASL, as there would have been for Helen Keller with finger-spelling and Braille).
2. And what does the placebo effect of language have to do with UG (other than the unanswered question of how and why UG evolved)?
3. And don't forget that the words in hypnosis have to be grounded, whether indirectly or directly. So they can never be completely divorced from direct sensorimotor grounding.
My best guess for why we keep supplementing with direct sensorimotor learning too is the fact that we live in a material world full of objects that interact with our senses and motor abilities, and we need sensorimotor grounding to live in this world -- to apply, to actual instances, what was learned purely symbolically.
“How” we FEEL about the categories we acquire, and how we can ground them, seems to influence the categories we selectively want to acquire through hearsay, and this ultimately relates to how and why we do the things we do with the right kinds of things. Maybe there's a process of natural selection for categories as well, and a solution to the HP could help us know more about it.
I don’t think I mentioned UG but I would guess that UG implies that this MinSet is already acquired by children, facilitating learning through instruction without having to experience the world with their senses at first.
I'm not sure if this is a stretch, but I wonder if one could leverage the Baldwinian language-motivation to address why we do not and would not learn indirectly through instruction alone. If we learned just one MinSet of categories and then went into a dark room with a computer and learned all the rest by being told them, I think we would eventually be unmotivated to continue if there was no indication that what we were learning would be necessary, let alone beneficial, at any point in the rest of one's lifetime. We could in this case be likened to the chimps scenario, in that it's not that we wouldn't have the capacity to learn (indirectly through instruction alone), but more so that we would not need to. Continuous supplementation through direct sensorimotor learning is rewarding; being able to correctly apply your knowledge in real scenarios not only reinforces what you know but also provides instances of when your knowledge is useful, serving as motivation to continue to learn.
Jocelyn, what an incredibly insightful comment. I think you hit the nail on the head. The chimps scenario provided the insight that the capacity to make use of the nuclear power of language is not enough, but that the motivation to use it is an additional requirement to get the "ball" that is language rolling. A human in a dark room -- much like what you said -- would lack said motivation, and would likely never use the hypothetical MinSet to build out to every single possible category. If to categorize is to do the right thing with the right kind of thing, then a human in a dark room would have no reason to create more categories with the MinSet since there are no "right kinds of things" to do the "right things" with.
I particularly liked this article as it was clear and easy to follow. Language was defined as a formal symbol system with which you can express any preposition. Furthermore, the authors suggest that learning categories (placing the right thing with the right kind of thing) through language is not entirely the same as learning them by sensorimotor induction. Despite both sharing similar brain processes, the results of category learning by induction vs. instruction are not equivalent, as categorization learned through induction is much faster than categorization learned through instruction. Furthermore, even if categorization rules may be acquired symbolically, the implementation of these rules requires sensorimotor elements.
I think that the distinction between induction and instruction learning is very interesting as well, particularly the fact that with instruction, categorization itself can take longer, but the number of trials needed for training is lower. In the paper they state that categorizing using induction learning is faster than with instruction learning. I would in turn assume that the reverse holds for the number of required trials: namely, that while it is eventually faster to categorize using induction learning, the amount of time it takes to reliably learn the category is much higher than with instruction learning. This to me has interesting implications for how we might apply this knowledge to current learning strategies.
Melika, good summary (but it's proposition, not preposition. "To err is human" is a proposition; but "To" is just a preposition...)
Jenny, I think your points were right, but you may be mixing up the speed of responding (with induction it can be faster once you've learned it) with the speed of learning (i.e., the number of trials needed for learning with instruction).
One of the topics this writing explores is differences in learning and how they impact rates of survival. Not surprisingly, having more ways to access this essential tool is better and increases those rates. Indeed, one of the findings of the experiment was that entities with only trial and error as a means of learning died out within a few generations, unlike the ones that also had instruction (which required fewer training trials). One other big point to take away from this text is the evident transition from show to tell, as telling superseded showing. My question is, can we consider writing as the next big transition that happened?
Garance, yes, writing is the next big transition, but unlike language itself, it did not need a change in our genes or our brains, so it was not a (biological) evolutionary change: the invention of "memes" (pictographs or letters) was enough. (Think of the laziness of evolution.)
This paper discussed category learning through induction (direct learning through sensorimotor experience) and instruction (“overhearing” other creatures who know categories that have yet to be learned). I found it interesting, though not surprising, that it is faster to apply categorization that has been learned through sensorimotor induction compared to instruction, as Melika commented above. Moreover, despite the fact that you only need to directly acquire 1500 category names (plus a few linguistic rules) to “get language’s full expressive power” (p. 13) (after which you can use instruction to learn new categories), we continue to learn new categories through induction throughout our lifetime. This seems to suggest that there are some significant advantages to learning categories through induction.
Jessica, yes, that's right. But why is it still useful to continue direct sensorimotor learning lifelong once you have language?
Although language is fast and easy, "a picture is worth a thousand words". Sensorimotor learning helps ground feature-names, since language only provides approximate categories. Although a verbal description can always be improved and modified with more words, sensorimotor learning is still essential. Sensorimotor grounding only becomes unnecessary once you have sufficient grounded categories to allow you to describe all other concepts.
Out of pure speculation, I have a sense that our continuation of sensorimotor learning has to do with meaning. What I mean by this is that when someone tells you that 'the cat is on the mat', you can picture a cat on a mat in your head, but you're still missing the colour of the mat, the size of the cat, etc. Although you could ask the speaker for these extra details, I still think the way we construe meaning in our heads when we imagine a cat on a mat is a lot different than when we actually see a cat on a mat. It's like when you're reading a book: the world you imagine in your head when reading will be a lot different from the world another person is creating in their head when reading the same book. The only way for us to ensure that everyone means the same thing is to have everyone experience the same real-life, sensory experience/event. Then we can ensure that everyone understands that there is a cat on a mat in the living room vs. a cat on a mat in the sky.
I got carried away, but hopefully this makes a little bit of sense?
Our ability to make propositional statements came about because of the evolutionary benefits of instructive category learning. Once you have some inductively acquired sensorimotor categories, all you need is the ability to form propositions in order to be able to state any proposition possible. This means that you can transmit categories to others who possess these basic abilities, which saves them the trouble of having to learn these categories through firsthand experience. Instructive category learning in turn confers an evolutionary advantage if the category knowledge helps you survive or reproduce. However, propositional abilities don't just let us transmit categories that are useful to survival and reproduction. Actually, in conferring natural language abilities they make it possible to express any proposition at all, according to Katz's definition of language. So they also let us say silly things like "a Gruffalo is a warty, hairy monster." Is the ability to create useless categories a spandrel of our propositional abilities, then, or are categories like Gruffalos evolutionarily useful in some way I'm not seeing?
Aya, it's worse than "Gruffalos", which is just fiction for entertaining children or ourselves. See reply to Rebecca, above, about lying, cheating, and climate destruction. One might have added nationalist and religious wars. All impossible without language. (Tribal warfare is not impossible without language, but far less lethal.)
As I see it, these are the three main questions this article is aiming to answer:
What is language?
According to Katz’s version of the “glossability thesis”, “a natural language is a symbol system in which one can express any and every proposition”. A symbol system is a set of symbols (arbitrary shapes) that can be combined together according to a set of formal rules based only on their shapes (not on their meaning) into semantically interpretable propositions (true/false statements about the world).
Why has language emerged? What could be its adaptive advantages?
As demonstrated by the simulation, symbolic instruction (using language) allows one to learn categories faster and to avoid the risks associated with trial-and-error learning (inductive learning from sensorimotor experience). The power of language lies in its capacity to share knowledge by attributing arbitrary names to real-world categories.
How has language evolved? And from what?
Universal Grammar (UG) is central to language and is inborn, so language must be the result of natural selection. From the comparison with apes, it seems that the essential cognitive components were present before language. It could be that language evolved from communication by pantomime, which is limited in its ability to convey new categories. Language might then have evolved from the new cognitive capacity to form propositions, which might be of genetic origin or have arisen through a process of Baldwinian evolution (those most motivated to learn through instruction benefited from this advantage and that disposition became genetically incorporated). Language was most likely gestural at the beginning, but vocal communication allowed one to free up one's hands and communicate at a distance, so the capacity migrated to the auditory modality. Thus, there have probably been more and more symbols (progressively more arbitrary) representing new categories and more rules allowing for a greater variety of combinations. But we can't really refer to these more basic symbol systems as protolanguages, because there is no qualitative difference between them and a natural language.
Joann, excellent understanding and summary. Now feed the article to GPT and see whether it can do as well as you. (My bet is that it can't.)
The passage from pantomime to propositions is still not clear (though the path to arbitrariness is).
Also, the adaptive origin of UG is still a bit of a mystery. The power of language comes from the power of propositions (which would have been universal to any propositional language at all). The universality of UG is a somewhat different matter -- unless propositionality itself is somehow linked to UG: is it?
I really liked that this reading allowed me to better understand concepts that were previously mentioned in class. Indeed, I think I got to understand the difference between pantomime and proposition. Pantomime refers to the imitation of the real world through iconicity, which means that gestures are similar to what they refer to. For example, the word “cuckoo” could be considered iconic because its sound mimics to a certain extent the calls made by the bird to which it refers. On the other hand, a preposition is any statement that contains a subject and a predicate with an assigned truth value (either true or false). Lastly, I also found really interesting the idea that pantomime could help ground symbols in their referents. In fact, since pantomime functions through iconicity, it can help with grounding because it uses symbol shapes that resemble the shapes of their referents (like the word cuckoo, as I previously mentioned). Thus, pantomimes make the connection between symbols and their referents easier by grounding the link between the two through the similarity of the stimulus.
Valentina, fine synthesis (but it's still proposition, not preposition ;>)
This reading presents a convincing, and in my opinion, intuitive account of how language developed and illustrates the highly adaptive function it serves. The authors present the ability to teach and communicate categories (kinds of things) as the driving force for the development of language. Humans, as a social species, thus developed language in order to more effectively communicate categories to one's kin or collaborators. The authors demonstrate how this would be an adaptive and highly favourable development, through an artificial life simulation of creatures that could only learn categories through induction and creatures that could learn categories through induction or instruction. The reading concludes with a discussion on the importance of learning categories through induction, as it cannot be "instruction all the way down" (13).
Shona, good summary, but see replies to the other commentaries.
I found this article to be very interesting, in particular with regards to the points on the migration of language from gesture to speech. As I was reading the article, I was often intrigued at how exactly the shift took place. I understand how the device for propositions formed to advantageously pass on learned categories, and how it is significantly more advantageous to have it via speech compared to gestures, due to certain environmental pressures favoring speech. Still, the process of that shift keeps puzzling me, namely looking at the structure and capabilities of proto-language. I wonder if having more clarity on this would allow us to better understand UG and help us in reverse engineering it?
Omar, well it's all necessarily speculative, because language evolved a long time ago, probably only once, and left no fossils. Perhaps more can be tested with much more detailed computer simulation of the ancestral environment at the origin of language. And perhaps a more detailed look at its neural implementation in humans (but not by invasive experiments on all the other species that lack language!)
But what do you mean by "proto-language"?
(I'd say the transition from pantomime to propositionality (from "show" to "tell") was a much more radical one than the transition from gestural language to oral language, since the transition from iconicity to arbitrariness would already have occurred before gestural pantomime became gestural language. Once word shape becomes arbitrary, the medium stops mattering, and language is free to become amodal, and users can favor the most efficient medium. There was, however, a lot of neural evolution to make speech more and more efficient, in view of its adaptive advantages.)
It is argued that in a symbol system such as arithmetic, the shapes of the symbols are arbitrary because they do not represent the meaning of the symbol. For example, 0 is not shaped like nothingness, and 2 is not shaped like two objects. The shapes 0 and 2 do not have meanings, but they can be interpreted as meaningful, once learned through instruction. However, I wonder if symbols of written pictographic languages are considered non-arbitrary, as they represent objects through simplistic drawings. For example, when counting from 1 to 3 in Mandarin, we write “一” for one, “二” for two, “三” for three. Although the strokes individually do not have meaning, they gain some as soon as 1 stroke is learned to represent one object, so that the character with 3 strokes represents three objects, the number three.
That is a really interesting point! Chinese characters evolved from Oracle Bone Script, where the symbols are based on what the object looks like in real life. For example, the Chinese character for ‘mountain’ was basically just triangles to resemble the outline of a mountain, and now it has evolved into ‘山’. I want to mention that, as the characters evolved in modern China, they became simpler and simpler, and they resemble their referents less and less. The reason for the simplification was to increase people's literacy and improve the efficiency of language circulation. It is interesting to see how the language is evolutionarily adaptive, and it shows how significant instruction is.
Anaïs and Andrae, good points. But remember that language did not originate in writing! It originated in either gesture or vocalization. Possibly some of the iconic pictographs appeared along with the iconic gestures, which would be further evidence for gestural origins. But iconicity is still showing, not telling. Telling began with propositions, and for propositionality, the word-shapes must be arbitrary (as we cannot depict everything iconically, and we certainly can't express every proposition iconically). Language itself certainly preceded written language, even if non-propositional pictograms preceded language.
DeleteThere is something extremely interesting, however, about written Chinese, and it is related to MinSets: Although the quasi-iconic characters no linger resemble their referents, the basic characters, or character-pairs out of which all the rest of the words are composed make it possible for children to guess the meaning of unfamiliar words in a way that is not possible for learners of alphabetic languages.
I invite the Mandarin-speakers in this course (or speakers of any of the other Chinese dialects, whose spoken forms are very different but whose written forms are the same in Chinese characters) to compare the advantages -- in 1st-language learning -- of Chinese compared to alphabetic written systems. It might cast some light on Minsets (what are those?). But remember that children learn to speak before they learn to read.
MinSet, short for Minimal Grounding Set, is the smallest set of words to be grounded so that we can use it to define the other words in the dictionary. After learning them through sensorimotor induction, we can learn the rest through instruction.
I'm guessing one of the advantages of learning the written form of Chinese is that once you've learned the basic radicals (elements of the characters), which could be considered the MinSet of Chinese(?), it will be easy to build your vocabulary, since you already have a solid foundation in the language. So despite the fact that there are over 50,000 characters, we don't have to learn every single one of them, since it is possible to guess the meaning of a character by piecing it together. Unlike in an alphabetic system, where different combinations can mean vastly different things, knowing one word does not mean you'd know the meaning of another. However, this only applies to traditional Chinese, as simplified characters don't have those features anymore.
This article was very interesting and provided a great summary of much of the course so far, but it still left me with the question of why UG exists. If language was encoded into genetics after humans figured out that instruction of categories was a powerful tool (an explanation I find persuasive), and humans who had more motivation and desire to both teach and learn were more successful, how did UG help give us that push? It seems that UG, under this explanation, is still not required, or potentially even that helpful, for language learning and use. Why have constraints on grammar at all if those constraints aren't necessary for learning and using language?
Hi Marie, this is exactly what I'm perplexed by as well. I agree with the proposed theory in this article, stating (grosso modo) that language evolved because of the power of learning arbitrarily named categories by instruction to increase the efficiency of communication. Also, I love the idea of motivation explaining why humans developed language but not related species. But I still struggle to see the evolutionary benefit of a universal grammar, particularly in view of something mentioned in the 8a paper -- that children begin using grammar even if it does not confer any communicative benefit. What are we missing here?
Marie, Lili, & Kristi, you are right to be puzzled about the adaptive origin and value of UG, i.e., how and why did it evolve?
There are some answers for part of UG, based on computational optimality, but that is only true for a small part of UG; it does not explain all or most of UG.
But Chomsky has conjectured that the "laws" of UG may not be grammatical at all. They may be laws of thought. (N.B., not the laws of logic, which govern all propositions.) The hunch is that propositions are thoughts: thinkable thoughts. And that language (including UG) evolved as the means of expressing thought in arbitrary symbols -- whether gestures or words (or, eventually, script).
UG violations would then be unthinkable. They do not conform to the laws of thought. Language and UG, which evolved to express any proposition, could not express these non-propositions. If spoken (or written) they truly make no sense:
"John was easy to please Mary" ("*J") would be one of an infinity of sentences that have no thought that corresponds to them (whereas "It was easy for John to please Mary" does).
"*J" is a truly unthinkable sentence, whereas Chomsky's famous "Colorless green ideas sleep furiously" is perfectly thinkable (and perfectly compliant with both UG and OG); it is just anomalous, mainly because it contains logical contradictions and contradictions to fact. Maybe it's more accurate to say that as a proposition it is either false or logically ill-formed.
[This is all far too abstract, inchoate and speculative for this course, but it's one possible way of explaining why the tree structure and rules of UG are needed to design a symbol system that can express all (and only) propositional thoughts, true and false.]
Hi Prof, the explanation may be abstract, but it helped me understand the evolutionary benefit of an innate UG... because language (the symbol system) in itself needs a structure to have meaning and make sense to us. Would this also be similar to how pantomime transitions to arbitrary convention to increase the efficiency of communication? So language/propositions may have evolved UG to increase efficiency and clarity of communication?
1. In response to the professor's explanation: couldn't the same question be posed here? Why is it that all human thought should be structured in the same way to be evolutionarily adaptive? If the UG is just a syntactic manifestation of the way we think, meaning that UG-non-conforming sentences are thus unthinkable, why is it that we, as a species, share this collective structure, guiding the way we think?
2. In response to Marie's comment: another interesting perspective might be that the UG is evolutionarily adaptive because it allows the transfer of knowledge across generations. If people ONLY learned language from their environment, we could expect strong divergences in the way that different regions of the world communicated (on an even more profound level than they do already, because Chomsky demonstrated that all dialects share UG). Information gathered by ancestors would likely not survive the test of time as well, hindering progress. This is really all just speculation, though.
You gave an exercise to the reader: "For those readers who have doubts, and think there is something one can say in another language that one cannot say in English", well I would like to take you up on that, but in the reverse, wherein there are things one can say in English that cannot be said in other languages.
The Pirahã people of Brazil, before Lusophone public schools arrived in 2012, had a language with only two numerosity categories, themselves being relative terms for "small quantity" and "large quantity". It is linguistically impossible to communicate the English sentence "there are more than twelve but fewer than fourteen students in my seminar" in Hiáitihí.
If we maintain the definition you set forth in the article for natural language, I wonder if this can't be an example on the continuum with protolanguage, or protolanguage itself, and I'd also like to ask if you've looked into extremely simple languages documented by anthropologists in the Amazon and New Guinea when thinking about the evolution of language.
I know we keep excluding non-content words like "the, if" from discussions about categories, and it makes sense why for now. But it makes me wonder exactly how they arose. For "if, or, when", it seems it might be related to logic. Is there a whole parallel discussion about the origin of these types of words? My intuition is that the origin is supposedly different, or we would not exclude them from this discussion.
This is an interesting point; I was thinking that these words are likely ALL learned ("a" and "the" don't exist in every language) because they are useful in creating propositions, which the Pinker article argues is the adaptive function of language. Instruction can become more useful for learning categories if it is not just content words, but can also explain exceptions and conditions of categories (a thing that looks like an apple is edible IF it is not made of plastic).
Having read Blondin Massé et al., one thing that I'm curious about is the role of pre-existing vocal communication in language development. Primates had many vocal calls to communicate things such as the presence of predators or food. Obviously this wasn't language, since it could not express any and every proposition (I'm not sure whether it could be said to express propositions at all), but it seemed to me that the paper was able to nicely tell the story of language going from show to tell without needing this system of communication that already existed. The story told made it seem to me as though language developed separately and then piggy-backed off the existing system's physical structures to turn from pantomime into verbal, propositional communication. Are they totally separate systems?
Hi Stephen, if I've correctly understood, the existence of working vocal tracts in primates was an obvious necessity for propositional language to switch from gestural to vocal - you said it well in that language "piggy-backed off the existing systems". I imagine that the pre-existing vocal calls would serve very little function in the development of propositional language if it indeed originated with pantomiming. I'm not informed on the evolution of our vocal tracts, though I imagine that the advantages of vocal calls could explain why the vocal tracts reached the level they were at when language developed; reading 8a mentioned that our (modern) vocal tracts are tailored to the demands of speech, so maybe we can intuit that before speech, they were tailored to the demands of communicative vocal calls.
The paper is very inspiring; it elevated the concepts of showing and telling into "induction" and "instruction". To my understanding, the development of language begins with grounding the MinSet through sensorimotor induction, then forming categories and propositions by learning from instruction. As proposed, the origin of language is a shape-based symbol system. It can be manipulated and combined into an extensive network like the dictionary, which possibly explains how learners by instruction acquire greater capacity and outperform learners by induction in the experiment. Yet the latter remains a core part of language, which allows it to be evolutionarily adaptive.
Hey Kristie. You make a lot of good points! Language's evolution seems anchored in the categorization of elements into discrete groups, forming the basis of understanding and communication.
Thus, within this system, the arbitrary nature of symbols, whose shapes are independent of their meanings, maybe suggests a critical autonomy of syntax from direct meaning.
As such, the power of 'tell' is highlighted, for example in conversing without looking. Yet this contrasts with natural language, where symbol manipulations involve not only syntax but also semantic considerations.
Therefore, relevant to our previous classes, I believe that the missing link, the symbol grounding problem, addresses the connection between symbols and their real-world referents, which necessitates the sensorimotor system for induction learning at its origin.
Drawing the parallel of UG being the “hard problem” of language, as it is some narrow, innate property of language that is hardwired in our brains and the evolution of which is still largely unknown, I am curious about its role in making language special to humans.
In regards to the uniqueness of language for our human species, the discussion of chimps was an interesting point to ponder. In short, it is not the intellectual capacity that chimps lack but rather the motivation and compulsion for language. The reason for this, the paper reads, is the fact that we humans are more social, cooperative, collaborative, etc. But I also wonder, how much does this lack of motivation have to do with the innateness of UG in humans, a genetic trait that chimps (and other animals) simply lack? In the previous paper (8a) there was a discussion of Hinton and Nowlan's simulation of the Baldwin effect (section 5.2.3), where they showed that learning can indeed guide evolution. However, they also found that there is a perpetual selective pressure to make learnable connections innate, and that this selective pressure greatly diminishes as most of the connections become innate: the more connections are "hardwired" instead of needing to be learned, the less time it takes to learn the rest and the less likely it is that learning will fail for them.
Therefore, I wonder if that’s what UG is doing in humans; perhaps it is acting like some sort of catalyst (the missing ingredient in animals) in the process of humans gaining the motivation and compulsive need for language?
The Baldwin effect simulation illustrates that learning can guide evolution, turning a challenging trait into an advantageous one. This simulation showed that when a trait becomes more innate through learning over generations, the need for learning it anew diminishes, and the chances of failing to learn it decrease.
Applying this to UG and human language, it suggests that UG is evolutionarily important, making language uniquely human. As more linguistic connections become innate in humans, perhaps as a result of our species' highly social nature, the need for learning them from scratch decreases. This innateness gives humans a unique motivation and compulsion for language, a trait absent in other species like chimps. Chimps may not lack intellectual capacity for language but rather the innate linguistic framework and motivation that humans possess.
UG, therefore, might not just be about facilitating complex language acquisition but also about instilling an intrinsic need for communication in humans. This innate predisposition sets humans apart, making language not just a tool but a fundamental aspect of our nature and social interaction. The evolution of UG in humans could be a key factor that enhances our capacity for complex language and embeds the motivation for its use, a combination not found in other species.
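Since Hinton and Nowlan's simulation keeps coming up in this exchange, here is a rough sketch of its logic (the parameters are my approximate recollection of the 1987 setup, and the population is reduced for speed, so treat the details as assumptions): each genotype has 20 genes that are fixed-correct ('1'), fixed-wrong ('0'), or learnable ('?'); only the single all-correct configuration pays off, but a creature with no fixed-wrong genes gets up to 1000 random guesses at its '?' genes, and the sooner it succeeds, the higher its fitness. Over generations the learnable alleles are progressively replaced by fixed-correct ones: learning "guides" evolution in the Baldwinian sense discussed above.

```python
import random

GENES, TRIALS, POP, GENS = 20, 1000, 200, 30   # POP reduced from the original 1000
random.seed(1)

def random_gene():
    # Initial allele mix (roughly as in Hinton & Nowlan): 25% correct, 25% wrong, 50% learnable
    return random.choices(["1", "0", "?"], weights=[1, 1, 2])[0]

def fitness(genome):
    """Baseline fitness 1. If no gene is fixed-wrong, up to TRIALS random guesses
    at the '?' genes can hit the all-correct setting (probability 0.5**unknowns
    per guess), and fitness grows with the number of guesses left over."""
    if "0" in genome:
        return 1.0
    p = 0.5 ** genome.count("?")
    for trial in range(TRIALS):
        if random.random() < p:
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

def crossover(mom, dad):
    cut = random.randrange(1, GENES)
    return mom[:cut] + dad[cut:]

pop = [[random_gene() for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENS):
    weights = [fitness(g) for g in pop]
    # fitness-proportional mate choice, single-point crossover, no mutation
    pop = [crossover(random.choices(pop, weights=weights)[0],
                     random.choices(pop, weights=weights)[0])
           for _ in range(POP)]
    alleles = [a for genome in pop for a in genome]
    print(f"gen {gen:2d}: fixed-correct = {alleles.count('1')/len(alleles):.2f}, "
          f"learnable = {alleles.count('?')/len(alleles):.2f}")
```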
I found this paper and its explanation of the evolution of language to be extremely compelling, especially since it grounds the evolution of language in tangible adaptive benefits that it confers and functions that it serves. Category learning by instruction rather than induction would be immensely beneficial to a species attempting to navigate and survive in a dangerous world rife with peril. There is a concrete benefit to survival and reproductive fitness if new offspring can learn what to do with what more efficiently, as through instruction rather than induction. Learning that you need to run away from a bear, or that you shouldn’t eat this mushroom, because you are told to do so rather than because of experience is a huge leg up in the competition of survival and likely increased our fitness as a species substantially. However, I am struggling to understand how and why language and propositions in the gestural modality eventually migrated to the auditory modality. Was it a matter of convenience to express propositions through sound? How did our ancestors settle on a common sound system to express thought? Was this via mirror neurons? I’m not convinced by the claim that naming categories and combining names into propositions to define new categories migrated to the auditory modality because the sensory modality and shape of a category became obsolete with the arbitrariness of names, but maybe I am not understanding the intricacies of the claim. Where is the evolutionary benefit in using sound rather than gestures? Is it perhaps because one can make more vocalizations than you can gestures, due to the relative machinery, thereby allowing one to express more propositions?
I think language migrated from a gestural modality to an auditory modality simply as a matter of practicality. Gesturing requires both having the thing you are talking about in front of you and having free hands, which are not always available to you. Then, people went through various processes to increase efficiency: first, by developing conventional gestures for these objects (based on iconicity) instead of miming the object being talked about, and then by moving from iconic gestures to more economical arbitrary gestures. After that, because using your hands presupposes that you are not doing anything else with them, they realized that the vocal modality could also increase efficiency, in that they were able to name the categories. All of this to convey categories to each other. The reason why language erupted in our species is rooted in our motivation to transmit categories to one another, which could be representative of a Baldwinian evolution (a predisposition to learn categories being favored).
I found the explanation of the mushroom categorization experiment in Section 4.2 of "Symbol Grounding and the Origin of Language: From Show to Tell" by Massé et al. (2013) to be the most intriguing, especially as someone with an interest in research. More specifically, the results of this simulation show how induction (trial-and-error) and instruction (“hearsay”) learning types differ.
To briefly summarize, virtual creatures were tasked with learning categories in an artificial environment containing varying mushrooms labeled as A, B, and C (Massé et al., 2013). Some creatures were able to learn via trial-and-error learning, while others had this ability along with the "hearsay" advantage. As well, mushrooms A and B could only be learned via supervised, feedback-based learning, while mushroom C was a category that could be learned through either instruction or induction. Overall, due to the lengthiness of feedback-based learning in comparison to the much more efficient instruction method, this simulation demonstrated how the instruction-type learners had the evolutionary advantage.
One thing that I liked in this article is the connections made between different aspects that we covered in the previous lectures. Here, the symbol grounding problem is directly associated with categorization. Understanding a word may be done by looking at its definition in a dictionary, and looking up the definitions of the words we didn't know in the first word's definition, etc... but how can we make sense of the words included in the minimal grounding set? How can we give a meaning to a series of symbols that can't be defined by anything other than their shapes? Maybe I'm mistaken about what kind of words we could find in the minimal grounding set, but as categories are defined by features, we can always shrink their definition to the series of features that constitute each category, and so on. I don't see how sensorimotor experience can give a meaning to the words that constitute "things" we can feel in the environment.
ReplyDeleteA MinSet refers to the smallest defining set of words in a dictionary. It is the minimum set of words from which all other words in the dictionary can be defined. The interesting aspect of MinSets is that they provide insights into the structure and organization of language. By identifying the MinSet, we can understand the core concepts and categories that form the foundation of a language. Additionally, studying MinSets can help us analyze the relationships between words and how they are interconnected within a linguistic system.
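For readers who like to see the idea mechanically: if you treat a dictionary as a directed graph in which each word points to the words used in its definition, you can strip away everything that does no defining work and be left with the kernel; a MinSet is then a smallest set of words that, once grounded, breaks every definitional circle. Here is a hedged toy sketch in Python on an invented eight-word mini-dictionary (the real analyses in Vincent-Lamarre et al. (2016) work on full dictionary graphs and compute MinSets as minimum feedback vertex sets).

```python
# Toy sketch: prune an invented mini-dictionary down to its defining "kernel"
# by repeatedly discarding words that no remaining definition uses.
mini_dict = {
    "move":    {"change", "place"},   # deliberately circular definitions
    "change":  {"move"},
    "place":   {"change"},
    "animal":  {"move", "place"},
    "big":     {"place"},
    "horse":   {"animal", "big"},
    "stripes": {"big", "place"},
    "zebra":   {"horse", "stripes"},
}

kernel = dict(mini_dict)
while True:
    used = set().union(*kernel.values())              # words still doing defining work
    removable = [w for w in kernel if w not in used]  # defined, but never defining
    if not removable:
        break
    for w in removable:
        del kernel[w]

print("kernel:", sorted(kernel))
# -> ['change', 'move', 'place']; in this toy case, grounding the single word
#    "move" breaks the circle and lets every other word be reached by
#    definition alone, so {"move"} is a MinSet here.
```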
ReplyDeleteThe reading clearly emphasizes how learning through instruction, compared to only induction, is advantageous in terms of category learning. This is where the evolutionary (adaptive) explanation of language lies, in which it is a candidate for explaining the “why” of our language capacity. I was wondering what other advantages vocal language has besides “freeing the hands” and lifting the burden of distance during communication and category learning. Since the huge shift/migration to vocal language (“behaviorally, neurally, and genetically”) is key to human language capacity, I was curious about which other ways vocal language helps with efficient categorization.
ReplyDeleteThe article uses Katz's Glossability Thesis to determine whether something is a natural language. The thesis asserts that a natural language is a symbol system that can express any and every proposition. I found this definition particularly interesting: it's not the case that a language can say or express anything, i.e., every feeling or emotion... there may still exist "unsayable" thoughts (though Chomsky's understanding of UG as laws of thought may undermine this idea?). What the definition really appears to be saying is that all things that are language-expressible are expressible in any language. What does this mean for unsayable thoughts? Is there such a thing?
ReplyDeleteI thought this was a fantastic and clear discussion of the evolutionary origins of language. I found the possibility that higher apes have the cognitive apparatus to develop language but merely lack the intrinsic motivation to use named categories for communication extremely interesting.
ReplyDeleteI have a somewhat tangential question though. The paper described that there is a minimal grounding set of around 1500 words, which, once grounded, would allow all the words in the dictionary to be learned through definition alone. To tie some concepts in the course together, does this imply that if the man in Searle's Chinese Room were given the English (and therefore grounded) definitions of the correct 1500 Chinese symbols, he would be able to understand the Chinese symbol manipulations he was carrying out? If so, does this mean he can now "forget" all the rules for symbol manipulation he was initially given, so long as he remembers the shapes of the symbols themselves?
Daniel, interesting point. But even if the man were given the 1500 Chinese symbols, he still wouldn't have time, assuming his time in this room isn't infinite, to learn the other symbols that were part of the manipulations he was carrying out. He would have needed to learn those extra symbols through hearsay. If the symbol manipulations he was carrying out involved only the 1500 Chinese symbols, then he would be able to understand everything; and even if there were more than the 1500 symbols, he'd probably be able to do a good job of guessing what his manipulations meant, but he still wouldn't fully understand.
DeleteAs for your other question, he wouldn't be able to 'forget' the rules for putting the symbols together, because then his manipulations wouldn't make sense. If you mean forgetting as in not having the rules at the front of his mind like he did before, then maybe yes, because he'd be more focused on the meaning (if he actually knew enough of the symbols) rather than on the symbol syntax, but he'd still need the syntax in the back of his mind.
Hi Fiona. I was assuming that the man in the Chinese Room would have internalized the Chinese-Chinese dictionary, so upon learning the 1500 grounded words in Chinese, the rest of his vocabulary would become instantly grounded in the way "zebra" becomes grounded after grounding "horse" and "stripes". And yes, by forgetting, I mean I think it's possible that once words are grounded, his application of syntax might become automatic, as opposed to effortful (like consulting a Chinese-Chinese dictionary)... If this were the case (we will never actually know, of course), it might suggest that UG is an emergent property of symbol grounding...
DeleteI appreciate how this reading concisely synthesizes some main points we have learnt while drawing out their connections, from categorization (doing the right thing with the right kind of thing) to computation (rule-based manipulation of arbitrarily shaped, semantically interpretable symbols). It's definitely a change in perspective to see computational languages as a subset of language, that other animals lack the motivation to use language, and that "every proposition is also a category inclusion statement" (Massé et al., p. 3).
ReplyDeleteThe most interesting revelation to me is the fact that formal languages like math, logic, and computer code are not separate entities, but are inherent components of natural language. I think it suggests that these formal languages have evolved from human communication, blurring the boundaries between linguistic and mathematical thought, highlighting the unity of human expression and the profound interconnectedness between language and cognition.
ReplyDeleteWhat stood out to me most in this reading was the artificial-life simulation showing that there is an evolutionary advantage to learning things via instruction ('hearsay') over induction (sensorimotor grounding). Further, as Harnad mentioned earlier, Chomsky thinks that the laws of UG may not be grammatical rules so much as laws of thought. And, since language evolved to allow us to express thought in arbitrary symbols, as thought gets more advanced our symbol systems do as well. We find ways of condensing our thoughts into the fewest possible symbols, and the most intelligent beings are seen as being able to do this best. This sometimes doesn't seem to be the case, however, with people's use of condensed expressions like LOL or OMG.
ReplyDeleteI was also very interested in the fact that there is an evolutionary advantage to learning by instruction versus induction. And an interesting point about LOL and OMG. I, however, don't necessarily agree that their usage goes against the idea that the most intelligent beings can condense thought into as few symbols as possible. At the end of the day, they are just acronyms representing a phrase, and although they are used more casually, they still follow the idea of condensing thoughts into symbols. They were also both added to the Oxford English Dictionary!
DeleteI watched the documentary on Washoe the chimpanzee following this reading. For those who don't know, she was raised among humans using sign language. She learned approximately 350 signs of ASL, showed some sentence-structure ability, and showed emotional intelligence, like signing "tear" after someone signed to her "my baby died", even though chimps don't shed tears. It is hard to determine what level of understanding she actually had, as many of the gestures were surely simple imitation and operant conditioning (in my opinion). However, some of her actions really do suggest a deeper understanding than pantomime or basic abstract gesturing. For example, she would sign "brush teeth" with the hand opposite to the one in which she was to be given the toothbrush. She also signed "doll in my drink" upon seeing a doll in a cup. She became quite good at chaining words in the same order, suggesting some understanding of syntax. But it didn't go far. Is this because of her lack of UG and of the neurological structures for language? The idea of motivation is difficult for me to understand. Could we consider her somewhat impressive grasp of language to be due to external motivation from her unnatural environment?
ReplyDeleteThis is very interesting! I haven't watched the documentary, but I did some research, and what stood out to me was that Washoe was able to create new words or phrases to describe novel things. For example, she signed her own name for a swan by calling it a “water bird." The fact that she used existing words to describe something new is fascinating! I also wonder if the concept of the sensitive period for language acquisition would apply to Washoe, as she was very young during this experiment.
DeleteA key point to keep in mind when discussing the evolution of language is that language is not so broad as to encompass all of communication: the defining element is its propositional ability (infinite!), which is not seen in more rudimentary forms of communication like the bee dance mentioned above.
ReplyDeleteHaving gone down a bit of a rabbit hole starting with a claim about Chomsky’s specific beliefs about the genetic innateness of language, I came to interrogate my own intuitions. I will say it seems incredibly unlikely that language is a strict product of genetic mutation (as Chomsky has been challenged on plentifully) and just purely based on our understanding of how long genetic evolution actually plays out vs how fast language came into the scene.
Universal Grammar similarly has not been pointed to by empirical, "look, there it is in the brain!" science (though I don't think that is necessary for insights to be made, just as our current obsession with localization of brain function seems tangential to cognitive science, but that's getting off track). A valid criticism of UG in my view is its post-hoc-ness about learning and its subsequent backfilling of "this is how humans learn languages". I found an earlier "Stevan Says" intriguing: with the advent of ChatGPT and the surprising success of probabilistic language models, there is compelling but admittedly merely intuitive evidence against the importance of in-baked UG, in favour of just in-baked probabilistic learning (which is still not explained, but which at least takes away the apparent mystery of the origin of UG).
While many species learn through direct experience, humans have harnessed the power of instruction, using the ability to teach and learn from one another to advance greatly compared to other species. This ability to transmit knowledge not by actions but through language allows us to transform concrete experiences into the abstract constructs of our communication. The story of how language evolved from miming to the complex way we speak today shows how adaptable and innovative humans are. This origin story suggests that language is an ever-evolving entity, shaped by our need to communicate more effectively and efficiently. The article posits a grounding kernel, a core vocabulary acquired through induction, that becomes the scaffold for further linguistic construction. I wonder, how does this cognitive leap influence modern-day language learning? Can understanding the transition from induction to instruction enhance our approach to acquiring new languages? And how might language continue to evolve in a world increasingly influenced by technology? Will the digital age introduce new symbols, new structures, or even new semantics into the fabric of human communication? Could the incorporation of technology in everyday communication lead to a new form of language altogether, one that is perhaps more universal or even multidimensional? I don't think it would be crazy to envision a future where our words may not just be spoken or written but also projected, coded, or more. Maybe technology will alter the trajectory of human language.
ReplyDeleteHi Amelie
DeleteI wanted to add on to the grounding kernel aspect of your skywriting and a weird connection I made with UG. It might sound odd, but I see a similarity between the kernel's role in dictionaries and the concept of Universal Grammar (UG) in comparison to Ordinary Grammar (OG). While UG is innate (inborn), unlike how the kernel is learned, the kernel words act as the fundamental core (non-deducible aspect) of dictionaries, serving as the foundation without being further reducible to define other words. Likewise, UG forms the basis of language, and all other rules (OG) are derived from UG without violating its core principles. Applying this analogy to T3 robots, if we embed within them the essential basics like UG, innate CP, etc., they should have the capability to learn the rest on their own like humans.
The human capacity to name categories and comprehend propositions with truth values is fascinating. While many of our categories are learned, Universal Grammar (UG) is inborn. The question of why UG is innate and whether it has always been so is intriguing. Unlike most species that learn categories through induction, humans can learn them through instruction via propositions and truth values. This evolutionary advantage is profound; it enables us to communicate using propositions and truth values, fostering the rapid growth of knowledge and culture. The origins of UG and its role in human cognition remain captivating mysteries. How did this innate linguistic framework evolve, and what prompted its emergence?
ReplyDeleteHi Rosalie
DeleteThink of it this way: UG is innate, so we can infer it is biological, so we are born with it! But what does that mean? UG has a genetic basis, and what prompted its emergence is rooted in evolution. On the assumptions of evolutionary theory, an attribute change that occurs in an organism is a consequence of environmental pressure. So we faced something that pressured us to improve the way we learn language, and this gradually became integrated into our genome, shaping the evolution of future generations. Sometimes I think about this in terms of animal communication and language: animals have the capacity to learn language but they don't need to, because their current system is efficient enough for their survival, which is probably also why they don't possess UG.
For me the most fascinating and interesting part of the reading was that despite the intelligence of chimps and their ability to learn elements of human language, they are not motivated to use language as humans do. They clearly have different cognitive capacities than we do, one being that they don't have UG, but that still doesn't explain why they don't. Their communication system consists of gestures, vocalisations… maybe for them these systems are sufficient for their needs? And that's why their motivation is reduced.
ReplyDeleteHi Marine, I think you are right in saying that the chimps' current communication system is sufficient for their survival and continued reproduction. They, like us, have the capacity to learn language, but they probably do not have the necessary environmental pressure to push them to develop a more efficient form of communication, which is probably also why they don't have UG. I think this can be tied back to the fitness methods we read about with evolutionary psychology in last week's skywritings, in that the acquisition of language is probably not an enhancement of the chimp's reproductive success and survival.
DeleteInitially, I had difficulty understanding why UG cannot be learned through exposure to grammatically correct sentences alone. However, Professor Harnad's analogy in “Chomsky's Universe” helped clarify the picture, and I thought I would share it here for anyone still struggling. It can be compared to a situation where one would learn the rules of chess simply by observing chess games that adhere to the correct rules. In turn, they could play chess without making mistakes, following the rules correctly, just by having watched error-free games with no feedback or instruction. Since one could not possibly learn the rules of chess just by watching games without being explicitly told what is right or wrong, we can deduce that the chess rules in that case would have to be innate.
ReplyDeleteNevertheless, I still have questions about the universality of this claim. How can we be certain that nobody has ever made a UG mistake, or that no child has been corrected for a UG mistake? Is it sufficient to rely on the “majority” of people to make these claims? I am also thinking about a farfetched situation (and please correct me if it shows a lack of understanding of what UG is on my part); let's say we intentionally speak in sentences violating UG rules, specifically designed by linguists, to a child. Would the child struggle to acquire language, only understand sentences conforming to UG grammar, or start using non-UG grammar themselves?
Arbib’s mirror system hypothesis places a spotlight on pantomime as a pivotal evolutionary stepping stone arguing for its necessity in our ancestral evolutionary communication toolkit. His idea is that pantomime is the glue that bound early human communication that led to a conventionalizing process. This process specifically is where pantomime allowed for the transformation of more stable symbols and protosigns which eventually led to the complex languages we know today. Arbib’s hypothesis in turn suggests that pantomime was instrumental in creating new ideas in addition to being able to convey already existing ideas. In other words, it propelled language complexity forward through the ability for proper expression. My question is that if pantomime did in fact serve as a catalyst for the development of complex language, how could modern forms of mime-like communication like virtual reality, memes, and even emojis influence the way we think and communicate not only today but into the future? Could mediums such as the ones listed provide a foundation for a new evolution in language, communication, and cognition as a whole?
ReplyDeleteHi Stefan, I read the Arbib article as well.
DeleteRather than "pantomime allowing for the transformation of more stable symbols and protosigns," I think it would be more accurate to say that pantomime (which itself is a symbol) was itself the thing that was transformed INTO protosigns. Specifically, this is a result of the problem of pantomime being ambiguous: we don't know if it's representing an object, an action being done with the object, or a part of the action being done with the object, etc. Arbib suggests that protosigns arose to circumvent this issue: a set of commonly occurring pantomimes can be grouped into one simple protosign symbol. (I think an example would be something like "to eat": an initial pantomime might be pretending to shovel food into your mouth with a fork, but once you repeat this gesture for breakfast, lunch, and dinner every day, so that everyone becomes aware that this gesture means "eating," you can simplify the gesture into something more efficient, like holding up one finger. This new gesture is no longer a pantomime of the action of eating but rather a protosign, and it is more efficient in terms of both time and energy.)
As for emojis and memes, I’m not really seeing how those topics would be relevant to Arbib’s discussion of pantomimes here…
I really enjoyed this week's reading! It discussed the differences between learning to categorize (being able to do the right thing with the right kind of thing) by sensorimotor induction and learning by verbal instruction. Although the paper stated that there are many commonalities between these two processes, I found it very interesting that in general, categorization by induction results in faster reaction times compared to categorization by instruction. This made me think about different learning styles and how important hands-on activities are in school, especially at a young age.
ReplyDeleteI am attracted to this example, “Now suppose the category that someone lacks is not apples but toadstools, and that the person is starving, and the only thing available to eat is edible mushrooms or poisonous toadstools that look very much like the edible mushrooms. Being told, by someone who knows, that ‘The striped gray mushrooms are poisonous toadstools’ could save someone a lot of time (and possibly their life) by making it unnecessary to find out through direct trial- and-error experience which kind is which” (Vincent-Lamarre, Massé, Lopes, Lord, Marcotte & Harnad, 24).
ReplyDeleteI like this example in the article very much. But it only looks at the question from one angle. If corresponding experience or knowledge is heard from other sources, it becomes easier to make some decisions. But then it comes down to whether the knowledge and experience we have received is credible. This reminds me of an example: when faced with a glass of boiling water, our parents would remind us when we were young that the water was boiling and we should not touch it. However, some children do not listen to their parents and try to touch and feel it anyway. Then they get burned. That bad experience of being burned will always be remembered, so afterwards we do not touch a cup of boiling water.
But there are some things in life that we need to truly feel in order to appreciate their charm, learn the relevant knowledge, and gain something from them. For example, when travelling, you can't truly experience and reap the charm of a place just by listening to other people's accounts or watching videos taken by others.
Reflecting on various student comments and readings, it seems evident that the evolution of language is intricately linked to our species' drive for survival and social collaboration. The shift from pantomime to propositions appears to be a significant leap in this journey, enabling us to convey complex ideas and knowledge beyond the immediate sensorimotor experiences. So, I wonder, as we continue to evolve in a technologically-driven world, how might our language adapt and transform? Will the digital age introduce new dimensions to our linguistic capabilities, further blurring the lines between direct experience and abstract communication? This ongoing evolution of language, shaped by both biological and cultural forces, continues to be a captivating journey, one that underscores the uniqueness and adaptability of the human mind.
ReplyDeleteWhat exactly is language? In simple terms, it's our way of communicating using our brains, social skills, and gestures. Some say language boils down to Universal Grammar (UG), which means we're born with certain rules for language (they are innate) rather than learning them as we grow up like we do with ordinary grammar (OG). When we communicate, language connects to our ability for organizing things into categories and using symbols and rules to manipulate them. For instance, we name things using symbols and can relate them to objects by both experiencing them with our senses (like seeing or feeling) and giving them names (this is called 'induction'). Later, we can pass on this knowledge to others (known as 'instruction'). The paper also highlights the importance of experiencing life through our senses, which sharpens our ability to categorize. So, even though we can teach and learn through instruction, experiencing things firsthand remains crucial for our ability to understand and categorize the world around us.
ReplyDeleteHi Ethan, I think you have provided a good summary to the Blondin Masse et al. reading for this week. I just wanted to add on to what you said regarding the continued importance of sensorimotor experiences and learning categories through “induction”. Even though, we humans are well capable of verbally communicating and thus pass on our knowledge through “instruction”, it seems evident that induction is still the main basis through which we learn and that instruction is merely an additional way to add on top of what we already know. Without induction, instruction would be impossible and I think the authors’ reference to the kernel-core and breaking the dictionaries down to the MinSet of less than 1500 content words is an indication that although we can learn from instruction, the foundational learning stems from induction of new sensorimotor experiences.
DeleteI thought this reading was incredibly interesting, and the idea that language is an emergent property of humans having the motivation or desire to communicate is one I hadn’t encountered before this course, despite having many lectures on language acquisition. Since I’m writing this sky late, I have the benefit of some extra information from later weeks, which I found helpful in relation to Section 5, on “combining and communicating strategies.” There was a reply to one of the week 11 readings on “mind-reading” capacity by Marthe Kiley-Worthington discussing the communicative abilities of animals, that I thought was an interesting contrast to Professor Harnad’s description of instruction. This week 8 reading describes the difference between induction and instruction category learning; with the former coming from direct experience and the latter from word of mouth. Returning to the week 11 readings, Professor Harnad had discussed with Kiley-Worthington whether humans or animals are better at understanding one another. Kiley-Worthington had stated that animals were better at such tasks because they were able to do so with non-verbal communication; Professor Harnad disagreed, stating that humans’ ability to have such discussions in the first place suggests our increased ability. I hadn’t fully understood why this could be true until finishing this reading and examining the artificial life simulation; humans are uniquely better communicators, largely because of our ability to teach and learn categories via instruction.
ReplyDeleteThis paper was very interesting because it expanded on what was discussed in the last paper while exploring the idea of language as an evolutionary adaptation, owing to its relationship to categorization. The paper points to the adaptive advantage of correct categorization (obviously, if you eat the wrong mushroom and kick the bucket, you will not have further reproductive success; it is thus helpful to know how to classify the mushrooms). It suggests a "protolanguage" may have been gestural in nature, but that there was a need for the full power of language. In species without language capabilities, the only way to find out the category (in this case, poisonous) is through direct experience. Meanwhile, we as humans have the advantage of being able to learn categories through natural language. Through arbitrary symbols or sounds, we can convey or derive meaning, since we have the necessary equipment to understand, through symbol grounding, exactly what is meant without experiencing it. These capabilities strengthen the odds of our reproductive success.
ReplyDeleteIn terms of reverse-engineering these abilities, there are a few things to be defined. A dictionary is the set of all the words in a language, each defined in terms of other words in it. The kernel of the dictionary is what remains after you recursively remove all the words that are not used to define any other word. Within the kernel, the core is the largest subset of words that are all interdefined, and a MinSet is a smallest set of words from which all the other words in the dictionary can be defined. We would have to ground at least a MinSet directly in order to be able to define all of the other words in the dictionary.
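To illustrate the "ground a MinSet, learn the rest by definition" idea in the comments above, here is a minimal hedged sketch in Python: a word counts as indirectly grounded once every word in its definition is grounded. The mini-dictionary and the choice of directly grounded starting words are invented for illustration.

```python
# Toy sketch of indirect grounding: starting from words assumed to be grounded
# by direct sensorimotor experience, a new word becomes grounded as soon as
# all the words in its definition are grounded (the "zebra = horse + stripes" idea).
definitions = {
    "zebra":    {"horse", "stripes"},
    "unicorn":  {"horse", "horn"},
    "horn":     {"pointy", "head"},
    "stallion": {"horse", "male"},
}
grounded = {"horse", "stripes", "pointy", "head", "male"}   # invented starting set

changed = True
while changed:
    changed = False
    for word, defn in definitions.items():
        if word not in grounded and defn <= grounded:       # every defining word is grounded
            grounded.add(word)
            changed = True
            print(f"learned '{word}' by instruction alone, from {sorted(defn)}")

print("still ungrounded:", sorted(set(definitions) - grounded))
```

Note that "unicorn" ends up grounded even though nothing in the starting set was ever experienced as a unicorn, which is exactly the power of instruction over induction that the chapter emphasizes.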
It is fascinating how human language differs from animals' communication in general, especially given that the higher primates have the capacity yet still do not develop language. There must be something unique about human cognition, allowing us to abstract the referents we are interacting with, develop language, and thus ground symbols with it. Could it be that humans have such a complex society that its complexity requires more words than the MinSet (the basic words that are learned through direct grounding) to function properly? Although this seems to subtly favour the theory that cognition developed because of social needs in a group, animals living in social groups also tend to have sophisticated systems of communication. Perhaps humans started from there as well.
ReplyDeleteI just realized my comment has been removed, for strange reasons, so here it is again.
ReplyDeleteThis reading summarized well what we've seen in this class so far, which makes it very useful, relevant, and interesting.
I was having a thought when reading the "Simulating the origin of language" section, where you write about categorizing mushrooms. I was wondering whether "doing" different actions with different types of mushrooms could influence how you categorize all of them, affecting the overall categorization of each category. Let me explain more clearly with examples, since that sentence is complicated. Let's say, as mentioned in the text, that you interact with category A by watering the mushrooms, with category B by marking the location of the mushrooms, and with category C by eating the mushrooms. What if, instead of watering category A, you marked or ate it instead: would that change its categorization? Would you give it different specificities? Or would it add some details to its categorization? I don't know if my question makes sense, but I am simply wondering whether we can categorize more precisely if "doing" implies more actions, or whether it just complexifies the definition.
I am actually going to add something to this comment. I now understand that it would change the categorization, allowing the generations who use all three "doings" (instruction learners) to outperform those who only use one (induction learners). Could this effect be explained by the fact that the categorization is more complete when using three different "doings"?
DeleteMy response is about the weasel word "granularity" and why we still need lifelong sensorimotor learning despite the fact that we could learn a MinSet of approximately 1500 words to categorize with. The term "granularity" refers to the level of detail or refinement within data or observed phenomena. For instance, consider our traditional hypothetical "Mushroom Island." Here, someone might learn from experience that items categorized as "blue" and "mushroom" are unsuitable for consumption. However, the Island presents an assortment of "mushrooms" with differing shades of "blue," such as "sky blue mushrooms" and "light blue mushrooms." This differentiation poses a challenge because our language might not be sufficiently granular to convey precisely which particular blue of "blue mushroom" is being referred to.
ReplyDeleteThus, as stated in the paper, when a kin who is already knowledgeable about certain concepts, like the learner's mother, takes on the role of teaching, she aims to impart her understanding of concept C to the learner. However, a discrepancy may arise between the mother's conception of C and what the child gleans through verbal communication. If C pertains to "blue" or "mushroom," the conveyed language risks shedding some of its precision, resulting in a loss of granularity. Consequently, in transitioning from A+B to C, we confront the limitations of language, which reinforces the need for a lifelong commitment to sensorimotor categorization to compensate for those linguistic shortcomings.
I found Cangelosi and Harnad's (2001) experiment very interesting: virtual simulations like this, where you can clearly see the adaptive advantage of a trait phase out the have-nots of that trait, are certainly convincing. I think that suggestive evidence like this, together with empirical evidence attempting to show that language evolved for cooperative purposes, is the best avenue for the argument that language evolved not just for thought but for communication. It is hard to imagine how Chomsky's idea that language evolved for thought could be shown with such beautiful simplicity... although I am still not fully convinced.
ReplyDeleteThis reading covers what language is and how it developed and evolved from ‘showing’. It also introduces induction and instruction learning, the two ways we learn categories. Induction learning is by trial and error, and through direct sensorimotor experience, while instruction is observing or hearing about it through others who already know the different categories. Generally, instruction takes fewer learning trials but the reaction time for categorization through induction is faster.
ReplyDeleteI found Funes the Memorious to be a very interesting demonstration (one we also talked about in class) of the importance of categorization. Funes has the ability to remember every detail of his life with impeccable accuracy. An infinite memory, however, would impair his ability to form categories, because every single moment is retained in its unique individual form (trapped in the succession of individual snapshots of experience), without the ability to create overarching categories. This suggests that categories rely on a certain level of forgetfulness, enabling one to recognize similarities and differences. Funes would not be able to recognize an individual (or would have an exceedingly difficult time doing so) across different contexts, because that individual, let's say Dave, does not look identical every time he sees him (positioning, clothing, etc.). Each encounter with Dave is a singular event, yet Funes must recognize him in each instance as a member of the same category (Dave). I can only imagine that if someone had an infinite memory in real life, they would live in existential isolation.
ReplyDeleteI found this to be an excellent discussion of the origins of language. My favorite part of this paper was the suggestion that language likely evolved from the simple act of combining known categories through observational learning into more intentional teaching among hominins, like a mother teaching her child. The suggestion that this led to the development of propositions I found both sensible and surprising. Another highlight for me was the question of whether chimps truly grasp language as propositions, noting their ability to associate symbols but their lack of motivation for naming and describing; I am especially curious about this.
ReplyDeleteUpon considering the notion that chimpanzees lack the impulse to name and describe, my mind turned to another mammalian species, the killer whale. Killer whales are an example of how communication systems are related to sociality, as the frequency and richness of their vocal exchanges depend on the size of their social groups. Resident killer whales, which feed on fish, live in large social groups with richer call repertoires and more frequent vocal exchanges. Transient killer whales, which feed on marine mammals, have less frequent vocal exchanges, in part because they also have smaller group sizes.
ReplyDeleteI think truth-functional logic and first-order logic could be seen as protolanguages, since they are systems of symbols that convey meaning; first-order logic is the more advanced protolanguage of the two. These two protolanguages are also evolutionary steps toward computers being able to understand and produce meaningful sentences, which could possibly shed light on how the languages we speak today came to be.
ReplyDeleteIn the categorization section, they talk about how we associate a face with a name, and how, even though things may have changed the next time we see that person (after a long time, for example), we are still able to recognize them. I suddenly got this weird existential realization: if we had no understanding of the concept of time (which, technically, we still don't understand fully), then we wouldn't be able to recognize someone ten years later. I wonder whether, by learning more about how time works, we will find other ways of recognizing people. I apologize for the off-topic commentary.
I think the experiment they conducted with the induction learners and the instruction learners is a rather nice demonstration that language is a product of evolution through natural selection. Those who could take instruction (understand language) outlived the induction learners; they were more fit to live under the conditions of the experiment. The "keeping our feet on the ground" section undermines my point a little, but I do think it is still a plausible explanation.
You bring up a very interesting idea. I am curious about what you think about the motivational aspect of language and categorization before they existed (or at least before they evolved) and why ancestors of chimpanzees weren't motivated. I'd imagine they would have had similar advantages and disadvantages to our ancestors in terms of survival.
ReplyDeleteIn my understanding, a proposition could be seen as how we, as humans, use language; in other words, the proposition is the tool with which language expresses things. When we structure language in certain ways, it represents the relationship between subjects and predicates, and a new category can be learnt through the indirect grounding process.
ReplyDeleteHowever, I find that the boundary between direct and indirect grounding blurs after the early stage of language acquisition. Once you have gained enough categories, there is a tendency to deviate from direct grounding, as the words or concepts you learn get more and more abstract; in other words, they have no directly corresponding referents in reality. Moreover, some categories that could be explained by direct grounding can also be distorted from their original meanings, becoming what we consider slang. I think what really matters is that there must be a sensorimotor system for the first stage of the direct grounding process, because that is what we humans rely on when we are kids. What ChatGPT lacks is that direct grounding process; it fools its audience by using the Big Gulp to form logically well-formed propositions (i.e., using language correctly, but without understanding it).
This paper explains how categorization is important in language acquisition and shows that humans learn categories more efficiently through instruction than through induction. I am especially intrigued by how language transitions from show to tell, as described in this paper. I find it quite interesting how this corresponds with the development of literature. Literature, including what later became the literary novel, began long ago with theatrical performance, in which the content of a work was largely delivered to audiences through the gestures and bodily actions of the actors. Literature then evolved to tell stories using words alone, and the information carried by texts grew more complex. Early theatrical performance is literally pantomime, while written literature achieves a higher intellectual level using propositions. Since language is essential to literature, this resemblance between the evolution of literature and the development of language reinforces the idea that language went from show to tell.
ReplyDeleteIn section 4.2, the simulation reveals that within a few generations, instruction learners had out-survived and out-reproduced the induction learners. As mentioned in section 5.3, this is likely due to the amount of induction vs. instruction necessary to learn a category. For instruction to be selected, it should be that less instruction than induction is needed to learn a new category. However, I do not understand why that would be the case. Instructions, especially in ancestral times, were likely not as detailed as we can make them now, which would increase the likelihood of misunderstanding, potentially proving fatal.